On 17 January 2017 at 21:20, Eric Dumazet <eric.duma...@gmail.com> wrote:
> On Tue, 2017-01-17 at 20:27 +0200, Roman Yeryomin wrote:
>> On 17 January 2017 at 19:58, David Miller <da...@davemloft.net> wrote:
>> > From: Roman Yeryomin <leroi.li...@gmail.com>
>> > Date: Tue, 17 Jan 2017 19:32:36 +0200
>> >
>> >> Having larger ring sizes almost eliminates rx fifo overflow, thus 
>> >> improving performance.
>> >> This patch reduces rx overflow occurrence by approximately 1000 times
>> >> (from ~25k down to ~25 times per 3M frames).
>> >
>> > Those numbers don't mean much without full context.
>> >
>> > What kind of system, what kind of traffic, and over what kind of link?
>>
>> MIPS rb532 board, TCP iperf3 test over 100M link, NATed speed ~55Mbps.
>> I can do more tests and provide more precise numbers, if needed.
>
> Note that at 100M, 64 rx descriptors have an 8 ms max latency.
>
> Switching to 256 also multiplies the latency by 4 -> 32 ms latency.
>
> Presumably switching to NAPI and GRO would avoid the latency increase
> and save a lot of cpu cycles for a MIPS board.
>
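For reference, Eric's 8 ms / 32 ms figures above are simply how long a full
ring of maximum-size frames takes to arrive at 100 Mbit/s. A quick
back-of-the-envelope check (the 1538-byte on-wire frame size, i.e. 1500 MTU
plus Ethernet header/FCS, preamble and inter-frame gap, is my assumption):

#include <stdio.h>

/* Worst-case time for a full rx ring of max-size frames to arrive
 * on a 100 Mbit/s link, for 64 and 256 descriptors. */
int main(void)
{
	const double link_bps = 100e6;        /* 100 Mbit/s */
	const double frame_bits = 1538 * 8;   /* assumed on-wire frame size */
	const int rings[] = { 64, 256 };

	for (int i = 0; i < 2; i++)
		printf("%3d descriptors -> %.1f ms worst-case queueing delay\n",
		       rings[i], rings[i] * frame_bits / link_bps * 1e3);
	return 0;
}

This prints roughly 7.9 ms for 64 descriptors and 31.5 ms for 256, matching
the numbers quoted above.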

Eric, thanks for suggesting GRO, it gives a huge performance gain when
receiving locally (55->95Mbps) and more than a 25% gain for NAT
(55->70Mbps); a rough sketch of the NAPI+GRO rx path is included below.
Also, reading the datasheet more carefully, I see that the device rx
descriptor status flags are interpreted incorrectly, so I will provide
an updated patch set.
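
The sketch: a minimal NAPI poll handler feeding frames to GRO. The
struct korina_private layout and the korina_rx_one_frame() /
korina_enable_rx_irq() helpers are hypothetical placeholders standing in
for the driver's real descriptor handling; napi_gro_receive() and
napi_complete_done() are the standard kernel API.

#include <linux/netdevice.h>
#include <linux/skbuff.h>

/* Hypothetical minimal private struct; the real driver keeps more state. */
struct korina_private {
	struct napi_struct napi;
	/* ... descriptor rings, DMA state, locks ... */
};

/* Hypothetical helpers standing in for the driver's descriptor handling. */
struct sk_buff *korina_rx_one_frame(struct korina_private *lp);
void korina_enable_rx_irq(struct korina_private *lp);

static int korina_poll(struct napi_struct *napi, int budget)
{
	struct korina_private *lp =
		container_of(napi, struct korina_private, napi);
	int work_done = 0;

	while (work_done < budget) {
		struct sk_buff *skb = korina_rx_one_frame(lp);

		if (!skb)
			break;
		/* GRO merges consecutive TCP segments into larger SKBs,
		 * cutting per-packet stack overhead on the slow MIPS CPU. */
		napi_gro_receive(napi, skb);
		work_done++;
	}

	if (work_done < budget) {
		/* Ring drained within budget: leave polling mode and
		 * re-enable the rx interrupt. */
		napi_complete_done(napi, work_done);
		korina_enable_rx_irq(lp);
	}

	return work_done;
}

The rx interrupt handler then only masks rx interrupts and calls
napi_schedule(&lp->napi), with the poll function registered via
netif_napi_add() at probe time.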

Thanks for the feedback!

Regards,
Roman
