On 10/31/18 3:57 PM, Paweł Staszewski wrote:
> Hi
> 
> So maybe someone will be interested in how the Linux kernel handles
> normal traffic (not pktgen :) )
> 
> 
> Server HW configuration:
> 
> CPU : Intel(R) Xeon(R) Gold 6132 CPU @ 2.60GHz
> 
> NICs: 2x 100G Mellanox ConnectX-4 (connected to PCIe x16 at 8 GT/s)
> 
> 
> Server software:
> 
> FRR - as routing daemon
> 
> enp175s0f0 (100G) - 16 vlans from upstreams (28 RSS queues bound to the
> local NUMA node)
> 
> enp175s0f1 (100G) - 343 vlans to clients (28 RSS queues bound to the
> local NUMA node)
> 
> 
> Maximum traffic the server can handle:
> 
> Bandwidth
> 
>  bwm-ng v0.6.1 (probing every 1.000s), press 'h' for help
>   input: /proc/net/dev type: rate
>   \         iface                   Rx                   Tx                Total
> ==============================================================================
>        enp175s0f1:          28.51 Gb/s           37.24 Gb/s           65.74 Gb/s
>        enp175s0f0:          38.07 Gb/s           28.44 Gb/s           66.51 Gb/s
> ------------------------------------------------------------------------------
>             total:          66.58 Gb/s           65.67 Gb/s          132.25 Gb/s
> 
> 
> Packets per second:
> 
>  bwm-ng v0.6.1 (probing every 1.000s), press 'h' for help
>   input: /proc/net/dev type: rate
>   -         iface                   Rx                   Tx                Total
> ==============================================================================
>        enp175s0f1:      5248589.00 P/s       3486617.75 P/s       8735207.00 P/s
>        enp175s0f0:      3557944.25 P/s       5232516.00 P/s       8790460.00 P/s
> ------------------------------------------------------------------------------
>             total:      8806533.00 P/s       8719134.00 P/s      17525668.00 P/s
> 
> 
> After reaching those limits, the NICs on the upstream side (which
> carries more RX traffic) start to drop packets.
> 
> 
> I just don't understand why the server can't handle more bandwidth
> (~40 Gbit/s is the limit at which all CPUs hit 100% utilization) while
> pps on the RX side keeps increasing.
> 
> I was thinking that maybe I had hit some PCIe x16 limit - but x16 at
> 8 GT/s is about 126 Gbit/s - and also when testing with pktgen I can
> reach more bandwidth and pps (roughly 4x more compared to normal
> internet traffic).
> 
> And I am wondering if there is something that can be improved here.

This is mainly a forwarding use case? Seems so based on the perf report.
I suspect forwarding with XDP would show pretty good improvement. You
need the vlan changes I have queued up though.
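For reference, below is a minimal sketch of what XDP-based IPv4
forwarding looks like, using the bpf_fib_lookup() helper to reuse the
kernel FIB and bpf_redirect() to transmit out the device the lookup
returns. It is illustrative only: it handles untagged IPv4 (so not the
VLAN setup above), omits TTL decrement and checksum update, and the
program/function names are made up for the example.

/* Minimal XDP IPv4 forwarding sketch (not the queued VLAN-aware code).
 * Looks up the route with bpf_fib_lookup(), rewrites the MAC addresses
 * and redirects the frame out the egress device. Anything it cannot
 * handle is passed to the normal kernel path.
 */
#include <linux/bpf.h>
#include <linux/if_ether.h>
#include <linux/ip.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_endian.h>

#ifndef AF_INET
#define AF_INET 2
#endif

SEC("xdp")
int xdp_fwd_sketch(struct xdp_md *ctx)
{
        void *data     = (void *)(long)ctx->data;
        void *data_end = (void *)(long)ctx->data_end;
        struct ethhdr *eth = data;
        struct iphdr *iph;
        struct bpf_fib_lookup fib = {};
        int rc;

        /* bounds check required by the verifier */
        if (data + sizeof(*eth) + sizeof(*iph) > data_end)
                return XDP_PASS;

        if (eth->h_proto != bpf_htons(ETH_P_IP))
                return XDP_PASS;        /* non-IPv4 (incl. VLAN tagged) -> stack */

        iph = data + sizeof(*eth);

        fib.family      = AF_INET;
        fib.tos         = iph->tos;
        fib.l4_protocol = iph->protocol;
        fib.tot_len     = bpf_ntohs(iph->tot_len);
        fib.ipv4_src    = iph->saddr;
        fib.ipv4_dst    = iph->daddr;
        fib.ifindex     = ctx->ingress_ifindex;

        rc = bpf_fib_lookup(ctx, &fib, sizeof(fib), 0);
        if (rc != BPF_FIB_LKUP_RET_SUCCESS)
                return XDP_PASS;        /* no route / unresolved neighbour -> stack */

        /* rewrite MAC addresses and send out the egress interface */
        __builtin_memcpy(eth->h_dest, fib.dmac, ETH_ALEN);
        __builtin_memcpy(eth->h_source, fib.smac, ETH_ALEN);
        return bpf_redirect(fib.ifindex, 0);
}

char _license[] SEC("license") = "GPL";

Assuming the object is built with clang -target bpf, it would be
attached with something like "ip link set dev enp175s0f0 xdp obj
xdp_fwd.o sec xdp" on each ingress interface; the real gain comes from
skipping the skb allocation and most of the stack for forwarded
packets.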
