On 01.11.2018 at 11:55, Jesper Dangaard Brouer wrote:
On Wed, 31 Oct 2018 21:37:16 -0600 David Ahern <dsah...@gmail.com> wrote:

This is mainly a forwarding use case? Seems so based on the perf report.
I suspect forwarding with XDP would show pretty good improvement.
Yes, significant performance improvements.

Notice David's talk: "Leveraging Kernel Tables with XDP"
  http://vger.kernel.org/lpc-networking2018.html#session-1
It will be really interesting.

It looks like you are doing "pure" IP-routing, without any
iptables conntrack stuff (from your perf report data).  That will
actually be a really good use-case for accelerating this with XDP.
Yes, pure IP routing; iptables is used only for some local input filtering.



I want you to understand the philosophy behind how David and I want
people to leverage XDP.  Think of XDP as a software offload layer for
the kernel network stack. Setup and use Linux kernel network stack, but
accelerate parts of it with XDP, e.g. the route FIB lookup.

Sample code avail here:
  
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/samples/bpf/xdp_fwd_kern.c
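For anyone following along, the core idea in that sample can be boiled down to a short sketch: do the route lookup from XDP via the `bpf_fib_lookup()` helper, and redirect the frame if the kernel FIB (and neighbor table) resolves it, otherwise fall back to the normal stack. This is a trimmed-down illustration, not the full program — the real xdp_fwd_kern.c also handles IPv6, TTL decrement, and more, so use it rather than this sketch in production:

```c
/* Sketch: accelerate IPv4 forwarding with the kernel FIB from XDP.
 * Simplified from the idea in samples/bpf/xdp_fwd_kern.c; IPv6, TTL
 * decrement, checksum update and VLANs are omitted for brevity. */
#include <linux/bpf.h>
#include <linux/if_ether.h>
#include <linux/ip.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_endian.h>

#define AF_INET 2	/* avoid pulling in libc socket headers */

SEC("xdp")
int xdp_fib_fwd(struct xdp_md *ctx)
{
	void *data_end = (void *)(long)ctx->data_end;
	void *data = (void *)(long)ctx->data;
	struct ethhdr *eth = data;
	struct bpf_fib_lookup fib = {};
	struct iphdr *iph;

	if (data + sizeof(*eth) + sizeof(*iph) > data_end)
		return XDP_PASS;
	if (eth->h_proto != bpf_htons(ETH_P_IP))
		return XDP_PASS;

	iph = data + sizeof(*eth);

	fib.family   = AF_INET;
	fib.ipv4_src = iph->saddr;
	fib.ipv4_dst = iph->daddr;
	fib.ifindex  = ctx->ingress_ifindex;

	/* Full lookup, including neighbor resolution for the MACs */
	if (bpf_fib_lookup(ctx, &fib, sizeof(fib), 0) !=
	    BPF_FIB_LKUP_RET_SUCCESS)
		return XDP_PASS;	/* let the normal stack handle it */

	/* Rewrite MACs from the FIB result and forward out the egress dev */
	__builtin_memcpy(eth->h_dest, fib.dmac, ETH_ALEN);
	__builtin_memcpy(eth->h_source, fib.smac, ETH_ALEN);
	return bpf_redirect(fib.ifindex, 0);
}

char _license[] SEC("license") = "GPL";
```

The nice property is exactly the "software offload" philosophy above: routes are still managed by the normal kernel FIB (ip route, FRR, etc.), XDP just consults them on the fast path.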
I can try some tests on the same hardware in a testlab configuration - I will give it a try :)



(I should warn that we just found a bug/crash in setup+teardown for the
mlx5 driver you are using, which we/Mellanox _will_ fix soon.)
Ok



You need the vlan changes I have queued up though.
I know Yoel will be very interested in those changes too! I've
convinced Yoel to write an XDP program for his Border Network Gateway
(BNG) production system[1], and he is a heavy VLAN user.  And the plan
is to open source this once he has something working.

[1] https://www.version2.dk/blog/software-router-del-5-linux-bng-1086060

Ok - for now I need to split the traffic across two separate 100G ports placed in two different x16 PCI Express slots, to check whether the problem is mainly caused by running out of PCIe x16 bandwidth.
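The back-of-the-envelope numbers support that suspicion. A quick check (Gen3 figures assumed; real throughput is lower still once TLP/DMA overhead is counted):

```python
# PCIe 3.0 x16 usable bandwidth vs. two 100G ports on one slot.
# Gen3: 8 GT/s per lane with 128b/130b encoding.
lanes = 16
raw_rate_gtps = 8e9          # transfers/s per lane
encoding = 128 / 130         # 128b/130b line coding efficiency
usable_gbps = lanes * raw_rate_gtps * encoding / 1e9

print(f"PCIe 3.0 x16 usable: ~{usable_gbps:.0f} Gbit/s per direction")
print(f"Two 100G ports need: 200 Gbit/s -> oversubscribed: {200 > usable_gbps}")
```

So a dual-port 100G NIC in a single x16 Gen3 slot can never carry both ports at line rate (~126 Gbit/s usable vs. 200 Gbit/s needed), which is why splitting onto two slots is a sensible experiment.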

