On 08/10/2018 02:02 PM, Jesper Dangaard Brouer wrote:
> Background: cpumap moves the SKB allocation out of the driver code,
> and instead allocates it on the remote CPU, and invokes the regular
> kernel network stack with the newly allocated SKB.
>
> The idea behind the XDP CPU redirect feature is to use XDP as a
> load-balancer step in front of the regular kernel network stack. But
> the current sample code does not provide a good example of this. Part
> of the reason is that I have implemented this as part of the Suricata
> XDP load-balancer.
>
> Given this is the most frequent feature request I get, this patchset
> implements the same XDP load-balancing as Suricata does, which is a
> symmetric hash based on the IP-pairs + L4-protocol.
>
> The expected setup for the use-case is to reduce the number of NIC RX
> queues via ethtool (as XDP can handle more per core), and via
> smp_affinity assign these RX queues to a set of CPUs which will be
> handling RX packets. The CPUs that run the regular network stack are
> supplied to the sample xdp_redirect_cpu tool by specifying
> the --cpu option multiple times on the cmdline.
>
> I do note that cpumap SKB creation is not feature complete yet, and
> more work is coming. E.g. given GRO is not implemented yet, do expect
> TCP workloads to be slower. My measurements do indicate UDP workloads
> are faster.
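For readers who want to see the shape of the approach before digging
into the sample, here is a minimal sketch of a symmetric-hash cpumap
redirect. This is not the patchset's code: the map names (cpu_map,
cpus_count), the plain XOR hash, and the BTF-style map definitions are
illustrative assumptions, and it handles IPv4 only.

/* Sketch: symmetric IP-pair + L4-protocol load balancing via cpumap.
 * Illustrative only; the real sample uses its own hash and map layout.
 */
#include <linux/bpf.h>
#include <linux/if_ether.h>
#include <linux/ip.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_endian.h>

#define MAX_CPUS 64

struct {
	__uint(type, BPF_MAP_TYPE_CPUMAP);
	__uint(max_entries, MAX_CPUS);
	__type(key, __u32);
	__type(value, __u32);	/* per-CPU queue size, set by user space */
} cpu_map SEC(".maps");

struct {
	__uint(type, BPF_MAP_TYPE_ARRAY);
	__uint(max_entries, 1);
	__type(key, __u32);
	__type(value, __u32);	/* number of CPUs given via --cpu */
} cpus_count SEC(".maps");

SEC("xdp")
int xdp_cpu_lb(struct xdp_md *ctx)
{
	void *data_end = (void *)(long)ctx->data_end;
	void *data = (void *)(long)ctx->data;
	struct ethhdr *eth = data;
	struct iphdr *iph;
	__u32 key = 0, hash, *count;

	if ((void *)(eth + 1) > data_end)
		return XDP_PASS;
	if (eth->h_proto != bpf_htons(ETH_P_IP))
		return XDP_PASS;	/* IPv4 only in this sketch */

	iph = (void *)(eth + 1);
	if ((void *)(iph + 1) > data_end)
		return XDP_PASS;

	/* XOR of saddr and daddr is order-independent, so both
	 * directions of a flow produce the same hash value. */
	hash = iph->saddr ^ iph->daddr ^ iph->protocol;

	count = bpf_map_lookup_elem(&cpus_count, &key);
	if (!count || *count == 0)
		return XDP_PASS;

	/* Pick a destination CPU; its cpu_map entry must be populated */
	return bpf_redirect_map(&cpu_map, hash % *count, 0);
}

char _license[] SEC("license") = "GPL";

User space would fill cpu_map at indexes 0..N-1 with a queue size for
each CPU passed via --cpu and write N into cpus_count. The symmetry of
the hash is the point: both directions of a TCP/UDP flow land on the
same remote CPU, which matters for a flow-tracking consumer like
Suricata.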
Applied to bpf-next, thanks Jesper!