On Wed, 27 Mar 2019 12:02:13 +0000
Edward Cree <ec...@solarflare.com> wrote:

> On 26/03/2019 14:43, Jesper Dangaard Brouer wrote:
> > On Mon, 25 Mar 2019 13:42:39 +0000
> > Ioana Ciornei <ioana.cior...@nxp.com> wrote:
> >  
> >> Take advantage of the software Rx batching by using
> >> netif_receive_skb_list instead of napi_gro_receive.
> >>
> >> Signed-off-by: Ioana Ciornei <ioana.cior...@nxp.com>
> >> ---  
> > Nice to see more people/drivers using: netif_receive_skb_list()
> >
> > We should likely add a similar napi_gro_receive_list() function.  
>
> I had a patch series that did that; last posting was v3 back in
> November: https://marc.info/?l=linux-netdev&m=154221888012410&w=2
> However, Eric raised some issues, also some Mellanox folks privately
> reported that using it in their driver regressed performance, and
> I've been too busy since to make progress with it.  Since you seem
> to be much better than me at perf investigations, Jesper, maybe you
> could take over the series?

I'm hoping Florian Westphal might also have some cycles for this?

(We talked about doing this during NetDevConf-0x13, because if we can
make more drivers use these SKB-lists, then it makes sense to let
iptables/nftables build an SKB-list of packets to drop, instead of
dropping them individually, and then we can leverage Felix's work on
bulk free in kfree_skb_list).
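
As a rough sketch of that idea (hypothetical netfilter-side code, not
from any posted patch): collect the to-be-dropped SKBs on a list and
free them in one call, so kfree_skb_list() can hand the whole batch to
a bulk-free path:

```c
/* Hypothetical sketch: batch-dropping packets instead of freeing them
 * one at a time.  SKBs are chained via skb->next, which is the list
 * format kfree_skb_list() expects.
 */
struct sk_buff *drop_list = NULL;

/* ... inside the batch walk, when the verdict is "drop" ... */
skb->next = drop_list;
drop_list = skb;

/* ... once the whole batch has been processed ... */
if (drop_list)
	kfree_skb_list(drop_list);
```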

I'm currently coding up use of netif_receive_skb_list() in CPUMAP
redirect.  This makes it easier for e.g. Florian (and others) to
play with this API, as we no longer depend on a device driver having
this support (although we do depend on XDP_REDIRECT in a driver).
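
For reference, the usual driver-side pattern for netif_receive_skb_list()
looks roughly like this (a sketch; build_next_skb() and the rxq/budget
names are placeholders, not a real driver's API):

```c
/* Sketch of a NAPI poll loop using netif_receive_skb_list(): SKBs are
 * queued on a list_head via skb->list and handed to the stack in one
 * call instead of one netif_receive_skb()/napi_gro_receive() per packet.
 */
struct sk_buff *skb;
LIST_HEAD(rx_list);

while (budget-- && (skb = build_next_skb(rxq))) /* placeholder helper */
	list_add_tail(&skb->list, &rx_list);

if (!list_empty(&rx_list))
	netif_receive_skb_list(&rx_list);
```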

And the trick to making this faster than GRO (which basically recycles
the same SKB) is to use the slub/kmem_cache bulk API for SKBs.
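
That bulk API amortizes the per-object allocator cost; a sketch of
what using it for SKB heads could look like (BATCH is an assumed
constant, and real code would handle a short allocation and initialize
each skb properly):

```c
/* Sketch: grabbing and releasing a batch of skbuff heads through the
 * kmem_cache bulk API.  kmem_cache_alloc_bulk() returns how many
 * objects it actually allocated (possibly fewer than requested).
 */
void *skbs[BATCH];
int n;

n = kmem_cache_alloc_bulk(skbuff_head_cache, GFP_ATOMIC, BATCH, skbs);
/* ... initialize and fill each of the n skb heads ... */

/* on teardown, the whole batch can go back in a single call: */
kmem_cache_free_bulk(skbuff_head_cache, n, skbs);
```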

-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  LinkedIn: http://www.linkedin.com/in/brouer