On Wed, Jan 13, 2021 at 8:20 AM Eric Dumazet <eric.duma...@gmail.com> wrote:
>
> From: Eric Dumazet <eduma...@google.com>
>
> Both virtio net and napi_get_frags() allocate skbs
> with a very small skb->head.
>
> While using page fragments instead of a kmalloc backed skb->head might give
> a small performance improvement in some cases, there is a huge risk of
> underestimating memory usage.
>
> For both GOOD_COPY_LEN and GRO_MAX_HEAD, we can fit at least 32 allocations
> per page (order-3 page on x86), or even 64 on PowerPC.
>
> We have been tracking OOM issues on GKE hosts hitting tcp_mem limits,
> but consuming far more memory for TCP buffers than instructed in tcp_mem[2].
>
> Even if we force napi_alloc_skb() to only use order-0 pages, the issue
> would still be there on arches with PAGE_SIZE >= 32768.
>
> This patch makes sure that small skb heads are kmalloc backed, so that
> other objects in the slab page can be reused instead of being held as long
> as skbs are sitting in socket queues.
>
> Note that we might in the future use the sk_buff napi cache,
> instead of going through a more expensive __alloc_skb().
>
> Another idea would be to use separate page sizes depending
> on the allocated length (to never have more than 4 frags per page).
>
> I would like to thank Greg Thelen for his precious help on this matter;
> analysing crash dumps is always a time-consuming task.
>
> Fixes: fd11a83dd363 ("net: Pull out core bits of __netdev_alloc_skb and add __napi_alloc_skb")
> Signed-off-by: Eric Dumazet <eduma...@google.com>
> Cc: Alexander Duyck <alexanderdu...@fb.com>
> Cc: Paolo Abeni <pab...@redhat.com>
> Cc: Michael S. Tsirkin <m...@redhat.com>
> Cc: Greg Thelen <gthe...@google.com>
> ---
>  net/core/skbuff.c | 9 +++++++--
>  1 file changed, 7 insertions(+), 2 deletions(-)
>
> diff --git a/net/core/skbuff.c b/net/core/skbuff.c
> index 7626a33cce590e530f36167bd096026916131897..3a8f55a43e6964344df464a27b9b1faa0eb804f3 100644
> --- a/net/core/skbuff.c
> +++ b/net/core/skbuff.c
> @@ -501,13 +501,17 @@ EXPORT_SYMBOL(__netdev_alloc_skb);
>  struct sk_buff *__napi_alloc_skb(struct napi_struct *napi, unsigned int len,
>                                   gfp_t gfp_mask)
>  {
> -        struct napi_alloc_cache *nc = this_cpu_ptr(&napi_alloc_cache);
> +        struct napi_alloc_cache *nc;
>          struct sk_buff *skb;
>          void *data;
>
>          len += NET_SKB_PAD + NET_IP_ALIGN;
>
> -        if ((len > SKB_WITH_OVERHEAD(PAGE_SIZE)) ||
> +        /* If requested length is either too small or too big,
> +         * we use kmalloc() for skb->head allocation.
> +         */
> +        if (len <= SKB_WITH_OVERHEAD(1024) ||
> +            len > SKB_WITH_OVERHEAD(PAGE_SIZE) ||
>              (gfp_mask & (__GFP_DIRECT_RECLAIM | GFP_DMA))) {
>                  skb = __alloc_skb(len, gfp_mask, SKB_ALLOC_RX, NUMA_NO_NODE);
>                  if (!skb)
> @@ -515,6 +519,7 @@ struct sk_buff *__napi_alloc_skb(struct napi_struct *napi, unsigned int len,
>                  goto skb_success;
>          }
>
> +        nc = this_cpu_ptr(&napi_alloc_cache);
>          len += SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
>          len = SKB_DATA_ALIGN(len);
>
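To make the arithmetic in the commit message concrete: with roughly 1KiB
of truesize per small head (my approximation for the GOOD_COPY_LEN /
GRO_MAX_HEAD cases, not an exact kernel constant), an order-3 page
fragment on x86 packs 32 of them, so a single skb parked in a socket
queue can pin the whole 32KiB page. A standalone userspace sketch of
that math, purely illustrative:

/* toy.c - standalone userspace sketch, not kernel code.
 * Shows how many ~1KiB skb heads fit in one page-fragment page and
 * how much memory a single lingering skb can therefore pin.
 */
#include <stdio.h>

#define X86_PAGE_SIZE   4096u
#define FRAG_ORDER      3u                            /* order-3 compound page */
#define FRAG_PAGE_SIZE  (X86_PAGE_SIZE << FRAG_ORDER) /* 32768 bytes */
#define SMALL_HEAD      1024u                         /* assumed truesize per head */

int main(void)
{
        unsigned int heads_per_page = FRAG_PAGE_SIZE / SMALL_HEAD;

        printf("%u small heads per order-%u page\n",
               heads_per_page, FRAG_ORDER);
        printf("one queued skb can pin up to %u bytes while being\n"
               "accounted as only %u\n", FRAG_PAGE_SIZE, SMALL_HEAD);
        return 0;
}

The same division with PowerPC's 64KiB pages gives the 64 heads
mentioned above, which is also why forcing order-0 pages does not help
on arches where PAGE_SIZE >= 32768.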
The fix here looks good to me.

Reviewed-by: Alexander Duyck <alexanderdu...@fb.com>

I think at some point in the future we may need to follow up and rework a
bunch of this code. One thing I am wondering is whether we should look at
doing some sort of memory accounting per napi_struct. Maybe it is something
we could tie into the page pool work that Jesper did earlier.
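On the accounting thought, I am picturing something along these lines
(entirely hypothetical; none of these fields or helpers exist in the
kernel today, and hooking them into the allocation paths is the hard
part):

/* Hypothetical sketch only: a pair of per-napi byte counters that the
 * skb allocation paths could bump, splitting kmalloc-backed heads from
 * page-fragment-backed ones. Not an existing kernel API.
 */
#include <linux/types.h>

struct napi_mem_stats {
        unsigned long kmalloc_head_bytes;       /* kmalloc-backed skb->head */
        unsigned long frag_head_bytes;          /* page-fragment-backed skb->head */
};

static inline void napi_account_head(struct napi_mem_stats *stats,
                                     unsigned int truesize, bool from_frag)
{
        if (from_frag)
                stats->frag_head_bytes += truesize;
        else
                stats->kmalloc_head_bytes += truesize;
}

That would at least make it visible when a napi instance is holding far
more fragment memory than its skbs are accounted for.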