Instead of unconditionally queueing ready-to-consume skbuff_heads onto flush_skb_cache, feed them to skb_cache first, as long as it is not already full. This greatly reduces the frequency of kmem_cache_alloc_bulk() calls.
Signed-off-by: Alexander Lobakin <aloba...@pm.me>
---
 net/core/skbuff.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 57a7307689f3..ba0d5611635e 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -904,6 +904,11 @@ static inline void _kfree_skb_defer(struct sk_buff *skb)
 	/* drop skb->head and call any destructors for packet */
 	skb_release_all(skb);
 
+	if (nc->skb_count < NAPI_SKB_CACHE_SIZE) {
+		nc->skb_cache[nc->skb_count++] = skb;
+		return;
+	}
+
 	/* record skb to CPU local list */
 	nc->flush_skb_cache[nc->flush_skb_count++] = skb;
-- 
2.30.0
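[For context, a minimal standalone sketch of the two-tier free path this patch implements. This is not the kernel code: the identifiers (obj_cache, CACHE_SIZE, cache_put, cache_get, bulk_free) are illustrative assumptions that only mirror the structure of _kfree_skb_defer() above.]

#include <stddef.h>

#define CACHE_SIZE	64	/* stands in for NAPI_SKB_CACHE_SIZE */

struct obj_cache {
	void	*cache[CACHE_SIZE];	/* reused directly by the alloc path */
	size_t	count;
	void	*flush[CACHE_SIZE];	/* spillover, returned to the allocator in bulk */
	size_t	flush_count;
};

/* Stand-in for handing a whole batch back to the slab allocator at once. */
static void bulk_free(void **objs, size_t n)
{
	(void)objs;
	(void)n;
}

/* Free path: refill the reuse cache first; spill to the flush list
 * only once the reuse cache is full, as the patch does.
 */
static void cache_put(struct obj_cache *nc, void *obj)
{
	if (nc->count < CACHE_SIZE) {
		nc->cache[nc->count++] = obj;
		return;
	}

	nc->flush[nc->flush_count++] = obj;
	if (nc->flush_count == CACHE_SIZE) {
		bulk_free(nc->flush, nc->flush_count);
		nc->flush_count = 0;
	}
}

/* Alloc path: pop from the reuse cache, falling back to a bulk refill
 * (the kernel uses kmem_cache_alloc_bulk()) only when the cache is empty.
 * Keeping freed objects in cache[] is what makes that fallback rare.
 */
static void *cache_get(struct obj_cache *nc,
		       void *(*bulk_refill)(struct obj_cache *nc))
{
	if (nc->count)
		return nc->cache[--nc->count];
	return bulk_refill(nc);
}

[With frees routed through cache_put(), a steady alloc/free workload keeps cache[] warm, so cache_get() almost never reaches the bulk_refill() fallback; that is the mechanism behind the reduced kmem_cache_alloc_bulk() frequency claimed in the commit message.]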