On hosts with many cpus, we can observe serious contention
on the spinlocks used in the mm slab layer.

The following can happen quite often :

1) TX path
  sendmsg() allocates one (fclone) skb on CPU A, sends a clone.
  An ACK is received on CPU B and consumes the skb that was in the retransmit
  queue.

2) RX path
  the network driver allocates an skb on CPU C,
  recvmsg() happens on CPU D, freeing the skb after it has been delivered
  to user space.

In both cases, we are hitting the asymmetric alloc/free pattern
for which slab has to drain alien caches. At 8 Mpps, this represents
16 million alloc/free operations per second and carries a huge penalty.
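
As a userspace analogy of this asymmetric pattern (this is not kernel code,
just a rough illustration) : one thread pinned to cpu 0 allocates buffers,
another thread pinned to cpu 1 frees them, so the allocator never sees a
matching alloc/free pair on the same cpu.

#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>

#define NB_BUF 1024

static void *bufs[NB_BUF];

static void pin_to_cpu(int cpu)
{
	cpu_set_t set;

	CPU_ZERO(&set);
	CPU_SET(cpu, &set);
	pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
}

static void *alloc_side(void *arg)	/* plays the role of CPU A / CPU C */
{
	pin_to_cpu(0);
	for (int i = 0; i < NB_BUF; i++)
		bufs[i] = malloc(256);
	return NULL;
}

static void *free_side(void *arg)	/* plays the role of CPU B / CPU D */
{
	pin_to_cpu(1);
	for (int i = 0; i < NB_BUF; i++)
		free(bufs[i]);
	return NULL;
}

int main(void)
{
	pthread_t t;

	pthread_create(&t, NULL, alloc_side, NULL);
	pthread_join(t, NULL);		/* all buffers allocated on cpu 0 */
	pthread_create(&t, NULL, free_side, NULL);
	pthread_join(t, NULL);		/* ... then freed on cpu 1 */
	printf("allocated and freed %d buffers on two different cpus\n", NB_BUF);
	return 0;
}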

In an interesting experiment, I tried to use a single kmem_cache for all skbs,
by having skb_init() do :

  skbuff_fclone_cache = skbuff_head_cache =
      kmem_cache_create("skbuff_fclone_cache",
                        sizeof(struct sk_buff_fclones), ...);

and most of the contention disappeared, since cpus could better use
their local slab per-cpu cache.
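
For reference, a rough sketch of what this experiment looks like (the full
argument list is assumed from the usual kmem_cache_create() signature, and
the real skb_init() does more than this) :

#include <linux/init.h>
#include <linux/skbuff.h>
#include <linux/slab.h>

void __init skb_init(void)
{
	skbuff_fclone_cache = kmem_cache_create("skbuff_fclone_cache",
						sizeof(struct sk_buff_fclones),
						0,
						SLAB_HWCACHE_ALIGN | SLAB_PANIC,
						NULL);
	/* Alias both pointers to the single, larger cache so that plain
	 * and fclone skbs share the same per-cpu slab caches.
	 */
	skbuff_head_cache = skbuff_fclone_cache;
}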

But we can actually do better, with the following patches.

TX : at ACK time, no longer free the skb but put it back in a tcp socket cache,
     so that the next sendmsg() can reuse it immediately.

RX : at recvmsg() time, do not free the skb but put it in a tcp socket cache,
     so that it can be freed by the cpu feeding the incoming packets in BH
     context (see the sketch below).
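
A minimal sketch of the per-socket cache idea, with hypothetical field and
helper names (the real patches may name things differently, and also have
to reinitialize the skb state before reuse). Only the TX side is shown,
the RX side is symmetric with the free done from BH context :

#include <linux/skbuff.h>

/* Hypothetical per-socket cache : one slot for TX, one for RX. */
struct example_skb_cache {
	struct sk_buff *tx_skb;	/* stashed at ACK time, reused by next sendmsg() */
	struct sk_buff *rx_skb;	/* stashed by recvmsg(), freed by the BH cpu */
};

/* TX : called at ACK time, instead of freeing the acked skb right away. */
static void example_stash_tx_skb(struct example_skb_cache *cache,
				 struct sk_buff *skb)
{
	if (!cache->tx_skb) {
		cache->tx_skb = skb;	/* next sendmsg() will pick it up */
		return;
	}
	kfree_skb(skb);			/* cache already full : free as before */
}

/* TX : called from sendmsg(), returns a cached skb or NULL (then allocate). */
static struct sk_buff *example_get_tx_skb(struct example_skb_cache *cache)
{
	struct sk_buff *skb = cache->tx_skb;

	cache->tx_skb = NULL;
	return skb;
}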

This increased the performance of a small RPC benchmark by about 10% on a
host with 112 hyperthreads.

Eric Dumazet (3):
  net: convert rps_needed and rfs_needed to new static branch api
  tcp: add one skb cache for tx
  tcp: add one skb cache for rx

 include/linux/netdevice.h  |  4 ++--
 include/net/sock.h         | 13 +++++++++-
 net/core/dev.c             | 10 ++++----
 net/core/net-sysfs.c       |  4 ++--
 net/core/sysctl_net_core.c |  8 +++----
 net/ipv4/af_inet.c         |  4 ++++
 net/ipv4/tcp.c             | 49 +++++++++++++++++---------------------
 net/ipv4/tcp_ipv4.c        | 11 +++++++--
 net/ipv6/tcp_ipv6.c        | 12 +++++++---
 9 files changed, 69 insertions(+), 46 deletions(-)

-- 
2.21.0.225.g810b269d1ac-goog
