In flood situations, keeping sk_rmem_alloc at a high value prevents producers from touching the socket.
It makes sense to lower sk_rmem_alloc only at the end of udp_rmem_release(),
after the thread draining the receive queue in udp_recvmsg() has finished
its writes to sk_forward_alloc.

Signed-off-by: Eric Dumazet <eduma...@google.com>
---
 net/ipv4/udp.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
index 5a38faa12cde7fdcd5b6d86cdc0f4bc33de4..9ca279b130d51f6feaa97785b1c906775810 100644
--- a/net/ipv4/udp.c
+++ b/net/ipv4/udp.c
@@ -1191,13 +1191,14 @@ static void udp_rmem_release(struct sock *sk, int size, int partial)
 	}
 	up->forward_deficit = 0;
 
-	atomic_sub(size, &sk->sk_rmem_alloc);
 	sk->sk_forward_alloc += size;
 	amt = (sk->sk_forward_alloc - partial) & ~(SK_MEM_QUANTUM - 1);
 	sk->sk_forward_alloc -= amt;
 
 	if (amt)
 		__sk_mem_reduce_allocated(sk, amt >> SK_MEM_QUANTUM_SHIFT);
+
+	atomic_sub(size, &sk->sk_rmem_alloc);
 }
 
 /* Note: called with sk_receive_queue.lock held.
-- 
2.8.0.rc3.226.g39d4020
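
Note for reviewers: a minimal userspace sketch (not kernel code; struct and
function names below are hypothetical stand-ins for sk_rmem_alloc,
sk_forward_alloc and sk_rcvbuf) of why the ordering matters. Producers decide
to back off based only on an atomic read of rmem_alloc, without taking the
queue lock the consumer holds, so the consumer should finish its
forward_alloc bookkeeping before lowering rmem_alloc and re-opening the gate.

/* Illustrative model only, under the assumptions stated above. */
#include <stdatomic.h>
#include <stdio.h>

struct fake_sock {
	atomic_int rmem_alloc;	/* models sk->sk_rmem_alloc */
	int forward_alloc;	/* models sk->sk_forward_alloc, lock-protected */
	int rcvbuf;		/* models sk->sk_rcvbuf */
};

/* Producer fast path: cheap early drop decision based only on rmem_alloc. */
static int producer_would_drop(const struct fake_sock *sk, int truesize)
{
	return atomic_load(&sk->rmem_alloc) + truesize > sk->rcvbuf;
}

/* Consumer release path, mirroring the patched ordering in
 * udp_rmem_release(): update forward_alloc first, drop rmem_alloc last.
 */
static void consumer_release(struct fake_sock *sk, int size)
{
	sk->forward_alloc += size;	/* writes done while producers stay away */
	/* ... return whole quanta to the global accounting here ... */
	atomic_fetch_sub(&sk->rmem_alloc, size);	/* only now re-open the gate */
}

int main(void)
{
	struct fake_sock sk = { .rcvbuf = 4096 };

	atomic_store(&sk.rmem_alloc, 4096);	/* socket looks full under flood */
	printf("drop? %d\n", producer_would_drop(&sk, 512));	/* 1: producers back off */

	consumer_release(&sk, 4096);
	printf("drop? %d\n", producer_would_drop(&sk, 512));	/* 0: gate re-opened last */
	return 0;
}

Builds with "cc -std=c11 demo.c"; the point is only the ordering inside
consumer_release(), which matches moving the atomic_sub() to the end of
udp_rmem_release() in the patch above.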