On Thu, 2016-10-20 at 22:31 +0200, Paolo Abeni wrote:
> +
> +int __udp_enqueue_schedule_skb(struct sock *sk, struct sk_buff *skb)
> +{
> +	struct sk_buff_head *list = &sk->sk_receive_queue;
> +	int rmem, delta, amt, err = -ENOMEM;
> +	int size = skb->truesize;
> +
> +	/* try to avoid the costly atomic add/sub pair when the receive
> +	 * queue is full; always allow at least a packet
> +	 */
> +	rmem = atomic_read(&sk->sk_rmem_alloc);
> +	if (rmem && (rmem + size > sk->sk_rcvbuf))
> +		goto drop;
> +
> +	/* we drop only if the receive buf is full and the receive
> +	 * queue contains some other skb
> +	 */
> +	rmem = atomic_add_return(size, &sk->sk_rmem_alloc);
> +	if ((rmem > sk->sk_rcvbuf) && (rmem > size))
> +		goto uncharge_drop;
> +
> +	skb_orphan(skb);
Minor point:
Shouldn't UDP skbs already be orphaned at this point? (The receive path uses skb_steal_sock())