On Wed, 2016-09-28 at 18:42 -0700, Eric Dumazet wrote:
> On Wed, 2016-09-28 at 12:52 +0200, Paolo Abeni wrote:
> 
> > +static void udp_rmem_release(struct sock *sk, int partial)
> > +{
> > +   struct udp_sock *up = udp_sk(sk);
> > +   int fwd, amt;
> > +
> > +   if (partial && !udp_under_memory_pressure(sk))
> > +           return;
> > +
> > +   /* we can have concurrent release; if we catch any conflict
> > +    * we let only one of them do the work
> > +    */
> > +   if (atomic_dec_if_positive(&up->can_reclaim) < 0)
> > +           return;
> > +
> > +   fwd = __udp_forward(up, atomic_read(&sk->sk_rmem_alloc));
> > +   if (fwd < SK_MEM_QUANTUM + partial) {
> > +           atomic_inc(&up->can_reclaim);
> > +           return;
> > +   }
> > +
> > +   amt = (fwd - partial) & ~(SK_MEM_QUANTUM - 1);
> > +   atomic_sub(amt, &up->mem_allocated);
> > +   atomic_inc(&up->can_reclaim);
> > +
> > +   __sk_mem_reduce_allocated(sk, amt >> SK_MEM_QUANTUM_SHIFT);
> > +   sk->sk_forward_alloc = fwd - amt;
> > +}
> 
> 
> This is racy... all these atomics make me nervous...

Ah, perhaps I got it: if a memory scheduling runs concurrently with
this release, we could end up with a value of mem_allocated below the
real need.

That mismatch will not drift: at worst we can end up with mem_allocated
being a single SK_MEM_QUANTUM below what is strictly needed.
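
To make the window concrete, here is a minimal user-space sketch of
that interleaving (my illustration only, not kernel code: QUANTUM,
mem_allocated, release_path and schedule_path stand in for
SK_MEM_QUANTUM and the kernel scheduling/release paths):

/* build with: gcc -pthread sketch.c */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define QUANTUM 4096

static atomic_int mem_allocated = 2 * QUANTUM;

/* release path: snapshots the counter, then subtracts an amount
 * computed from that snapshot
 */
static void *release_path(void *arg)
{
        int snapshot = atomic_load(&mem_allocated);
        int amt = snapshot & ~(QUANTUM - 1);

        /* if schedule_path runs between the load above and the sub
         * below, the subtraction is based on a stale snapshot
         */
        atomic_fetch_sub(&mem_allocated, amt);
        return NULL;
}

/* scheduling path: reserves one more quantum concurrently */
static void *schedule_path(void *arg)
{
        atomic_fetch_add(&mem_allocated, QUANTUM);
        return NULL;
}

int main(void)
{
        pthread_t a, b;

        pthread_create(&a, NULL, release_path, NULL);
        pthread_create(&b, NULL, schedule_path, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);

        /* prints QUANTUM or 0 depending on the interleaving: the
         * outcome can be one QUANTUM below what the scheduler expects
         */
        printf("mem_allocated = %d\n", atomic_load(&mem_allocated));
        return 0;
}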

A possible alternative could be:

static void udp_rmem_release(struct sock *sk, int partial)
{
        struct udp_sock *up = udp_sk(sk);
        int fwd, amt, alloc_old, alloc;

        if (partial && !udp_under_memory_pressure(sk))
                return;

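        /* snapshot the forward-allocated memory; 'fwd' below is the
         * share not currently used by queued packets
         */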
        alloc = atomic_read(&up->mem_allocated);
        fwd = alloc - atomic_read(&sk->sk_rmem_alloc);
        if (fwd < SK_MEM_QUANTUM + partial)
                return;

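        /* round the reclaimed amount down to a whole number of quanta */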
        amt = (fwd - partial) & ~(SK_MEM_QUANTUM - 1);
        alloc_old = atomic_cmpxchg(&up->mem_allocated, alloc, alloc - amt);
        /* if a concurrent update is detected, just do nothing; if said
         * update is due to another memory release, that release takes
         * care of reclaiming the memory for us, too.
         * Otherwise we will be able to release on a later dequeue, since
         * we will eventually stop colliding with the writer once it has
         * consumed all the forward-allocated memory
         */
        if (alloc_old != alloc)
                return;

        __sk_mem_reduce_allocated(sk, amt >> SK_MEM_QUANTUM_SHIFT);
        sk->sk_forward_alloc = fwd - amt;
}

which is even lazier in reclaiming, but should never underestimate the
needed forward allocation, and under memory pressure should eventually
free the required memory.
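
For what it's worth, here is the same single-winner idiom in isolation,
as a user-space sketch with C11 atomics (try_release, allocated and
QUANTUM are illustrative names of mine, not the kernel's):

#include <stdatomic.h>
#include <stdbool.h>

#define QUANTUM 4096

static atomic_int allocated = 8 * QUANTUM;

/* attempt one release; on a conflicting update we bail out and let
 * the concurrent updater own the accounting, exactly as above
 */
static bool try_release(int in_use)
{
        int alloc = atomic_load(&allocated);
        int fwd = alloc - in_use;               /* reclaimable slack */
        int amt = fwd & ~(QUANTUM - 1);         /* whole quanta only */

        if (amt <= 0)
                return false;

        /* succeeds only if nobody touched 'allocated' since the read
         * above; a failed exchange means someone else raced with us
         */
        return atomic_compare_exchange_strong(&allocated, &alloc,
                                              alloc - amt);
}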


