While we possibly won't need to use sk_buff_head.lock here at all in
the future, getting rid of it is a rather large change, as it affects
the af_unix fd garbage collector, diag and socket cleanups. That is
too much for a stable patch.

For the time being, grab sk_buff_head.lock without disabling bh and
irqs, i.e. don't use the locked skb_queue_tail() helper.

Fixes: 869e7c62486e ("net: af_unix: implement stream sendpage support")
Cc: Eric Dumazet <eduma...@google.com>
Signed-off-by: Hannes Frederic Sowa <han...@stressinduktion.org>
---
I don't think we have a bug report for this; it was found by code
inspection by Eric and myself.
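
For reference, a minimal sketch of the difference (simplified; the
locked helper lives in net/core/skbuff.c, and the snippet below just
mirrors the hunk in this patch):

    /* skb_queue_tail() is the locked variant: it takes the queue lock
     * and disables irqs around the append (roughly as in
     * net/core/skbuff.c).
     */
    void skb_queue_tail(struct sk_buff_head *list, struct sk_buff *newsk)
    {
            unsigned long flags;

            spin_lock_irqsave(&list->lock, flags);
            __skb_queue_tail(list, newsk);
            spin_unlock_irqrestore(&list->lock, flags);
    }

    /* This patch instead takes the lock with a plain spin_lock, without
     * disabling bh/irqs, around the unlocked __skb_queue_tail():
     */
    spin_lock(&other->sk_receive_queue.lock);
    __skb_queue_tail(&other->sk_receive_queue, newskb);
    spin_unlock(&other->sk_receive_queue.lock);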

 net/unix/af_unix.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/net/unix/af_unix.c b/net/unix/af_unix.c
index a8352db..955ec15 100644
--- a/net/unix/af_unix.c
+++ b/net/unix/af_unix.c
@@ -1813,8 +1813,11 @@ alloc_skb:
        skb->truesize += size;
        atomic_add(size, &sk->sk_wmem_alloc);
 
-       if (newskb)
+       if (newskb) {
+               spin_lock(&other->sk_receive_queue.lock);
                __skb_queue_tail(&other->sk_receive_queue, newskb);
+               spin_unlock(&other->sk_receive_queue.lock);
+       }
 
        unix_state_unlock(other);
        mutex_unlock(&unix_sk(other)->readlock);
-- 
2.5.0
