On Fri, 2016-04-01 at 04:01 +0200, Hannes Frederic Sowa wrote:
> I thought so first, as well. But given the double check for the
> spin_lock and the "mutex" we end up with the same result for the
> lockdep_sock_is_held check.
>
> Do you see other consequences?
Well, we release the spinlock in __release_sock().
So another thread could come in and acquire the socket, then call
mutex_acquire() while the first thread has not yet called
mutex_release().
So maybe lockdep will complain (but I do not know lockdep well enough
to tell).
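Roughly the interleaving I have in mind (a hypothetical timeline, not
tested, assuming mutex_release() ends up running after the slock is
dropped):

	CPU0: release_sock(sk)
	        ...
	        sk->sk_lock.owned = 0;	/* ownership dropped */
	        spin_unlock_bh(&sk->sk_lock.slock);
	CPU1: lock_sock(sk)		/* sees owned == 0, takes it */
	        mutex_acquire(&sk->sk_lock.dep_map, ...);
	CPU0:   mutex_release(&sk->sk_lock.dep_map, ...);	/* after CPU1's acquire */

If the annotation instead runs under the slock, right before clearing
sk_lock.owned, the acquire/release ordering lockdep sees matches the
real ownership handoff.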
So maybe the following would be better :
(Absolutely untested, really I need to take a break)
diff --git a/include/net/sock.h b/include/net/sock.h
index 255d3e03727b..7d5dfa7e1918 100644
--- a/include/net/sock.h
+++ b/include/net/sock.h
@@ -1327,7 +1327,13 @@ static inline void sk_wmem_free_skb(struct sock *sk, struct sk_buff *skb)
 static inline void sock_release_ownership(struct sock *sk)
 {
-	sk->sk_lock.owned = 0;
+	if (sk->sk_lock.owned) {
+		/*
+		 * The sk_lock has mutex_unlock() semantics:
+		 */
+		mutex_release(&sk->sk_lock.dep_map, 1, _RET_IP_);
+		sk->sk_lock.owned = 0;
+	}
 }
 
 /*
diff --git a/net/core/sock.c b/net/core/sock.c
index b67b9aedb230..c7ab98e72346 100644
--- a/net/core/sock.c
+++ b/net/core/sock.c
@@ -2429,10 +2429,6 @@ EXPORT_SYMBOL(lock_sock_nested);
 void release_sock(struct sock *sk)
 {
-	/*
-	 * The sk_lock has mutex_unlock() semantics:
-	 */
-	mutex_release(&sk->sk_lock.dep_map, 1, _RET_IP_);
 	spin_lock_bh(&sk->sk_lock.slock);
 	if (sk->sk_backlog.tail)