On Thu, 2018-01-25 at 20:08 +0800, Li RongQing wrote:
> Clear sk_frag.page in the newly cloned socket; otherwise the page will
> be wrongly released twice, since the page's reference count is not
> increased for the clone.
> 
> sk_clone_lock() is used to clone a new socket from a socket in the
> listening state, which normally has no sk_frag.page. However, a socket
> that has sent data can be transformed back into a listening socket,
> and it will then allocate a tcp_sock through sk_clone_lock() when a
> new connection comes in.
> 
> Signed-off-by: Li RongQing <lirongq...@baidu.com>
> Cc: Eric Dumazet <eduma...@google.com>
> ---
>  net/core/sock.c | 2 ++
>  1 file changed, 2 insertions(+)
> 
> diff --git a/net/core/sock.c b/net/core/sock.c
> index c0b5b2f17412..c845856f26da 100644
> --- a/net/core/sock.c
> +++ b/net/core/sock.c
> @@ -1738,6 +1738,8 @@ struct sock *sk_clone_lock(const struct sock *sk, const gfp_t priority)
>               sk_refcnt_debug_inc(newsk);
>               sk_set_socket(newsk, NULL);
>               newsk->sk_wq = NULL;
> +             newsk->sk_frag.page = NULL;
> +             newsk->sk_frag.offset = 0;
>  
>               if (newsk->sk_prot->sockets_allocated)
>                       sk_sockets_allocated_inc(newsk);

Good catch.

I suspect this was discovered by some syzkaller/syzbot run ?

I would rather move that into tcp_disconnect(), which only fuzzers use,
instead of doing this on every clone and slowing down normal users.
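
Something like the following, perhaps (untested sketch against
net/ipv4/tcp.c; the hunk location is illustrative, not an actual
line-numbered patch). Note that in tcp_disconnect() the socket still
owns its reference on sk_frag.page, so it must be dropped with
put_page() rather than simply cleared:

	--- a/net/ipv4/tcp.c
	+++ b/net/ipv4/tcp.c
	@@ ... @@ int tcp_disconnect(struct sock *sk, int flags)
	+	/* Release the cached page fragment so that a later
	+	 * sk_clone_lock() of this (now listening) socket does
	+	 * not inherit a stale page reference.
	+	 */
	+	if (sk->sk_frag.page) {
	+		put_page(sk->sk_frag.page);
	+		sk->sk_frag.page = NULL;
	+		sk->sk_frag.offset = 0;
	+	}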
