On 07/07/20 - 21:51, Eric Dumazet wrote:
> On Tue, Jul 7, 2020 at 9:43 PM Eric Dumazet <eduma...@google.com> wrote:
> >
> 
> > Could this be done instead in tcp_disconnect()?
> >
> 
> Note this might need to extend one of the changes done in commit 4d4d3d1e8807d6
> ("[TCP]: Congestion control initialization.")
> 
> diff --git a/net/ipv4/tcp_cong.c b/net/ipv4/tcp_cong.c
> index 3172e31987be4232af90e7b204742c5bb09ef6ca..62878cf26d9cc5c0ae44d5ecdadd0b7a5acf5365 100644
> --- a/net/ipv4/tcp_cong.c
> +++ b/net/ipv4/tcp_cong.c
> @@ -197,7 +197,7 @@ static void tcp_reinit_congestion_control(struct sock *sk,
>         icsk->icsk_ca_setsockopt = 1;
>         memset(icsk->icsk_ca_priv, 0, sizeof(icsk->icsk_ca_priv));
> 
> -       if (sk->sk_state != TCP_CLOSE)
> +       if (!((1 << sk->sk_state) & (TCPF_CLOSE | TCPF_LISTEN)))
>                 tcp_init_congestion_control(sk);
>  }

Yes, that would work as well. In tcp_disconnect() it would have to be a
tcp_cleanup_congestion_control() call followed by the memset to 0.
Otherwise we would leak memory whenever a connection that had CDG
allocate its gradients array is disconnected via connect(AF_UNSPEC).
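
Roughly what I have in mind for v2, as an untested sketch (assuming the
existing icsk local in tcp_disconnect() can be reused here):

	/* Let the old congestion control module release whatever its
	 * init hook allocated (for CDG, the kcalloc'ed gradients array),
	 * then wipe icsk_ca_priv so the next init starts from scratch.
	 */
	tcp_cleanup_congestion_control(sk);
	memset(icsk->icsk_ca_priv, 0, sizeof(icsk->icsk_ca_priv));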

Thanks for the suggestion, I will work on a v2.


Christoph
