From: Eric Dumazet <[email protected]>
Date: Fri, 19 Jul 2019 11:52:33 -0700
> Some applications set tiny SO_SNDBUF values and expect
> TCP to just work. Recent patches to address CVE-2019-11478
> broke them in case of losses, since retransmits might
> be prevented.
>
> We should allow these flows to make progress.
>
> This patch allows the first and last skb in retransmit queue
> to be split even if memory limits are hit.
>
> It also adds some room to account for the fact that tcp_sendmsg()
> and tcp_sendpage() might overshoot sk_wmem_queued by about one full
> TSO skb (64KB in size). Note that this allowance was already present
> in stable backports for kernels < 4.15.
>
> Note for < 4.15 backports:
> tcp_rtx_queue_tail() will probably look like:
>
> static inline struct sk_buff *tcp_rtx_queue_tail(const struct sock *sk)
> {
> 	struct sk_buff *skb = tcp_send_head(sk);
>
> 	return skb ? tcp_write_queue_prev(sk, skb) : tcp_write_queue_tail(sk);
> }
>
> Fixes: f070ef2ac667 ("tcp: tcp_fragment() should apply sane memory limits")
> Signed-off-by: Eric Dumazet <[email protected]>
> Reported-by: Andrew Prout <[email protected]>
> Tested-by: Andrew Prout <[email protected]>
> Tested-by: Jonathan Lemon <[email protected]>
> Tested-by: Michal Kubecek <[email protected]>
> Acked-by: Neal Cardwell <[email protected]>
> Acked-by: Yuchung Cheng <[email protected]>
> Acked-by: Christoph Paasch <[email protected]>
> Cc: Jonathan Looney <[email protected]>
Applied and queued up for -stable.