From: Eric Dumazet <eduma...@google.com>
Date: Fri, 19 Jul 2019 11:52:33 -0700
> Some applications set tiny SO_SNDBUF values and expect
> TCP to just work. Recent patches to address CVE-2019-11478
> broke them in case of losses, since retransmits might
> be prevented.
>
> We should allow these flows to make progress.
>
> This patch allows the first and last skb in retransmit queue
> to be split even if memory limits are hit.
>
> It also adds some room due to the fact that tcp_sendmsg()
> and tcp_sendpage() might overshoot sk_wmem_queued by about one full
> TSO skb (64KB size). Note this allowance was already present
> in stable backports for kernels < 4.15
>
> Note for < 4.15 backports :
> tcp_rtx_queue_tail() will probably look like :
>
> static inline struct sk_buff *tcp_rtx_queue_tail(const struct sock *sk)
> {
> 	struct sk_buff *skb = tcp_send_head(sk);
>
> 	return skb ? tcp_write_queue_prev(sk, skb) : tcp_write_queue_tail(sk);
> }
>
> Fixes: f070ef2ac667 ("tcp: tcp_fragment() should apply sane memory limits")
> Signed-off-by: Eric Dumazet <eduma...@google.com>
> Reported-by: Andrew Prout <apr...@ll.mit.edu>
> Tested-by: Andrew Prout <apr...@ll.mit.edu>
> Tested-by: Jonathan Lemon <jonathan.le...@gmail.com>
> Tested-by: Michal Kubecek <mkube...@suse.cz>
> Acked-by: Neal Cardwell <ncardw...@google.com>
> Acked-by: Yuchung Cheng <ych...@google.com>
> Acked-by: Christoph Paasch <cpaa...@apple.com>
> Cc: Jonathan Looney <j...@netflix.com>

Applied and queued up for -stable.
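For readers following along, the relaxed check the changelog describes can be modeled in userspace C. This is a hedged sketch, not the actual kernel code: the function name `fragment_denied`, its parameters, and the `TSO_SKB_SIZE` constant are illustrative assumptions standing in for the real `tcp_fragment()` logic and `sk_wmem_queued`/`sk_sndbuf` fields.

```c
#include <stdbool.h>

/* Hypothetical model of the relaxed memory-limit check described in the
 * changelog above. Names and constants are illustrative, not the kernel
 * source. */
#define TSO_SKB_SIZE (64 * 1024) /* ~one full TSO skb, per the changelog */

static bool fragment_denied(long wmem_queued, long sndbuf,
                            bool in_rtx_queue, bool is_first, bool is_last)
{
	/* Extra headroom: tcp_sendmsg()/tcp_sendpage() may overshoot
	 * sk_wmem_queued by about one full 64KB TSO skb. */
	long limit = sndbuf + TSO_SKB_SIZE;

	/* The first and last skb in the retransmit queue may always be
	 * split, so tiny-SO_SNDBUF flows can still make progress after
	 * losses. */
	if (in_rtx_queue && (is_first || is_last))
		return false;

	return wmem_queued > limit;
}
```

With a tiny SO_SNDBUF (say 4KB) and queued memory well past the limit, only the head and tail of the retransmit queue remain splittable; everything else is still refused, preserving the sane-limits protection of f070ef2ac667.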