On Tue, 8 Dec 2020 17:19:10 +0800 Cambda Zhu wrote:
> For each TCP zero window probe, icsk_backoff is increased by one, and
> its maximum value is tcp_retries2. If tcp_retries2 is greater than 63,
> the probe0 timeout shift count may exceed the width of the type. On
> x86_64/ARMv8/MIPS the shift count is masked into the range 0 to 63,
> while on ARMv7 the result is zero. If the shift count is masked, only a
> few probes are sent with a timeout shorter than TCP_RTO_MAX; but if the
> timeout becomes zero, it takes tcp_retries2 probes to end this spurious
> timeout. Besides, a bitwise shift greater than or equal to the width of
> the type is undefined behavior.
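To make the masking concrete, here is a quick userspace sketch (my own
illustration, not kernel code; the base value is a stand-in for
tcp_probe0_base()):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint64_t base = 200;		/* stand-in for tcp_probe0_base(), in ms */
	volatile unsigned backoff = 64;	/* volatile so the compiler cannot
					 * constant-fold the UB shift away */

	/*
	 * Shifting a 64-bit value by >= 64 is undefined behavior. In
	 * practice x86_64 masks the count to its low 6 bits, so this
	 * typically prints 200 (a shift of 0) there; other architectures
	 * may print 0 or anything else.
	 */
	printf("base << backoff = %llu\n",
	       (unsigned long long)(base << backoff));
	return 0;
}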
If icsk_backoff can reach 64, can it not also reach 256 and wrap?

Adding Eric's address from MAINTAINERS to CC.

> This patch adds a limit to the backoff. The maximum value of max_when
> is TCP_RTO_MAX and the minimum value of the timeout base is
> TCP_RTO_MIN, so the limit is the backoff needed to go from TCP_RTO_MIN
> to TCP_RTO_MAX.
>
> diff --git a/include/net/tcp.h b/include/net/tcp.h
> index d4ef5bf94168..82044179c345 100644
> --- a/include/net/tcp.h
> +++ b/include/net/tcp.h
> @@ -1321,7 +1321,9 @@ static inline unsigned long tcp_probe0_base(const struct sock *sk)
>  static inline unsigned long tcp_probe0_when(const struct sock *sk,
>  					    unsigned long max_when)
>  {
> -	u64 when = (u64)tcp_probe0_base(sk) << inet_csk(sk)->icsk_backoff;
> +	u8 backoff = min_t(u8, ilog2(TCP_RTO_MAX / TCP_RTO_MIN) + 1,
> +			   inet_csk(sk)->icsk_backoff);
> +	u64 when = (u64)tcp_probe0_base(sk) << backoff;
>
>  	return (unsigned long)min_t(u64, when, max_when);
>  }
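For what it's worth, with the kernel's TCP_RTO_MAX of 120*HZ and
TCP_RTO_MIN of HZ/5 (200 ms at HZ=1000), the cap works out to
ilog2(600) + 1 = 10, and TCP_RTO_MIN << 10 is already above TCP_RTO_MAX,
so clamping there cannot change what min_t() returns. A standalone
sketch checking that arithmetic (userspace reimplementation, HZ=1000
assumed, ilog2() rewritten for illustration):

#include <stdio.h>
#include <stdint.h>

#define HZ		1000
#define TCP_RTO_MAX	((unsigned)(120 * HZ))
#define TCP_RTO_MIN	((unsigned)(HZ / 5))

/* floor(log2(v)), mimicking the kernel's ilog2() for this check */
static unsigned ilog2(uint64_t v)
{
	unsigned r = 0;

	while (v >>= 1)
		r++;
	return r;
}

int main(void)
{
	unsigned max_backoff = ilog2(TCP_RTO_MAX / TCP_RTO_MIN) + 1;

	/* 120000 / 200 = 600, ilog2(600) = 9, so the cap is 10.
	 * 200 << 10 = 204800 ms > 120000 ms, meaning any larger shift
	 * is already cut off by min_t(u64, when, max_when).
	 */
	printf("cap = %u\n", max_backoff);
	printf("TCP_RTO_MIN << cap = %u ms, TCP_RTO_MAX = %u ms\n",
	       TCP_RTO_MIN << max_backoff, TCP_RTO_MAX);
	return 0;
}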