On 11/28/2018 06:44 AM, Eric Dumazet wrote:
> 
> Because we can break out of the loop if the current skb is not fully acked.
> 
> So your patch would add unnecessary overhead, since the extra skb_rb_next()
> could add extra cache line misses.

I am testing the following optimization, since we can avoid the rb_next() call
once we have reached snd_una:

diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
index f32397890b6dcbc34976954c4be142108efa04d8..6829e470f0c186a73c34dca414cd4a2b379baded 100644
--- a/net/ipv4/tcp_input.c
+++ b/net/ipv4/tcp_input.c
@@ -3126,7 +3126,8 @@ static int tcp_clean_rtx_queue(struct sock *sk, u32 prior_fack,
                if (!fully_acked)
                        break;
 
-               next = skb_rb_next(skb);
+               next = (scb->end_seq == tp->snd_una) ? NULL : skb_rb_next(skb);
+
                if (unlikely(skb == tp->retransmit_skb_hint))
                        tp->retransmit_skb_hint = NULL;
                if (unlikely(skb == tp->lost_skb_hint))
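
For readers outside the tree, here is a minimal userspace sketch of the idea,
not the kernel code: the retransmit queue is modeled as a plain list of
segments (the real queue is an rbtree walked with skb_rb_next()), and the
names struct seg, seg_fully_acked() and count_acked_bytes() are made up for
the example. The point is only that the next-node dereference is skipped once
the current segment ends exactly at snd_una, since nothing beyond it can have
been acked by this ACK.

#include <stdint.h>
#include <stdio.h>

struct seg {
	uint32_t start_seq;
	uint32_t end_seq;
	struct seg *next;	/* stands in for the skb_rb_next() walk */
};

/* A segment is fully acked once its end_seq does not pass snd_una. */
static int seg_fully_acked(const struct seg *skb, uint32_t snd_una)
{
	return (int32_t)(snd_una - skb->end_seq) >= 0;
}

static uint32_t count_acked_bytes(const struct seg *skb, uint32_t snd_una)
{
	uint32_t acked = 0;

	while (skb) {
		const struct seg *next;

		if (!seg_fully_acked(skb, snd_una))
			break;

		/* The point of the patch above: the pointer chase to the
		 * next node is a likely cache line miss, so skip it when the
		 * current segment already ends at snd_una, because nothing
		 * beyond it can be acked.
		 */
		next = (skb->end_seq == snd_una) ? NULL : skb->next;

		acked += skb->end_seq - skb->start_seq;
		skb = next;
	}
	return acked;
}

int main(void)
{
	struct seg s3 = { 2000, 3000, NULL };
	struct seg s2 = { 1000, 2000, &s3 };
	struct seg s1 = {    0, 1000, &s2 };

	/* snd_una == 2000: s1 and s2 are consumed, s3 is never dereferenced. */
	printf("acked bytes: %u\n", count_acked_bytes(&s1, 2000));
	return 0;
}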
