Hello!
> At least for slow start it is safe, but experiments with atcp for
> netchannels showed that it is better not to send excessive number of
> acks when slow start is over,
If this thing is done from tcp_cleanup_rbuf(), it should not affect
performance too much.
Note that with ABC and other pathological cases which do not allow
sending more than a fixed number of segments [ we have lots of them,
f.e. when sending tiny segments we can hit the sndbuf limit ], we deal with
a case when slow start is _never_ over.
> instead we can introduce some tricky
> ack avoidance scheme and ack at least 2-3-4 packets or full MSS instead
> of two mss-sized frames.
One smart scheme was used at some stage (2000, probably never merged
in this form to mainstream): tcp counted the number of unacked small segments
in ack.rcv_small and kept a threshold in ack.rcv_thresh.
+
+	/* If we ever saw N>1 small segments from peer, it has
+	 * enough send buffer to send N packets and does not nagle.
+	 * Hence, we may delay acks more aggressively.
+	 */
+	if (tp->ack.rcv_small > tp->ack.rcv_thresh+1)
+		tp->ack.rcv_thresh = tp->ack.rcv_small-1;
+	tp->ack.rcv_small = 0;
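Just to illustrate the bookkeeping, here is a made-up user-space model of
that scheme (only rcv_small, rcv_thresh and the threshold update above come
from the old code; everything else is invented for illustration):

/* Toy model of the old rcv_small/rcv_thresh scheme; not the original patch. */
#include <stdio.h>

struct ack_state {
	unsigned int rcv_small;		/* small segments seen since last ACK */
	unsigned int rcv_thresh;	/* learned tolerance for delaying them */
};

/* Called for each small (sub-MSS) data segment received.
 * Returns 1 if this segment should force an immediate ACK.
 */
static int rcv_small_segment(struct ack_state *ack)
{
	ack->rcv_small++;
	return ack->rcv_small > ack->rcv_thresh;
}

/* Called when an ACK finally goes out: the update from the snippet above. */
static void sent_ack(struct ack_state *ack)
{
	if (ack->rcv_small > ack->rcv_thresh + 1)
		ack->rcv_thresh = ack->rcv_small - 1;
	ack->rcv_small = 0;
}

static void burst(struct ack_state *ack, int nsegs)
{
	int need_ack = 0;

	for (int i = 0; i < nsegs; i++)
		need_ack |= rcv_small_segment(ack);
	printf("%d small segs: rcv_small=%u rcv_thresh=%u -> %s\n",
	       nsegs, ack->rcv_small, ack->rcv_thresh,
	       need_ack ? "ack now" : "keep delaying");
	if (need_ack)
		sent_ack(ack);
}

int main(void)
{
	struct ack_state ack = { 0, 0 };

	burst(&ack, 4);		/* peer pushed 4 small segments without our ACK */
	burst(&ack, 3);		/* now up to 3 can be delayed quietly */
	return 0;
}

The point is that the threshold only grows after the peer has demonstrated
it can keep sending small segments without waiting for our ACKs.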
That scheme was too much trouble for such a simple thing. So, eventually
it was replaced with a much dumber scheme. Look at the current tcp_cleanup_rbuf():
it forces an ACK each time it sees that some small segment was received.
It has survived for 6 years, so I guess it did not hurt anybody. :-)
What I would suggest doing now is to replace:
	    (copied > 0 &&
	     (icsk->icsk_ack.pending & ICSK_ACK_PUSHED) &&
	     !icsk->icsk_ack.pingpong &&
	     !atomic_read(&sk->sk_rmem_alloc)))
		time_to_ack = 1;
with:
	    (copied > 0 &&
	     (icsk->icsk_ack.unacked > 1 ||
	      ((icsk->icsk_ack.pending & ICSK_ACK_PUSHED) &&
	       !icsk->icsk_ack.pingpong)) &&
	     !atomic_read(&sk->sk_rmem_alloc)))
		time_to_ack = 1;
I would not hesitate even for a minute if the variable "unacked" could be
calculated from some existing state variables.
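For what it's worth, the intent of that branch can be modelled in a few
lines of user-space C. The explicit "unacked" counter below is exactly the
new state I would prefer to avoid; the field and function names are invented,
and the pingpong/sk_rmem_alloc clauses are left out for brevity:

/* Rough model of the "unacked > 1" branch of the proposed check. */
#include <stdio.h>

struct ack_state {
	unsigned int unacked;	/* data segments received since our last ACK */
};

static void data_segment_received(struct ack_state *ack)
{
	ack->unacked++;
}

/* Models: copied > 0 && icsk->icsk_ack.unacked > 1 */
static int time_to_ack(const struct ack_state *ack, int copied)
{
	return copied > 0 && ack->unacked > 1;
}

static void ack_sent(struct ack_state *ack)
{
	ack->unacked = 0;
}

int main(void)
{
	struct ack_state ack = { 0 };

	for (int seg = 1; seg <= 6; seg++) {
		data_segment_received(&ack);
		/* Pretend the application reads the segment right away. */
		if (time_to_ack(&ack, 1)) {
			printf("segment %d: ack (unacked was %u)\n",
			       seg, ack.unacked);
			ack_sent(&ack);
		} else {
			printf("segment %d: delay\n", seg);
		}
	}
	return 0;
}

With the receive queue drained promptly, this forces an ACK once at least
two data segments are pending, instead of one ACK per small segment as the
current check does.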
Alexey