SANGTAE HA wrote:
> On Jan 9, 2008 9:56 AM, John Heffner <[EMAIL PROTECTED]> wrote:
>>>> I also wonder how much of a problem this is (for now, with window
>>>> sizes of order 10000 packets).  My understanding is that the biggest
>>>> problems arise from O(N^2) time for recovery because every ack was
>>>> expensive.  Have current tests shown the final ack to be a major
>>>> source of problems?
>>> Yes, several people have reported this.
>> I may have missed some of this.  Does anyone have a link to some
>> recent data?
>
> I did some testing on this a month ago.
> A small set of recent results with Linux 2.6.23.9 is at
> http://netsrv.csc.ncsu.edu/net-2.6.23.9/sack_efficiency
> One serious case, with a large number of packet losses (the initial
> loss is around 8000 packets), is at
> http://netsrv.csc.ncsu.edu/net-2.6.23.9/sack_efficiency/600--TCP-TCP-NONE--400-3-1.0--1000-120-0-0-1-1-5-500--1.0-0.5-133000-73-3000000-0.93-150--3/
>
> Also, there is a comparison among three Linux kernels (2.6.13,
> 2.6.18-rc4, and 2.6.20.3) at
> http://netsrv.csc.ncsu.edu/wiki/index.php/Efficiency_of_SACK_processing
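
As an aside, the O(N^2) figure quoted above falls out of a quick count:
if every cumulative ACK triggers a linear walk of the retransmit queue,
and recovering a full window of W outstanding segments takes roughly W
ACKs, the total work is W + (W-1) + ... + 1 = W(W+1)/2.  A throwaway
shell sketch (purely illustrative, not kernel code):

```shell
# Count queue-walk steps if each ACK scans the remaining retransmit
# queue and then retires one segment; the total is W(W+1)/2 = O(W^2).
W=1000
steps=0
outstanding=$W
while [ "$outstanding" -gt 0 ]; do
  steps=$((steps + outstanding))    # linear scan of the queue
  outstanding=$((outstanding - 1))  # cumulative ACK frees one segment
done
echo "$steps"   # 500500 for W=1000
```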


If I'm reading this right, all these tests occur with large amounts of
loss and tons of SACK processing.  What would be most pertinent to this
discussion is a test with a large window, with delayed ACK and SACK
disabled, and a single loss repaired by fast retransmit.  This would
isolate the "single big ack" processing from other factors such as a
doubled ACK rate and SACK processing.

I could probably set up such a test, but I don't want to duplicate
effort if someone else has already done something similar.
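
For what it's worth, a minimal sketch of such a setup on a Linux
sender might look like the following.  This is a config fragment, not a
tested recipe: the interface name, delay, loss rate, and buffer sizes
are placeholders, and forcing exactly one drop really wants a
deterministic drop at a middlebox rather than netem's random loss.

```shell
# Hypothetical setup sketch -- eth0 and all numbers are placeholders.

# Disable SACK (and DSACK) so loss is repaired by plain fast retransmit:
sysctl -w net.ipv4.tcp_sack=0
sysctl -w net.ipv4.tcp_dsack=0

# Large socket buffers so the window can grow to ~10000 packets:
sysctl -w net.ipv4.tcp_rmem="4096 87380 536870912"
sysctl -w net.ipv4.tcp_wmem="4096 65536 536870912"

# Add path delay so the bandwidth-delay product is large; a very low
# loss rate approximates "a single loss" over a short transfer:
tc qdisc add dev eth0 root netem delay 100ms loss 0.001%

# Note: there is no global knob for delayed ACK; the receiver
# application can set TCP_QUICKACK on its socket to approximate
# disabling it.
```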

Thanks,
  -John
--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
