Hi Bruce,
On Jan 30, 2008 5:25 PM, Bruce Allen <[EMAIL PROTECTED]> wrote:
>
> In our application (cluster computing) we use a very tightly coupled
> high-speed low-latency network. There is no 'wide area traffic'. So it's
> hard for me to understand why any networking components or software layers
On Jan 9, 2008 9:56 AM, John Heffner <[EMAIL PROTECTED]> wrote:
> >> I also wonder how much of a problem this is (for now, with window sizes
> >> of order 1 packets). My understanding is that the biggest problems
> >> arise from O(N^2) time for recovery because every ack was expensive.
> >> Hav
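For context on the O(N^2) point: if every ACK during recovery triggers a full walk of the retransmit queue, and a window of N packets generates on the order of N ACKs per RTT, one recovery episode costs O(N^2) segment visits. A schematic sketch in C, not the actual kernel SACK code:

/* Schematic only -- not the Linux SACK implementation.
 * With N outstanding segments and ~N ACKs during recovery,
 * an O(N) scan per ACK makes the whole episode O(N^2). */
struct seg {
        unsigned int start, end;
        int sacked;
        struct seg *next;
};

void on_sack(struct seg *rtx_queue,
             unsigned int sack_start, unsigned int sack_end)
{
        struct seg *s;

        for (s = rtx_queue; s != 0; s = s->next)   /* O(N) per ACK */
                if (s->start >= sack_start && s->end <= sack_end)
                        s->sacked = 1;
}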
Hi Gavin,
This is fixed in the current version of tcp_probe by Stephen. Please
see the commit below.
commit 662ad4f8efd3ba2ed710d36003f968b500e6f123
Author: Stephen Hemminger <[EMAIL PROTECTED]>
Date: Wed Jul 11 19:43:52 2007 -0700
[TCP]: tcp probe wraparound handling and other changes
Switch
Just a fix to correct the number of printl arguments. Now, srtt is logged
correctly.
Signed-off-by: Sangtae Ha <[EMAIL PROTECTED]>
---
 net/ipv4/tcp_probe.c |    2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a/net/ipv4/tcp_probe.c b/net/ipv4/tcp_probe.c
index 3
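The diff body is cut off above. For readers who just want the flavor of the bug, here is an illustration of the class of problem (a printf-style format/argument mismatch), using plain printf and made-up values rather than tcp_probe's printl() -- this is a sketch, not the verbatim kernel patch:

#include <stdio.h>

int main(void)
{
        unsigned int snd_nxt = 0x1000, snd_una = 0x0800;
        unsigned int cwnd = 10, srtt = 42;

        /* Buggy shape: four conversions but only three arguments, so
         * the final %u -- the one meant for srtt -- reads garbage:
         *
         *   printf("%#x %#x %u %u\n", snd_nxt, snd_una, cwnd);
         *
         * Fixed shape: argument count matches the format string. */
        printf("%#x %#x %u %u\n", snd_nxt, snd_una, cwnd, srtt);
        return 0;
}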
Hi Bill,
This is the small patch that has been applied to 2.6.22.
Also, there is "limited slow start", which is an experimental RFC
(RFC3742), to surmount this large increase during slow start.
But, your kernel might not have this. Please check there is a sysctl
variable "tcp_max_ssthresh".
Than
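For reference, the RFC 3742 rule is simple: below max_ssthresh, slow start behaves as usual; above it, cwnd grows by at most max_ssthresh/2 segments per RTT instead of doubling. A minimal per-RTT sketch in C (a simplification of the per-ACK rule in the RFC and in the kernel's tcp_slow_start(); cwnd counted in segments):

/* Per-RTT view of RFC 3742 limited slow start (sketch only; the
 * kernel applies an equivalent per-ACK rule). */
unsigned int cwnd_after_one_rtt(unsigned int cwnd, unsigned int max_ssthresh)
{
        if (max_ssthresh == 0 || cwnd <= max_ssthresh)
                return 2 * cwnd;                /* classic doubling */
        return cwnd + max_ssthresh / 2;         /* capped linear growth */
}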
aggressive initial "slow start" behavior with the better performance
of bic or cubic during the subsequent steady state portion of the
TCP session.
I can of course achieve that objective by setting initial_ssthresh
to 0, but perhaps that should be made the default behavior.
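For what it's worth, initial_ssthresh is exposed as a tcp_cubic module parameter on these kernels, so it can be set at load time (modprobe tcp_cubic initial_ssthresh=0) or, on a loaded module, by writing to /sys/module/tcp_cubic/parameters/initial_ssthresh. Exact availability and the default value vary by kernel version, so check your tree before relying on this.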
Hi Bill,
At this time, BIC and CUBIC use a less aggressive slow start than
other protocols, because we observed that standard "slow start" is
somewhat aggressive and introduces a lot of packet losses. This may be
changed to standard "slow start" in a later version of BIC and CUBIC,
but, at this time, we still use the less aggressive variant.
Hi David,
I ran a couple of tests to see how limited slow start behaves with HSTCP.
For this testing, I set the max_ssthresh value to 100.
With standard slow start, it takes around 4 sec to hit a cwnd of 21862
(more than 6000 packet drops in one RTT).
With limited slow start, it takes 108 sec to hit the same cwnd.
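As a sanity check, those figures roughly match back-of-the-envelope math. The RTT is not stated in the original mail; the ~250 ms below is an assumption chosen to fit, so treat this as illustrative arithmetic, not a measurement:

#include <math.h>
#include <stdio.h>

int main(void)
{
        const double rtt = 0.25;            /* seconds -- assumed, not in the mail */
        const double target = 21862.0;      /* cwnd reached, in segments */
        const double max_ssthresh = 100.0;  /* value used in the test */

        /* Standard slow start doubles cwnd every RTT: ~log2(target) RTTs. */
        double t_std = ceil(log2(target)) * rtt;

        /* Limited slow start (RFC 3742) adds ~max_ssthresh/2 segments
         * per RTT once cwnd exceeds max_ssthresh. */
        double t_lss = (target - max_ssthresh) / (max_ssthresh / 2.0) * rtt;

        printf("standard slow start: ~%.0f s\n", t_std);   /* ~4 s */
        printf("limited slow start:  ~%.0f s\n", t_lss);   /* ~109 s */
        return 0;
}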
Hi all,
See the TCP testing results for the net-2.6.22.git tree at
http://netsrv.csc.ncsu.edu/wiki/index.php/Intra_protocol_fairness_testing_with_net-2.6.22.git
In addition to Stephen's recent 1Mbit DSL result, the results include
the cases with four different bandwidths (10M/100M/200M/400M) and
background traffic.
I also noticed this happening with the 2.6.18 kernel, but it was
not severe with Linux 2.6.20.3. So, the short-term solution will be
upgrading to the latest FC-6 kernel.
A long blackout is mostly observed when a lot of packet losses
happen in slow start. You can prevent this by applying
Hi Baruch,
I would like to add some comments on your argument.
On 2/13/07, Baruch Even <[EMAIL PROTECTED]> wrote:
* David Miller <[EMAIL PROTECTED]> [070213 00:53]:
> From: Baruch Even <[EMAIL PROTECTED]>
> Date: Tue, 13 Feb 2007 00:12:41 +0200
>
> > The problem is that you actually put a mostly