Hi Bill,
I started musing about whether, once one side's transmitter got the
upper hand, it might somehow defer the processing of received packets,
causing the resultant ACKs to be delayed and thus further slowing down
the other end's transmitter. I began to wonder whether the txqueuelen
could have an effect here.
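In case anyone wants to poke at this, here is a minimal sketch (Linux
assumed; the interface name "eth0" is just an example) that reads the
current txqueuelen via the SIOCGIFTXQLEN ioctl:

/* qlen.c -- print an interface's txqueuelen (Linux). */
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
#include <linux/sockios.h>

int main(void)
{
        struct ifreq ifr;
        int fd = socket(AF_INET, SOCK_DGRAM, 0);

        if (fd < 0) {
                perror("socket");
                return 1;
        }
        memset(&ifr, 0, sizeof(ifr));
        strncpy(ifr.ifr_name, "eth0", IFNAMSIZ - 1); /* example name */
        if (ioctl(fd, SIOCGIFTXQLEN, &ifr) < 0) {
                perror("SIOCGIFTXQLEN");
                return 1;
        }
        printf("txqueuelen = %d\n", ifr.ifr_qlen);
        return 0;
}

Changing it for an experiment is the same ioctl with SIOCSIFTXQLEN, or
simply ifconfig eth0 txqueuelen N.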
Hi Auke,
Based on the discussion in this thread, I am inclined to believe that
lack of PCI-e bus bandwidth is NOT the issue. The theory is that the
extra packet handling associated with TCP acknowledgements is pushing
the PCI-e x1 bus past its limits. However, the evidence seems to show
otherwise.
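Rough numbers make the same point. A small sketch of the arithmetic
(nominal PCI-e gen1 x1 figures; real DMA efficiency is lower because of
TLP and flow-control overhead, but the margin is large):

/* pcie_budget.c -- rough bandwidth budget for GigE over PCI-e gen1 x1. */
#include <stdio.h>

int main(void)
{
        /* PCI-e gen1 x1: 2.5 GT/s with 8b/10b encoding ->
         * 2.0 Gb/s of raw data bandwidth per direction. */
        double pcie_gbps = 2.5 * 8.0 / 10.0;

        /* Full-duplex wire-speed GigE needs ~1 Gb/s of payload DMA
         * each way, plus the small reverse-direction ACK traffic. */
        double gige_gbps = 1.0;

        printf("PCI-e x1 per direction:    %.1f Gb/s\n", pcie_gbps);
        printf("GigE demand per direction: ~%.1f Gb/s (plus ACKs)\n",
               gige_gbps);
        printf("headroom: ~%.0f%%\n", (pcie_gbps / gige_gbps - 1) * 100);
        return 0;
}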
Hi Bill,
I see similar results on my test systems.
Thanks for this report and for confirming our observations. Could you
please confirm that a single-port bidirectional UDP link runs at wire
speed? This helps to localize the problem to the TCP stack or to the
interaction of the TCP stack with the e1000 driver.
Hi Auke,
Important note: we ARE able to get full duplex wire speed (over 900
Mb/s simultaneously in both directions) using UDP. The problems occur
only with TCP connections.
That eliminates bus bandwidth issues, probably, but small packets take
up a lot of extra descriptors, bus bandwidth, and CPU time.
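To put numbers on the small-packet load, here is a sketch of the
per-second descriptor traffic at wire speed (assuming a 1500-byte MTU
and the usual delayed ACK of roughly one ACK per two segments):

/* ack_load.c -- rough per-second frame/descriptor load at GigE wire speed. */
#include <stdio.h>

int main(void)
{
        /* on-wire cost of a full-sized frame:
         * preamble+SFD 8 + eth header 14 + payload 1500 + FCS 4 + IFG 12 */
        double slot   = 8 + 14 + 1500 + 4 + 12;  /* 1538 bytes */
        double frames = 1e9 / 8.0 / slot;        /* data frames/s */
        double acks   = frames / 2.0;            /* delayed ACKs/s */

        printf("data frames/s per direction: %.0f\n", frames);
        printf("ACKs/s per direction:        %.0f\n", acks);
        printf("descriptors/s per NIC (bidirectional test): %.0f\n",
               2 * frames + 2 * acks);
        return 0;
}

Each of those ~40k ACKs per second per direction costs a descriptor
and a DMA transaction even though it carries almost no payload.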
Hi David,
Could this be an issue with pause frames? At a previous job I remember
having issues with a similar configuration using two Broadcom SB1250
three-port GigE devices. If I ran bidirectional tests on a single pair
of ports connected via crossover, it was slower than when I gave each
direction its own pair of ports.
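Pause frame state is easy to check. A minimal sketch (Linux; "eth0"
again just an example) that queries the negotiated pause settings via
the ETHTOOL_GPAUSEPARAM ioctl -- the same information ethtool -a eth0
prints:

/* pause.c -- query RX/TX pause frame settings (Linux). */
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
#include <linux/types.h>
#include <linux/ethtool.h>
#include <linux/sockios.h>

int main(void)
{
        struct ethtool_pauseparam pp = { .cmd = ETHTOOL_GPAUSEPARAM };
        struct ifreq ifr;
        int fd = socket(AF_INET, SOCK_DGRAM, 0);

        memset(&ifr, 0, sizeof(ifr));
        strncpy(ifr.ifr_name, "eth0", IFNAMSIZ - 1); /* example name */
        ifr.ifr_data = (void *)&pp;
        if (ioctl(fd, SIOCETHTOOL, &ifr) < 0) {
                perror("ETHTOOL_GPAUSEPARAM");
                return 1;
        }
        printf("autoneg %u  rx_pause %u  tx_pause %u\n",
               pp.autoneg, pp.rx_pause, pp.tx_pause);
        return 0;
}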
Hi Andi!
Important note: we ARE able to get full duplex wire speed (over 900
Mb/s simultaneously in both directions) using UDP. The problems occur
only with TCP connections.
Another issue with full duplex TCP not mentioned yet is that if TSO is
used, the output will be somewhat bursty and might upset the ACK clock
of the traffic flowing in the other direction.
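TSO is easy to rule in or out. A sketch (Linux; "eth0" is an example
name) that reads and then clears the TSO flag through the
ETHTOOL_GTSO/ETHTOOL_STSO ioctls, equivalent to ethtool -K eth0 tso off:

/* tso.c -- read and disable TCP segmentation offload (Linux). */
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
#include <linux/types.h>
#include <linux/ethtool.h>
#include <linux/sockios.h>

static int do_ethtool(int fd, struct ifreq *ifr, void *data)
{
        ifr->ifr_data = data;
        return ioctl(fd, SIOCETHTOOL, ifr);
}

int main(void)
{
        struct ethtool_value ev = { .cmd = ETHTOOL_GTSO };
        struct ifreq ifr;
        int fd = socket(AF_INET, SOCK_DGRAM, 0);

        memset(&ifr, 0, sizeof(ifr));
        strncpy(ifr.ifr_name, "eth0", IFNAMSIZ - 1); /* example name */

        if (do_ethtool(fd, &ifr, &ev) < 0) {
                perror("ETHTOOL_GTSO");
                return 1;
        }
        printf("tso was %s\n", ev.data ? "on" : "off");

        ev.cmd  = ETHTOOL_STSO;  /* now turn it off for the test */
        ev.data = 0;
        if (do_ethtool(fd, &ifr, &ev) < 0)
                perror("ETHTOOL_STSO (needs root)");
        return 0;
}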
Hi Sangtae,
Thanks for joining this discussion -- it's good to have a CUBIC author
and expert here!
In our application (cluster computing) we use a very tightly coupled
high-speed, low-latency network. There is no 'wide area traffic'. So
it's hard for me to understand why any networking component tuned for
wide-area conditions should come into play here.
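One cheap experiment is to take CUBIC out of the picture for a single
connection. A sketch using the per-socket TCP_CONGESTION option (the
availability of 'reno' on the host is an assumption):

/* cc.c -- select a congestion control algorithm per socket (Linux). */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/tcp.h>

#ifndef TCP_CONGESTION
#define TCP_CONGESTION 13   /* value from linux/tcp.h */
#endif

int main(void)
{
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        const char *cc = "reno";  /* assumes reno is loaded/allowed */
        char buf[16];
        socklen_t len = sizeof(buf);

        if (setsockopt(fd, IPPROTO_TCP, TCP_CONGESTION,
                       cc, strlen(cc)) < 0) {
                perror("TCP_CONGESTION");
                return 1;
        }
        if (getsockopt(fd, IPPROTO_TCP, TCP_CONGESTION, buf, &len) == 0)
                printf("congestion control: %.*s\n", (int)len, buf);
        return 0;
}

System-wide, the same knob is /proc/sys/net/ipv4/tcp_congestion_control.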
Hi Jesse,
It's good to be talking directly to one of the e1000 developers and
maintainers. Although at this point I am starting to think that the
issue may be TCP stack related and nothing to do with the NIC. Am I
correct that these are quite distinct parts of the kernel?
Yes, quite.
OK.
Hi Stephen,
Thanks for your helpful reply and especially for the literature pointers.
Indeed, we are not asking to see 1000 Mb/s. We'd be happy to see 900
Mb/s.
Netperf is transmitting a large buffer in MTU-sized packets (1500
bytes each). Since the ACKs are only about 60 bytes in size, they
should be around 4% of the total traffic. Hence we would not expect
them to cost more than a few percent of throughput.
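For what it's worth, the wire-level version of that estimate comes out
even smaller. A sketch of the arithmetic (standard Ethernet framing
overheads; one delayed ACK per two segments assumed):

/* ack_overhead.c -- fraction of reverse-path capacity consumed by ACKs. */
#include <stdio.h>

int main(void)
{
        /* per-frame on-wire cost: preamble+SFD 8 and IFG 12 added,
         * FCS already counted in the frame sizes */
        double data_slot = 8 + 14 + 1500 + 4 + 12;  /* 1538 bytes */
        double ack_slot  = 8 + 64 + 12;             /* 84 bytes: min frame */

        /* delayed ACK: ~1 ACK per 2 full-sized segments */
        double frac = ack_slot / (2.0 * data_slot);

        printf("ACK share of reverse link: %.1f%%\n", frac * 100);
        return 0;
}

That works out to roughly 2.7%, so ACKs alone cannot explain rates far
below 900 Mb/s.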
Hi Rick,
First off, thanks for netperf. I've used it a lot and find it an extremely
useful tool.
As asked in the LKML thread, please post the exact netperf command used
to start the client/server, whether or not you're using irqbalanced
(aka irqbalance), and what cat /proc/interrupts looks like.
Hi Jesse,
The 82573L (a client-class part)...
Hi Ben,
Thank you for the suggestions and questions.
We've connected a pair of modern high-performance boxes with integrated
copper Gb/s Intel NICs, using an Ethernet crossover cable, and have run
some netperf full-duplex TCP tests. The transfer rates are well below
wire speed. We're reporting the problem here in the hope of tracking
down the cause.
Hi David,
Thanks for your note.
(The performance of a full duplex stream should be close to 1Gb/s in
both directions.)
This is not a reasonable expectation.
ACKs take up space on the link in the opposite direction of the
transfer.
So the link usage in the opposite direction of the transfer can never
reach the full wire speed.
A plot from one of our test runs is here:
https://n0.aei.uni-hannover.de/networktest/node19-new20-noflow.jpg
Red shows transmit and green shows receive (please ignore the other plots).
We're happy to do additional testing, if that would help, and very grateful for
any advice!
Bruce Allen
Carsten Aulbert
Henning Fehrmann