I haven't been able to get a TCP connection to saturate a 1 Gbps link
in both directions simultaneously.  I *have* been able to fully saturate
two Pro/1000 NICs on the same machine using pktgen, so the NIC/driver can
support it, if only TCP can run fast enough...
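
For anyone wanting to repeat the pktgen part, a minimal sketch of the
/proc/net/pktgen interface as found in later 2.6 kernels (the thread name,
destination address and MAC below are placeholders rather than what was
actually used, and the exact /proc layout may differ on older kernels):

  modprobe pktgen
  echo "add_device eth2"           > /proc/net/pktgen/kpktgend_0  # attach eth2 to kernel thread 0
  echo "count 0"                   > /proc/net/pktgen/eth2        # 0 == run until told to stop
  echo "pkt_size 1500"             > /proc/net/pktgen/eth2
  echo "dst 192.168.3.212"         > /proc/net/pktgen/eth2        # placeholder destination IP
  echo "dst_mac 00:11:22:33:44:55" > /proc/net/pktgen/eth2        # placeholder peer MAC
  echo "start"                     > /proc/net/pktgen/pgctrl      # writing "stop" here halts the run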

It isn't quite saturating, but:

loiter:/opt/netperf2_work# src/netperf -T 1, -H 192.168.3.212 -t TCP_RR -C -c -l 60 -- -b 6 -S 340K -s 340K -r 32K
bind_to_specific_processor: enter
WARNING! Enabling first burst without setting -D for NODELAY!
TCP REQUEST/RESPONSE TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.3.212 (192.168.3.212) port 0 AF_INET : first burst 6
Local /Remote
Socket Size   Request Resp.  Elapsed Trans.   CPU    CPU    S.dem   S.dem
Send   Recv   Size    Size   Time    Rate     local  remote local   remote
bytes  bytes  bytes   bytes  secs.   per sec  % S    % S    us/Tr   us/Tr

270336 270336 32768   32768  60.00   3027.76  65.55  50.01  433.014  330.342
270336 270336

This is an example of the first-burst functionality in netperf2: by using a burst size large enough to keep data in flight, but not so large as to fill the SO_SNDBUF, one can use the netperf TCP_RR test as a bidirectional bandwidth test over a single connection.
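
A rough sanity check on that sizing (assuming the first burst plus the one transaction of the normal RR loop is what can be outstanding at any moment):

  echo $(( (6 + 1) * 32768 ))   # 6-deep first burst + 1 in-flight request
  229376

which stays comfortably under the 270336 bytes of SO_SNDBUF actually granted, so the sends should never block mid-burst.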

Don't mind the WARNING about not setting -D; it was germane in another context, but not here, and I'll likely be yanking it from the later bits.

A 32768-byte request/response, times 3027.76 transactions per second, is 99213639 bytes per second, or roughly 793 Mbit/s, in each direction. In the test above, the netserver was left floating (no CPU number after the ',' in the -T option), and I suspect it was running on the same CPU that was taking the interrupts from the NIC. The netperf side was bound to the CPU other than the one taking interrupts from the NIC.
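
To see which CPU is fielding the NIC's interrupts, something along these lines does the trick (assuming the driver registers the interrupt under the interface name, which e1000 does); the column whose count climbs during a test is the interrupt CPU:

  cat /proc/interrupts | grep eth2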

If I let both netperf and netserver float:

loiter:/opt/netperf2_work# src/netperf -H 192.168.3.212 -t TCP_RR -C -c -l 60 -- -b 6 -S 340K -s 340K -r 32K
WARNING! Enabling first burst without setting -D for NODELAY!
TCP REQUEST/RESPONSE TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.3.212 (192.168.3.212) port 0 AF_INET : first burst 6
Local /Remote
Socket Size   Request Resp.  Elapsed Trans.   CPU    CPU    S.dem   S.dem
Send   Recv   Size    Size   Time    Rate     local  remote local   remote
bytes  bytes  bytes   bytes  secs.   per sec  % S    % S    us/Tr   us/Tr

270336 270336 32768   32768  60.00   2974.81  49.82  48.17  334.915  323.829
270336 270336

and if I pin both to CPU one on their respective machines:

loiter:/opt/netperf2_work# src/netperf -T 1 -H 192.168.3.212 -t TCP_RR -C -c -l 60 -- -b 6 -S 340K -s 340K -r 32K
bind_to_specific_processor: enter
WARNING! Enabling first burst without setting -D for NODELAY!
TCP REQUEST/RESPONSE TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.3.212 (192.168.3.212) port 0 AF_INET : first burst 6
Local /Remote
Socket Size   Request Resp.  Elapsed Trans.   CPU    CPU    S.dem   S.dem
Send   Recv   Size    Size   Time    Rate     local  remote local   remote
bytes  bytes  bytes   bytes  secs.   per sec  % S    % S    us/Tr   us/Tr

270336 270336 32768   32768  60.00   3546.67  73.67  74.00  415.439  417.273
270336 270336

That becomes 929 Mbit/s in each direction. So even that, strictly speaking, isn't saturation each way, but it is rather close :)

So, it seems possible to saturate a GbE link in each direction with a single TCP connection, so long as one has enough CPU and, of course, enough bus bandwidth and so on.

Until both netperf and netserver were bound to CPUs other than the interrupt CPUs, the saturation of a single CPU in this two-CPU system precluded hitting link rate both ways.
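
For reference, the -T variants used above take a local CPU before the comma and a remote CPU after it:

  -T 1    bind both netperf and netserver to CPU 1
  -T 1,   bind only the local netperf to CPU 1, let netserver float
  -T ,1   should bind only the remote netserver to CPU 1, letting netperf float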

rick jones

machine details:
both were HP rx1600's with 2x1.0 GHz Itanium2 CPUs (I forget the cache size)

the driver:
loiter:/opt/netperf2_work# ethtool -i eth2
driver: e1000
version: 5.2.52-k4
firmware-version: N/A
bus-info: 0000:40:01.0

loiter:/opt/netperf2_work# ethtool -k eth2
Offload parameters for eth2:
rx-checksumming: on
tx-checksumming: on
scatter-gather: on
tcp segmentation offload: off

The MTU was the standard 1500 bytes:
loiter:/opt/netperf2_work# ifconfig eth2
eth2      Link encap:Ethernet  HWaddr 00:30:6E:5D:A3:8A
          inet addr:192.168.3.213  Bcast:192.168.3.255  Mask:255.255.255.0
          inet6 addr: fe80::230:6eff:fe5d:a38a/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:2000732684 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2004414464 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:201459466683 (187.6 GiB)  TX bytes:201757690154 (187.9 GiB)
          Base address:0x4040 Memory:98120000-98140000

And was from an add-on dual-port card HP brands the A9900A:

loiter:/opt/netperf2_work# lspci -vt
-+-[e0]-+-01.0  Hewlett-Packard Company Auxiliary Diva Serial Port
 |      +-01.1  Hewlett-Packard Company Diva Serial [GSP] Multiport UART
 |      \-02.0  ATI Technologies Inc Radeon RV100 QY [Radeon 7000/VE]
 +-[40]-+-01.0  Intel Corp. 82546GB Gigabit Ethernet Controller
 |      \-01.1  Intel Corp. 82546GB Gigabit Ethernet Controller
 +-[20]-+-01.0  LSI Logic / Symbios Logic 53c1030 PCI-X Fusion-MPT Dual Ultra320 SCSI
 |      +-01.1  LSI Logic / Symbios Logic 53c1030 PCI-X Fusion-MPT Dual Ultra320 SCSI
 |      \-02.0  Broadcom Corporation NetXtreme BCM5701 Gigabit Ethernet
 \-[00]-+-01.0  NEC Corporation USB
        +-01.1  NEC Corporation USB
        +-01.2  NEC Corporation USB 2.0
        +-02.0  Silicon Image, Inc. (formerly CMD Technology Inc) PCI0649
        \-03.0  Intel Corp. 82557/8/9 [Ethernet Pro 100]

and is based on the 82546GB

That is a PCI-X 1.0 slot. I'm not sure what the frequency happens to be, but it should be documented online somewhere. The rx1600 is a somewhat old system (replaced by the rx1620), but I suspect the specs can still be found via http://www.hp.com/go/rx1600
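
Back-of-the-envelope, though, the bus shouldn't be the limiting factor (assuming a 64-bit slot and ignoring descriptor and other overhead traffic):

  64-bit PCI-X @  66 MHz ~  533 MB/s
  64-bit PCI-X @ 100 MHz ~  800 MB/s
  64-bit PCI-X @ 133 MHz ~ 1066 MB/s

  GbE both directions: 2 x ~125 MB/s = ~250 MB/s crossing the bus

so even the slowest PCI-X clock leaves plenty of headroom for one port running both ways.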

The kernel:

loiter:/opt/netperf2_work# uname -a
Linux loiter 2.6.8-2-mckinley-smp #1 SMP Fri May 27 19:38:30 MDT 2005 ia64 GNU/Linux

rick jones