Without tearing apart the source for the driver and getting some NICs for my own testing (feel free to send me a batch so I can run my own tests!), I would at least echo Doug here. Drivers for newer hardware might not be well optimized, particularly drivers whose source works across entire families of devices.

I presume you have searched the Internet for similar reports, and the documentation for the OS device driver in question? I would also check the system logs to see if the driver is logging anything useful.
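A quick first pass, assuming eth0 is the interface on the onboard NIC:

  dmesg | grep -i e1000              # driver probe messages and errors
  grep -i e1000 /var/log/messages    # anything the driver logged since boot
  ethtool -S eth0 | grep -v ': 0$'   # non-zero NIC counters (drops, errors, etc.)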

-geoff



On 24.09.2007 at 05:43, Douglas Eadline <[EMAIL PROTECTED]> wrote:

Just a guess, but did you play with any of the driver
parameters like ITR and Flow Control? Out of the box,
many of these are set to safe values.
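With the Intel-distributed e1000 driver these can be set as module
options at load time, and flow control can also be poked at runtime via
ethtool; eth0 below is just a stand-in for the onboard port:

  # Intel out-of-tree e1000 driver: one value per port.
  # InterruptThrottleRate=0 disables throttling; FlowControl=3 enables rx+tx pause
  modprobe e1000 InterruptThrottleRate=0,0 FlowControl=3,3

  # query and change pause settings at runtime
  ethtool -a eth0
  ethtool -A eth0 rx on tx on

  # adjust interrupt coalescing at runtime, if the driver supports it
  ethtool -C eth0 rx-usecs 0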

Plus, there seems to be no data on Intel's website for the
80003ES2LAN. Maybe it is so new that driver development
is lagging (another guess).

 --
 Doug


Hi folks:

   Working on trying to figure out why the Intel NICs on these
motherboards we are working with are slow.  Ok, "slow" is a relative
term; more along the lines of "not as fast as they could be,"
specifically relative to a PCI-X 1000/MT adapter we plugged in.

   The scenario is load testing.  I have 4 clients, all with the same
version of the OS, pounding on our server (as part of the load test).
Gigabit, and the server does channel bonding.  Seeing good results.
But ... on the nodes that use this beast:

04:00.0 Ethernet controller: Intel Corporation 80003ES2LAN Gigabit
Ethernet Controller (Copper) (rev 01)
         Subsystem: Super Micro Computer Inc Unknown device 0000
         Flags: bus master, fast devsel, latency 0, IRQ 1274
         Memory at c8200000 (32-bit, non-prefetchable) [size=128K]
         I/O ports at 2000 [size=32]
         Capabilities: [c8] Power Management version 2
         Capabilities: [d0] Message Signalled Interrupts: Mask- 64bit+
Queue=0/0 Enable+
         Capabilities: [e0] Express Endpoint IRQ 0
         Capabilities: [100] Advanced Error Reporting
         Capabilities: [140] Device Serial Number f2-72-32-ff-ff-48-30-00

we get ~70-75 MB/s, while plugging a nice little

05:02.0 Ethernet controller: Intel Corporation 82546GB Gigabit Ethernet
Controller (rev 01)
         Subsystem: Intel Corporation PRO/1000 MT Dual Port Server Adapter
         Flags: bus master, 66MHz, medium devsel, latency 52, IRQ 28
         Memory at c8340000 (64-bit, non-prefetchable) [size=128K]
         Memory at c8300000 (64-bit, non-prefetchable) [size=256K]
         I/O ports at 3000 [size=64]
         [virtual] Expansion ROM at c2000000 [disabled] [size=256K]
         Capabilities: [dc] Power Management version 2
         Capabilities: [e4] PCI-X non-bridge device

into a PCI-X slot gives us 92-98 MB/s for our load test (IOzone).  It
actually gives a bit more than that; I am eyeballing the average.
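A cleaner point-to-point number than eyeballing IOzone would come from
something like iperf, which takes the disks out of the picture (the
host name here is hypothetical):

  # on the server
  iperf -s

  # on one client: 60-second TCP test against the server
  iperf -c server01 -t 60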

Ok.  So the mystery is *why*.

First I note that the first unit, the motherboard NIC, has "32-bit
memory" at a particular address, while the second unit, the 1000/MT card
in the PCI-X slot, has "64-bit memory" at a different address.

Second, and this is counterintuitive: the motherboard gigabit unit is
on PCIe (x4 at that!)

[  115.246121] PCI: Setting latency timer of device 0000:04:00.0 to 64
[  115.261547] e1000: 0000:04:00.0: e1000_probe: (PCI
Express:2.5Gb/s:Width x4) 00:30:48:32:72:f2
[  115.290791] PM: Adding info for No Bus:eth0
[  115.290843] e1000: eth0: e1000_probe: Intel(R) PRO/1000 Network
Connection
[  115.290868] ACPI: PCI Interrupt 0000:04:00.1[B] -> GSI 19 (level,
low) -> IRQ 19
[  115.290882] PCI: Setting latency timer of device 0000:04:00.1 to 64
[  115.306461] e1000: 0000:04:00.1: e1000_probe: (PCI
Express:2.5Gb/s:Width x4) 00:30:48:32:72:f3
[  115.342947] PM: Adding info for No Bus:eth1
[  115.342983] e1000: eth1: e1000_probe: Intel(R) PRO/1000 Network
Connection

while the 1000/MT card is on, well, PCI-X (and running at plain PCI
33 MHz/64-bit, per the probe below).  In theory it should be the slower
of the two.

[  115.608773] e1000: 0000:05:02.0: e1000_probe: (PCI:33MHz:64-bit)
00:04:23:9e:36:ca
[  115.636072] PM: Adding info for No Bus:eth2
[  115.636105] e1000: eth2: e1000_probe: Intel(R) PRO/1000 Network
Connection
[  115.636129] ACPI: PCI Interrupt 0000:05:02.1[B] -> GSI 29 (level,
low) -> IRQ 29
[  115.902030] e1000: 0000:05:02.1: e1000_probe: (PCI:33MHz:64-bit)
00:04:23:9e:36:cb
[  115.928619] PM: Adding info for No Bus:eth3
[  115.928648] e1000: eth3: e1000_probe: Intel(R) PRO/1000 Network
Connection
[  115.928687] ACPI: PCI Interrupt 0000:06:00.0[A] -> GSI 24 (level,
low) -> IRQ 24
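Back of the envelope, neither bus should be the bottleneck for a single
gigabit port:

  PCI  64-bit @ 33 MHz : 8 bytes x 33 MHz            = ~264 MB/s peak
  PCIe x4 @ 2.5 Gb/s   : 4 x 2.5 Gb/s x 0.8 (8b/10b) = 8 Gb/s = ~1000 MB/s
  GigE wire rate       : 1 Gb/s / 8                  = ~125 MB/s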

The driver is 7.3.20-k2-NAPI

[  110.772712] Intel(R) PRO/1000 Network Driver - version 7.3.20-k2-NAPI
[  110.772717] Copyright (c) 1999-2006 Intel Corporation.

I know 7.6.5 is out, and I installed it on one of the machines, with no
impact.
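For what it's worth, one sanity check after installing the new driver
is to confirm what actually got loaded (eth0 again as a stand-in):

  ethtool -i eth0             # driver name, version, and bus address in use
  modinfo e1000 | head -5     # version of the module on disk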

The motherboard is a Supermicro X7DVA-i, I think.  I am also seeing
this on a different Supermicro motherboard with dual cores.  Same
issue, same performance.

Any thoughts?

--

Joseph Landman, Ph.D
Founder and CEO
Scalable Informatics LLC,
email: [EMAIL PROTECTED]
web  : http://www.scalableinformatics.com
        http://jackrabbit.scalableinformatics.com
phone: +1 734 786 8423
fax  : +1 866 888 3112
cell : +1 734 612 4615




--
Doug



--
-------------------------------
Geoff Galitz, [EMAIL PROTECTED]
Blankenheim, Deutschland

_______________________________________________
Beowulf mailing list, Beowulf@beowulf.org
To change your subscription (digest mode or unsubscribe) visit 
http://www.beowulf.org/mailman/listinfo/beowulf
