At 02:02 AM 11/10/2006, john t wrote:
>Hi,
>
>I got the following readings in one of my experiments:
>
>A single 64-bit Xeon machine (2 dual-core 3.2 GHz Intel CPUs, Linux FC4,
>OFED 1.0) with two Mellanox DDR (4x) HCAs (each having two ports, and each
>HCA connected to its own PCIe x8 interface) is connected to a switch (all
>4 DDR (4x) ports are connected to the switch).
>
>If I send data from mthca0-1 to mthca0-1, i.e. from a port to itself, so
>that the same port (and the same cable) does both send and recv, I get a
>BW of around 10 Gb/sec.
>
>Similarly, from mthca1-1 to mthca1-1 I get the same, around 10 Gb/sec.
>
>So individually, each port-to-port loopback gives 10 Gb/sec.
>
>But when I use them together, i.e. when I send data from mthca0-1 to
>mthca0-1 AND from mthca1-1 to mthca1-1 at the same time (simultaneously),
>I get a BW of only 6.7 Gb/sec on each port, less than the expected
>10 Gb/sec. Note that mthca0 and mthca1 are connected to two different
>PCIe x8 interfaces, so there is no question of the two HCAs splitting one
>link's bandwidth. What could be causing such behaviour?
>
>Just to add: if the same thing is done between two different hosts, i.e.
>if I send data from mthca0-0 and mthca1-1 of one host to mthca0-0 and
>mthca1-1 of the other host, I get the expected BW of 10 Gb/sec on each
>port/link.
>
You have two links pounding on a shared PCIe root complex / memory
controller. Under load, this sounds like a chipset limitation, not an
IB / software issue.

Mike
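A back-of-the-envelope accounting of host memory traffic supports this.
The sketch below is only an illustration built from the numbers in the
original post; the factor-of-two assumption (every looped-back byte is
both read from and written to the same host's memory) and the function
name are mine, not measured values:

```python
# Rough memory-traffic accounting for the single-host loopback test.
# Assumption: in loopback, every byte is read from host memory by the
# sending HCA and written back to the same host's memory by the
# receiving side, so memory traffic is ~2x the wire rate per port.

LINE_RATE_GBPS = 10.0   # observed single-port loopback bandwidth (Gb/s)
OBSERVED_GBPS = 6.7     # observed per-port bandwidth with both ports active

def loopback_memory_traffic(ports, per_port_gbps):
    """Estimated Gb/s of host memory traffic: one read + one write per byte."""
    return ports * per_port_gbps * 2

# One port looping back needs ~20 Gb/s of memory bandwidth.
single = loopback_memory_traffic(1, LINE_RATE_GBPS)

# Two ports at full rate would need ~40 Gb/s (~5 GB/s), on the order of
# what a 2006-era Xeon front-side bus / memory controller can sustain.
demand = loopback_memory_traffic(2, LINE_RATE_GBPS)

# What the chipset actually sustained in the experiment.
sustained = loopback_memory_traffic(2, OBSERVED_GBPS)

print(single, demand, sustained)   # → 20.0 40.0 26.8
```

This also fits the two-host result: with separate hosts, the sender only
reads (~20 Gb/s of memory traffic) and the receiver only writes
(~20 Gb/s), so neither chipset sees the combined ~40 Gb/s load that the
loopback case concentrates on one machine.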
