Hello Tom,


On Friday, 13 June 2008, you wrote:


> So you're concerned with the gap between the 2.63 us that OSU measured and the 3.07 us you measured.  I wouldn't be too concerned.


First: I get a value of 2.96 us with MVAPICH 1.0.0 - this is exactly the value I find on the MVAPICH website ;-)


It is not about being concerned with getting "optimal performance" - I know that such micro-benchmarks are of limited use... but I have a customer requirement, and since it seems possible, it would be helpful to get there.


> MPI latency can be quite dependent on the systems you use.  OSU used a dual-processor 2.8 GHz system.  Such a system has ~60 ns latency to local memory.  On your 4-socket Opteron system, your local memory latency is probably in the 90-100 ns range.


Why? And how can I measure this?


According to the link I posted, they used a 144-port switch. That is 3 hops - I have just 1. If that is true, the gap should be another ~300 ns larger because of the latency of the IB switch silicon...


> Assuming you are also using MVAPICH2, this is probably the main reason for the latency shortfall you are seeing.


MVAPICH2 1.03 and 1.02 tested. 


> Another possibility is that the CPU you are running the MPI test on is not the closest CPU to the PCIe chipset.  Thus, you may be taking some HT hops on the way to the PCIe bus and adapter card.



The value is the same every time. Shouldn't it then differ from run to run? And: how can I move the process? numactl or taskset only works on the local process, I assume. How can I move the "remote" process on the other host?


Regards,

Jan


> -Tom




From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Jan Heichler

Sent: Thursday, June 12, 2008 2:28 PM

To: Beowulf Mailing List

Subject: [Beowulf] MVAPICH2 and osu_latency


Dear all!



I found this http://mvapich.cse.ohio-state.edu/performance/mvapich2/opteron/MVAPICH2-opteron-gen2-DDR.shtml as a reference value for the MPI latency of InfiniBand. I am trying to reproduce those numbers at the moment, but I'm stuck with


# OSU MPI Latency Test v3.0

# Size            Latency (us)

0                         3.07

1                         3.17

2                         3.16

4                         3.15

8                         3.19


Equipment is two quad-socket Opteron blades (Supermicro) with Mellanox Ex DDR cards. A single 24-port switch connects them.


Can anybody suggest what I can do to lower the latency?

  


Regards, Jan                          






_______________________________________________
Beowulf mailing list, Beowulf@beowulf.org
To change your subscription (digest mode or unsubscribe) visit 
http://www.beowulf.org/mailman/listinfo/beowulf
