Yes, in fact SCore libraries produced nice results with MPI over Ethernet on a
couple of clusters I've seen... implemented, by the way, by your company in
collaboration with a hardware vendor. As far as I remember, an efficiency of
around 67-70% in HPL is achievable with SCore MPI.
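
For reference, HPL efficiency is just measured Rmax divided by theoretical
peak Rpeak. A quick back-of-envelope sketch (the node count and per-core
figures below are hypothetical illustration, not numbers from any specific
cluster):

```python
# Back-of-envelope HPL efficiency: Rmax / Rpeak.
# All hardware figures here are hypothetical, for illustration only.
nodes = 64
cores_per_node = 2
ghz = 2.4                 # clock speed in GHz (hypothetical)
flops_per_cycle = 2       # double-precision FLOPs per cycle (hypothetical)
rpeak = nodes * cores_per_node * ghz * flops_per_cycle  # GFLOP/s

rmax = 414.0              # hypothetical measured HPL result, GFLOP/s
efficiency = rmax / rpeak
print(f"Rpeak = {rpeak:.0f} GFLOP/s, efficiency = {efficiency:.1%}")
```

With these made-up numbers the efficiency lands in the 67-70% band the
thread mentions; on real gigabit-Ethernet clusters the interconnect is
usually what keeps this ratio below what InfiniBand-class fabrics reach.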

The problem was, as far as I remember, that only specific versions of the
x86/x86_64 Linux kernel were supported; I don't know whether this will change
in the near future.



2007/7/13, John Hearns <[EMAIL PROTECTED]>:

Mark Hahn wrote:
>> Anyway, hop latency in Ethernet is most of the time just peanuts
>> compared to TCP/IP stack overhead...
>
> unfortunately - I'm still puzzled why we haven't seen any open,
> widely-used,
> LAN-tuned non-TCP implementation that reduces the latency.  it should be
> possible to do ~10 us vs ~40 for a typical MPI-over-Gb-TCP.

Well, the SCore implementation which we install on all our clusters does
just this.
www.pccluster.org
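
To put the quoted figures in perspective: at the ~40 us typical of MPI over
Gb-Ethernet TCP versus the ~10 us a LAN-tuned non-TCP stack reaches, the
achievable small-message rate differs by 4x. A rough sketch, using only the
numbers quoted in this thread and ignoring overlap and bandwidth effects:

```python
# Rough small-message rate implied by one-way latency.
# This simple 1/latency model ignores pipelining, overlap and bandwidth;
# the 40 us and 10 us figures are the ones quoted above in the thread.
def msgs_per_sec(latency_us):
    return 1e6 / latency_us

tcp_rate = msgs_per_sec(40)    # typical MPI over Gb-Ethernet TCP, ~40 us
lan_rate = msgs_per_sec(10)    # LAN-tuned non-TCP stack (e.g. SCore), ~10 us
print(f"TCP stack:  {tcp_rate:,.0f} msgs/s")
print(f"LAN-tuned:  {lan_rate:,.0f} msgs/s  ({lan_rate / tcp_rate:.0f}x)")
```

For latency-bound codes with many small messages, that factor of four in
message rate is where the HPL efficiency difference largely comes from.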

In fact, we have one 500-machine cluster which (at the time of install)
ranked 167th in the Top500 and achieved a very high efficiency,
all connected with gigabit Ethernet only.
http://www.streamline-computing.com/index.php?wcId=76&xwcId=72




--
     John Hearns
     Senior HPC Engineer
     Streamline Computing,
     The Innovation Centre, Warwick Technology Park,
     Gallows Hill, Warwick CV34 6UW
     Office: 01926 623130 Mobile: 07841 231235

_______________________________________________
Beowulf mailing list, Beowulf@beowulf.org
To change your subscription (digest mode or unsubscribe) visit 
http://www.beowulf.org/mailman/listinfo/beowulf
