On Thu, Apr 08, 2010 at 04:13:21PM +0000, richard.wa...@comcast.net wrote:
> 
> What are the approaches and experiences of people interconnecting 
> clusters of more than 128 compute nodes with QDR InfiniBand technology? 
> Are people directly connecting to chassis-sized switches? Or using 
> multi-tiered approaches that combine 36-port leaf switches?

I would expect everyone to use a chassis at that size, because it's cheaper
than having more cables. That has been true since day one with IB; the only
question is whether the switch vendors are charging too high a price for
big switches.

> I am looking for some real world feedback before making a decision on 
> architecture and vendor. 

Hopefully you're planning on benchmarking your own application -- the HCAs
and the switch silicon from QLogic and Mellanox have considerably different,
application-dependent performance characteristics.
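
If it helps while you wait for application time on loaner hardware, a trivial
two-rank ping-pong (my sketch below, assuming mpi4py and an MPI stack built
against your IB verbs or PSM layer) is at least a quick sanity check that a
pair of test nodes really is talking over the fabric. It is no substitute for
your actual application, which is the benchmark that matters.

from mpi4py import MPI
import time

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
nbytes = 1 << 20                  # 1 MiB messages
buf = bytearray(nbytes)
iters = 1000

comm.Barrier()
t0 = time.time()
for _ in range(iters):
    if rank == 0:
        comm.Send(buf, dest=1)
        comm.Recv(buf, source=1)
    elif rank == 1:
        comm.Recv(buf, source=0)
        comm.Send(buf, dest=0)
elapsed = time.time() - t0

if rank == 0:
    # Each iteration moves 2 * nbytes across the link.
    print("avg round trip: %.1f us, bandwidth: %.2f GB/s"
          % (elapsed / iters * 1e6, 2.0 * nbytes * iters / elapsed / 1e9))

Run it with two ranks placed on two different nodes (via your mpirun's host
list) and compare the fabrics you have on hand.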

-- greg



