richard.wa...@comcast.net wrote:
> 
> All, 
> 
> 
> What are the approaches and experiences of people interconnecting 
> clusters of more than 128 compute nodes with QDR InfiniBand technology?
> Are people directly connecting to chassis-sized switches? Using multi-tiered
> designs built from 36-port leaf switches? What are your experiences?
> What products seem to be living up to expectations? 
> 
> 
> I am looking for some real world feedback before making a decision on 
> architecture and vendor. 
> 
> 

We have been telling our vendors to design a multi-level tree from
36-port switches that provides approximately 70% bisection bandwidth:
each leaf has 15 uplinks and 21 node-facing ports, so 15/21 ~ 71%.
On a 448-node Nehalem cluster this has worked well (weather, hurricane,
and some climate modeling). The 15 up/21 down split lets us scale the
system to 714 nodes.
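
Roughly, the arithmetic behind those numbers looks like this (a quick
Python sketch assuming the simplest two-level wiring, one uplink from
each leaf to each of 15 spine switches; the leaf counts in the loop are
illustrative, not a cable plan):

    # Rough sizing of a two-level fat tree built from 36-port QDR
    # switches with the 15-up/21-down leaf split described above.
    # Assumed wiring: each leaf runs one uplink to each of 15 spines.

    PORTS = 36            # ports per switch
    UP, DOWN = 15, 21     # uplinks / node-facing ports on each leaf
    assert UP + DOWN == PORTS

    bisection = UP / DOWN          # 15/21 ~ 0.71, the ~70% figure
    spines = UP                    # one uplink per spine from every leaf
    max_leaves = PORTS             # 36 spine ports -> at most 36 leaves
    max_nodes = max_leaves * DOWN  # theoretical ceiling: 756 nodes

    for leaves in (22, 34, max_leaves):
        print(f"{leaves:2d} leaves -> {leaves * DOWN:3d} nodes")
    print(f"bisection ~ {bisection:.0%}")

With 21 nodes per leaf, 22 leaves cover the current 448 nodes, 34 leaves
give the 714-node figure, and 36 leaves would be the hard ceiling for
this port split.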

Craig

> Thanks,
>
> rbw

_______________________________________________
Beowulf mailing list, Beowulf@beowulf.org sponsored by Penguin Computing
To change your subscription (digest mode or unsubscribe) visit 
http://www.beowulf.org/mailman/listinfo/beowulf
