Re: [Beowulf] QDR InfiniBand interconnect architectures ... approaches ...

2010-04-09 Thread Tom Ammon
Another thing to remember with chassis switches is that you can also build them in an oversubscribed model by removing spine cards. Most chassis have at least 3 spine modules so you lose some granularity in oversubscription, but you can still cut costs. You don't have to go with fully nonblocking ...
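
A rough sketch of the arithmetic behind this point (Python; the 3-spine chassis and the assumption that the switch is nonblocking when fully populated are illustrative, not tied to any particular product):

    # Sketch: oversubscription of a chassis switch that is nonblocking when
    # all of its spine modules are installed. Internal leaf-to-spine bandwidth
    # scales with the number of spines present, so the ratio is simply
    # total_spines / installed_spines.

    def chassis_oversubscription(total_spines, installed_spines):
        if not 0 < installed_spines <= total_spines:
            raise ValueError("installed_spines must be between 1 and total_spines")
        return total_spines / installed_spines

    # Example: a hypothetical chassis designed around 3 spine modules.
    for installed in (3, 2, 1):
        ratio = chassis_oversubscription(3, installed)
        print(f"{installed} of 3 spines installed -> {ratio:.2f}:1 oversubscription")

With only three spine modules the available ratios are coarse (1:1, 1.5:1, 3:1), which is the granularity trade-off mentioned above.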

Re: [Beowulf] QDR InfiniBand interconnect architectures ... approaches ...

2010-04-09 Thread Peter Kjellstrom
On Thursday 08 April 2010, Greg Lindahl wrote: > On Thu, Apr 08, 2010 at 04:13:21PM +, richard.wa...@comcast.net wrote: > > What are the approaches and experiences of people interconnecting clusters of more than 128 compute nodes with QDR InfiniBand technology? Are people directly connecting ...

Re: Fwd: [Beowulf] QDR InfiniBand interconnect architectures ... approaches ...

2010-04-08 Thread Craig Tierney
richard.wa...@comcast.net wrote: > On Thursday, April 8, 2010 2:42:49 PM Craig Tierney wrote: >> We have been telling our vendors to design a multi-level tree using 36-port switches that provides approximately 70% bisection bandwidth. On a 448 node Nehalem cluster, this has ...
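
One way the numbers could work out for a ~70% bisection design built from 36-port switches (a hedged sketch; the 21-down/15-up split per leaf is an assumption for illustration, not necessarily the vendor's actual layout):

    import math

    SWITCH_PORTS = 36

    def two_level_tree(nodes, ports_down, ports_up):
        # Leaf/spine counts for a two-level (leaf/spine) tree of fixed-radix switches.
        assert ports_down + ports_up <= SWITCH_PORTS
        leaves = math.ceil(nodes / ports_down)
        # Each spine connects once to every leaf, so one spine is needed per
        # leaf uplink, and each spine must have at least `leaves` ports.
        assert leaves <= SWITCH_PORTS, "too many leaves for a two-level tree"
        spines = ports_up
        # Approximate bisection bandwidth relative to fully nonblocking.
        bisection = ports_up / ports_down
        return leaves, spines, bisection

    leaves, spines, bisection = two_level_tree(448, ports_down=21, ports_up=15)
    print(f"{leaves} leaf + {spines} spine switches, ~{bisection:.0%} bisection bandwidth")
    # -> 22 leaf + 15 spine switches, ~71% bisection bandwidth

At 21 node ports per leaf this works out to 22 leaf and 15 spine switches for 448 nodes; an 18/18 split would be fully nonblocking at the cost of more leaf switches and inter-switch cables.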

Fwd: [Beowulf] QDR InfiniBand interconnect architectures ... approaches ...

2010-04-08 Thread richard.walsh
- Forwarded Message - From: "richard walsh" To: "Craig Tierney" Sent: Thursday, April 8, 2010 5:19:14 PM GMT -05:00 US/Canada Eastern Subject: Re: [Beowulf] QDR InfiniBand interconnect architectures ... approaches ... On Thursday, April 8, 2010 2:42:49 PM

Re: [Beowulf] QDR InfiniBand interconnect architectures ... approaches ...

2010-04-08 Thread richard.walsh
On Thursday, April 8, 2010 2:14:11 PM Greg Lindahl wrote: >> What are the approaches and experiences of people interconnecting clusters of more than 128 compute nodes with QDR InfiniBand technology? Are people directly connecting to chassis-sized switches? Using multi-tiered approaches ...

Re: [Beowulf] QDR InfiniBand interconnect architectures ... approaches ...

2010-04-08 Thread Mike Davis
We are DDR but we use a flat switching model for our InfiniBand cluster. Thus far most work is MD and QC and scaling has been good.

Re: [Beowulf] QDR InfiniBand interconnect architectures ... approaches ...

2010-04-08 Thread Craig Tierney
richard.wa...@comcast.net wrote: > All, > What are the approaches and experiences of people interconnecting clusters of more than 128 compute nodes with QDR InfiniBand technology? Are people directly connecting to chassis-sized switches? Using multi-tiered approaches which combine ...

Re: [Beowulf] QDR InfiniBand interconnect architectures ... approaches ...

2010-04-08 Thread Greg Lindahl
On Thu, Apr 08, 2010 at 04:13:21PM +, richard.wa...@comcast.net wrote: > What are the approaches and experiences of people interconnecting clusters of more than 128 compute nodes with QDR InfiniBand technology? Are people directly connecting to chassis-sized switches? Using multi-tiered ...

[Beowulf] QDR InfiniBand interconnect architectures ... approaches ...

2010-04-08 Thread richard.walsh
All, What are the approaches and experiences of people interconnecting clusters of more than 128 compute nodes with QDR InfiniBand technology? Are people directly connecting to chassis-sized switches? Using multi-tiered approaches which combine 36-port leaf switches? What are your experiences ...
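
For scale, a back-of-the-envelope sketch of how far the two options reach (standard folded-Clos arithmetic, not tied to any particular vendor; k is the switch port count):

    # Maximum size of a fully nonblocking two-level fat tree built from
    # k-port switches: half of each leaf's ports go down to nodes, half go
    # up to spines, and each spine connects once to every leaf.

    def nonblocking_two_level(k):
        hosts_per_leaf = k // 2
        uplinks_per_leaf = k - hosts_per_leaf
        leaves = k                      # one spine port per leaf caps the leaf count
        spines = uplinks_per_leaf
        max_hosts = leaves * hosts_per_leaf
        cables = leaves * uplinks_per_leaf
        return max_hosts, leaves, spines, cables

    hosts, leaves, spines, cables = nonblocking_two_level(36)
    print(f"36-port switches: up to {hosts} nodes "
          f"({leaves} leaves + {spines} spines, {cables} inter-switch cables)")
    # -> up to 648 nodes (36 leaves + 18 spines, 648 inter-switch cables)

So a single 36-port switch tops out at 36 nodes, a nonblocking two-level tree of 36-port switches at 648, and anything larger (or a desire for fewer cables to manage) points toward a director-class chassis or a third tier.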