Another thing to remember with chassis switches is that you can also
build them in an oversubscribed model by removing spine cards. Most
chassis have at least three spine modules, so you lose some granularity
in oversubscription, but you can still cut costs. You don't have to go
with a fully nonblocking fabric.
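To make the cost/bandwidth trade concrete, here is a minimal Python sketch
of the arithmetic, assuming the simplified model that a chassis's internal
bisection scales linearly with the number of spine modules installed (real
designs vary by vendor):

def chassis_oversubscription(total_spines: int, installed_spines: int) -> float:
    """Oversubscription ratio when only some spine modules are installed."""
    if not 0 < installed_spines <= total_spines:
        raise ValueError("installed_spines must be in 1..total_spines")
    return total_spines / installed_spines

# With only 3 spine slots the available steps are coarse:
for installed in (3, 2, 1):
    print(f"{installed}/3 spines -> {chassis_oversubscription(3, installed):.2f}:1")

With three spine slots the only steps are 1:1, 1.5:1, and 3:1, which is
the granularity problem mentioned above.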
On Thursday 08 April 2010, Greg Lindahl wrote:
> On Thu, Apr 08, 2010 at 04:13:21PM, richard.wa...@comcast.net wrote:
> > What are the approaches and experiences of people interconnecting
> > clusters of more than 128 compute nodes with QDR InfiniBand technology?
> > Are people directly connecting to chassis-sized switches? ...
richard.wa...@comcast.net wrote:
> On Thursday, April 8, 2010 2:42:49 PM Craig Tierney wrote:
>> We have been telling our vendors to design a multi-level tree using
>> 36-port switches that provides approximately 70% bisection bandwidth.
>> On a 448-node Nehalem cluster, this has ...
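For context, a rough sketch of the leaf-level arithmetic behind a ~70%
bisection figure; the 21-down/15-up split is an assumption for
illustration, not Craig's actual design:

import math

PORTS = 36           # ports per switch
NODES_PER_LEAF = 21  # assumed node-facing ports; 36 - 21 = 15 uplinks

# At the leaf, the bisection fraction is roughly uplinks / downlinks.
bisection = (PORTS - NODES_PER_LEAF) / NODES_PER_LEAF   # 15/21 ~ 0.71
leaves = math.ceil(448 / NODES_PER_LEAF)                # 22 leaf switches
print(f"~{bisection:.0%} bisection, {leaves} leaves for 448 nodes")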
----- Forwarded Message -----
From: "richard walsh"
To: "Craig Tierney"
Sent: Thursday, April 8, 2010 5:19:14 PM GMT -05:00 US/Canada Eastern
Subject: Re: [Beowulf] QDR InfiniBand interconnect architectures ... approaches
...
On Thursday, April 8, 2010 2:14:11 PM Greg Lindahl wrote:
>> What are the approaches and experiences of people interconnecting
>> clusters of more than 128 compute nodes with QDR InfiniBand technology?
>> Are people directly connecting to chassis-sized switches? Using multi-tiered
>> approaches ...
We are DDR, but we use a flat switching model for our InfiniBand cluster.
Thus far most work is MD (molecular dynamics) and QC (quantum chemistry),
and scaling has been good.
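A quick illustration of what "flat" buys and costs; the 324-port figure
is an assumed example of a large 2010-era QDR chassis, not a claim about
this site's hardware:

CHASSIS_PORTS = 324   # assumed single-chassis port count, for illustration

def fits_flat(nodes: int) -> bool:
    """A flat fabric works only while every node fits one switching stage."""
    return nodes <= CHASSIS_PORTS

for n in (128, 324, 448):
    print(f"{n} nodes flat on a {CHASSIS_PORTS}-port chassis: {fits_flat(n)}")

The upside is that every node pair is one switch away; the downside is
that cluster size is capped by the largest switch you can buy.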
All,

What are the approaches and experiences of people interconnecting
clusters of more than 128 compute nodes with QDR InfiniBand technology?
Are people directly connecting to chassis-sized switches? Using multi-tiered
approaches which combine 36-port leaf switches? What are your experiences?
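For anyone sketching the multi-tiered option, here is the usual two-tier
folded-Clos arithmetic for 36-port switches at full bisection; the
18-down/18-up leaf split and the assumption that spine ports can be
shared across parallel leaf uplinks are illustrative defaults, not a
recommendation from the thread:

import math

def two_tier_switch_count(nodes: int, ports: int = 36) -> tuple[int, int]:
    """Leaf and spine counts for a nonblocking two-tier fat tree."""
    down = ports // 2                          # 18 node-facing ports per leaf
    leaves = math.ceil(nodes / down)
    spines = math.ceil(leaves * down / ports)  # spine ports cover all uplinks
    return leaves, spines

for n in (128, 256, 448, 648):
    leaves, spines = two_tier_switch_count(n)
    print(f"{n:4d} nodes: {leaves:2d} leaves + {spines:2d} spines "
          f"= {leaves + spines} x 36-port switches")

At 648 nodes this tops out at 36 leaves plus 18 spines, the familiar
ceiling for a two-tier nonblocking fabric built from 36-port switches.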