It would be nice to have non-blocking communication across the entire system,
but the critical part is the 36-node complex that will be connected to the
main cluster.
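
For what it's worth, here is a rough back-of-the-envelope sketch (Python) of
the oversubscription on each 24-port leaf. The 18-nodes-per-leaf split, the
use of the remaining ports as uplinks, and the 4X DDR rate of 20 Gb/s per
direction are my assumptions, not figures from the thread:

  # Rough oversubscription check for one 24-port DDR leaf switch.
  # Assumed: 18 nodes per leaf, remaining ports cabled as uplinks to the
  # core switch, 4X DDR = 20 Gb/s per direction per port.
  PORTS_PER_LEAF = 24
  DDR_LINK_GBPS = 20

  def oversubscription(nodes_on_leaf):
      """Ratio of node-facing to uplink bandwidth on one leaf."""
      uplinks = PORTS_PER_LEAF - nodes_on_leaf
      if uplinks <= 0:
          raise ValueError("no ports left for uplinks")
      return (nodes_on_leaf * DDR_LINK_GBPS) / (uplinks * DDR_LINK_GBPS)

  for nodes in (12, 16, 18, 20):
      print(nodes, "nodes/leaf ->", oversubscription(nodes), ": 1")

With 18 nodes and 6 uplinks per leaf that works out to 3:1, so traffic that
has to cross to the core switch would be oversubscribed even before the extra
switch hop is considered; within each leaf it stays non-blocking.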

On Mon, Feb 9, 2009 at 1:33 AM, Gilad Shainer <shai...@mellanox.com> wrote:

>  Do you plan to have fully non-blocking communication between the new
> systems and the core switch?
>
>  ------------------------------
> From: beowulf-boun...@beowulf.org [mailto:beowulf-boun...@beowulf.org] On
> Behalf Of Ivan Oleynik
> Sent: Sunday, February 08, 2009 8:20 PM
> To: beowulf@beowulf.org
> Subject: [Beowulf] Connecting two 24-port IB edge switches to core
> switch: extra switch hop overhead
>
> I am purchasing a 36-node cluster that will be integrated into an already
> existing system. I am exploring the possibility of using two 24-port 4X IB
> edge switches in a core/leaf design, with a maximum capacity of 960 Gb/s
> (DDR) / 480 Gb/s (SDR). They would be connected to the main Qlogic
> Silverstorm switch.
>
> I would appreciate some information on the communication overhead incurred
> by this setup. I am trying to minimize the cost of the IB communication
> hardware, and buying a single 48-port switch looks like a really expensive
> option.
>
> Thanks,
>
> Ivan
>
_______________________________________________
Beowulf mailing list, Beowulf@beowulf.org
To change your subscription (digest mode or unsubscribe) visit 
http://www.beowulf.org/mailman/listinfo/beowulf
