I inherited a few big racks of unused Dell PowerEdge servers. There are two gigabit Ethernet switches in each rack, and each node in the cluster has at least two Ethernet connections.
I have a head node and some compute nodes up and running under Rocks Cluster 5.2, but in my haste to see that the hardware actually works, I've only cut and patched enough cables to use one switch for the internal network. What to do with the other switch? As far as I can tell, I have two options (aside from selling the extra switch on eBay :)).

Option 1. Create two separate internal networks. In wiring diagrams for clusters, I often see one administrative network and one for computation (MPI and so forth). The downside is that I don't yet understand how user programs are supposed to tell the two internal networks apart and send their messages over the computation network (a sketch of what I've found so far is in the P.S.). I've never had the luxury of two Ethernet cards before.

Option 2. Try channel bonding to increase the throughput of a single Ethernet link on each node (a sample config is in the P.P.S.). That has some appeal because users would see only one network.

I'm not aiming for the "high availability" approach of using the second switch as a fallback. Rather, I'm aiming for the fastest and widest connection possible in and out of each compute node. The head node still has just one gigabit Ethernet line going into it, and if I could believe that channel bonding would actually improve throughput, I suppose I could run another line (or even two more) into it. The head node has four Ethernet jacks, so I could double up both inside and outside.

I'd be glad to hear your thoughts.

--
Paul E. Johnson
Professor, Political Science
1541 Lilac Lane, Room 504
University of Kansas
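P.S. To make Option 1 a little more concrete: from what I've read so far, if the MPI stack is Open MPI, its TCP traffic can be pinned to a particular interface with an MCA parameter, so user programs wouldn't have to know about the two networks at all. The interface name below is just a placeholder for whichever NIC ends up on the computation switch:

    # restrict Open MPI's TCP traffic to the interface on the
    # computation network (eth1 is a placeholder)
    mpirun --mca btl_tcp_if_include eth1 -np 32 ./my_mpi_program

I gather the same parameter can be set cluster-wide in openmpi-mca-params.conf so users never have to type it, but I haven't tried that yet.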
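P.P.S. For Option 2, this is the sort of channel bonding setup I've been sketching for these CentOS 5-based Rocks nodes. The device names, address, and bonding mode are placeholders, and as I understand it mode 802.3ad also requires the switch to support link aggregation (LACP); otherwise something like balance-alb is the usual fallback:

    # /etc/modprobe.conf -- load the bonding driver for bond0
    alias bond0 bonding
    options bond0 mode=802.3ad miimon=100

    # /etc/sysconfig/network-scripts/ifcfg-bond0
    DEVICE=bond0
    # placeholder address on the internal network
    IPADDR=10.1.1.254
    NETMASK=255.255.0.0
    ONBOOT=yes
    BOOTPROTO=none

    # /etc/sysconfig/network-scripts/ifcfg-eth0 (and likewise ifcfg-eth1)
    DEVICE=eth0
    MASTER=bond0
    SLAVE=yes
    ONBOOT=yes
    BOOTPROTO=none

What I'm still unsure about is whether this buys anything for a single MPI stream -- my reading is that one TCP connection still rides a single link, and the gain only shows up when many node pairs are talking at once. Corrections welcome.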