> We have a small cluster of 16 nodes (single socket) with Intel S3210SH

this is an LGA775 board (dual-core xeon 3[0123]00, dual-channel ddr2/800) with two built-in Gb ports.

> motherboards. Do they fully support connecting these nodes to an infiniband
> switch, and installing the relevant infiniband host adapter/interface
> cards?

infiniband is fairly forgiving about host hardware -- an HCA is just a PCIe
card as far as the board is concerned -- though you are unlikely to be able to
drive recent IB cards at anything close to peak performance from nodes like these.
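
a rough back-of-envelope makes the point.  this assumes the S3210SH exposes a
PCIe 1.x x8 slot (check the board manual for the real slot layout) and uses
nominal 4x IB data rates; the ceiling is the slot, not the fabric:

    # rough ceiling check: PCIe slot bandwidth vs. 4x infiniband data rates.
    # ASSUMPTION: a PCIe 1.x x8 slot, i.e. 250 MB/s per lane after 8b/10b encoding.
    slot_mbps = 8 * 250                      # per direction, before protocol overhead

    ib_data_rate_mbps = {"SDR": 1000,        # 10 Gb/s signalling, 8b/10b -> ~1 GB/s data
                         "DDR": 2000,
                         "QDR": 4000}

    for gen, rate in ib_data_rate_mbps.items():
        frac = min(1.0, slot_mbps / rate)
        print(f"{gen}: link {rate} MB/s, slot {slot_mbps} MB/s "
              f"-> at best ~{frac:.0%} of line rate")

in practice PCIe protocol overhead knocks another chunk off the slot number,
so even DDR will fall short of line rate there, and QDR is simply wasted; SDR
(or a used DDR card run below peak) is about what this generation of host can
actually feed.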

> Is it worth adding such a high-speed interconnect for such a general-purpose
> cluster?

it depends what you mean by "general purpose". if you have applications which currently struggle with the Gb interconnect, it's hard to see how even SDR-generation IB would not help a lot. on the other hand, these are really quite slow nodes by today's standards, and adding IB is not going to make them competitive with even entry-level modern nodes.
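
how much it helps depends heavily on message sizes.  a toy alpha-beta
(latency + size/bandwidth) model makes the point; the figures below are
ballpark assumptions on my part (roughly 50 us / 110 MB/s for MPI over Gb TCP,
5 us / 900 MB/s for SDR IB), not measurements from your cluster:

    # toy alpha-beta model: time(msg) = latency + size / bandwidth.
    # ASSUMED ballpark figures, not measurements -- substitute your own.
    nets = {"GbE (MPI over TCP)": (50.0, 110.0),   # ~50 us, ~110 MB/s
            "SDR infiniband":     (5.0, 900.0)}    # ~5 us,  ~900 MB/s

    def msg_time_us(size_bytes, latency_us, bw_mbps):
        return latency_us + size_bytes / (bw_mbps * 1e6) * 1e6

    for size in (8, 4096, 65536, 1048576):
        t = {name: msg_time_us(size, lat, bw) for name, (lat, bw) in nets.items()}
        gbe, ib = t["GbE (MPI over TCP)"], t["SDR infiniband"]
        print(f"{size:>8} B: GbE {gbe:9.1f} us   IB {ib:9.1f} us   (~{gbe / ib:.0f}x)")

in other words, latency-bound codes (lots of small messages, tight collectives)
see roughly an order of magnitude, bandwidth-bound ones roughly the ratio of
the wire speeds, and cpu-bound codes see very little of either.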

> If infiniband support is not possible, can we plan for any other high-speed
> interconnect technology like Myrinet, Quadrics, etc.?

really?  I'd be curious to know how much Myrinet and Quadrics hardware is still
operating, world-wide.  I still run a large Quadrics cluster, but it feels
very much like a curation project, and a rather desperate one at that.

offhand, I'd recommend making sure your Gb is operating well before sinking
money into an older cluster like this.  for instance, perhaps your Gb switch
isn't all that great (higher latency? not full bandwidth?).  it would also be
worth looking into Open-MX, which gives you MX-style low-latency message
passing over ordinary Gb ethernet.  and if you're not currently using both Gb
links (bonded, or with MPI and other traffic split between them), that would
be a good idea as well.
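
as a quick smoke test of the Gb fabric, a TCP ping-pong between two nodes is
enough to show whether you're anywhere near wire speed with sane latency.
this is a minimal sketch in python (port and sizes are arbitrary choices;
TCP_NODELAY is set to keep Nagle out of the latency number) -- netperf,
NetPIPE or the OSU MPI benchmarks will give you more trustworthy numbers:

    # minimal TCP ping-pong between two nodes -- a smoke test only.
    # usage:  node1$ python pingpong.py server
    #         node2$ python pingpong.py client node1
    import socket, sys, time

    PORT, REPS, SIZE = 5001, 1000, 1 << 20

    def recv_exact(conn, n):
        buf = b""
        while len(buf) < n:
            chunk = conn.recv(n - len(buf))
            if not chunk:
                raise ConnectionError("peer closed")
            buf += chunk
        return buf

    if sys.argv[1] == "server":
        with socket.create_server(("", PORT)) as srv:
            conn, _ = srv.accept()
            conn.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
            for _ in range(REPS):                  # echo small messages (latency)
                conn.sendall(recv_exact(conn, 1))
            conn.sendall(recv_exact(conn, SIZE))   # echo one large message (bandwidth)
    else:
        conn = socket.create_connection((sys.argv[2], PORT))
        conn.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
        t0 = time.time()
        for _ in range(REPS):
            conn.sendall(b"x")
            recv_exact(conn, 1)
        print(f"round-trip latency ~{(time.time() - t0) / REPS * 1e6:.0f} us")
        t0 = time.time()
        conn.sendall(b"\0" * SIZE)
        recv_exact(conn, SIZE)
        mb_s = 2 * SIZE / (time.time() - t0) / 1e6  # SIZE out + SIZE back, in series
        print(f"effective bandwidth ~{mb_s:.0f} MB/s")

on a healthy Gb link the streaming number should be close to wire speed
(~110-120 MB/s); if it's far off, look at the switch, the driver, or NIC
settings (interrupt coalescing, flow control) before blaming the nodes.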

regards, mark hahn.