Prentice Bisbal wrote:
[...]
My new cluster, which is still in labor, will have InfiniBand for MPI,
and we have 10 Gb ethernet switches for management/NFS, etc. The nodes
only have 1 Gb ethernet, so it will be effectively a 1 Gb network.

I'm also curious as to whether the dual networks are overkill, and
whether using a slower network for I/O will make the system slower
than running all traffic over IB, since the slower I/O will make the
nodes wait longer for those operations to finish.

Hello, Prentice and Alan.

I've built a Beowulf based on EPCC BOBCAT:

        http://bioinformatics.rri.sari.ac.uk/bobcat/

What attracted me to BOBCAT is that it uses two completely separate
network fabrics: one for 'system' traffic and one for 'applications'.
Before I used this approach it was very easy to lose control of the
Beowulf cluster, because the 'application' IPC network is easily
saturated; with completely separate fabrics you can still control the
Beowulf even when the 'application' network is saturated. This works
extremely well in practice on our openMosix Beowulf with 88 PXE-booted
nodes, which use NFSROOT over 100 Mb Ethernet on the 'system' fabric
and Gb Ethernet for 'application' IPC.
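
As an illustration of what the separate 'system' fabric buys you, here
is a minimal Python sketch of the sort of check you can still run while
the 'application' network is flat out: poll every node over the
'system' addresses. The hostnames (node01..node88 on the 'system'
fabric) are only an assumption for the example; BOBCAT doesn't dictate
any particular naming scheme.

    #!/usr/bin/env python3
    # Hypothetical sketch: check that every node still answers on the
    # 'system' fabric, regardless of load on the 'application' fabric.
    # The node01..node88 names are assumed for illustration only.
    import subprocess

    SYSTEM_FABRIC = ["node%02d" % n for n in range(1, 89)]

    def reachable(host):
        """Send one ICMP echo with a 1-second timeout (Linux ping)."""
        return subprocess.call(["ping", "-c", "1", "-W", "1", host],
                               stdout=subprocess.DEVNULL,
                               stderr=subprocess.DEVNULL) == 0

    down = [h for h in SYSTEM_FABRIC if not reachable(h)]
    print("%d/%d nodes answering on the 'system' fabric"
          % (len(SYSTEM_FABRIC) - len(down), len(SYSTEM_FABRIC)))
    for host in down:
        print("  no answer: %s" % host)

Run it from the head node during a heavy job: if the split is doing its
job, the count stays at 88/88 even with the 'application' IPC network
saturated.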

        Tony.
--
Dr. A.J.Travis, University of Aberdeen, Rowett Institute of Nutrition
and Health, Greenburn Road, Bucksburn, Aberdeen AB21 9SB, Scotland, UK
tel +44(0)1224 712751, fax +44(0)1224 716687, http://www.rowett.ac.uk
mailto:[EMAIL PROTECTED], http://bioinformatics.rri.sari.ac.uk/~ajt
