2009/2/17 Robert G. Brown <r...@phy.duke.edu>:
> Right now the short answer for a vanilla starter cluster is: Connect
> your systems via a switched network on a private IP subnet, setup a
> shared common account space and remote mounted (e.g. NFS) working
> directories from a shared server.
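(For concreteness, the 'remote mounted (e.g. NFS) working directories'
part might look like the lines below on a stock Linux install. The
subnet, server address, and path are made up for illustration - adjust
to whatever you actually use:

    # /etc/exports on the shared server, exporting /home to a
    # private subnet of 192.168.1.0/24:
    /home  192.168.1.0/24(rw,sync,no_subtree_check)

    # /etc/fstab entry on each compute node, with the server at
    # 192.168.1.1 on that subnet:
    192.168.1.1:/home  /home  nfs  defaults  0  0

After editing /etc/exports, run 'exportfs -ra' on the server; a
'mount -a' on each node should then pick up the share.)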
Just to amplify on what Bob says. To build a vanilla Beowulf, you need
to connect all eight of these boxes (for want of a better term) to an
Ethernet switch via patch cables. This is your own switch, not one
controlled and run by your campus network people (but see below). This
is the 'private network'.

Allocate one box as the 'cluster head node' or 'master node'. This
should have TWO network interfaces - one facing the campus network and
one facing the private switch. If it is a modern box, it will already
have two network sockets for those RJ45 patch cables on the back of it.
If not, you will have to get an additional network card and slot it in.

Actually, you could perfectly well run a computational cluster over the
campus-managed network. Just install the boxes with campus 'public' IP
addresses, and install the appropriate MPI libraries and scheduling
system. However, the advantage of a private network/subnet is that
(a) you control how your boxes boot up - i.e. reinstallation and
network booting - and (b) you have the full bandwidth of your private
switch for the communications between the nodes - both the parallel
messages the programs send and the storage traffic to the NFS server
on the head node.

PS: that reminds me - if any one box is more powerful, or has more disk
space, then that's the one to choose for the head node.

Now, once you have neatly piled up those boxes, and have done the mains
and patch cables neatly (think of the future - one day it is you who
has to fix a failed box! - get those cables tied in neat bunches),
email the list and we can help from there.
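For when you get that far: a quick sanity check is an MPI hello-world
that prints each rank's hostname. A minimal sketch, assuming mpicc and
mpirun from your MPI library of choice, plus a 'machines' hostfile
listing the nodes (both the file name and the run flags here are
illustrative):

    /* hello.c - each MPI process reports its rank and its host */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, size, len;
        char name[MPI_MAX_PROCESSOR_NAME];

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* this process's id   */
        MPI_Comm_size(MPI_COMM_WORLD, &size); /* total process count */
        MPI_Get_processor_name(name, &len);   /* node's hostname     */
        printf("Hello from rank %d of %d on %s\n", rank, size, name);
        MPI_Finalize();
        return 0;
    }

Compile with 'mpicc hello.c -o hello' and launch with something like
'mpirun -np 8 -machinefile machines ./hello' (the exact hostfile flag
differs between MPI implementations). Eight lines naming eight
different boxes means the private network and the MPI stack are both
doing their jobs.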