John Hearns wrote:
> On Mon, 2008-05-19 at 18:42 -0400, Mark Hahn wrote:
>
>>> 1. Is having 10 GbE and Infiniband in the same cluster overkill, or at
>>> least unorthodox? This cluster will be used by a variety of users
>>>
>> I would say so - if you've got IB, why add another interface?
>> I'm not suggesting getting rid of gigabit, since its cost is
>> near-zero and ethernet _is_ the network. OTOH, if there were a
>> form of ethernet that competed with IB in price/latency/bandwidth,
>> there would be no reason to go IB.
>>
> I back up what Mark says.
>
> If you want to use 10Gig, put a 10Gig interface in your NFS server and
> choose a switch with a suitable 10Gig port (or ports) plus 1gig ports.
> Use the existing onboard 1gig interfaces on your nodes for the NFS
> traffic.
> You don't say what size, in number of compute nodes, your cluster will
> be. It is very simplistic to think of simply throwing bandwidth at an
> NFS solution and thinking this will solve everything - it won't.
> Do you REALLY expect to saturate 1gig interfaces?
>

The NFS server will be a NetApp product with a 10 Gb interface, and our
switches will also have 10 GbE ports. The cluster will be 48-64 nodes.

No, I do not expect to saturate the 1 Gb interfaces. I was not the one
who spec'd the system, but I will be responsible for configuring and
supporting it.
Prentice
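One way to answer John's "do you REALLY expect to saturate 1gig interfaces?" question empirically is to watch the NIC counters on a few compute nodes while a typical NFS-heavy job runs. Below is a minimal sketch that samples /proc/net/dev on Linux; the interface name "eth0" is a placeholder for whichever onboard 1 Gb interface carries the NFS traffic, and the 5-second sampling interval is arbitrary.

#!/usr/bin/env python
# Rough per-interface throughput check on a compute node, to see whether
# the onboard 1 Gb NIC is anywhere near saturation during an NFS-heavy job.
import time

def read_counters(iface):
    """Return (rx_bytes, tx_bytes) for iface from /proc/net/dev."""
    with open("/proc/net/dev") as f:
        for line in f:
            if ":" not in line:
                continue  # skip the two header lines
            name, data = line.split(":", 1)
            if name.strip() == iface:
                fields = data.split()
                # field 0 = received bytes, field 8 = transmitted bytes
                return int(fields[0]), int(fields[8])
    raise ValueError("interface %s not found" % iface)

def sample(iface="eth0", interval=5.0):
    rx0, tx0 = read_counters(iface)
    time.sleep(interval)
    rx1, tx1 = read_counters(iface)
    rx_mbit = (rx1 - rx0) * 8 / interval / 1e6
    tx_mbit = (tx1 - tx0) * 8 / interval / 1e6
    print("%s: rx %.0f Mbit/s, tx %.0f Mbit/s (1 GbE line rate ~1000 Mbit/s)"
          % (iface, rx_mbit, tx_mbit))

if __name__ == "__main__":
    sample()

Note also the fan-in arithmetic: with 48-64 nodes each on 1 Gb links but a single 10 Gb link on the NetApp, the server-side link saturates long before the node NICs do, so aggregate NFS throughput is bounded at roughly 10 Gb/s regardless of what the nodes could push individually.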
