Dear Mark and the List,

The head node has about a terabyte of RAID10 storage, with home directories and 
application directories NFS-mounted to the cluster.  I am still tuning 
NFS (16 daemons now) and, of course, the head node has a 1Gb link to my 
intranet for remote cluster access. 
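For anyone tuning along the same lines, here is roughly how I bump the nfsd
thread count on a Linux head node (a sketch; the persistent-config path is the
Red Hat-style one and may differ on your distribution):

```shell
# Raise the number of kernel NFS server threads to 16 at runtime:
rpc.nfsd 16

# To make it persistent on Red Hat-style systems, set in /etc/sysconfig/nfs:
#   RPCNFSDCOUNT=16

# Watch the "th" line to see how often all threads are busy:
grep th /proc/net/rpc/nfsd
```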

The 10Gb link to the switch uses CX4 cable.  It did not cost too much and 
I only needed two meters of it. 

10Gb is very nice and makes me lust for inexpensive low-latency 10Gb 
switches... but I'll wait for the marketplace to develop. 

Engineering calculations (Fluent, Abaqus) can fill the 10Gb link for 5 to 
15 minutes.  But that is better than the 20-40 minutes they used to take. I 
suspect they are checkpointing and restarting their MPI iterations. 
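A quick back-of-the-envelope on what those bursts imply, assuming the link is
actually saturated the whole time (a sketch, not a measurement):

```shell
# Bytes moved while a 10 Gb/s link runs at line rate.
# 10 Gb/s = 10e9 bits/s / 8 = 1.25 GB/s.
rate_bytes=$((10 * 1000 * 1000 * 1000 / 8))   # 1250000000 B/s
for minutes in 5 15; do
    echo "$minutes min at line rate ~= $((rate_bytes * minutes * 60 / 1000000000)) GB"
done
```

So a 5-15 minute burst is on the order of 375-1125 GB of checkpoint traffic,
which is why it stood out even on the 10Gb link.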
------
Sincerely,

   Tom Pierce
 



Mark Hahn <[EMAIL PROTECTED]> 
02/21/2007 09:48 AM

To: Thomas H Dr Pierce <[EMAIL PROTECTED]>
Cc: Beowulf Mailing List <beowulf@beowulf.org>
Subject: Re: [Beowulf] anyone using 10gbaseT?

> I have been using the MYRICOM 10Gb card in my NFS server (head node) for
> the Beowulf cluster. And it works well.  I have an inexpensive 3Com switch
> (3870) with 48 1Gb ports that has a 10Gb port in it and I connect the
> NFS server to that port. The switch does have small fans in it.

that sounds like a smart, strategic use.  cx4, I guess.  is the head 
node configured with a pretty hefty raid?  (not that saturating a single
Gb link is that hard...)

thanks, mark hahn.

_______________________________________________
Beowulf mailing list, Beowulf@beowulf.org
To change your subscription (digest mode or unsubscribe) visit 
http://www.beowulf.org/mailman/listinfo/beowulf