Rahul Nabar wrote:
> I'm thinking of having multiple 10GigE uplinks between the switch and the NFS server. The actual storage is planned to reside on a box of SAS disks, approx. 15 disks. The NFS server is planned with at least two RAID cards with multiple SAS connections to the box.
ugh ... why design it before you know what it has to do? Why not take your requirements and needs and let those dictate the design?
> But that's just my planning. The question is: do people have numbers? What I/O throughputs are your NFS devices giving?
Depending upon workload, you can see anything from ~100 MB/s up to 1+ GB/s.
> I want to get a feel for what my I/O performance envelope should be like. What kind of I/O guarantees are available? Any vendors around want to comment?
You want a guarantee of I/O performance? For an arbitrary I/O pattern and load? So if you suddenly start random-seeking with 4 kB reads, you still want to hit 1+ GB/s on those 4 kB random reads?
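Back of the envelope on why that's hopeless (a sketch; the ~180 IOPS per 15k RPM SAS spindle and the assumption of 15 independently seeking spindles with no cache hits are mine, not numbers from your post):

    # Hedged estimate: small random reads vs. a streaming target.
    # Assumed: ~180 IOPS per 15k SAS spindle, 15 spindles, no cache hits.
    spindles = 15
    iops_per_spindle = 180
    read_size_kb = 4

    total_iops = spindles * iops_per_spindle          # ~2700 IOPS
    random_mb_s = total_iops * read_size_kb / 1024.0  # ~10.5 MB/s

    print("4 kB random-read ceiling: ~%.0f MB/s" % random_mb_s)
    print("fraction of a 1 GB/s streaming target: %.1f%%"
          % (100 * random_mb_s / 1024.0))

Two orders of magnitude below the streaming number, from the same hardware.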
Not sure anyone would be willing to guarantee a particular rate for an arbitrary workload. We have found well-known benchmark codes (bonnie++ 1.0x and some of the 1.9x series) doing not-so-good I/O (long OS-level pauses) where other codes seem fine.
We use our io-bm code, fio, and a few others to bang on our systems. fio lets us model per-unit workloads fairly nicely; io-bm lets us create a system/cluster-wide I/O hammer.
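As a sketch of what the per-unit modeling looks like, here is roughly how one could script a small fio block-size sweep (a minimal example, not our actual harness; it assumes fio is installed and the target file path, which I've made up, sits on the NFS mount under test):

    # Minimal sketch: sweep fio over block sizes against an NFS mount.
    # Assumes fio is on the PATH; the target path is hypothetical.
    import subprocess

    target = "/mnt/nfs/fiotest.dat"   # hypothetical file on the mount
    for bs in ("4k", "64k", "1m"):
        subprocess.check_call([
            "fio",
            "--name=sweep-%s" % bs,
            "--filename=%s" % target,
            "--rw=randread",   # swap in 'read'/'write' for streaming runs
            "--bs=%s" % bs,
            "--size=1g",
            "--direct=1",      # bypass the client page cache
            "--runtime=60",
            "--time_based",
            "--group_reporting",
        ])

Run the same sweep from one client and from many at once and you see both the per-unit and the aggregate behavior.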
> On the other hand, just multiplying NFS clients by their peak bandwidth (300 x 1 Gb) is overkill. That is a very unlikely situation.
Each 1 Gb interface can move about 120 MB/s best case, so 300 x 120 MB/s => 36,000 MB/s (36 GB/s). This is likely to be overkill: you report your highest I/O utilization is about 10% of CPU (we'd need to know what that translates to in MB/s; I'd suggest installing iftop on that machine and measuring while it is doing its 10% time in I/O).
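The same arithmetic in code (note the 10% figure is CPU time spent in I/O, so using it as a bandwidth duty cycle is only a rough proxy -- hence the suggestion to actually measure with iftop):

    # Back-of-envelope aggregate demand for the cluster.
    clients = 300
    per_link_mb_s = 120.0   # best case for one GbE link
    duty_cycle = 0.10       # rough proxy from the reported ~10% I/O time

    peak = clients * per_link_mb_s   # 36,000 MB/s -- the "overkill" number
    expected = peak * duty_cycle     # ~3,600 MB/s if the proxy holds

    print("theoretical peak : %6.0f MB/s" % peak)
    print("duty-cycle guess : %6.0f MB/s" % expected)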
> What are typical workloads like? Given x NFS mounts in a computational environment with a y Gb uplink each, what's the factor on the net loading of the central storage? Any back-of-the-envelope numbers?
In the distant past, we used 8 nodes per GbE port on the NFS server. This allowed us to serve up to 32 nodes with 4 GbE ports, and the NFS servers weren't badly loaded.
This ratio is a function of utilization of the links, the I/O duty cycle, etc.
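The old 8:1 rule falls out of the same kind of estimate; a sketch with the duty cycle as the free parameter (the 1/8 figure is assumed here to reproduce the ratio, not measured):

    # Nodes-per-port oversubscription estimate. Per-node average demand
    # is burst rate x duty cycle; 8:1 means each node averages ~1/8 of
    # a GbE link.
    port_mb_s = 120.0        # one GbE server port, best case
    node_burst_mb_s = 120.0  # each client can also fill its own GbE link
    duty_cycle = 0.125       # assumed: a node does I/O ~1/8 of the time

    nodes_per_port = port_mb_s / (node_burst_mb_s * duty_cycle)
    print("nodes per server port: %.0f" % nodes_per_port)   # -> 8

Plug in your own measured duty cycle and the right ratio for your cluster drops out.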
--
Joseph Landman, Ph.D
Founder and CEO
Scalable Informatics Inc.
email: land...@scalableinformatics.com
web  : http://scalableinformatics.com
       http://scalableinformatics.com/jackrabbit
phone: +1 734 786 8423 x121
fax  : +1 866 888 3112
cell : +1 734 612 4615