On 15/02/17 17:03, John Hanks wrote:

> When we were looking at a possible GPFS client license purchase we ran
> the client on our nodes and did some basic testing. The client did give
> us a bit of a boost in performance over NFS, but still we could tip GPFS
> over with a small fraction of our available nodes.
Wow, that's odd! How large are your clusters? We were hitting ours with
2 Intel clusters (1,000+ cores each) and 4 racks of BlueGene/Q (65,536
cores, 4096 nodes). However, we do have our GPFS metadata on an SSD
array connected to 2 dedicated NSD servers (active/active), and our
SFA10Ks are each fronted by 4 NSD servers (again in active/active pairs
to give redundancy).

cheers,
Chris
--
Christopher Samuel
Senior Systems Administrator
VLSCI - Victorian Life Sciences Computation Initiative
Email: sam...@unimelb.edu.au
Phone: +61 (0)3 903 55545
http://www.vlsci.org.au/
http://twitter.com/vlsci
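To make that layout concrete: in GPFS the metadata/data split and the
server pairings are declared in an NSD stanza file passed to mmcrnsd.
Below is a minimal sketch along those lines; every device path, NSD
name, and hostname is invented for illustration and is not the actual
VLSCI configuration:

  # Metadata NSDs: SSD LUNs served by a dedicated active/active pair.
  # NSDs marked metadataOnly must live in the system pool.
  %nsd:
    device=/dev/mapper/ssd_lun0
    nsd=meta01
    servers=nsdmeta1,nsdmeta2
    usage=metadataOnly
    failureGroup=1
    pool=system

  # Data NSDs: SFA10K LUNs behind four NSD servers. GPFS treats the
  # servers= list as an ordered preference, so rotating the order
  # across LUNs spreads the I/O load over the pairs.
  %nsd:
    device=/dev/mapper/sfa_lun0
    nsd=data01
    servers=nsd1,nsd2,nsd3,nsd4
    usage=dataOnly
    failureGroup=2
    pool=data

You would then run something like "mmcrnsd -F nsd.stanzas" to create
the NSDs, followed by "mmcrfs gpfs0 -F nsd.stanzas -T /gpfs" to build
the filesystem on them. Keeping the metadata NSDs on their own SSD
LUNs behind their own server pair is what stops heavy metadata churn
(stat storms from thousands of clients) from queueing behind bulk
data I/O on the SFA controllers.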