Hi John,

On 15/02/17 17:33, John Hanks wrote:

> So "clusters" is a strong word, we have a collection of ~22,000 cores
> of assorted systems, basically if someone leaves a laptop laying
> around unprotected we might try to run a job on it. And being
> bioinformatic-y, our problem with this and all storage is metadata
> related. The original procurement did not include dedicated NSD
> servers (or extra GPFS server licenses) so we run solely off the
> SFA12K's.

Ah right, so these are the embedded GPFS systems from DDN. Interesting,
as our SFA10K's hit EOL in 2019 and so (if our funding continues beyond
2018) we'll need to replace them.
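(As an aside, the metadata-bound behaviour you describe is easy to see
with even a toy test - the sketch below is a crude stand-in for mdtest,
timing pure create/stat/unlink rates. The target path and file count
are made up; point it at whatever filesystem you want to poke.)

  #!/usr/bin/env python3
  # Toy metadata stress test: times file create/stat/unlink rates,
  # the operations that dominate many-small-file bioinformatics
  # workloads. No data is written, so only the metadata path is hit.
  import os
  import time

  TARGET = "/scratch/mdtest-toy"   # hypothetical filesystem under test
  NFILES = 10000

  os.makedirs(TARGET, exist_ok=True)
  paths = [os.path.join(TARGET, "f%06d" % i) for i in range(NFILES)]

  def timed(label, fn):
      t0 = time.perf_counter()
      fn()
      rate = NFILES / (time.perf_counter() - t0)
      print("%s: %.0f ops/s" % (label, rate))

  timed("create", lambda: [open(p, "w").close() for p in paths])
  timed("stat",   lambda: [os.stat(p) for p in paths])
  timed("unlink", lambda: [os.unlink(p) for p in paths])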
> So "clusters" is a strong word, we have a collection of ~22,000 cores of > assorted systems, basically if someone leaves a laptop laying around > unprotected we might try to run a job on it. And being bioinformatic-y, > our problem with this and all storage is metadata related. The original > procurement did not include dedicated NSD servers (or extra GPFS server > licenses) so we run solely off the SFA12K's. Ah right, so these are the embedded GPFS systems from DDN. Interesting as our SFA10K's hit EOL in 2019 and so (if our funding continues beyond 2018) we'll need to replace them. > Could we improve with dedicated NSD frontends and GPFS clients? Yes, > most certainly. But again, we can stand up a PB or more of brand new > SuperMicro storage fronted by BeeGFS that performs as well or better > for around the same cost, if not less. Very nice - and for what you're doing it sounds like just what you need. > I don't have enough of an > emotional investment in GPFS or DDN to convince myself that suggesting > further tuning that requires money and time is worthwhile for our > environment. It more or less serves the purpose it was bought for, we > learn from the experience and move on down the road. I guess I'm getting my head around how other sites GPFS performs given I have a current sample size of 1 and that was spec'd out by IBM as part of a large overarching contract. :-) I guess I assuming that because that was what we had it was how most sites did it, apologies for that! All the best, Chris -- Christopher Samuel Senior Systems Administrator VLSCI - Victorian Life Sciences Computation Initiative Email: sam...@unimelb.edu.au Phone: +61 (0)3 903 55545 http://www.vlsci.org.au/ http://twitter.com/vlsci _______________________________________________ Beowulf mailing list, Beowulf@beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf