On 29/09/16 00:34, Mikhail Kuzminsky wrote:

> I have always worked with very small HPC clusters and built them
> manually (each server). But what is reasonable to do for clusters
> containing some tens or hundreds of nodes?

As Tim and Craig have mentioned, there are lots of ways to deal with
this. We use xCAT for our systems, but it does tend to assume that
you have decent hardware with IPMI adapters so it can do remote power
control, console logging, etc.
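
For anyone who hasn't used it, xCAT wraps that IPMI plumbing in
commands like rpower and rcons, and underneath it's the sort of thing
you could script yourself. A rough Python sketch of the power-control
side - the BMC hostname and credentials here are placeholders, not
anything from our setup, and it assumes ipmitool is installed:

    import subprocess

    def node_power(bmc_host, action, user="admin", password="secret"):
        """Issue an IPMI chassis power command to one node's BMC."""
        cmd = ["ipmitool", "-I", "lanplus",
               "-H", bmc_host, "-U", user, "-P", password,
               "chassis", "power", action]
        return subprocess.run(cmd, capture_output=True, text=True)

    # e.g. power-cycle a wedged compute node:
    result = node_power("node042-bmc", "cycle")
    print(result.stdout or result.stderr)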

I've got to say that, having helped out another group who went the
Puppet route for their cluster, I *really* miss those features, along
with xCAT's default of having clients syslog back to the management
node.
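
If you're building on something like Puppet instead, that last bit is
easy enough to replicate - in practice you'd just drop a one-line
rsyslog rule on each node pointing at the management host, but as a
toy Python illustration of the idea (the 'mgmt' hostname is made up):

    import logging
    import logging.handlers

    # Ship this node's log records to syslogd on the management
    # node ('mgmt' and 514/UDP are placeholders for your own setup).
    handler = logging.handlers.SysLogHandler(address=("mgmt", 514))
    log = logging.getLogger("compute-node")
    log.addHandler(handler)
    log.setLevel(logging.INFO)

    log.info("node booted")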

There's also OpenHPC now, which uses Warewulf for its cluster management.

We run diskless nodes, which means every compute node boots the same
osimage, so we know that all compute nodes on a given image are
identical.
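
A nice side effect is that "are the nodes really identical?" becomes
a check of one image identifier per node rather than an audit of
every disk. A minimal sketch, assuming passwordless ssh and a
hypothetical per-image stamp file at /etc/osimage-id:

    import subprocess

    NODES = ["node001", "node002", "node003"]  # placeholder names

    def image_id(node):
        # Read the image stamp over ssh; the path is hypothetical.
        out = subprocess.run(["ssh", node, "cat", "/etc/osimage-id"],
                             capture_output=True, text=True)
        return out.stdout.strip()

    ids = {node: image_id(node) for node in NODES}
    if len(set(ids.values())) == 1:
        print("all nodes on image:", ids[NODES[0]])
    else:
        print("mismatch:", ids)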

All the best!
Chris
-- 
 Christopher Samuel        Senior Systems Administrator
 VLSCI - Victorian Life Sciences Computation Initiative
 Email: sam...@unimelb.edu.au Phone: +61 (0)3 903 55545
 http://www.vlsci.org.au/      http://twitter.com/vlsci