Any number of approaches will work.  When I used to do this years ago (I've 
long since moved on from the technical side) I'd PXE boot, partition the hard 
disk, and set up a provisioning network and base OS install using the Debian 
FAI (Fully Automated Install) system, and then use cfengine to configure the 
machine once it had come up in that minimal state.  This approach was used 
across the board for all of our Linux boxes, from Linux desktops to database 
servers to HPC compute nodes.
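
To make the FAI step concrete, here is a rough sketch of the kind of disk 
layout description FAI's setup-storage consumes.  The class name, mount 
points, and sizes are invented for illustration, and the exact size syntax 
varies a bit between FAI versions, so treat this as a shape rather than a 
recipe:

```
# disk_config/COMPUTE -- hypothetical FAI class for a compute node
disk_config disk1 disklabel:msdos bootable:1

# mountpoint  size        fs    mount options
primary /     20GiB       ext4  rw
primary swap  4GiB        swap  sw
primary /srv  10GiB-      ext4  rw
```

FAI picks the file via the class assigned to the booting host, so one tree of 
these files can drive desktops, database servers, and compute nodes alike.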

These days the team uses tools like cobbler and ansible to achieve the same 
thing.  There are lots of ways to do it, but the principle is the same.
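
As a flavour of the modern equivalent, once cobbler (or anything else) has got 
a minimal OS onto the node, the post-install configuration step is typically a 
playbook along these lines.  The host group and package names here are made 
up for illustration, not taken from any real site config:

```yaml
# site.yml -- minimal illustrative playbook (hypothetical group/packages)
- hosts: compute_nodes
  become: true
  tasks:
    - name: Install base HPC packages
      package:
        name:
          - munge
          - slurm
        state: present

    - name: Keep clocks in sync across the cluster
      service:
        name: chronyd
        state: started
        enabled: true
```

The principle is the same as FAI plus cfengine: a dumb, repeatable base 
install, then declarative configuration layered on top.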

Tim

--
Head of Scientific Computing
Wellcome Trust Sanger Institute

On 28/09/2016, 15:34, "Beowulf on behalf of Mikhail Kuzminsky" 
<beowulf-boun...@beowulf.org on behalf of mikk...@mail.ru> wrote:

I have always worked with very small HPC clusters and built them manually 
(each server).
But what is it reasonable to do for clusters containing some tens or hundreds 
of nodes?
Say, with modern Xeon (or Xeon Phi KNL) and IB EDR, during the next year for 
example.
There are some automatic systems like OSCAR or even ROCKS.

But it looks like ROCKS doesn't support modern interconnects, and there may be 
problems with OSCAR versions supporting systemd-based distributions like 
CentOS 7.  For next year, is it reasonable to wait for a new OSCAR version, or 
for something else?

Mikhail Kuzminsky,
Zelinsky Institute of Organic Chemistry RAS,
Moscow





-- 
 The Wellcome Trust Sanger Institute is operated by Genome Research 
 Limited, a charity registered in England with number 1021457 and a 
 company registered in England with number 2742969, whose registered 
 office is 215 Euston Road, London, NW1 2BE.
_______________________________________________
Beowulf mailing list, Beowulf@beowulf.org sponsored by Penguin Computing
To change your subscription (digest mode or unsubscribe) visit 
http://www.beowulf.org/mailman/listinfo/beowulf