Hi Lance,

I am curious about how you encapsulate jobs in the right cgroups in Slurm. Could you please give us some details?

Concerning Docker as a login node (or other "service" nodes), how do you manage its deployment? Basically, with a registry and pulls, or with Swarm/Mesos/Kubernetes?
@John: I know there is a tight integration of Docker in HTCondor (see the Docker HTCondor universe); it could be modified easily to submit Singularity jobs (I know some people are doing this). On our clusters we use Singularity like any other application, with environment modules + SGE (without cgroups); a sketch of that setup is below.

Concerning LXD, we used to deploy some containers of that type to external machines. These containers were then connected to the rest of the cluster (an easy way to make the cluster bigger). However, even if it works well in an experimental environment, it did not give us full satisfaction in production (problems with isolating the container from the host).

For the service nodes, you can also look at Proxmox to create LXC containers and manage them more easily.
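A minimal sketch of the "Singularity as an ordinary application" pattern under SGE; the module name, parallel environment, image path and command are hypothetical, not our actual setup:

    #!/bin/bash
    #$ -N sing_job
    #$ -cwd
    #$ -pe smp 4

    # Singularity is provided like any other application, via modules
    module load singularity/2.3

    # Run the workload inside the container image
    singularity exec /apps/images/myapp.img myapp --input data.txt

No cgroup confinement is involved here; the job is only constrained by whatever SGE itself enforces.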
Best regards,
Remy


Sent from my Samsung device

-------- Original message --------
From: Lance Wilson <lance.wil...@monash.edu>
Date: 16/06/2017 01:30 (GMT+01:00)
To: John Hearns <hear...@googlemail.com>
Cc: Beowulf Mailing List <beowulf@beowulf.org>
Subject: Re: [Beowulf] LXD containers for cluster services and cgroups?

Hi John,

In regards to your Singularity question, we are using cgroups for the containers. Mostly the containers are used in Slurm jobs, which creates the appropriate cgroups. We are also using the GPU driver passthrough functionality of Singularity now for our machine learning and cryo-EM processing containers, with the cgroups applied to the GPUs; a sketch of what this can look like is below.
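A minimal sketch of this kind of setup, assuming Slurm's task/cgroup plugin; the options and paths are illustrative, not Monash's actual configuration:

    # /etc/slurm/cgroup.conf (requires TaskPlugin=task/cgroup in slurm.conf)
    ConstrainCores=yes
    ConstrainRAMSpace=yes
    ConstrainDevices=yes   # jobs only see the GPUs they were allocated

A job script then requests GPUs as usual, and Singularity's --nv flag binds the host NVIDIA driver into the container:

    #!/bin/bash
    #SBATCH --gres=gpu:1
    #SBATCH --mem=16G

    # Hypothetical cryo-EM image and command
    singularity exec --nv /images/cryoem.img relion_refine --help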
Back to your systems containers question: many of our services have been put into Docker containers, as they run on the same or a similar operating system and still need root to function correctly. Pretty much every new system component we build is scripted and put into a container, so that we can recover quickly in an outage scenario and move things around as part of our larger cloud (private and public) strategy; the sketch below shows the shape of this.
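As a hedged illustration of that pattern (the service name, install script and port are hypothetical), packaging an existing install script into an image makes redeployment a one-liner:

    # Dockerfile for a hypothetical service-node component
    FROM centos:7
    COPY install_service.sh /tmp/
    RUN sh /tmp/install_service.sh   # the same script used on bare metal
    EXPOSE 8080
    CMD ["/usr/sbin/myserviced", "--foreground"]

    # Build once; in an outage, re-run on any Docker host
    docker build -t cluster/myservice .
    docker run -d --restart=always --name myservice -p 8080:8080 cluster/myservice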
Cheers,

Lance
--
Dr Lance Wilson
Senior HPC ConsultantPh: 03 99055942 (+61 3 99055942Mobile: 0437414123 (+61 4 
3741 4123)Multi-modal Australian ScienceS Imaging and Visualisation Environment
(www.massive.org.au)
Monash University


On 15 June 2017 at 20:06, John Hearns <hear...@googlemail.com> wrote:
I'm not sure this post is going to make a lot of sense, but please bear with me!

For applications, containers are possible using Singularity or Docker, of course.
In HPC clusters we tend to have several 'service node' activities, such as the cluster management/head node, perhaps separate provisioning nodes to spread the load, batch queue system masters, monitoring setups, job submission nodes and dedicated storage nodes.
These can all of course be run on a single cluster head node in a small setup (with the exception of the storage nodes). In a larger setup you can run these services in virtual machines.
What I am asking is: is anyone using technologies such as LXD containers to run these services? I was inspired by an OpenStack talk by James Page at Canonical, where all the OpenStack services were deployed by Juju charms onto LXD containers: https://www.youtube.com/watch?v=5orzBITR3X8

So we pack all the services into containers on physical server(s), which makes moving them or re-deploying things very flexible; something like the sketch below.
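A minimal sketch of what that could look like with plain LXD (container and package names are hypothetical):

    # Launch a system container for one cluster service
    lxc launch ubuntu:16.04 monitoring
    lxc exec monitoring -- apt-get install -y ganglia-monitor

    # Snapshot it; to relocate, stop and move to another physical host
    lxc snapshot monitoring pre-upgrade
    lxc stop monitoring
    lxc move monitoring otherhost:monitoring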
While I'm talking about containers, is anyone deploying Singularity containers in cgroups and limiting the resources they can use? (I'm specifically thinking of RDMA here.)
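For what it's worth, a minimal sketch of one direction this could take, assuming the rdma cgroup controller that landed in Linux 4.11 (device name and limits are purely illustrative):

    # Create a cgroup with an RDMA handle/object limit
    mkdir /sys/fs/cgroup/rdma/singjob
    echo "mlx5_0 hca_handle=4 hca_object=1000" > /sys/fs/cgroup/rdma/singjob/rdma.max

    # Put the shell in the cgroup, then launch the container from it
    echo $$ > /sys/fs/cgroup/rdma/singjob/cgroup.procs
    singularity exec /images/mpi_app.img mpirun ./solver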


ps. I have a terrible sense of deja vu here... I think I asked the Singularity question a month ago. I plead insanity, m'lord.



_______________________________________________
Beowulf mailing list, Beowulf@beowulf.org sponsored by Penguin Computing
To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf