Hi John,
For Singularity containers there isn't any need to integrate with the
scheduler, as the containers run as normal user programs. They differ from
Docker containers in that they don't have or need root to run. The cluster
itself does need to have Singularity installed, since it uses a setuid
binary to launch the container. They are a super convenient way of getting
around all the software dependency issues on our CentOS cluster.
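
To make that concrete, here is a rough Python sketch of what the batch
system effectively does: it just execs singularity as the submitting user,
like any other program. The image path below is made up, and I'm assuming
singularity is on the PATH:

    import getpass
    import subprocess

    # Hypothetical image path; any Singularity image will do.
    image = "/apps/containers/bioinformatics.img"

    # "singularity exec" runs the command inside the container as the
    # calling user -- no daemon, no root -- so the scheduler needs no
    # integration beyond launching it like a normal job step.
    subprocess.run(["singularity", "exec", image, "whoami"], check=True)

    # The container printed the same username the host sees here.
    print(getpass.getuser())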

Cheers,

Lance
--
Dr Lance Wilson
Senior HPC Consultant
Ph: 03 9905 5942 (+61 3 9905 5942)
Mobile: 0437 414 123 (+61 4 3741 4123)
Multi-modal Australian ScienceS Imaging and Visualisation Environment
(www.massive.org.au)
Monash University

On 17 June 2017 at 00:14, John Hearns <hear...@googlemail.com> wrote:

> Thanks Josh.  Am I familiar with modifying Python code and PBS hook
> scripts?
> Yes - I have had my head under the hood of PBS hooks for a long time.
> Hence the pronounced stutter and my predilection to randomly scream out
> loud in public places.
>
>
>
>
> On 16 June 2017 at 15:48, Josh Catana <jcat...@gmail.com> wrote:
>
>> I know they have a canned scheduler hook to run Docker. If you're
>> familiar with Python, modifying their code to run Singularity shouldn't be
>> difficult. I rewrote their hook to operate in my environment pretty easily.
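>>
>> For anyone curious, the guts of it look roughly like this (an untested
>> sketch, not my production hook: the "container_image" custom resource is
>> made up, and the attribute names follow Altair's hook guide for an
>> execjob_launch hook):
>>
>>     import pbs
>>
>>     e = pbs.event()
>>     job = e.job
>>
>>     # Hypothetical custom resource carrying the image path; you would
>>     # create it with qmgr before relying on it here.
>>     image = job.Resource_List["container_image"]
>>
>>     if image is not None:
>>         original = [str(a) for a in e.argv]
>>         # Re-point the launch at singularity and push the job's own
>>         # command line behind "singularity exec <image>". If argv can't
>>         # be reassigned wholesale on your version, set the elements
>>         # individually instead.
>>         e.progname = "/usr/bin/singularity"
>>         e.argv = ["singularity", "exec", str(image)] + original
>>
>>     e.accept()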
>>
>> On Jun 16, 2017 4:29 AM, "John Hearns" <hear...@googlemail.com> wrote:
>>
>>> Lance, thank you very much for the reply. I will look at Docker for those
>>> 'system' type tasks also.
>>>
>>> Regarding Singularity, does anyone know much about Singularity
>>> integration with PBS Pro?
>>> I guess I could actually ask Altair...
>>>
>>>
>>> On 16 June 2017 at 01:30, Lance Wilson <lance.wil...@monash.edu> wrote:
>>>
>>>> Hi John,
>>>> In regard to your Singularity question, we are using cgroups for the
>>>> containers. Mostly the containers are used in Slurm jobs, and Slurm
>>>> creates the appropriate cgroups. We are also using the GPU driver
>>>> passthrough functionality of Singularity now for our machine learning and
>>>> cryo-EM processing containers, which have the cgroups applied to the GPUs.
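>>>>
>>>> As a concrete example, inside a Slurm job step the GPU passthrough
>>>> boils down to something like this (the image path is made up; the
>>>> cgroups Slurm sets up for the job do the confinement, the script does
>>>> nothing extra):
>>>>
>>>>     import subprocess
>>>>
>>>>     # "--nv" binds the host NVIDIA driver and device files into the
>>>>     # container; the job's cgroups already limit which GPUs and how
>>>>     # much memory the process can see.
>>>>     subprocess.run(
>>>>         ["singularity", "exec", "--nv",
>>>>          "/scratch/containers/cryoem.img",  # hypothetical image
>>>>          "nvidia-smi"],
>>>>         check=True,
>>>>     )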
>>>>
>>>> Back to your system containers question: many of our systems have been
>>>> put into Docker containers, as they run on the same or a similar operating
>>>> system and still need root to function correctly. Pretty much every new
>>>> system thing we do is scripted and put into a container, so that we can
>>>> recover quickly in an outage scenario and move things around as part of
>>>> our larger cloud (private and public) strategy.
>>>>
>>>> Cheers,
>>>>
>>>> Lance
>>>> --
>>>> Dr Lance Wilson
>>>> Senior HPC Consultant
>>>> Ph: 03 9905 5942 (+61 3 9905 5942)
>>>> Mobile: 0437 414 123 (+61 4 3741 4123)
>>>> Multi-modal Australian ScienceS Imaging and Visualisation Environment
>>>> (www.massive.org.au)
>>>> Monash University
>>>>
>>>> On 15 June 2017 at 20:06, John Hearns <hear...@googlemail.com> wrote:
>>>>
>>>>> I'm not sure this post is going to make a lot of sense, but please
>>>>> bear with me!
>>>>> For applications, containers are possible using Singularity or Docker,
>>>>> of course.
>>>>>
>>>>> In HPC clusters we tend to have several 'service node' activities,
>>>>> such as the cluster management/head node, perhaps separate provisioning
>>>>> nodes to spread the load, batch queue system masters, monitoring setups,
>>>>> job submission nodes and dedicated storage nodes.
>>>>>
>>>>> These can all of course be run on a single cluster head node in a
>>>>> small setup (with the exception of the storage nodes). In a larger setup
>>>>> you can run these services in virtual machines.
>>>>>
>>>>> What I am asking is: is anyone using technologies such as LXD containers
>>>>> to run these services?
>>>>> I was inspired by an OpenStack talk by James Page at Canonical, where
>>>>> all the OpenStack services were deployed by Juju charms onto LXD
>>>>> containers.
>>>>> So we pack all the services into containers on physical server(s),
>>>>> which makes moving them around or re-deploying things very flexible.
>>>>> https://www.youtube.com/watch?v=5orzBITR3X8
>>>>>
>>>>> While I'm talking about containers, is anyone deploying Singularity
>>>>> containers in cgroups and limiting the resources they can use (I'm
>>>>> specifically thinking of RDMA here)?
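>>>>>
>>>>> To illustrate what I mean, something along these lines (a sketch only:
>>>>> cgroup v1, with memory as a stand-in since RDMA limits would need a
>>>>> newer or custom controller; all the names here are made up, and setting
>>>>> up the cgroup needs root):
>>>>>
>>>>>     import os
>>>>>     import subprocess
>>>>>
>>>>>     # Hypothetical cgroup; creating it and writing limits needs root.
>>>>>     cg = "/sys/fs/cgroup/memory/singularity_demo"
>>>>>     os.makedirs(cg, exist_ok=True)
>>>>>
>>>>>     # Cap the container at 4 GiB (cgroup v1 memory controller).
>>>>>     with open(os.path.join(cg, "memory.limit_in_bytes"), "w") as f:
>>>>>         f.write(str(4 * 1024 ** 3))
>>>>>
>>>>>     # Move this process into the cgroup; singularity and everything
>>>>>     # inside the container inherit the limit.
>>>>>     with open(os.path.join(cg, "tasks"), "w") as f:
>>>>>         f.write(str(os.getpid()))
>>>>>
>>>>>     subprocess.run(["singularity", "exec", "/tmp/demo.img", "true"],
>>>>>                    check=True)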
>>>>>
>>>>>
>>>>>
>>>>> PS: I have a terrible sense of déjà vu here... I think I asked the
>>>>> Singularity question a month ago.
>>>>> I plead insanity, m'lord.
>>>>>
>>>>>
>>>>>
>>>>
>>>
>
_______________________________________________
Beowulf mailing list, Beowulf@beowulf.org sponsored by Penguin Computing
To change your subscription (digest mode or unsubscribe) visit 
http://www.beowulf.org/mailman/listinfo/beowulf
