Ah, thanks so much.  I'm still a slurm newbie and I've barely used srun.  I'm 
not sure how long it would have taken me to find and understand those 
parameters from the docs.  Thanks!

________________________________
From: slurm-users <slurm-users-boun...@lists.schedmd.com> on behalf of Jeffrey 
T Frey <f...@udel.edu>
Sent: Wednesday, February 8, 2023 10:01 AM
To: Slurm User Community List <slurm-users@lists.schedmd.com>
Subject: Re: [slurm-users] slurm and singularity

You may need srun to allocate a pty for the command.  The 
InteractiveStepOptions we use (that are handed to srun when no explicit command 
is given to salloc) are:


--interactive --pty --export=TERM
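
For reference, a sketch of how that can be configured site-wide in slurm.conf. The parameter names are standard Slurm (20.11 or later, when the interactive step was introduced); the option values are just our local choice, not anything required:

# slurm.conf (illustrative values; adjust to your site)
LaunchParameters=use_interactive_step
InteractiveStepOptions="--interactive --pty --export=TERM"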


E.g., without those flags, a bare srun gives a promptless session:


[(it_nss:frey)@login00.darwin ~]$ salloc -p idle srun /opt/shared/singularity/3.10.0/bin/singularity shell /opt/shared/singularity/prebuilt/postgresql/13.2.simg
salloc: Granted job allocation 3953722
salloc: Waiting for resource configuration
salloc: Nodes r1n00 are ready for job
ls -l
total 437343
-rw-r--r--  1 frey it_nss      180419 Oct 26 16:56 amd.cache
-rw-r--r--  1 frey it_nss          72 Oct 26 16:52 amd.conf
-rw-r--r--  1 frey everyone       715 Nov 12 23:39 anaconda-activate.sh
drwxr-xr-x  2 frey everyone         4 Apr 11  2022 bin
   :


With the --pty flag added:


[(it_nss:frey)@login00.darwin ~]$ salloc -p idle srun --pty /opt/shared/singularity/3.10.0/bin/singularity shell /opt/shared/singularity/prebuilt/postgresql/13.2.simg
salloc: Granted job allocation 3953723
salloc: Waiting for resource configuration
salloc: Nodes r1n00 are ready for job
Singularity>
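
Since those InteractiveStepOptions take effect when salloc is given no command at all, a bare salloc should also work, dropping you onto the allocated node where you can launch the container yourself (a sketch, assuming your site enables the interactive step as shown above):

salloc -p idle
# then, at the shell prompt on the allocated compute node:
singularity shell /opt/shared/singularity/prebuilt/postgresql/13.2.simg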



On Feb 8, 2023, at 09:47, Groner, Rob <rug...@psu.edu> wrote:

I tried that, and it says the nodes have been allocated, but it never gets to an apptainer prompt.

I then tried doing them in separate steps. Doing salloc works: I get a prompt on the node that was allocated. I can then run "singularity shell <sif>" and get the apptainer prompt. But if I prefix that command with "srun", it just hangs and I never get the prompt. So that seems to be the sticking point. I'll have to do some experiments running singularity with srun.

From: slurm-users <slurm-users-boun...@lists.schedmd.com> on behalf of Jeffrey T Frey <f...@udel.edu>
Sent: Tuesday, February 7, 2023 6:16 PM
To: Slurm User Community List <slurm-users@lists.schedmd.com>
Subject: Re: [slurm-users] slurm and singularity

The remaining issue then is how to put them into an allocation that is actually running a singularity container. I don't get how what I'm doing now results in an allocation where the container I'm in is still running on the submit node!

Try prefixing the singularity command with "srun", e.g.:


salloc <salloc-parameters> srun <srun-parameters> /usr/bin/singularity shell <path to sif>
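
For instance, a filled-in version using the --pty srun flag discussed earlier in the thread; the resource options and image path here are placeholders, purely illustrative:

salloc -N 1 -n 1 srun --pty /usr/bin/singularity shell /opt/images/mycontainer.sif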
