Re: [slurm-users] salloc with bash scripts problem

2019-01-06 Thread Chris Samuel
On 3/1/19 12:23 am, Mahmood Naderan wrote:
> [mahmood@rocks7 ~]$ srun --spankx11 ./run_qemu.sh
srun doesn't look inside whatever you pass to it, as it can be a binary; that's why the directives are called #SBATCH, because only sbatch will look at those. So you need to give srun those same arguments on the command line.
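To illustrate the point above, a minimal sketch (the --nodes/--ntasks values are illustrative, not from the original script): directives in a script are only read by sbatch, so an interactive srun must repeat them explicitly.

```shell
# In a batch script, sbatch parses these directives:
#SBATCH --nodes=1
#SBATCH --ntasks=4

# srun never reads #SBATCH lines (the argument could be any
# binary), so an interactive run must pass the same options
# explicitly on the command line:
srun --nodes=1 --ntasks=4 --spankx11 ./run_qemu.sh
```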

Re: [slurm-users] gres with docker problem

2019-01-06 Thread Chris Samuel
On 6/1/19 8:26 pm, 허웅 wrote:
> I could find out the reason.
That's really good that you not only figured it out but posted the solution too!

-- 
Chris Samuel : http://www.csamuel.org/ : Berkeley, CA, USA

Re: [slurm-users] gres with docker problem

2019-01-06 Thread 허웅
I agree with Chris's opinion. I found out the reason. As Chris said, the problem is cgroup. When I submit a job to Slurm that uses one gres:gpu, Slurm assigns the job to a node that has enough resources. When Slurm assigns a job to a node, it gives the resource information to the node …
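The cgroup restriction described above can be inspected from inside a job step. A hedged sketch, assuming a cgroup v1 setup with Slurm's task/cgroup and device constraint enabled (the exact cgroup path layout varies with Slurm version and configuration):

```shell
# Request one GPU; with ConstrainDevices=yes, Slurm's cgroup
# plugin limits which /dev/nvidia* nodes the step may open.
srun --gres=gpu:1 bash -c '
  # Slurm exports the GPU(s) it granted:
  echo "CUDA_VISIBLE_DEVICES=$CUDA_VISIBLE_DEVICES"
  # Under cgroup v1, the devices whitelist for this step
  # (path is illustrative and version-dependent):
  cat /sys/fs/cgroup/devices/slurm/uid_$(id -u)/job_*/devices.list
'
```

A container started on the node outside of this cgroup will not inherit the restriction, which is the mismatch being discussed in this thread.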

Re: [slurm-users] gres with docker problem

2019-01-06 Thread Chris Samuel
On 4/1/19 5:48 am, Marcin Stolarek wrote:
> I think that the main reason is the lack of access to some /dev "files" in your docker container. For Singularity the nvidia plugin is required; maybe there is something similar for docker...
That's unlikely, the problem isn't that nvidia-smi isn't working …
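For reference on the suggestion being discussed: Docker's rough analogue of Singularity's `--nv` is either the NVIDIA container runtime or passing the device nodes in by hand. A hedged sketch (image names are placeholders, not from the thread):

```shell
# Modern Docker (>= 19.03) with the NVIDIA container toolkit:
docker run --rm --gpus all nvidia/cuda:12.0-base nvidia-smi

# Older setups without --gpus map the /dev "files" explicitly:
docker run --rm \
  --device /dev/nvidia0 \
  --device /dev/nvidiactl \
  --device /dev/nvidia-uvm \
  my-cuda-image nvidia-smi
```

Note that neither approach makes the container respect Slurm's per-job cgroup device limits, which is the distinction Chris draws above.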

Re: [slurm-users] Fwd: Using srun ends ssh sessions

2019-01-06 Thread Chris Samuel
On 5/1/19 12:17 am, Tom Smith wrote:
> Novice question: When I use srun, it closes my SSH sessions to compute nodes. Is this intended behaviour by design? If so, I may need to know more about how Slurm is intended to be used. If unexpected, how do I start troubleshooting?
Sounds like …
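The reply above is truncated in the archive, so the actual diagnosis is not preserved. For context only: the usual Slurm workflow avoids SSH-ing to compute nodes at all, and instead runs interactive work inside an allocation. A hedged sketch of that common pattern (not necessarily the answer given in this thread):

```shell
# Request an allocation from a login node, then start an
# interactive shell on the allocated compute node via Slurm
# itself, so the session lives inside the job:
salloc --nodes=1 --time=01:00:00
srun --pty bash -l
```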