On Mon, Sep 24, 2018 at 3:53 PM "Eli V" wrote:
>I'm not using the :no_consume syntax, simply Gres=name:#,y:z,...
>Of course after changes copy gres & slurm.conf to all nodes and scontrol 
>reconfigure works great for me.

We are using ":no_consume" because we don't care how jobs use or share the GPU 
memory/cores; we only want to be able to specify minimum counts for them, as in 
"sbatch --gres=gpu:2,gpu_mem:6G [...]"

Also, we have a node-specific gres.conf on each node that has generic 
resources, but I think that's OK.

The end goal is to be able to specify minimum resource levels on GPU nodes, 
which are GPU memory and CUDA cores, and have the scheduler select only 
appropriate nodes for the job.
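For reference, a minimal sketch of what we have in mind (node names, counts, and 
the gpu_mem/gpu_cores GRES names are just examples from our setup, not something 
Slurm defines):

```
# slurm.conf (excerpt) -- declare the GRES types and per-node counts;
# "no_consume" marks a resource that is checked but never deducted
GresTypes=gpu,gpu_mem,gpu_cores
NodeName=gpunode01 Gres=gpu:2,gpu_mem:no_consume:16G,gpu_cores:no_consume:7168

# gres.conf on gpunode01 (node-specific, as mentioned above)
Name=gpu File=/dev/nvidia[0-1]
Name=gpu_mem Count=16G
Name=gpu_cores Count=7168
```

A job then requests minimum levels with, e.g., "sbatch --gres=gpu:1,gpu_mem:6G job.sh", 
and the scheduler should only consider nodes whose declared gpu_mem meets the request.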
