Hi Abhiram,
Glad to help, but it turns out I was wrong :-)
We also didn't have ConstrainDevices=yes set, so nvidia-smi always
showed all the GPUs.
Thanks to Ryan and Samuel for putting me straight on that.
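In case it is useful to anyone else, the relevant settings are roughly
the following (a minimal sketch rather than our exact files, so adapt
it to your own site):

    # slurm.conf
    ProctrackType=proctrack/cgroup
    TaskPlugin=task/cgroup

    # cgroup.conf
    ConstrainDevices=yes

With ConstrainDevices=yes the cgroup device controller limits a job to
the GPUs it was actually allocated, so nvidia-smi run inside the job
only shows those devices.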
Regards
Loris
Abhiram Chintangal writes:
Loris,
You are correct! Instead of using nvidia-smi as a check, I confirmed the
GPU allocation by printing the environment variable
CUDA_VISIBLE_DEVICES, and it was as expected.
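The check was along these lines (just a sketch, with the p100 type from
our nodes; adjust the options to your setup):

    srun --gres=gpu:p100:2 -n 1 bash -c 'echo $CUDA_VISIBLE_DEVICES'

For a two-GPU request, CUDA_VISIBLE_DEVICES should contain exactly two
device indices (e.g. 0,1) rather than all the GPUs on the node.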
Thanks for your help!
On Thu, Jan 14, 2021 at 12:18 AM Loris Bennett wrote:
Hi Abhiram,
Abhiram Chintangal writes:
Hello,
I recently set up a small cluster at work using Warewulf/Slurm.
Currently, I am not able to get the scheduler to work well with GPUs
(Gres).
While Slurm is able to filter by GPU type, it allocates all the GPUs on
the node. See below:
[abhiram@whale ~]$ srun --gres=gpu:p100:2 -n 1 --part
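For context, the GPU-related parts of the configuration look roughly
like this (a sketch with a made-up node name and device count, not the
exact files):

    # slurm.conf
    GresTypes=gpu
    NodeName=node01 Gres=gpu:p100:4 ...

    # gres.conf on the GPU node
    Name=gpu Type=p100 File=/dev/nvidia[0-3]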