Hi,

I have defined a partition for each GPU type we have in the cluster, mainly 
because each GPU type lives on a different node type and I want to set 
`DefCpuPerGPU` and `DefMemPerGPU` for each of them. Unfortunately these can't 
be set per node, only per partition.
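
For context, a minimal sketch of the kind of setup I mean in `slurm.conf` 
(node names, GRES types, and the default values are placeholders, not my 
actual config):

```conf
# One partition per GPU type, so each can carry its own per-GPU defaults
NodeName=gpu-a[01-04] Gres=gpu:a100:4
NodeName=gpu-b[01-04] Gres=gpu:v100:4
PartitionName=A Nodes=gpu-a[01-04] DefCpuPerGPU=8 DefMemPerGPU=32000
PartitionName=B Nodes=gpu-b[01-04] DefCpuPerGPU=4 DefMemPerGPU=16000
```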

Now sometimes people don't care about the GPU type and would like any of the 
partitions to pick up the job. The `--partition` option in `sbatch` does allow 
specifying multiple partitions, and this works fine when I'm not specifying 
`--gpus`. However, when I do something like `sbatch -p A,B --gpus 1 
script.sh` I get "srun: job 6279 queued and waiting for resources" even 
though partition B does have a GPU to offer. Strangely, if the first partition 
specified (i.e. A) has a free GPU, the job is allocated the GPU and runs.
Is this a bug? Perhaps related to this: 
https://groups.google.com/g/slurm-users/c/UOUVfkajUBQ

-- 
slurm-users mailing list -- slurm-users@lists.schedmd.com
To unsubscribe send an email to slurm-users-le...@lists.schedmd.com
