Re: [slurm-users] Problem with configuration CPU/GPU partitions

2020-03-03 Thread Pavel Vashchenkov
I've found that this question arose two years ago: https://bugs.schedmd.com/show_bug.cgi?id=4717 And it's still unsolved :( -- Pavel Vashchenkov 02.03.2020 17:28, Pavel Vashchenkov writes: > 28.02.2020 20:53, Renfro, Michael writes: >> When I made similar queues, and only wanted my GPU jobs to …

Re: [slurm-users] Problem with configuration CPU/GPU partitions

2020-03-02 Thread Pavel Vashchenkov
28.02.2020 20:53, Renfro, Michael writes: > When I made similar queues, and only wanted my GPU jobs to use up to 8 cores > per GPU, I set Cores=0-7 and 8-15 for each of the two GPU devices in > gres.conf. Have you tried reducing those values to Cores=0 and Cores=20? Yes, I've tried to do it. Unfo…

Re: [slurm-users] Problem with configuration CPU/GPU partitions

2020-02-28 Thread Renfro, Michael
When I made similar queues, and only wanted my GPU jobs to use up to 8 cores per GPU, I set Cores=0-7 and 8-15 for each of the two GPU devices in gres.conf. Have you tried reducing those values to Cores=0 and Cores=20? > On Feb 27, 2020, at 9:51 PM, Pavel Vashchenkov wrote: > > External Email
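
For context, a minimal gres.conf sketch of the two bindings discussed above; the node name and /dev paths are assumptions for illustration, not taken from the thread:

    # Original binding: 8 cores per GPU (assumed node name and device paths)
    NodeName=node01 Name=gpu File=/dev/nvidia0 Cores=0-7
    NodeName=node01 Name=gpu File=/dev/nvidia1 Cores=8-15
    # Suggested reduction: one core per GPU, one core on each socket
    NodeName=node01 Name=gpu File=/dev/nvidia0 Cores=0
    NodeName=node01 Name=gpu File=/dev/nvidia1 Cores=20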

[slurm-users] Problem with configuration CPU/GPU partitions

2020-02-27 Thread Pavel Vashchenkov
Hello, I have a hybrid cluster with 2 GPUs and two 20-core CPUs on each node. I created two partitions: - "cpu" for CPU-only jobs which are allowed to allocate up to 38 cores per node - "gpu" for GPU-only jobs which are allowed to allocate up to 2 GPUs and 2 CPU cores. Respective sections in slur…
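
The preview is cut off, but a hedged sketch of how two such partitions might look in slurm.conf, assuming 40 cores and 2 GPUs per node and using MaxCPUsPerNode to cap each partition's share (node names and counts are illustrative, not the poster's actual configuration):

    NodeName=node[01-04] Sockets=2 CoresPerSocket=20 ThreadsPerCore=1 Gres=gpu:2 State=UNKNOWN
    # CPU-only jobs: at most 38 cores per node, leaving 2 cores free for GPU jobs
    PartitionName=cpu Nodes=node[01-04] MaxCPUsPerNode=38 Default=YES State=UP
    # GPU jobs: at most 2 CPU cores per node (one per GPU)
    PartitionName=gpu Nodes=node[01-04] MaxCPUsPerNode=2 State=UP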