Okay, thanks for the hint that I should use cgroups.
With cgroups the behaviour is as expected.
:)
Max
On 10/10/20 18:53, Renfro, Michael wrote:
> * Do you want to ensure that one job requesting 9 tasks (and 1 CPU per
> task) can’t overstep its reservation and take resources away from
> other jobs on those nodes? Cgroups [1] should be able to confine the
> job to its 9 CPUs, and
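
A minimal sketch of the cgroup-related settings this refers to, assuming slurm.conf and cgroup.conf are shared by both nodes (the values are illustrative, not taken from Max's actual configuration):

# slurm.conf (excerpt): track processes and bind tasks via cgroups
ProctrackType=proctrack/cgroup
TaskPlugin=task/affinity,task/cgroup

# cgroup.conf: confine each job to its allocated cores, memory and GPUs
ConstrainCores=yes
ConstrainRAMSpace=yes
ConstrainDevices=yes

# slurmctld and the slurmd on each node need a restart after changing these.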
Date: October 10, 2020 at 6:06 AM
Subject: [slurm-users] sbatch overallocation
Hi;
You can submit each pimpleFoam run as a separate job, or, if you really
want to submit them as a single job, you can use a tool such as GNU
parallel to run as many of them at a time as you have CPUs:
https://www.gnu.org/software/parallel/
regards;
Ahmet M.
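
For illustration only (the "case_*" directories are assumptions, and the 9-task request is taken from this thread rather than Max's actual setup), a single sbatch script along those lines could look like:

#!/bin/bash
#SBATCH --job-name=pimplefoam
#SBATCH --nodes=1
#SBATCH --ntasks=9
#SBATCH --cpus-per-task=1

# Run one serial pimpleFoam per allocated CPU; GNU parallel keeps at most
# $SLURM_NTASKS cases running at any one time.
parallel -j "$SLURM_NTASKS" 'cd {} && pimpleFoam > log.pimpleFoam 2>&1' ::: case_*

With cgroup confinement enabled, everything started inside the job (including the processes launched by parallel) stays on the job's allocated cores.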
On 10.10.2020 14:05, Max Quast wrote:
Dear slurm-users,
I built a slurm system consisting of two nodes (Ubuntu 20.04.1, slurm
20.02.5):
# COMPUTE NODES
GresTypes=gpu
NodeName=lsm[216-217] Gres=gpu:tesla:1 CPUs=64
RealMemory=192073 Sockets=2 CoresPerSocket=16 ThreadsPerCore=2 St