We are pleased to announce the availability of Slurm release candidate
version 21.08.0rc1.
This is the first release candidate of the upcoming 21.08 release
series. It marks the end of development for this release cycle and
finalizes the RPC and state file formats.
Slurm doesn't seem to have a way to pass Matching-OR constraints between
different components of a heterogeneous job. Consequently, I can't set up a
heterogeneous job that constrains the components to all have the same feature
from a set of features, unless I specify the feature directly. In other
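For context, the Matching-OR syntax the poster refers to uses square brackets, and heterogeneous components are separated by " : " on the sbatch command line. A minimal sketch of both, with hypothetical feature names rack1/rack2, illustrating that the bracket choice is resolved per component rather than job-wide:

    # Matching OR for a regular job: every allocated node gets the
    # same single feature chosen from the set
    sbatch --constraint="[rack1|rack2]" job.sh

    # Heterogeneous job: each component carries its own --constraint,
    # so the [rack1|rack2] choice is made independently per component
    sbatch -n4 --constraint="[rack1|rack2]" : -n2 --constraint="[rack1|rack2]" job.sh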
Hi Carsten,

thank you very much for pointing me in the right direction. I think this is what I'm looking for and I will try it out.

Best regards,
Peter

On 29.07.2021, 15:13, "Carsten Beyer" wrote:
Hi Peter,
you could create a reservation with scontrol and put only the root user
or any other testuser(s) in the 'users' section, e.g.
scontrol create reservation=test nodes=<nodelist> starttime=<time> duration=<duration> users=root
Then you need to add the reservation name to your sbatch definition or
command line.
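A concrete version of the above, using the fully spelled-out form from the scontrol documentation; the reservation name, node list, and times here are made-up placeholders:

    # Reserve two nodes for user root for two hours, starting now
    scontrol create reservation ReservationName=test \
        Nodes=node[01-02] StartTime=now Duration=02:00:00 Users=root

    # Submit a test job into the reservation
    sbatch --reservation=test job.sh

The --reservation option can equally go on an #SBATCH line inside the batch script.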
Hello everyone,

I have a Slurm GPU cluster that I'm administering, and from time to time I need to run test jobs. The issue is that my users allocate all GPUs as soon as they become available, which makes testing impossible for me. I could drain a node and wait until all jobs are finished, but as s
Hello
I have two logs: one in /var/log/slurm/log
and another in /var/log/slurmctld.log.
For the second one, I had to add the restart to the lines you gave me.
For /var/log/slurm/log, do I have to write the same lines?
Thank you
Felix
On 7/27/2021 3:55 PM, Sean Crosby wrote:
/var/log/slurm/
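Sean's original lines are cut off above, but the exchange is presumably about rotating the slurmctld/slurmd log files. A sketch of a logrotate stanza along the lines of the example in the slurm.conf documentation; the path and schedule are assumptions:

    /var/log/slurmctld.log {
        weekly
        missingok
        notifempty
        compress
        # SIGUSR2 makes slurmctld reopen its log file,
        # avoiding a full daemon restart after rotation
        postrotate
            pkill -x --signal SIGUSR2 slurmctld
        endscript
    }

The same stanza with the other log path and the slurmd process name would cover the second file.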