From: M. Payerle
Sent: 06 October 2020 18:50
To: Slurm User Community List
Subject: Re: [slurm-users] Controlling access to idle nodes
We use a scavenger partition, and although we do not have the policy you
describe, it could be used in your case.
Assume you have 6 nodes (node-[0-5]) and two groups A and B.
Create partitions
partA = node-[0-2]
partB = node-[3-5]
all = node-[0-5]
Create QoSes normal and scavenger.
Allow the normal QoS on partA and partB, and the scavenger QoS on the all partition, with scavenger jobs preemptible by jobs in the normal QoS.
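A minimal sketch of that layout in slurm.conf and sacctmgr terms (the node names follow the example above; CPU counts, priorities, and the requeue-style preemption are illustrative assumptions, not a tested configuration):

```
# slurm.conf (sketch): two group partitions plus a scavenger
# partition spanning all six nodes from the example.
NodeName=node-[0-5] CPUs=16 State=UNKNOWN

PartitionName=partA Nodes=node-[0-2] AllowQos=normal    State=UP
PartitionName=partB Nodes=node-[3-5] AllowQos=normal    State=UP
PartitionName=all   Nodes=node-[0-5] AllowQos=scavenger State=UP

# QoS-driven preemption; scavenger jobs get requeued when displaced.
PreemptType=preempt/qos
PreemptMode=REQUEUE

# Corresponding QoS setup (run once with sacctmgr):
#   sacctmgr add qos scavenger set Priority=10 PreemptMode=requeue
#   sacctmgr add qos normal    set Priority=100 Preempt=scavenger
```

Group A's jobs then run under the normal QoS in partA, group B's in partB, and anyone wanting idle cycles on the other group's nodes submits to the all partition under the scavenger QoS.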
We set up a partition that underlies all our hardware and is preemptable by all higher-priority partitions. That way it can grab idle cycles while permitting higher-priority jobs to run. This also allows users to do:
#SBATCH -p primarypartition,requeuepartition
so that the scheduler will send the job to whichever partition can start it first.
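A batch-script sketch of that submission pattern (the partition names come from the message above; everything else, including the executable name, is a hypothetical placeholder):

```
#!/bin/bash
# Sketch: list the primary partition first and the preemptable
# underlying partition second; Slurm starts the job in whichever
# of the listed partitions can run it earliest.
#SBATCH -p primarypartition,requeuepartition
#SBATCH --requeue              # let the job be requeued if preempted
#SBATCH -n 1
#SBATCH -t 01:00:00

srun ./my_program              # hypothetical executable
```

The --requeue flag matters here: a job that lands in the low-priority partition and later gets preempted goes back into the queue instead of being killed outright.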
Hello David,
I'm still relatively new at Slurm, but one way we handle this is that for
users/groups who have "bought in" to the cluster, we use a QOS to provide
them preemptible access to the set of resources provided by, e.g., a set
number of nodes, but not the nodes themselves.
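One way to sketch that "resources, not nodes" idea is a QoS whose aggregate TRES limit matches what the group bought (the QoS and account names, and the 3-node/48-CPU figure, are assumptions for illustration):

```
# Sketch: cap group A's buy-in at the equivalent of 3 nodes x 16 CPUs
# in aggregate, anywhere on the cluster, rather than pinning them to
# specific nodes.
sacctmgr add qos groupA_qos set GrpTRES=cpu=48 Priority=100 Preempt=scavenger
sacctmgr modify account groupA set qos+=groupA_qos
```

Jobs under groupA_qos can then land on any node with free resources, while the GrpTRES cap keeps the group's total concurrent usage at the purchased level.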