If you set up a higher-priority partition with PreemptMode=OFF on the lower-priority partition, you should be able to accomplish this. With preemption turned off for the specific partitions in question, Slurm will not preempt running jobs, but it will schedule jobs from the higher-priority partition first, regardless of current fairshare scores. See:

*PreemptMode*
   Mechanism used to preempt jobs or enable gang scheduling for this
   partition when *PreemptType=preempt/partition_prio* is configured.
   This partition-specific *PreemptMode* configuration parameter will
   override the cluster-wide *PreemptMode* for this partition. It can
   be set to OFF to disable preemption and gang scheduling for this
   partition. See also *PriorityTier* and the above description of the
   cluster-wide *PreemptMode* parameter for further details.
This is at least how we manage that.
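For illustration, a minimal slurm.conf sketch of that layout (the node
ranges, time limit and cluster-wide mode below are placeholders, not
our actual config):

# Preempt based on partition PriorityTier; the cluster-wide mode
# is overridden per partition below.
PreemptType=preempt/partition_prio
PreemptMode=SUSPEND,GANG

# Both partitions overlap on the owners' nodes. PreemptMode=OFF on
# the lower-priority partition means its running jobs are never
# preempted, while pending jobs in 'long' are still considered first
# because of the higher PriorityTier.
PartitionName=compute Nodes=node[01-16] Default=YES PriorityTier=1 PreemptMode=OFF
PartitionName=long Nodes=node[13-16] PriorityTier=10 MaxTime=14-00:00:00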

-Paul Edmon-

On 12/1/2021 11:32 AM, Sean McGrath wrote:
Hi,

Apologies for having to ask such a basic question.

We want to be able to give some users preferential access to some
nodes. They bought the nodes, which are currently in a 'long'
partition, as their jobs need a longer walltime.

When the purchasing group is not using the nodes, I would like other
users to be able to run jobs on them; but when the owners' group
submits jobs, I want those jobs to start as soon as the currently
running jobs on those nodes finish. My understanding is that
preemption won't work in these circumstances, as it will either cancel
or suspend currently running jobs, and I want the currently running
jobs to finish before the preferential ones start.

I'm wondering if QOS could do what we need here. Can the following be
sanity-checked please?

Put the specific nodes in both the long and the compute (standard)
partitions. Then restrict access to the long partition, so that all
users can reach those nodes via the compute queue but only a subset of
users can use the longer-walltime queue. Since partitions are
restricted by group or account rather than by individual user, put
user1 and user2 in a Unix group, say long_users:

$ scontrol update PartitionName=long AllowGroups=long_users
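Note that changes made with scontrol are lost when slurmctld restarts,
so the same restriction should presumably also go into slurm.conf,
along the lines of (node list and group name are placeholders):

PartitionName=long Nodes=node[13-16] AllowGroups=long_users MaxTime=14-00:00:00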

We currently don't have QOS enabled, so change that in slurm.conf and
restart the slurmctld:
-PriorityWeightQOS=0
+PriorityWeightQOS=1
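As far as I understand, PriorityWeightQOS only has an effect with the
multifactor priority plugin, so slurm.conf would also need something
like:

PriorityType=priority/multifactor
PriorityWeightQOS=1

Since the QOS factor is normalised to the 0-1 range before it is
multiplied by the weight, sites often use a much larger weight (e.g.
in the thousands) so it is not swamped by the other priority factors.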

Then create a QOS, modify its priority, and assign it to the users:
$ sacctmgr add qos boost
$ sacctmgr modify qos boost set priority=10
$ sacctmgr modify user user1 set qos=boost
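Then, to verify the setup and use the QOS (job.sh is a made-up job
script; whether --qos is needed at submit time depends on the user's
DefaultQOS):

$ sacctmgr show qos where name=boost format=Name,Priority
$ sacctmgr show assoc where user=user1 format=User,QOS%20,DefaultQOS
$ sbatch --qos=boost job.sh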

Will that do what I expect please?

Many thanks and again apologies for the basic question.

Sean
