Hello,

I have defined a partition and corresponding QOS in Slurm. This is the serial 
queue to which we route jobs that require up to (and including) 20 cpus. The 
nodes controlled by serial are shared. I've set the QOS like so:

[djb1@cyan53 slurm]$ sacctmgr show qos serial format=name,maxtresperuser
      Name     MaxTRESPU
---------- -------------
    serial       cpu=120
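
For completeness, the limit itself was applied with something along these
lines (quoting from memory rather than my shell history):

sacctmgr modify qos serial set maxtresperuser=cpu=120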

The max cpus/user is set deliberately high so that, as often as possible, the
serial nodes are kept fully busy rather than left in mixed states. Obviously
that won't always be achievable, depending on memory requirements, etc.
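
For context, the partition itself is defined roughly along these lines in
slurm.conf (the node list and exact options below are placeholders, not my
real config):

# serial partition: shared nodes, tied to the serial QOS
PartitionName=serial Nodes=<serial-node-list> QOS=serial OverSubscribe=YES State=UP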

I noticed that a number of jobs were pending with the reason 
QOSMaxNodePerUserLimit. I've tried firing test jobs to the queue myself and 
noticed that I can never have more than 32 jobs running (each requesting 1 cpu) 
and the rest are pending for the reason above. Since the QOS cpu/user limit is
set to 120, I would expect to be able to run more jobs, given that some serial
nodes are still not fully occupied. Furthermore, other users also appear unable
to use more than 32 cpus in the queue.
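
In case it helps, this is roughly how I've been watching the queue (djb1 is
my username):

squeue -p serial -u djb1 -t PD -o "%.12i %.10T %r"   # pending jobs and their reasons
squeue -p serial -u djb1 -t R -h | wc -l             # count of my running jobs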

The 32 limit does make a degree of sense: the "normal" QOS is set to
cpus/user=1280, nodes/user=32. It's almost as if the cpus used by my serial
jobs are being counted as nodes, which would match the pending reason.
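
The figures quoted for "normal" come from the same kind of query as above:

sacctmgr show qos normal format=name,maxtresperuser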

Could someone please help me understand this issue and how to avoid it?

Best regards,
David
