Here is how I look at it (then the behavior makes sense):

The job goes into a pending state because it is possible for the time to become available later (an administrator can run a command that raises the limit), so the job waits for that to happen. This is useful because some users may have a job that genuinely needs to run that long, but they have to let you know so you can allow it.
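For example, either the partition limit or a single job's limit can be raised on the fly with scontrol (the partition name and job id here are just placeholders):

   # Raise the partition's MaxTime to 30 minutes
   scontrol update PartitionName=short MaxTime=00:30:00

   # Or raise the limit for just the one job
   scontrol update JobId=12345 TimeLimit=00:20:00

Once the limit covers the job's request, the pending job becomes eligible to run.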

If you do not want ANY jobs to queue up when they ask for more time than the partition allows, you can add some code to the job_submit.lua plugin.

Here is a snippet from mine:

   if time_limit > part_max_time then
       slurm.log_info("job from uid %d with request for more than max_time: Denying.", job_desc.user_id)
       slurm.log_user("You cannot request more than %s minutes in partition %s!!", part_max_time, partition)
       return slurm.ESLURM_INVALID_TIME_LIMIT
   end

The time_limit, part_max_time and partition variables are mapped from the job_desc and part_list arguments that Slurm passes to the plugin.
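For reference, a simplified sketch of how that mapping might look inside the plugin. The function signature and the job_desc/part_list field names come from the job_submit/lua API; the lookup loop and the SUCCESS fallthrough are just one way to wire it up, with unrelated checks trimmed out:

   -- Sketch only: a real plugin should also handle jobs that set no
   -- time limit (job_desc.time_limit comes through as NO_VAL then)
   -- and jobs that specify no partition (job_desc.partition is nil).
   function slurm_job_submit(job_desc, part_list, submit_uid)
       local partition = job_desc.partition      -- partition the job requested
       local time_limit = job_desc.time_limit    -- requested limit, in minutes

       -- Find the requested partition's configured MaxTime (in minutes)
       local part_max_time = nil
       for name, part in pairs(part_list) do
           if part.name == partition then
               part_max_time = part.max_time
           end
       end

       if part_max_time ~= nil and time_limit > part_max_time then
           slurm.log_info("job from uid %d with request for more than max_time: Denying.", job_desc.user_id)
           slurm.log_user("You cannot request more than %s minutes in partition %s!!", part_max_time, partition)
           return slurm.ESLURM_INVALID_TIME_LIMIT
       end

       return slurm.SUCCESS
   end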


Brian Andrus

On 12/2/2021 6:01 AM, mercan wrote:
Hi;

The EnforcePartLimits parameter in slurm.conf should be set to ALL or ANY to enforce the partition time limit at submission time.
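For example, in slurm.conf:

   # With ALL, a job must satisfy the limits of every partition it
   # requests, or it is rejected at submit time instead of pending
   EnforcePartLimits=ALL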

Regards.

Ahmet M.


On 2.12.2021 16:18, Gestió Servidors wrote:

Hello,

I’m going to describe a problem I have detected in my SLURM cluster. If I configure a partition with a “TimeLimit” of, for example, 15 minutes and a user later submits a job that requests a bigger “TimeLimit” (for example, 20 minutes), the job remains in the PENDING state because the TimeLimit requested by the user is bigger than the one configured for the queue. My question is: is there any way to force the partition TimeLimit onto the job if the user requests a bigger value?

Thanks.


