[slurm-users] Re: How can we put limits on interactive jobs?

2025-04-25 Thread Ole Holm Nielsen via slurm-users
Hi all, Thanks for the great suggestions! It seems that the Slurm job_submit.lua script is the most flexible way to check for interactive jobs and change job parameters such as QOS, time_limit, etc. I've added this Lua function to our job_submit.lua script and it seems to work fine: -- Ch
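A minimal sketch of what such a function could look like (Ole's actual code is truncated above); the empty-script test, the QOS name "interactive" and the 8-hour cap are assumptions for illustration, not his exact settings:

-- job_submit.lua sketch: treat jobs submitted without a batch script
-- (salloc/srun) as interactive and override their QOS and time limit.
function slurm_job_submit(job_desc, part_list, submit_uid)
   if job_desc.script == nil or job_desc.script == '' then
      job_desc.qos = "interactive"
      local max_minutes = 8 * 60   -- time_limit is expressed in minutes
      -- cap the time limit; an unspecified limit arrives as slurm.NO_VAL
      if job_desc.time_limit == slurm.NO_VAL or job_desc.time_limit > max_minutes then
         job_desc.time_limit = max_minutes
      end
      slurm.log_user("Interactive job: QOS set to %s, time limit capped at %d minutes",
                     job_desc.qos, max_minutes)
   end
   return slurm.SUCCESS
end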

[slurm-users] Re: Setting QoS with slurm 24.05.7

2025-04-25 Thread Patrick Begou via slurm-users
Yes, Michael!  With this setup it does the job. There are so many tuning possibilities in Slurm that I had missed this one. Thank you very much. Patrick On 22/04/2025 at 16:30, Michael Gutteridge wrote: Whoops, my mistake, sorry.  Is this closer to what you want: MaxTRESRunMinsPU MaxTRESRunMinsPe

[slurm-users] Re: How can we put limits on interactive jobs?

2025-04-25 Thread René Sitt via slurm-users
Hello, we also do it this way, by checking whether job_desc.script is empty. I have no idea if this is foolproof (and use cases like, say, someone starting a Jupyter or RStudio instance via a script are not covered), but hopefully users who are inventive enough to find ways around this a

[slurm-users] Re: How can we put limits on interactive jobs?

2025-04-25 Thread Loris Bennett via slurm-users
Hi Ole, Ole Holm Nielsen via slurm-users writes: > We would like to put limits on interactive jobs (started by salloc) so that users don't leave unused interactive jobs behind on the cluster by mistake. > I can't offhand find any configurations that limit interactive jobs, such as enforci

[slurm-users] Re: How can we put limits on interactive jobs?

2025-04-25 Thread Ewan Roche via slurm-users
Hello Ole, the way I identify interactive jobs is by checking that the script is empty in job_submit.lua. If that's the case, they're assigned to an interactive QoS that limits the time and resources and allows only one job per user. if job_desc.script == nil or job_desc.scr
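A short sketch of that test, for reference; the QOS name "interactive" is an assumption, and the actual limits (e.g. MaxWall, MaxTRESPerUser, MaxJobsPerUser) would be configured on that QOS with sacctmgr rather than in the script itself:

-- job_submit.lua sketch: route script-less (interactive) jobs to a
-- dedicated QOS; the QOS enforces the time, resource and per-user limits.
function slurm_job_submit(job_desc, part_list, submit_uid)
   if job_desc.script == nil or job_desc.script == '' then
      job_desc.qos = "interactive"
   end
   return slurm.SUCCESS
end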

[slurm-users] How can we put limits on interactive jobs?

2025-04-25 Thread Ole Holm Nielsen via slurm-users
We would like to put limits on interactive jobs (started by salloc) so that users don't leave unused interactive jobs behind on the cluster by mistake. I can't offhand find any configurations that limit interactive jobs, such as enforcing a time limit. Perhaps this could be done in job_submit