Hi all,
Thanks for the great suggestions! It seems that the Slurm job_submit.lua
script is the most flexible way to check for interactive jobs and to change
job parameters such as QOS, time_limit, etc.
I've added this Lua function to our job_submit.lua script and it seems to
work fine:
-- Check for interactive jobs ...
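Roughly, it looks like the sketch below (the QOS name "interactive", the
480-minute cap and the log message are placeholders for our local policy,
not values to copy verbatim):

function slurm_job_submit(job_desc, part_list, submit_uid)
   -- salloc/srun jobs arrive without a batch script
   if job_desc.script == nil or job_desc.script == "" then
      job_desc.qos = "interactive"
      -- time_limit is in minutes; slurm.NO_VAL means the user set none
      if job_desc.time_limit == slurm.NO_VAL or job_desc.time_limit > 480 then
         job_desc.time_limit = 480
      end
      slurm.log_user("Interactive job: QOS=interactive, time limit capped at 480 minutes")
   end
   return slurm.SUCCESS
end

-- The lua plugin also expects slurm_job_modify to be defined
function slurm_job_modify(job_desc, job_rec, part_list, modify_uid)
   return slurm.SUCCESS
end

Capping time_limit only when it exceeds 480 minutes still lets users request
something shorter themselves.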
Yes Michael! With this setup it does the job.
There are so many tuning possibilities in Slurm that I had missed this one.
Thank you very much.
Patrick
On 22/04/2025 at 16:30, Michael Gutteridge wrote:
Whoops, my mistake, sorry. Is this closer to what you want:
MaxTRESRunMinsPU
MaxTRESRunMinsPe
Hello,
We also do it this way, by checking whether job_desc.script is empty. I have
no idea if this is foolproof in any way (and use cases like, say,
someone starting a Jupyter or RStudio instance via a script are not
covered), but hopefully, users who are inventive enough to find ways
around this ...
Hi Ole,
Ole Holm Nielsen via slurm-users
writes:
> We would like to put limits on interactive jobs (started by salloc) so
> that users don't leave unused interactive jobs behind on the cluster
> by mistake.
>
> I can't offhand find any configurations that limit interactive jobs,
> such as enforcing a timelimit.
Hello Ole,
The way I identify interactive jobs is by checking that the script is empty in
job_submit.lua.
If that's the case, they're assigned to an interactive QoS that limits time and
resources and allows only one job per user.
if job_desc.script == nil or job_desc.script == "" then
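The "interactive" QoS itself is defined on the accounting side; something
along these lines (the limits shown are only illustrative, not our
production values):

sacctmgr add qos interactive
sacctmgr modify qos interactive set MaxJobsPerUser=1 MaxWall=08:00:00 MaxTRESPerUser=cpu=8

The QoS also has to be allowed in the users' associations so that
job_submit.lua can assign it.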
We would like to put limits on interactive jobs (started by salloc) so
that users don't leave unused interactive jobs behind on the cluster by
mistake.
I can't offhand find any configurations that limit interactive jobs, such
as enforcing a timelimit.
Perhaps this could be done in job_submit.lua?