Hello,

We also do it this way, by checking whether job_desc.script is empty. I have no idea whether this is foolproof (and use cases like, say, someone starting a Jupyter or RStudio instance via a script are not covered), but hopefully, users who are inventive enough to find ways around this are also receptive enough to accept more reasonable and robust solutions for their workflows. Aside from setting a reasonable time limit, I'd say the most important restriction to steer users away from overusing interactive jobs is enforcing (either via the partition or via a QoS) that only one interactive job per user can be running at any given time.
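
A rough sketch of how that limit could be expressed (the QoS and partition names, the 8-hour wall time, and the node list are illustrative placeholders, not settings from our cluster; the limits only take effect if AccountingStorageEnforce includes "limits"):

    # Create a QoS that caps wall time and allows one running job per user
    sacctmgr add qos interactive
    sacctmgr modify qos interactive set MaxWall=08:00:00 MaxJobsPU=1

    # slurm.conf: attach it as the partition QoS so it applies to every job
    # submitted to the interactive partition
    PartitionName=interactive Nodes=node[01-04] MaxTime=08:00:00 QOS=interactive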

Cheers,
René

On 25.04.25 at 11:37, Ewan Roche via slurm-users wrote:
Hello Ole,
the way I identify interactive jobs is by checking that the script is empty in 
job_submit.lua.

If that is the case, they're assigned to an interactive QoS that limits the 
time and resources, as well as allowing only one job per user.


         if job_desc.script == nil or job_desc.script == '' then

             slurm.log_info("slurm_job_submit: jobscript is missing, assuming interactive job")
             slurm.log_user("Launching an interactive job")

             if job_desc.partition == "gpu" then
                 job_desc.qos = "gpu_interactive"
             end

             if job_desc.partition == "cpu" then
                 job_desc.qos = "cpu_interactive"
             end

             return slurm.SUCCESS
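
For reference, a minimal sketch of how a check like this might sit inside a 
complete job_submit.lua (the slurm_job_submit/slurm_job_modify signatures and 
return values are the standard plugin interface; the QoS names just mirror the 
fragment above and would have to exist in the accounting database):

    -- Minimal job_submit.lua sketch
    function slurm_job_submit(job_desc, part_list, submit_uid)
        -- No batch script supplied (salloc/srun): treat it as interactive
        if job_desc.script == nil or job_desc.script == '' then
            slurm.log_info("slurm_job_submit: jobscript is missing, assuming interactive job")
            slurm.log_user("Launching an interactive job")
            if job_desc.partition == "gpu" then
                job_desc.qos = "gpu_interactive"
            elseif job_desc.partition == "cpu" then
                job_desc.qos = "cpu_interactive"
            end
        end
        return slurm.SUCCESS
    end

    function slurm_job_modify(job_desc, job_rec, part_list, modify_uid)
        return slurm.SUCCESS
    end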

Thanks

Ewan


-----Original Message-----
From: Ole Holm Nielsen via slurm-users <slurm-users@lists.schedmd.com>
Sent: Friday, 25 April 2025 11:15
To: slurm-us...@schedmd.com
Subject: [slurm-users] How can we put limits on interactive jobs?

We would like to put limits on interactive jobs (started by salloc) so that 
users don't leave unused interactive jobs behind on the cluster by mistake.

I can't offhand find any configurations that limit interactive jobs, such as 
enforcing a timelimit.

Perhaps this could be done in job_submit.lua, but I couldn't find any job_desc 
parameters in the source code which would indicate if a job is interactive or 
not.

Question: How do people limit interactive jobs, or identify orphaned jobs and 
kill them?

Thanks a lot,
Ole

--
Ole Holm Nielsen
PhD, Senior HPC Officer
Department of Physics, Technical University of Denmark


--
Dipl.-Chem. René Sitt
Hessisches Kompetenzzentrum für Hochleistungsrechnen
Philipps-Universität Marburg
Hans-Meerwein-Straße
35032 Marburg

Tel. +49 6421 28 23523
si...@hrz.uni-marburg.de
www.hkhlr.de


-- 
slurm-users mailing list -- slurm-users@lists.schedmd.com
To unsubscribe send an email to slurm-users-le...@lists.schedmd.com
