Hello Ole,
The way I identify interactive jobs is by checking whether the job script is 
empty in job_submit.lua.

If that's the case, they're assigned to an interactive QoS that limits time and 
resources and allows only one job per user.


        -- Fragment from slurm_job_submit(job_desc, part_list, submit_uid) in job_submit.lua
        if job_desc.script == nil or job_desc.script == '' then

            slurm.log_info("slurm_job_submit: jobscript is missing, assuming interactive job")
            slurm.log_user("Launching an interactive job")

            -- Route interactive jobs to a partition-specific interactive QoS
            if job_desc.partition == "gpu" then
                job_desc.qos = "gpu_interactive"
            end

            if job_desc.partition == "cpu" then
                job_desc.qos = "cpu_interactive"
            end

            return slurm.SUCCESS
        end
Thanks

Ewan


-----Original Message-----
From: Ole Holm Nielsen via slurm-users <slurm-users@lists.schedmd.com> 
Sent: Friday, 25 April 2025 11:15
To: slurm-us...@schedmd.com
Subject: [slurm-users] How can we put limits on interactive jobs?

We would like to put limits on interactive jobs (started by salloc) so that 
users don't leave unused interactive jobs behind on the cluster by mistake.

I can't offhand find any configuration options that limit interactive jobs, such 
as enforcing a time limit.

Perhaps this could be done in job_submit.lua, but I couldn't find any job_desc 
parameters in the source code which would indicate if a job is interactive or 
not.

Question: How do people limit interactive jobs, or identify orphaned jobs and 
kill them?

Thanks a lot,
Ole

--
Ole Holm Nielsen
PhD, Senior HPC Officer
Department of Physics, Technical University of Denmark


-- 
slurm-users mailing list -- slurm-users@lists.schedmd.com
To unsubscribe send an email to slurm-users-le...@lists.schedmd.com
