Hi SLURM users,
I work on a cluster, and we recently transitioned to using SLURM on some of
our nodes. However, we're currently having some difficulty limiting the
number of jobs that a user can run simultaneously in particular
partitions. Here are the steps we've taken:
1. Created a new QOS a
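(For concreteness, the mechanism we are trying to use is the partition-QOS one; the commands below are only a sketch with placeholder names and values, not our exact settings, and the sacctmgr field spelling may differ slightly on your version:)

# create a QOS and cap concurrently running jobs per user (example value)
sacctmgr add qos desktop_limit
sacctmgr modify qos desktop_limit set MaxJobsPerUser=4

# in slurm.conf, attach it to the partition as the partition QOS
# (node list below is a placeholder)
PartitionName=desktops Nodes=node[01-04] QOS=desktop_limit State=UP

# limits are only enforced if accounting enforcement includes them
AccountingStorageEnforce=limits,qos

# then push the new configuration out
scontrol reconfigure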
Hi All,
We created a Slurm job script archiver which you may find handy. We initially
attempted to do this through Slurm with a slurmctld prolog, but it really bogged
the scheduler down. The new solution is a custom C++ program that uses inotify
to watch for job scripts and environment files to
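The real tool does more (filtering, copying, cleanup), but the core inotify loop is roughly the sketch below; the watched directory is just a placeholder, and a production version would watch the actual spool directories and archive matching files rather than print them:

#include <sys/inotify.h>
#include <unistd.h>
#include <cstdio>
#include <iostream>

int main(int argc, char* argv[]) {
    // Placeholder: point this at wherever the job scripts appear.
    const char* dir = (argc > 1) ? argv[1] : "/var/spool/slurm";

    int fd = inotify_init1(IN_CLOEXEC);
    if (fd < 0) { perror("inotify_init1"); return 1; }

    // Fire once a file has been fully written and closed (or moved in).
    int wd = inotify_add_watch(fd, dir, IN_CLOSE_WRITE | IN_MOVED_TO);
    if (wd < 0) { perror("inotify_add_watch"); return 1; }

    alignas(inotify_event) char buf[4096];
    for (;;) {
        ssize_t len = read(fd, buf, sizeof buf);
        if (len <= 0) { perror("read"); break; }

        // The read buffer may contain several variable-length events.
        for (char* p = buf; p < buf + len; ) {
            auto* ev = reinterpret_cast<inotify_event*>(p);
            if (ev->len > 0) {
                // A real archiver would filter for script/environment file
                // names here and copy them somewhere durable; we just report.
                std::cout << "file ready: " << dir << "/" << ev->name << "\n";
            }
            p += sizeof(inotify_event) + ev->len;
        }
    }
    close(fd);
    return 0;
}

Builds with something like: g++ -std=c++17 -O2 watcher.cpp -o watcher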
Tried deleting the existing reservations today, and got this:
-
root@captain1:~# scontrol show res
ReservationName=res17-pc2 StartTime=2019-02-25T14:58:40 EndTime=2029-02-22T14:58:40 Duration=3650-00:00:00
   Nodes=res17-pc2 NodeCnt=1 CoreCnt=6 Features=(null) PartitionName=desktops Flags=SP
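(For reference, the deletion we are attempting is the standard scontrol form, i.e. something along the lines of:)

scontrol delete ReservationName=res17-pc2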