Thanks Thomas,
That's helpful and a bit more tenable than what I thought was
going to be required. I have a few additional questions. Based on my reading of
the docs, it seems that GrpTRESMins is set on the account and then each user
needs to have the partition set there. This bri
Slurm accounting is based on the notion of "associations". An association
is a set of cluster, partition, allocation account, and user. I think most
sites do the accounting so that it is a single limit applied to all
partitions, etc., but you can use sacctmgr to apply limits at any
association level.
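As a concrete sketch (the cluster, account, and user names here are made up),
per-partition associations with a limit scoped to just one of them could look
something like:

    # one association per (account, user, partition) combination
    sacctmgr add account physics cluster=mycluster
    sacctmgr add user alice account=physics partition=batch
    sacctmgr add user alice account=physics partition=gpu

    # put a CPU-minutes limit only on the batch association
    sacctmgr modify user where name=alice account=physics partition=batch \
        set GrpTRESMins=cpu=100000

A limit set on the account itself covers all of its child associations
together; the per-partition user associations are what let you scope a limit
to a single partition.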
Hi all,
I'm new to Slurm. I've used PBS extensively and have set up an
accounting system that gives each group/account a fixed number of hours per
month on a per queue/partition basis. It decrements that time allocation with
every job run and then resets it to the original value at the start of each
month.
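For what it's worth, one way to approximate that PBS-style monthly bank in
Slurm (a sketch only; the account name and limit are illustrative) is a
GrpTRESMins limit plus a monthly reset of the accrued usage:

    # give the account a bank of 100,000 CPU-minutes
    sacctmgr -i modify account where name=physics set GrpTRESMins=cpu=100000

    # crontab entry: clear accrued usage at midnight on the 1st of each month
    0 0 1 * * sacctmgr -i modify account where name=physics set RawUsage=0

RawUsage can only be reset to zero, so the size of the bank lives in the
GrpTRESMins value. Setting PriorityUsageResetPeriod=MONTHLY in slurm.conf is
another way to have usage cleared on a monthly schedule, for all associations
at once.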
SelectTypeParameters=CR_LLN will do this automatically for all jobs
submitted to the cluster. Not sure if that's an acceptable solution for you.
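For reference, a minimal slurm.conf sketch of that (assuming the
consumable-resources select plugin is already in use) would be:

    SelectType=select/cons_res
    SelectTypeParameters=CR_CPU,CR_LLN

CR_LLN places each job on the least-loaded eligible node, which spreads array
elements out, but it does apply to every job on the cluster. If you only want
that behaviour in one place, LLN=YES can also be set on an individual
partition line instead.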
On Wed, Dec 12, 2018 at 11:54 AM Roger Moye wrote:
> I have a user who wants to control how job arrays are allocated to
> nodes. He wants to mimic a cyclic distribution.
I have a user who wants to control how job arrays are allocated to nodes. He
wants to mimic a cyclic distribution, basically round-robin assignment of each
job within the array. That is, array element 1 is assigned to node 1, element
2 to node 2, and so on until there is an element running on every node.
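In other words (the script name and array size here are just illustrative),
for a submission like

    sbatch --array=1-16 --nodes=1 array_job.sh

he wants element 1 on node 1, element 2 on node 2, and so on, rather than the
scheduler packing elements onto whichever node has free resources first.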
Hello,
I wondered if someone could please help us to understand why the
PrologFlags=contain flag is causing jobs to fail and draining compute nodes. We
are, by the way, using Slurm 18.08.0. Has anyone else seen this behaviour?
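For anyone following along, the setting in question is a single slurm.conf
line; a rough sketch of the relevant pieces (the proctrack plugin choice
below is an assumption, not something taken from the original report):

    # slurm.conf
    ProctrackType=proctrack/cgroup
    # create the job's container ("extern" step) at allocation time
    PrologFlags=contain

With contain set, slurmd creates the job's proctrack container on every
allocated node as soon as the allocation is made (this is what
pam_slurm_adopt relies on); a failing prolog, or a failure setting up that
container, typically leaves the node drained, which matches the symptom
described.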
I'm currently experimenting with PrologFlags=contain. I've found tha