Re: [slurm-users] Quickly throttling/limiting a specific user's jobs

2020-09-23 Thread Sebastian T Smith
mail: stsm...@unr.edu | website: http://rc.unr.edu

From: slurm-users on behalf of Paul Edmon
Sent: Tuesday, September 22, 2020 5:01 PM
To: slurm-users@lists.schedmd.com
Subject: Re: [slurm-users] Quickly thro…

Re: [slurm-users] Quickly throttling/limiting a specific user's jobs

2020-09-22 Thread Paul Edmon
I would look at MaxJobs=: "Maximum number of jobs each user is allowed to run at one time in this association. This is overridden if set directly on a user. Default is the cluster's limit. To clear a previously set value use the modify command with a new value of -1." Which is Assoc…
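A minimal sketch of applying that limit with sacctmgr. The username "jdoe" and the value 5 are placeholders for illustration, not from the thread:

```shell
# Cap a specific user's concurrently running jobs at 5 (association limit).
# Requires slurmdbd/accounting to be enabled; "jdoe" is a placeholder.
sacctmgr modify user where name=jdoe set MaxJobs=5

# Clear the limit later, as the docs describe, by setting it to -1:
sacctmgr modify user where name=jdoe set MaxJobs=-1
```

Note that MaxJobs caps running jobs; already-running jobs are not killed, but no new ones start for that user beyond the limit.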

Re: [slurm-users] Quickly throttling/limiting a specific user's jobs

2020-09-22 Thread Brian Andrus
Well, I know of no way to 'throttle' running jobs. Once they are out of the gate, you can't stop them from leaving. That said, your approach of setting ArrayTaskThrottle is just what you want for any pending jobs. As a preventative measure, I imagine you could set the default to 1 and then cha…
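For the pending-jobs case, the throttle can be applied to an already-queued array job with scontrol. The job ID 12345 is a placeholder; this affects only tasks that are still pending:

```shell
# Limit an existing array job so at most 5 of its tasks run at once.
# Tasks already running are unaffected; pending tasks honor the new cap.
scontrol update jobid=12345 ArrayTaskThrottle=5
```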

[slurm-users] Quickly throttling/limiting a specific user's jobs

2020-09-22 Thread Ransom, Geoffrey M.
Hello, We had a user submit a large number of array jobs with a short actual run time (20-80 seconds, mostly toward the low end), and slurmctld was falling behind on RPC calls trying to handle them. It was a bit awkward trying to slap ArrayTaskThrottle=5 on each of the queued array jobs whil…
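For reference, the throttle can also be set at submit time with the %N suffix to sbatch's --array option, which avoids the per-job scontrol scramble described above. The range, limit, and script name here are illustrative placeholders:

```shell
# Submit a 1000-task array, allowing at most 5 tasks to run concurrently.
# "my_short_task.sh" is a placeholder batch script.
sbatch --array=0-999%5 my_short_task.sh
```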