> resources for everyone:
>
> hpcshell ()
> {
>     # forward any extra srun options, then start an interactive shell
>     srun --partition=interactive "$@" --pty $SHELL -i
> }
> ------
> *From:* slurm-users on behalf of
> Jaekyeom Kim
> *Sent:* Tuesday, August 4, 2020 5:35 AM
> *To:* slurm-us...@schedmd.com
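For context, hpcshell above is just a thin wrapper around srun: any extra arguments are passed straight through as srun options. A hypothetical invocation (the resource flags here are purely illustrative, not from the original post) would look like:

    hpcshell --ntasks=1 --cpus-per-task=4 --time=01:00:00
    # opens an interactive $SHELL on the "interactive" partition;
    # exiting the shell releases the allocation
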
Hi,
I'd like to prevent my Slurm users from tying up resources with idle shell
jobs that are left running, whether unintentionally or on purpose.
To that end, I simply want to impose a stricter maximum time limit on srun
jobs only.
One possible way might be to wrap the srun binary.
But could someone tell me if there is any proper way to achieve this?
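In case it helps frame the question, the "wrap the srun binary" idea could look roughly like the sketch below. The cap value, the install location, and the path to the real binary are assumptions for illustration only, and the short -tNN form is deliberately not handled:

    #!/bin/bash
    # Hypothetical site-local wrapper installed ahead of the real srun in $PATH.
    # It injects a hard --time cap unless the caller already supplied one.
    REAL_SRUN=/usr/bin/srun   # assumed path to the real binary
    MAX_TIME=02:00:00         # assumed site-chosen cap
    for arg in "$@"; do
        case "$arg" in
            -t|--time|--time=*)          # caller already set a time limit
                exec "$REAL_SRUN" "$@"
                ;;
        esac
    done
    exec "$REAL_SRUN" --time="$MAX_TIME" "$@"
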
Hi,
I'm running a GPU cluster, and I would like to know if there is a way to
allocate resources to jobs without causing GPU fragmentation.
Currently, I'm using:
> SelectType=select/cons_res
>
> SelectTypeParameters=CR_Core,CR_CORE_DEFAULT_DIST_BLOCK,CR_ONE_TASK_PER_CORE
and over-subscribing of C
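For readers less familiar with Slurm's GPU scheduling, here is a minimal sketch of how GPUs are commonly declared and requested on such a cluster; the node names, GPU counts, and the use of the GRES-aware select/cons_tres plugin are assumptions for illustration, not the poster's actual setup:

    # slurm.conf (illustrative fragment; names and counts assumed)
    SelectType=select/cons_tres          # the GRES-aware sibling of cons_res
    SelectTypeParameters=CR_Core
    GresTypes=gpu
    NodeName=gpu[01-04] Gres=gpu:4 CPUs=32 RealMemory=192000

    # a typical per-GPU request on such a node:
    srun --gres=gpu:1 --cpus-per-task=8 ./train.sh
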
Hi,
I created two QOSes in Slurm to define two levels of priority and preemption.
I can, for instance, limit the maximum number of high-priority jobs
(running or pending) submitted by each user to 5 by setting
MaxSubmitJobsPerUser=5 on the high-priority QOS.
But if I want to give more quota
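For concreteness, a per-user submit limit like the one described above is typically attached to a QOS via sacctmgr; a minimal sketch with placeholder QOS names (the real names and preemption settings are not in the original post):

    # hypothetical commands; QOS names are placeholders
    sacctmgr add qos high_prio
    sacctmgr modify qos where name=high_prio set MaxSubmitJobsPerUser=5
    # jobs then opt into the QOS at submission, e.g.:
    sbatch --qos=high_prio job.sh
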