Sounds like you're the poster child for this section of the
documentation:
https://slurm.schedmd.com/high_throughput.html
Note that this page can be version specific, so look for it in the "archive"
section of the website if you need a version other than 20.02.
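For a quick starting point, most of the knobs that guide discusses live in
slurm.conf. A minimal sketch follows; the values are illustrative assumptions,
not recommendations, and exact option availability depends on your Slurm
version:

    # slurm.conf - settings commonly tuned for high job throughput
    SchedulerParameters=batch_sched_delay=10,sched_min_interval=2000000,max_rpc_cnt=150
    MinJobAge=300              # don't keep finished jobs in controller memory for long
    MessageTimeout=30          # more headroom for RPCs on a busy controller
    SlurmctldPort=6820-6825    # a port range lets slurmctld take more concurrent RPCs

SchedulerParameters changes can usually be applied with "scontrol reconfigure";
changing SlurmctldPort needs the daemons restarted.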
Hey,
you can use the 'defer' scheduler parameter
(https://slurm.schedmd.com/sched_config.html) if you don't require an
immediate start of jobs.
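A minimal sketch of that, assuming you can edit slurm.conf on the controller
(keep whatever other SchedulerParameters options your site already sets):

    # slurm.conf
    SchedulerParameters=defer
    # apply the change without restarting the daemons:
    #   scontrol reconfigure

With defer set, slurmctld no longer tries to schedule each job individually at
submit time and leaves that to the periodic scheduling passes, which takes a
lot of pressure off the controller when jobs arrive in bursts.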
best regards
Maciej Pawlik
On Fri, 28 Aug 2020 at 12:32, navin srivastava wrote:
> Hi Team,
>
> facing one issue: several users are submitting many jobs in a
If they are really that short, it seems better to have a single job run
through them all, or something like 10 jobs running through 2000 tasks each.
Such short jobs take more time for setup/teardown than the work itself, making
the current approach inefficient; the amount of resources used just to
schedule them adds up quickly.
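A sketch of that idea (the task command, count, and paths below are made up
for illustration): one sbatch job loops over many short tasks instead of
submitting each one separately.

    #!/bin/bash
    #SBATCH --job-name=packed-short-tasks
    #SBATCH --ntasks=1
    #SBATCH --time=01:00:00
    # Run 2000 one-to-two-second tasks inside a single allocation
    # rather than as 2000 separate scheduler records.
    for i in $(seq 1 2000); do
        ./short_task "$i"    # hypothetical per-item command
    done

A job array (e.g. sbatch --array=0-9, with each array task working through its
own slice of items) is a common middle ground: you still get per-chunk
accounting without creating one controller record per item.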
Hi Team,
facing one issue: several users are submitting many jobs in a single batch,
and these jobs are very short (say 1-2 sec each). While more jobs are being
submitted, slurmctld becomes unresponsive and starts giving messages like:
ending job 6e508a88155d9bec40d752c8331d7ae8 to queue.
sbatch: error: Batch job submiss