That would make sense, as Slurm would not be aware of anything else.
slurmd does not report any ongoing resource usage; it is slurmctld
that keeps track of what it has allocated.
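If you want to see that bookkeeping directly, you can ask the controller;
a quick check along these lines (the node name is just a placeholder):

    scontrol show node node01 | grep -E 'CPUAlloc|CPUTot'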
If you truly want something like this, you could have a wrapper script
look at available nodes, pick a random one and set the job to use that node.
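A minimal sketch of that idea (the partition name "batch" is a placeholder
for your default partition, and this only considers nodes that are fully idle):

    #!/bin/bash
    # Pick a random idle node in the partition and pin the job to it.
    node=$(sinfo -N -h -p batch -t idle -o "%n" | sort -u | shuf -n 1)
    [ -n "$node" ] || { echo "no idle node found" >&2; exit 1; }
    exec sbatch --nodelist="$node" "$@"

Anything you would normally pass to sbatch can be passed through the wrapper
unchanged.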
Brian Andrus
On 12/1/2021 12:06 PM, Benjamin Nacar wrote:
Based on some quick experiments, that doesn't do what I'm looking for.
I set LLN=YES for the default partition and ran my test job several
times, waiting each time for it to finish before submitting it again
(so that all compute nodes were idle), and it still ended up on the
same node (the first one listed in slurm.conf) every time.
(The documentation is ambiguous on this, but my reading of LLN is that
it measures "least loaded" according to how many CPUs Slurm itself has
allocated, not by the actual load average according to "uptime" or
some other reporting tool. Experiments seem to bear this out - I was
watching and comparing the load average on the different available
compute nodes in between running my test jobs.)
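For reference, a rough way to compare the two views: slurmctld's allocation
counts can be listed per node with sinfo, while uptime run on a compute node
shows the OS load average that LLN does not appear to consult:

    sinfo -N -o '%N %C'   # CPUs per node as allocated/idle/other/total
    uptime                # on a compute node: the OS load average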
~~ bnacar
On 12/1/21 2:18 PM, Guillaume COCHARD wrote:
Hello,
I think you are looking for the LLN option (Least Loaded Nodes):
https://slurm.schedmd.com/slurm.conf.html#OPT_LLN
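For example, a partition definition along these lines (partition and node
names are placeholders):

    PartitionName=batch Nodes=node[01-04] Default=YES LLN=YES State=UP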
Guillaume
----- Original Message -----
From: "Benjamin Nacar" <benjamin_na...@brown.edu>
To: slurm-users@lists.schedmd.com
Sent: Wednesday, 1 December 2021 20:07:23
Subject: [slurm-users] random allocation of resources
Hi,
Is there a scheduling option such that, when there are multiple nodes
that are equivalent in terms of available and allocated resources, Slurm
would select randomly from among those nodes?
I've noticed that if no other jobs are running and I submit a single
job via srun with no parameters beyond the defaults, the job *always*
runs on the first node listed in slurm.conf. This
seems like it would lead to some hosts getting overused and others
getting underused. I'd like the stress on our hardware to be reasonably
evenly distributed.
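(For illustration, something as simple as

    for i in 1 2 3 4 5; do srun -N1 -n1 hostname; done

will show which node each submission lands on.)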
Thanks,
~~ bnacar