Based on some quick experiments, that doesn't do what I'm looking for. I set LLN=YES for the default partition and ran my test job several times, waiting each time for it to finish before submitting it again (so that all compute nodes were idle), and it still ended up on the same node (the first one listed in slurm.conf) every time.
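
For reference, the change was just the LLN flag on the partition definition in slurm.conf, followed by an "scontrol reconfigure" to pick it up; something along these lines, with placeholder partition and node names:

    # slurm.conf - enable least-loaded-node selection for the default partition
    PartitionName=batch Nodes=node[01-04] Default=YES State=UP LLN=YES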

(The documentation is ambiguous on this, but my reading of LLN is that it measures "least loaded" by how many CPUs Slurm itself has allocated, not by the actual load average as reported by "uptime" or some other tool. Experiments seem to bear this out - I was watching and comparing the load average on the different available compute nodes in between running my test jobs.)
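
The sort of comparison I mean, roughly (the node name is a placeholder):

    # CPUs Slurm has allocated vs. the load Slurm reports for the node
    scontrol show node node01 | grep -E 'CPUAlloc|CPULoad'
    # OS-level load average on the node itself, for comparison
    ssh node01 uptime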

~~ bnacar

On 12/1/21 2:18 PM, Guillaume COCHARD wrote:
Hello,

I think you are looking for the LLN option (Least Loaded Nodes): 
https://slurm.schedmd.com/slurm.conf.html#OPT_LLN

Guillaume

----- Original Message -----
From: "Benjamin Nacar" <benjamin_na...@brown.edu>
To: slurm-users@lists.schedmd.com
Sent: Wednesday, December 1, 2021 20:07:23
Subject: [slurm-users] random allocation of resources

Hi,

Is there a scheduling option such that, when there are multiple nodes
that are equivalent in terms of available and allocated resources, Slurm
would select randomly from among those nodes?

I've noticed that if no other jobs are running, and I submit a single
job via srun, with no parameters to specify anything other than the
defaults, the job *always* runs on the first node in slurm.conf. This
seems like it would lead to some hosts getting overused and others
getting underused. I'd like the stress on our hardware to be reasonably
evenly distributed.
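
A quick way to see what I mean, assuming an otherwise idle cluster and default options:

    # Each iteration prints the hostname of the node the job ran on;
    # in my case it is the same (first) node every time.
    for i in 1 2 3 4 5; do srun -N1 -n1 hostname; done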

Thanks,
~~ bnacar


--
Benjamin Nacar
Systems Programmer
Computer Science Department
Brown University
401.863.7621
