I think it's more related to your configuration than to general Slurm capabilities. For example, if you have quite long prolog/epilog scripts, it may be a good idea to discourage users from submitting huge job arrays (with very short tasks?).
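As a sketch of how you could enforce that (MaxArraySize and the per-user QOS submit limit are real Slurm knobs, but the values below are only illustrative):

    # slurm.conf: cap the highest index a job array may use
    # (the default is 1001, i.e. indices 0-1000)
    MaxArraySize=10001

    # Cap how many jobs a user may have queued in a given QOS at once:
    sacctmgr modify qos normal set MaxSubmitJobsPerUser=10000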
In my case it's quite common to see users submitting arrays with 150-200k jobs; slurmctld+slurmdbd+mysql run on the same 32 GB server, and we have never had issues with lack of free memory.

cheers,
Marcin

2017-11-22 15:42 GMT+01:00 Loris Bennett <loris.benn...@fu-berlin.de>:
> Hi,
>
> In the documentation on job arrays
>
>   https://slurm.schedmd.com/job_array.html
>
> it says
>
>   Be mindful about the value of MaxArraySize as job arrays offer an easy
>   way for users to submit large numbers of jobs very quickly.
>
> How much do I have to worry about this, if I am using fairshare
> scheduling, since at some point the user's shares will have been
> consumed and new jobs will only start running after a certain period has
> elapsed? Or is it referring to the amount of memory the scheduler might
> need in order to manage an enormous queue? For our standard QOS we
> currently use neither MaxJobs nor MaxSubmitJobs.
>
> Cheers,
>
> Loris
>
> --
> Dr. Loris Bennett (Mr.)
> ZEDAT, Freie Universität Berlin       Email loris.benn...@fu-berlin.de