As I said, I am not sure, but it depends on the algorithm and the code
structure of Slurm (no chance to dig into it...). My guess at the way
Slurm works is:

Check limits on b1: ok; b2: ok; b3: ok; then b4: not ok... (or in whatever order Slurm uses)

If it works with EnforcePartLimits=ANY or NO, yeah, that would be a surprise...

(This use case might not have been part of the original design of Slurm, I guess.)
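Roughly, what I imagine is something like the sketch below (just my guess, NOT the real Slurm code; the partition names b1..b4 and the MaxNodes numbers are made up to match the example above, and only one limit is checked for brevity):

    # Sketch of my guess at the submission-time partition-limit check
    # for a job submitted to several partitions.  NOT the real Slurm code.
    def check_submission(job_nodes, partitions, enforce_part_limits):
        # partitions: {"b1": {"max_nodes": 64}, ...}  -- hypothetical limits
        fits = [name for name, lim in partitions.items()
                if job_nodes <= lim["max_nodes"]]      # only MaxNodes, for brevity
        if enforce_part_limits == "ALL":
            return len(fits) == len(partitions)  # one failing partition rejects the job
        if enforce_part_limits == "ANY":
            return len(fits) > 0                 # accepted if it fits at least one
        return True                              # "NO": no check at submission time

    # e.g. b4 is too small for a 32-node job:
    parts = {"b1": {"max_nodes": 64}, "b2": {"max_nodes": 64},
             "b3": {"max_nodes": 64}, "b4": {"max_nodes": 16}}
    print(check_submission(32, parts, "ALL"))   # False -- whole submission rejected
    print(check_submission(32, parts, "ANY"))   # True  -- b1..b3 still possible

In slurm.conf that would be something like EnforcePartLimits=ANY, with the job submitted as, say, sbatch --partition=b1,b2,b3,b4 ... (partition names hypothetical).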

"NOTE: The partition limits being considered are its configured
MaxMemPerCPU, MaxMemPerNode, MinNodes, MaxNodes, MaxTime, AllocNodes,
AllowAccounts, AllowGroups, AllowQOS, and QOS usage threshold."

Best,

Feng

On Thu, Sep 21, 2023 at 11:48 AM Bernstein, Noam CIV USN NRL (6393)
Washington DC (USA) <noam.bernst...@nrl.navy.mil> wrote:
>
> On Sep 21, 2023, at 11:37 AM, Feng Zhang <prod.f...@gmail.com> wrote:
>
> Set slurm.conf parameter: EnforcePartLimits=ANY or NO may help this, not sure.
>
>
> Hmm, interesting, but it looks like this is just a check at submission time. 
> The slurm.conf web page doesn't indicate that it affects the actual queuing 
> decision, just whether or not a job that will never run (at all, or just on 
> some of the listed partitions) can be submitted.  If it does help then I 
> think that the slurm.conf description is misleading.
>
> Noam
