You can try a command like:

scontrol update PartitionName=mypart Nodes=node[1-90],ab,ac   # list every node except the one you want to remove
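As a minimal sketch of the full workflow for the case in your question (partition 'y', dropping node1; the node names and the node[1-90] range here are just assumptions, check your own slurm.conf):

  # inspect the partition's current Nodes= list first
  scontrol show partition y

  # rewrite the node list without node1; jobs already running there keep running
  scontrol update PartitionName=y Nodes=node[2-90]

  # watch for the last 'y' jobs on node1 to finish before removing the partition
  squeue -p y -w node1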
"Changing the Nodes in a partition has no effect upon jobs that have already begun execution." Best, Feng On Fri, Aug 4, 2023 at 10:47 AM Pacey, Mike <m.pa...@lancaster.ac.uk> wrote: > > Hi folks, > > > > We’re currently moving our cluster from Grid Engine to SLURM, and I’m having > trouble finding the best way to perform a specific bit of partition > maintenance. I’m not sure if I’m simply missing something in the manual or if > I need to be thinking in a more SLURM-centric way. My basic question: is it > possible to ‘disable’ specific partition/node combinations rather than whole > nodes or whole partitions? Here’s an example of the sort of thing I’m looking > to do: > > > > I have node ‘node1’ with two partitions ‘x’ and ‘y’. I’d like to remove > partition ‘y’, but there are currently user jobs in that partition on that > node. With Grid Engine, I could disable specific queue instances (ie, I could > just run “qmod -d y@node1’ to disable queue/partition y on node1 and wait for > the jobs to complete and then remove the partition. That would be the least > disruptive option because: > > Queue/partition ‘y’ on other nodes would be unaffected > User jobs for queue/partition ‘x’ would still be able to launch on node1 the > whole time > > > > I can’t seem to find a functional equivalent of this in SLURM: > > I can set the whole node to Drain > I can set the whole partition to Inactive > > > > Is there some way to ‘disable’ partition y just on node1? > > > > Regards, > > Mike