You could try holding the job and then releasing it. I've inquired of
SchedMD about this before and this is the response they gave:
https://bugs.schedmd.com/show_bug.cgi?id=8069
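
For reference, a minimal sketch of that hold/release cycle with scontrol (the job ID 12345 is just a placeholder):

```shell
# Hold the pending job; this clears its current scheduling state
scontrol hold 12345

# Release it; the scheduler re-evaluates the job from scratch,
# which should also recompute SchedNodeList
scontrol release 12345
```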
-Paul Edmon-
On 3/23/2020 8:05 AM, Sefa Arslan wrote:
Hi,
Due to a lack of resources in a partition, I moved the job to another
partition and raised its priority to the top value. Although there are
now enough resources for the job to start, the updated job has not
started yet. When I inspected it with "scontrol show job <jobid>", I saw
that the SchedNodeList value was not updated and still points to nodes
from the earlier partition. Is there a way to reset/clear the
SchedNodeList value? Or to force slurmctld to start the job immediately?
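
For context, the updates I applied would look something like this (job ID
and partition name are placeholders):

```shell
# Move the pending job to another partition
scontrol update JobId=12345 Partition=newpart

# Raise its priority (setting Priority requires operator/admin rights)
scontrol update JobId=12345 Priority=100000
```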
Regards,