Dear Loris,
Yes, it is indeed a bit odd. At least now I know that this is how SLURM
behaves and not something that has to do with our configuration.
Regards,
Thekla
On 9/12/21 1:04 p.m., Loris Bennett wrote:
Dear Thekla,
Yes, I think you are right. I have found a similar job on my system and
this does seem to be the normal, slightly confusing behaviour. It looks
as if the pending elements of the array get assigned a single node,
but then start on other nodes:
$ squeue -j 8536946 -O jobid,jobarray
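If you want to reproduce the check, something along these lines should show both the tentatively scheduled node and the node an element actually runs on (the job ID is just a placeholder, and I am assuming a squeue that supports these --Format fields):
# SCHEDNODES is the provisional node for pending elements, NODELIST the node a running element got
$ squeue -j <jobid> --Format=jobid,statecompact,schednodes,nodelist,starttime
# the same information per element via scontrol (ReqNodeList, SchedNodeList, NodeList)
$ scontrol show job <jobid> | grep -i nodelist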
Dear Loris,
Thank you for your reply. To be honest, I don't believe there is anything
wrong with the job configuration or the node configuration.
I have just submitted a simple sleep script:
#!/bin/bash
sleep 10
as below:
sbatch --array=1-10 --ntasks-per-node=40 --time=09:00:00 test.s
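For completeness, the same request can also be written as #SBATCH directives inside the script, which should be equivalent to passing the options on the command line:
#!/bin/bash
#SBATCH --array=1-10
#SBATCH --ntasks-per-node=40
#SBATCH --time=09:00:00
# each array element just sleeps, so every element asks for a full node for 9 hours
sleep 10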
Dear Thekla,
Thekla Loizou writes:
> Dear Loris,
>
> There is no specific node required for this array. I can verify that from
> "scontrol show job 124841" since the requested node list is empty:
> ReqNodeList=(null)
>
> Also, all 17 nodes of the cluster are identical so all nodes fulfill the job
> requirements, not only node cn06.
Dear Loris,
There is no specific node required for this array. I can verify that
from "scontrol show job 124841" since the requested node list is empty:
ReqNodeList=(null)
Also, all 17 nodes of the cluster are identical so all nodes fulfill the
job requirements, not only node cn06.
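For reference, a quick way to see all the node-related fields for the array at once is something like the following (SchedNodeList should also show up there for pending elements, if I am not mistaken):
# prints the lines containing ReqNodeList, ExcNodeList, SchedNodeList and NodeList
$ scontrol show job 124841 | grep -i nodelist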
By "sav
Hi Thekla,
Thekla Loizou writes:
> Dear all,
>
> I have noticed that SLURM schedules several jobs from a job array on the same
> node with the same start time and end time.
>
> Each of these jobs requires the full node. You can see the squeue output
> below:
>
> JOBID PARTITION ST START_TIME NODES SCHEDNODES NOD
Dear all,
I have noticed that SLURM schedules several jobs from a job array on the
same node with the same start time and end time.
Each of these jobs requires the full node. You can see the squeue output
below:
JOBID PARTITION ST START_TIME NODES SCHEDNODES NOD
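The columns come from squeue; an invocation roughly like the one below should reproduce them, although I may be misremembering the exact format string:
$ squeue -j <array_job_id> --Format=jobid,partition,statecompact,starttime,numnodes,schednodes,nodelist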