Hello,
"Mccall, Kurt E. (MSFC-EV41)" writes:
> MPICH uses the PMI 1 interface by default, but for our 20.02.3 Slurm
> installation, "srun --mpi=list" yields:
>
> $ srun --mpi=list
> srun: MPI types are...
> srun: cray_shasta
> srun: pmi2
> srun: none
>
> PMI 2 is there, but no PMI 1.
Antony,
My apologies – another Slurm expert confirmed your answer.
Kurt
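On the PMI point above: since "srun --mpi=list" shows pmi2 but not pmi1, the
usual workaround is to build MPICH against Slurm's PMI2 library and launch
through the pmi2 plugin. A minimal sketch follows; the configure flag
spellings vary between MPICH releases and are assumptions here (check
./configure --help), and /opt/mpich-pmi2 and ./my_mpi_app are placeholder
names, not anything from the thread:

    # Build MPICH with a PMI2 client instead of its default PMI 1 client
    # (flag names assumed; verify against your MPICH release)
    ./configure --with-pmi=pmi2 --with-pm=none --prefix=/opt/mpich-pmi2
    make && make install

    # Launch through Slurm's PMI2 plugin, which --mpi=list confirmed exists
    srun --mpi=pmi2 -n 24 ./my_mpi_app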
From: slurm-users On Behalf Of Antony Cleave
Sent: Tuesday, December 28, 2021 6:15 PM
To: Slurm User Community List
Subject: [EXTERNAL] Re: [slurm-users] Slurm and MPICH don't play well together (salloc)
Hi

I've not used mpich for years, but I think I see the problem. By asking for
24 CPUs per task and specifying 2 tasks, you are asking Slurm to allocate 48
CPUs per node.

Your nodes have 24 CPUs in total, so you don't have any nodes that can
service this request.

Try asking for 24 tasks. I've only e
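To make the arithmetic concrete: 2 tasks x 24 CPUs per task = 48 CPUs, twice
what a 24-CPU node can offer, while 24 tasks x 1 CPU each fits exactly. A
sketch of the two requests with long-form option spellings assumed (the
original salloc line isn't shown in the thread), and ./my_mpi_app again a
placeholder:

    # Fails: needs 48 CPUs, but no node has more than 24
    salloc --ntasks=2 --cpus-per-task=24

    # Fits: 24 single-CPU tasks on one 24-CPU node
    salloc --ntasks=24 --cpus-per-task=1
    srun --mpi=pmi2 ./my_mpi_app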