Re: [slurm-users] Slurm and MPICH

2022-01-12 Thread Roger Mason
Hello, "Mccall, Kurt E. (MSFC-EV41)" writes: > MPICH uses the PMI 1 interface by default, but for our 20.02.3 Slurm > installation, “srun –mpi=list yields” > > > > $ srun --mpi=list > > srun: MPI types are... > > srun: cray_shasta > > srun: pmi2 > > srun: none > > > > PMI 2 is there, but no

Re: [slurm-users] Slurm and MPICH don't play well together (salloc)

2022-01-03 Thread Mccall, Kurt E. (MSFC-EV41)
Antony, My apologies – another Slurm expert confirmed your answer. Kurt
From: slurm-users On Behalf Of Antony Cleave
Sent: Tuesday, December 28, 2021 6:15 PM
To: Slurm User Community List
Subject: [EXTERNAL] Re: [slurm-users] Slurm and MPICH don't play well together (salloc)
Hi I've

Re: [slurm-users] Slurm and MPICH don't play well together (salloc)

2021-12-29 Thread Mccall, Kurt E. (MSFC-EV41)
On Behalf Of Antony Cleave
Sent: Tuesday, December 28, 2021 6:15 PM
To: Slurm User Community List
Subject: [EXTERNAL] Re: [slurm-users] Slurm and MPICH don't play well together (salloc)
Hi I've not used mpich for years but I think I see the problem. By asking for 24 CPUs pe

Re: [slurm-users] Slurm and MPICH don't play well together (salloc)

2021-12-28 Thread Antony Cleave
Hi, I've not used mpich for years, but I think I see the problem. By asking for 24 CPUs per task and specifying 2 tasks, you are asking Slurm to allocate 48 CPUs per node. Your nodes have 24 CPUs in total, so you don't have any nodes that can service this request. Try asking for 24 tasks. I've only e
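
To make the arithmetic concrete, a hedged reconstruction of the two requests (the exact salloc line is not shown in the thread, so the flags below are assumptions based on the description above):

# Failing shape: 2 tasks x 24 CPUs per task = 48 CPUs on a single node,
# which a 24-CPU node can never satisfy, so the allocation cannot be granted.
$ salloc --nodes=1 --ntasks=2 --cpus-per-task=24

# Suggested shape: 24 single-CPU tasks fit exactly within one 24-CPU node.
$ salloc --nodes=1 --ntasks=24 --cpus-per-task=1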