Hello,
Is writing the --with-pmi flag on its own sufficient, or do I have to write it in
the form --with-pmi=, pointing to a directory, and if so, which directory?
I am slightly confused by the syntax given in the documentation.
[sakshamp.phy20.itbhu@login2]$ srun --mpi=list
srun: MPI types are...
srun: cray_shasta
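For example, would something along these lines be the right idea? (The /usr prefix
below is only my guess at where Slurm's PMI headers and libraries might live; the
real location depends on how Slurm was installed on the cluster.)

# hypothetical rebuild of Open MPI 4.1.1 with Slurm/PMI support
./configure --prefix=$HOME/opt/openmpi-4.1.1 \
            --with-slurm \
            --with-pmi=/usr
make -j 8 && make install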
Thank you for responding.
The output of ompi_info regarding the configuration is:
Configure command line: '--build=x86_64-redhat-linux-gnu'
'--host=x86_64-redhat-linux-gnu'
'--program-prefix='
'--disable-dependency-tracking'
Hi,
I am not sure if this is related to GPUs. I rather think the issue has to do with
how your OpenMPI has been built.
What does the ompi_info command show? Look for "Configure command line" in
the output. Does it include the '--with-slurm' and '--with-pmi' flags?
To the best of my knowledge, both flags are needed for srun --mpi=pmi2 to work
with Open MPI.
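A quick way to check, assuming ompi_info is on your PATH, is to filter its output
for the relevant parts:

# shows the configure line plus any slurm/pmi-related components and flags
ompi_info | grep -i -e 'configure command' -e slurm -e pmi

If Slurm and PMI support were built in, the matching configure flags and the slurm
components should appear in that output.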
Hi everyone,
I am trying to run simulation software on Slurm using openmpi-4.1.1 and
cuda/11.1.
On executing, I get the following error:
srun --mpi=pmi2 --nodes=1 --ntasks-per-node=5 --partition=gpu --gres=gpu:1
--time=02:00:00 --pty bash -i
./