Beowulfers,

[Note: this was also posted to the OpenMPI mailing list. Posting here, too, to get some 'neutral' responses.]

We use OpenMPI with Slurm as our scheduler, and a user has asked me this: should they use mpiexec/mpirun or srun to start their MPI jobs through Slurm?

My inclination is to recommend mpiexec, since that's the only launch method that's (somewhat) defined in the MPI standard and therefore the most portable, and the examples in the OpenMPI FAQ use mpirun. However, the Slurm documentation on the SchedMD website says to use srun with the --mpi=pmi option. (See links below.)
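
For concreteness, here's roughly what the two styles look like in a batch script. This is only a sketch: the module name, task counts, and program name (my_mpi_program) are placeholders, and the right --mpi value depends on how your Slurm was built (pmi2 or pmix are the usual choices in recent versions):

    #!/bin/bash
    #SBATCH --nodes=2
    #SBATCH --ntasks-per-node=16

    module load openmpi    # placeholder; site-specific

    # Style 1: mpirun reads the Slurm allocation itself. An OpenMPI
    # built with Slurm support picks up the SLURM_* environment
    # variables, so no -np or hostfile is needed.
    mpirun ./my_mpi_program

    # Style 2: srun launches the ranks directly and wires them up
    # through Slurm's PMI plugin. (Use one style or the other, not
    # both in the same script.)
    #srun --mpi=pmi2 ./my_mpi_program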

What are the pros/cons of these two methods, other than the portability issue I already mentioned? Does srun+PMI use a different method to wire up the connections? Some things I've read online seem to indicate that. If Slurm was built with PMI support, and OpenMPI was built with Slurm support, does it really make any difference?

https://www.open-mpi.org/faq/?category=slurm
https://slurm.schedmd.com/mpi_guide.html#open_mpi
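
As a side note, you can check what a given installation actually supports with two commands (assuming reasonably recent Slurm and OpenMPI builds):

    # Which PMI plugins was Slurm built with?
    srun --mpi=list

    # Was OpenMPI built with Slurm/PMI support?
    ompi_info | grep -i -E 'slurm|pmi'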

--
Prentice Bisbal
Lead Software Engineer
Princeton Plasma Physics Laboratory
http://www.pppl.gov

