On 19/12/17 09:20, Prentice Bisbal wrote:
> What are the pros/cons of using these two methods, other than the
> portability issue I already mentioned? Does srun+PMI use a different
> method to wire up the connections? Some things I read online seem to
> indicate that. If Slurm was built with PMI support, and OpenMPI was
> built with Slurm support, does it really make any difference?
Benchmark it. In (much) older versions of Slurm we found that NAMD built with OpenMPI scaled better with mpirun, but in more recent versions (from 15.x onwards, I believe) srun scaled better instead.
I'm not sure whether that's down to differences in wireup or something else, but these days I recommend people use srun. You will also get better accounting information.
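For concreteness, here's a minimal sketch of the srun style of launch. The binary and the PMI plugin name are assumptions on my part; run 'srun --mpi=list' to see what your Slurm build actually supports:

  #!/bin/bash
  #SBATCH --nodes=2
  #SBATCH --ntasks-per-node=16

  # slurmd launches one rank per task directly and wires them up
  # via PMI; no mpirun or orted is involved.
  srun --mpi=pmi2 ./namd2 input.conf

Each srun launch also shows up as its own job step, so 'sacct -j <jobid>' will break the usage down per step.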
With mpirun, OMPI starts an orted on each node (via srun) and each orted then launches the MPI ranks. With srun, slurmd launches the MPI ranks itself.
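For contrast, a sketch of the mpirun variant of the same job, with the resulting process tree (simplified; the details vary with the OMPI and Slurm versions):

  #!/bin/bash
  #SBATCH --nodes=2
  #SBATCH --ntasks-per-node=16

  # mpirun uses srun only to start one orted per node; each orted
  # then fork/execs the MPI ranks itself.
  mpirun ./namd2 input.conf

  # Process tree on a remote node, roughly:
  #   slurmstepd -> orted -> namd2 (x16 ranks)
  # versus the srun launch above:
  #   slurmstepd -> namd2 (one per rank)

That's presumably also part of why the accounting is better with srun: slurmstepd tracks each rank directly as a task of its step, rather than seeing only orted and its children.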
Hope this helps!

Chris

-- 
Chris Samuel : http://www.csamuel.org/ : Melbourne, VIC