Hi Lance,

> For single node jobs MPI can be run with the MPI binary from the container
> with native performance for the shared memory type messages. This has
> worked without issue since the very early days of Singularity. The only
> tricky part has been multi-node and multi-container.
Thanks for the reply - I guess I'm curious where the 'tricky' bits are at this point.

For cross-node, container-per-rank jobs, I think the ABI compatibility work ensures (even if it isn't done automagically) that you get 'native' performance. The same-node, container-per-rank case is where I'm still unsure what happens. In theory, since each containerized rank is just a process, it *should* be doable, but I don't know whether there's some glue that needs to happen, or has already happened.

If nobody knows offhand, it's on my to-do list to test this; I just haven't found the time yet. I'll do so and update the list once I'm able.

Cheers,
- Brian
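P.S. In case anyone wants to poke at this before I get to it, below is a rough sketch of the kind of test I have in mind: a tiny MPI ping-pong that prints each rank's hostname (so you can confirm both ranks really landed on the same node) and an average round-trip time for small messages. The launch line and "image.sif" in the comment are just placeholders for whatever your site uses, and the code is untested - treat it as a starting point, not a benchmark.

/*
 * Rough same-node, container-per-rank test (untested sketch).
 * Example launch, with "image.sif" as a placeholder image:
 *   mpirun -np 2 singularity exec image.sif ./pingpong
 * Assumes the MPI inside the image is ABI-compatible with the
 * host-side launcher.
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Print where each rank landed so you can confirm same-node placement. */
    char host[MPI_MAX_PROCESSOR_NAME];
    int len;
    MPI_Get_processor_name(host, &len);
    printf("rank %d of %d on %s\n", rank, size, host);

    /* Small-message ping-pong between ranks 0 and 1; if the shared-memory
     * transport is actually in play you'd expect single-digit microsecond
     * round trips or better. */
    MPI_Barrier(MPI_COMM_WORLD);
    if (size >= 2 && rank < 2) {
        const int iters = 10000;
        char buf[8] = {0};
        double t0 = MPI_Wtime();
        for (int i = 0; i < iters; i++) {
            if (rank == 0) {
                MPI_Send(buf, sizeof buf, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, sizeof buf, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
            } else {
                MPI_Recv(buf, sizeof buf, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                MPI_Send(buf, sizeof buf, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }
        double t1 = MPI_Wtime();
        if (rank == 0)
            printf("avg round-trip: %.3f us over %d iterations\n",
                   (t1 - t0) / iters * 1e6, iters);
    }

    MPI_Finalize();
    return 0;
}

Comparing the number you get there against the same binary run bare-metal on the same node should show pretty quickly whether the same-node, container-per-rank path keeps the shared-memory performance or falls back to something slower.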