Hi Howard,

thanks, but my Slurm 24.05 definitely has pmix support (visible in "srun --mpi=list") and it uses it through "MpiDefault=pmix" in slurm.conf. The mentioned problem also appears if I use a container with OpenMPI compiled against the same pmix as Slurm 24.05 (which is the Ubuntu 24.04 package libpmix2t64 in this case).
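
For reference, this is roughly what I check (the slurm.conf path may differ on other installations):

   $ srun --mpi=list          # lists the available MPI plugin types, pmix among them
   $ grep MpiDefault /etc/slurm/slurm.conf
   MpiDefault=pmix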

In the meantime I found this bug report: https://github.com/open-mpi/ompi/issues/12146, which sounds very similar. I haven't completely worked through it yet; they use a different container solution and things seem to be COMPLICATED...but still...

I also found that my Slurm 24.05 works with OpenMPI outside of containers (tested with the ompi examples and Python mpi4py).
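
The mpi4py check was nothing fancy, roughly a one-liner like this (node and task counts are arbitrary):

   srun --mpi=pmix -N 2 -n 4 \
     python3 -c "from mpi4py import MPI; c = MPI.COMM_WORLD; print(c.Get_rank(), c.Get_size(), MPI.Get_processor_name())"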

Matthias

On 27.03.25 at 15:46, Pritchard Jr., Howard wrote:
Hi Matthias,

It looks like the Open MPI in the containers was not built with PMI1 or PMI2 support, so it's defaulting to using PMIx.

You are seeing this error message because the call to PMIx_Init within Open MPI 4.1.x's runtime system returned an error, namely that there was no PMIx server to connect to.
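
One quick way to check whether a PMIx server is actually being set up for your tasks is to look at the environment srun hands to them (illustrative only; the exact variable names depend on the PMIx version):

   srun --mpi=pmix -n 1 env | grep -i pmix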

Not sure why the behavior would have changed between your SLURM variants.

If you run

srun --mpi=list

does it show a pmix option?

If not, you need to rebuild Slurm with the --with-pmix config option. You may want to check which PMIx library is installed in the containers and, if possible, use that version of PMIx when rebuilding SLURM.
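
For illustration, the rebuild is roughly along these lines (the prefix and the PMIx path are placeholders for your installation):

   ./configure --prefix=/opt/slurm --with-pmix=/path/to/pmix
   make -j && make install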

Howard

*From: *Davide DelVento via slurm-users <slurm-users@lists.schedmd.com>
*Reply-To: *Davide DelVento <davide.quan...@gmail.com>
*Date: *Thursday, March 27, 2025 at 7:41 AM
*To: *Matthias Leopold <matthias.leop...@meduniwien.ac.at>
*Cc: *Slurm User Community List <slurm-users@lists.schedmd.com>
*Subject: *[EXTERNAL] [slurm-users] Re: [EXTERN] Re: Slurm 24.05 and OpenMPI

Hi Matthias,

I see. It does not freak me out. Unfortunately I have very little experience working with MPI-in-containers, so I don't know the best way to debug this.

What I do know is that some ABIs in Slurm change with Slurm major versions, and dependencies need to be recompiled against newer versions of the latter. So trying to recompile the OpenMPI-inside-the-container against the version of Slurm you are running is the first thing I would try if I were in your shoes.
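
For example, something along these lines inside the container build (placeholder paths; the point is to configure Open MPI against the same external PMIx that your Slurm was built with):

   ./configure --prefix=/opt/openmpi --with-slurm --with-pmix=/path/to/pmix
   make -j && make install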

Best,

Davide

On Thu, Mar 27, 2025 at 4:19 AM Matthias Leopold <matthias.leop...@meduniwien.ac.at> wrote:

    Hi Davide,

    thanks for your reply.
    In my clusters OpenMPI is not present on the compute nodes. The
    application (nccl-tests) is compiled inside the container against
    OpenMPI. So when I run the same container in both clusters it's
    effectively the exact same OpenMPI version. I hope you don't freak out
    hearing this, but this worked with Slurm 21.08. I tried using a newer
    container version and another OpenMPI (first it was Ubuntu 20.04 with
    OpenMPI 4.1.7 from NVIDIA repo, second is Ubuntu 24.04 with Ubuntu
    OpenMPI 4.1.6), but the error is the same when running the container in
    Slurm 24.05.

    Matthias

     On 26.03.25 at 21:24, Davide DelVento wrote:
     > Hi Matthias,
     > Let's take the simplest things out first: have you compiled OpenMPI
     > yourself, separately on both clusters, using the specific drivers for
     > whatever network you have on each? In my experience OpenMPI is quite
     > finicky about working correctly, unless you do that. And when I don't,
     > I see exactly that error -- heck sometimes I see that even when OpenMPI
     > is (supposed?) to be compiled and linked correctly and in such cases I
     > resolve it by starting jobs with "mpirun --mca smsc xpmem -n $tasks
     > whatever-else-you-need" (which obviously may or may not be relevant for
     > your case).
     > Cheers,
     > Davide
     >
     > On Wed, Mar 26, 2025 at 12:51 PM Matthias Leopold via slurm-users
     > <slurm-users@lists.schedmd.com> wrote:
     >
     >     Hi,
     >
     >     I built a small Slurm 21.08 cluster with NVIDIA GPU hardware and
     >     NVIDIA deepops framework a couple of years ago. It is based on
     >     Ubuntu 20.04 and makes use of the NVIDIA pyxis/enroot container
     >     solution. For operational validation I used the nccl-tests
     >     application in a container. nccl-tests is compiled with MPI
     >     support (OpenMPI 4.1.6 or 4.1.7) and I used it also for validation
     >     of MPI jobs. Slurm jobs use "pmix" and tasks are launched via srun
     >     (not mpirun). Some of the GPUs can talk to each other via
     >     Infiniband, but MPI is rarely used at our site and I'm fully aware
     >     that my MPI knowledge is very limited. Still it worked with Slurm
     >     21.08.
     >
     >     Now I built a Slurm 24.05 cluster based on Ubuntu 24.04 and started
     >     to move hardware there. When I run my nccl-tests container (also
     >     with newer software) I see error messages like this:
     >
     >     [node1:21437] OPAL ERROR: Unreachable in file ext3x_client.c at line 111
     >     --------------------------------------------------------------------------
     >     The application appears to have been direct launched using "srun",
     >     but OMPI was not built with SLURM's PMI support and therefore cannot
     >     execute. There are several options for building PMI support under
     >     SLURM, depending upon the SLURM version you are using:
     >
     >         version 16.05 or later: you can use SLURM's PMIx support. This
     >         requires that you configure and build SLURM --with-pmix.
     >
     >         Versions earlier than 16.05: you must use either SLURM's PMI-1 or
     >         PMI-2 support. SLURM builds PMI-1 by default, or you can manually
     >         install PMI-2. You must then build Open MPI using --with-pmi
     >         pointing to the SLURM PMI library location.
     >
     >     Please configure as appropriate and try again.
     >     --------------------------------------------------------------------------
     >     *** An error occurred in MPI_Init
     >     *** on a NULL communicator
     >     *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
     >     ***    and potentially your MPI job)
     >     [node1:21437] Local abort before MPI_INIT completed completed
     >     successfully, but am not able to aggregate error messages, and not
     >     able to guarantee that all other processes were killed!
     >
     >     One simple question:
     >     Is this related to https://github.com/open-mpi/ompi/issues/12471?
     >     If so: is there some workaround?
     >
     >     I'm very grateful for any comments. I know that a lot of detailed
     >     information is missing, but maybe someone can already give me a
     >     hint where to look.
     >
     >     Thanks a lot
     >     Matthias
     >



--
Medizinische Universität Wien

Matthias Leopold

IT Services & strategisches Informationsmanagement
Enterprise Technology & Infrastructure

Spitalgasse 23, 1090 Wien
T: +43 1 40160 21241

matthias.leop...@meduniwien.ac.at
https://www.meduniwien.ac.at


--
slurm-users mailing list -- slurm-users@lists.schedmd.com
To unsubscribe send an email to slurm-users-le...@lists.schedmd.com
