Re: [slurm-users] PMIx and Slurm

2017-11-28 Thread r...@open-mpi.org
Very true - one of the risks with installing from packages. However, be aware that Slurm 17.02 doesn’t support PMIx v2.0, and so this combination isn’t going to work anyway. If you want PMIx v2.x, then you need to pair it with Slurm 17.11. Ralph > On Nov 28, 2017, at 2:32 PM, Philip Kovacs wrote: …

Re: [slurm-users] PMIx and Slurm

2017-11-28 Thread r...@open-mpi.org
My apologies - I guess we hadn’t been tracking it that way. I’ll try to add some clarification. We presented a nice table at the BoF and I just need to find a few minutes to post it. I believe you do have to build Slurm against PMIx so that the pmix plugin is compiled. You then also have to specify …
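A rough sketch of the build sequence being described, assuming Slurm 17.11 sources and a PMIx installation under /opt/pmix (the paths here are illustrative, not taken from the thread):

```shell
# Configure Slurm against an existing PMIx install so the
# pmix MPI plugin gets compiled (hypothetical prefixes).
./configure --prefix=/opt/slurm --with-pmix=/opt/pmix
make && make install

# Launch an MPI application through the PMIx plugin.
srun --mpi=pmix -N 2 -n 4 ./my_mpi_app
```

The `--with-pmix=PATH` configure option is what pulls the PMIx headers/libraries into the build; without it the pmix plugin is silently skipped.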

Re: [slurm-users] PMIx and Slurm

2017-11-28 Thread r...@open-mpi.org
…pmix.so library. If you favor using the pmix versions of > pmi/pmi2, sounds like you'll get better performance > when using pmi/pmi2, but as mentioned, you would want to test every mpi > variant listed to make sure everything works. > > > On Tuesday, November 28, 2017 9: …
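The plugin choice discussed above is made at launch time; a minimal illustration (the exact plugin names, e.g. `pmix` vs. `pmix_v2`, depend on how this particular Slurm build was configured):

```shell
# Show which MPI plugins this Slurm installation supports
srun --mpi=list

# Launch with the classic PMI-2 plugin
srun --mpi=pmi2 -n 4 ./app

# Launch with the PMIx plugin instead
srun --mpi=pmix -n 4 ./app
```

As the message suggests, it is worth running each MPI variant you care about under each plugin once, since behavior differs between PMI-1/PMI-2 and PMIx code paths.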

Re: [slurm-users] OpenMPI & Slurm: mpiexec/mpirun vs. srun

2017-12-18 Thread r...@open-mpi.org
Repeated here from the OMPI list: We have had reports of applications running faster when executing under OMPI’s mpiexec versus when started by srun. Reasons aren’t entirely clear, but are likely related to differences in mapping/binding options (OMPI provides a very large range compared to srun …

Re: [slurm-users] OpenMPI & Slurm: mpiexec/mpirun vs. srun

2017-12-18 Thread r...@open-mpi.org
. > On Dec 18, 2017, at 5:23 PM, Christopher Samuel wrote: > > On 19/12/17 12:13, r...@open-mpi.org wrote: > >> We have had reports of applications running faster when executing under >> OMPI’s mpiexec versus when started by srun. > > Interesting, I know that

Re: [slurm-users] [17.11.1] no good pmi intention goes unpunished

2017-12-20 Thread r...@open-mpi.org
On Dec 20, 2017, at 6:21 PM, Philip Kovacs wrote: > > > -- slurm.spec: move libpmi to a separate package to solve a conflict with > > the > >version provided by PMIx. This will require a separate change to PMIx as > >well. > > I see the intention behind this change since the pmix 2.0+

Re: [slurm-users] [17.11.1] no good pmi intention goes unpunished

2017-12-21 Thread r...@open-mpi.org
…pmi2 code since it is compiled > directly into the plugin. > > > On Wednesday, December 20, 2017 10:47 PM, "r...@open-mpi.org" > wrote: > > > On Dec 20, 2017, at 6:21 PM, Philip Kovacs wrote: >> >> > --

Re: [slurm-users] [17.11.1] no good pmi intention goes unpunished

2017-12-21 Thread r...@open-mpi.org
… wrote: > > >(they are nothing more than symlinks to libpmix) > > This is very helpful to know. > > > On Thursday, December 21, 2017 3:28 PM, "r...@open-mpi.org" > wrote: > > > Hmmm - I think there may be something a little more subtle here. If you bui …

[slurm-users] Using PMIx with SLURM

2018-01-03 Thread r...@open-mpi.org
Hi folks. There have been some recent questions on both this and the OpenMPI mailing lists about PMIx use with SLURM. I have tried to capture the various conversations in a “how-to” guide on the PMIx web site: https://pmix.org/support/how-to/slurm-support/

[slurm-users] Fabric manager interactions: request for comments

2018-02-05 Thread r...@open-mpi.org
I apologize in advance if you received a copy of this from other mailing lists. -- Hello all. The PMIx community is starting work on the next phase of defining support for network interactions, looking specifically at things we might want to obtain and/or control v…

Re: [slurm-users] Allocate more memory

2018-02-07 Thread r...@open-mpi.org
I’m afraid neither of those versions is going to solve the problem here - there is no way to allocate memory across nodes. Simple reason: there is no way for a process to directly address memory on a separate node - you’d have to implement that via MPI or shmem or some other library. > On Feb

Re: [slurm-users] Allocate more memory

2018-02-07 Thread r...@open-mpi.org
Afraid not - since you don’t have any nodes that meet the 3G requirement, you’ll just hang. > On Feb 7, 2018, at 7:01 AM, david vilanova wrote: > > Thanks for the quick response. > > Should the following script do the trick? meaning use all required nodes to > have at least 3G total memory …
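A sketch of the distinction being drawn in this exchange: Slurm’s `--mem` is a per-node limit, so a multi-node job only "adds up" memory if the application itself partitions its data across nodes (via MPI, SHMEM, etc.). Script and application names are hypothetical:

```shell
#!/bin/bash
# Request 1 GB on each of 3 nodes: 3 GB aggregate, but any single
# process can still only address the 1 GB on its own node.
#SBATCH --nodes=3
#SBATCH --mem=1G          # per-node limit, NOT a job-wide total

# Only a distributed application (e.g. MPI ranks each holding a
# slice of the data) can make use of the combined memory.
srun ./my_distributed_app
```

If no single node satisfies the per-node request (as in the 3G case above), the job simply waits for resources that will never appear.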