This may be due to this commit:
https://github.com/SchedMD/slurm/commit/ee2813870fed48827aa0ec99e1b4baeaca710755
It seems that the behavior was changed from a fatal error to something
different when requesting cgroup devices in cgroup.conf without the
proper conf file.
If you do not r
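For context, a minimal sketch of the kind of cgroup.conf that runs into this (the path is illustrative; AllowedDevicesFile points at the cgroup_allowed_devices_file.conf discussed below):

{{{
### cgroup.conf (sketch)
CgroupAutomount=yes
ConstrainDevices=yes
AllowedDevicesFile=/etc/slurm/cgroup_allowed_devices_file.conf
}}}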
On Friday, 2 November 2018 11:06:11 PM AEDT Martijn Kruiten wrote:
> We pinpointed it to `ConstrainDevices=yes` in cgroup.conf. The solution
> was to set `/dev/*` in cgroup_allowed_devices_file.conf. We did not
> have anything there. We're now looking into the specific device that is
> needed by pmi2.
We pinpointed it to `ConstrainDevices=yes` in cgroup.conf. The solution
was to set `/dev/*` in cgroup_allowed_devices_file.conf. We did not
have anything there. We're now looking into the specific device that is
needed by pmi2.
Martijn Kruiten
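For anyone hitting the same problem, a sketch of the allowed-devices file that resolved it (`/dev/*` is the permissive catch-all; a tighter configuration would enumerate the specific devices instead):

{{{
### /etc/slurm/cgroup_allowed_devices_file.conf (sketch)
/dev/*
}}}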
On Thu, 2018-11-01 at 18:48 +0100, Bas van der Vlies wrote:
OK, if we change:
* TaskPlugin=task/affinity,task/cgroup
to:
* TaskPlugin=task/affinity
then the pmi2 interface works (see the slurm.conf sketch below). We are investigating this further.
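A sketch of the corresponding slurm.conf lines, before and after (only one TaskPlugin line should be active at a time):

{{{
### slurm.conf (sketch)
# with device constraints, pmi2 fails:
#TaskPlugin=task/affinity,task/cgroup
# workaround that makes pmi2 work:
TaskPlugin=task/affinity
}}}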
On 31/10/2018 08:26, Bas van der Vlies wrote:
I am busy with migrating from Torque/Moab to SLURM.
I have installed slurm 18.03 and am trying to run an MPI program with the
pmi2 interface.
{{{
~/mpitest> srun --mpi=list
srun: MPI types are...
srun: none
srun: openmpi
srun: pmi2
}}}
The none and openmpi interfaces work, but the pmi2 interface does not.
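For reference, the failing case was exercised with something like this (the program name is illustrative):

{{{
~/mpitest> srun --mpi=pmi2 -n 2 ./mpitest
}}}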