Re: [slurm-users] Inconsistent cpu bindings with cpu-bind=none

2020-02-20 Thread Boden, Marcus Vincent
From: Donners, John via slurm-users. Sent: Tuesday, February 18, 2020 10:41:29 PM. To: slurm-users@lists.schedmd.com. Subject: Re: [slurm-users] Inconsistent cpu bindings with cpu-bind=none
> Hi all, I have a few more remarks about this question (I have been in contact with Marcus about this): -

Re: [slurm-users] Inconsistent cpu bindings with cpu-bind=none

2020-02-18 Thread Donners, John
Hi all, I have a few more remarks about this question (I have been in contact with Marcus about this):
- the idea of the jobscript is that SLURM does not do any binding and leaves binding up to mpirun.
- this works fine on the first node, where SLURM does not bind the processes (so
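John's setup, as described, is a jobscript where Slurm hands the cores over unbound and mpirun does all the pinning. A minimal sketch of that arrangement, assuming Intel MPI (the application name is a placeholder, and the exact pinning variables are assumptions, not taken from the thread):

```shell
#!/bin/bash
#SBATCH -N 2
#SBATCH --tasks-per-node=40

module load impi/2019.4
export I_MPI_DEBUG=6        # have Intel MPI print its pinning table at startup
export I_MPI_PIN=1          # let mpirun's Hydra launcher do the process pinning
# Under Slurm, Intel MPI's mpirun typically bootstraps its proxies through
# srun, so also ask srun not to apply any binding of its own:
export SLURM_CPU_BIND=none
mpirun ./my_app             # hypothetical application binary
```

With I_MPI_DEBUG=6 set, the pinning Intel MPI actually applied shows up in the job's stdout, which is a quick way to confirm whether Slurm interfered on any node.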

Re: [slurm-users] Inconsistent cpu bindings with cpu-bind=none

2020-02-17 Thread Chris Samuel
On 17/2/20 12:48 am, Marcus Boden wrote:
> I am facing a bit of a weird issue with CPU bindings and mpirun:
I think if you want Slurm to have any control over bindings you'll be wanting to use srun to launch your MPI program, not mpirun. All the best, Chris -- Chris Samuel : http://www.csa
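Chris's suggestion, sketched as a jobscript (the binding granularity, PMI choice, and application name are illustrative assumptions, not from the thread):

```shell
#!/bin/bash
#SBATCH -N 20
#SBATCH --tasks-per-node=40

module load impi/2019.4
# Let Slurm itself place and bind the ranks, rather than mpirun:
srun --mpi=pmi2 --cpu-bind=cores ./my_app
# Depending on how Intel MPI was built, srun launching may also need
# I_MPI_PMI_LIBRARY pointed at Slurm's PMI library.
```

The trade-off is the one the thread circles around: with srun, Slurm's binding machinery is authoritative and consistent across nodes; with mpirun, two binding mechanisms (Slurm's and the MPI launcher's) can disagree.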

[slurm-users] Inconsistent cpu bindings with cpu-bind=none

2020-02-17 Thread Marcus Boden
Hi everyone, I am facing a bit of a weird issue with CPU bindings and mpirun. My jobscript:
#SBATCH -N 20
#SBATCH --tasks-per-node=40
#SBATCH -p medium40
#SBATCH -t 30
#SBATCH -o out/%J.out
#SBATCH -e out/%J.err
#SBATCH --reservation=root_98
module load impi/2019.4 2>&1
export I_MPI_DEBUG=6
exp
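When debugging an issue like Marcus's, it helps to see what each task was actually bound to, independently of the I_MPI_DEBUG output. One hedged way to do that from inside the job, assuming Linux nodes with taskset (from util-linux) available:

```shell
# Inside the batch job: print each task's host, rank, and CPU affinity list.
srun --ntasks-per-node=2 bash -c \
    'echo "$(hostname) rank ${SLURM_PROCID}: $(taskset -cp $$)"'
```

If the affinity lists differ in pattern between the first node and the rest, that localizes the inconsistency to Slurm's per-node binding behavior rather than to the MPI library.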