The agenda from SLUG looks very interesting.
Are there plans to publish the presentations on the SchedMD website, as has
been done for past meetings?
Trevor Cooper, M.Sc.
HPC Systems Programmer
San Diego Supercomputer Center
GPG Fingerprint: 2CA999800D11C5946C9DBFEE52364D7BBCEB35B8
Hello,
I would set "--ntasks" to the number of CPUs you want to use for your job
and remove "--cpus-per-task", which defaults to 1.
From: slurm-users On Behalf Of Selch, Brigitte (FIDD)
Sent: Friday, September 22, 2023 7:58 AM
To: slurm-us...@schedmd.com
Subject: [EXT] [slurm-users] Submi
You might also try switching from mpiexec to srun, as that way Slurm can
give more direction as to which cores have been allocated to which tasks.
I've found in the past that mpiexec will ignore what Slurm tells it.
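In the batch script that would mean, roughly (a sketch; ./my_app stands in
for the actual MPI binary, and the original mpiexec line is assumed):

# instead of: mpiexec -n $SLURM_NTASKS ./my_app
srun ./my_app    # srun inherits task and CPU counts from the allocation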
-Paul Edmon-
On 9/22/23 8:24 AM, Lambers, Martin wrote:
Hello,
For this setup it typically helps to disable MPI process binding with
"mpirun --bind-to none ..." (or similar) so that OpenMP can use all cores.
Best,
Martin
On 22/09/2023 13:57, Selch, Brigitte (FIDD) wrote:
Hello,
one of our applications needs a hybrid OpenMPI and OpenMP job submission.
Only one task is allowed per node, but this task should use all cores of the
node.
So, for example, I wrote:
#!/bin/bash
#SBATCH --nodes=5
#SBATCH --ntasks=5
#SBATCH --cpus-per-task=44
#SBATCH --export=ALL
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
Hello again, all!
I'm having another issue. Something seems to be wrong with how
reservations are handled in accounting.
The reservations have been created and are being enforced by Slurm. But
"sreport reservation utilization" returns an empty table.
I noticed that there'