Slurm supports an l3cache_as_socket [1] parameter in recent releases. That 
would make an EPYC system, for example, appear to have many more sockets than 
physically exist, and that should help ensure the threads of a single task 
share a cache.
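
If memory serves, it is set as a SlurmdParameters option in slurm.conf; 
roughly along these lines (treat the exact spelling and placement as an 
assumption and confirm against the slurm.conf man page for your release):

      # slurm.conf -- expose each L3 cache domain as a "socket" to Slurm
      SlurmdParameters=l3cache_as_socket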

You’d want to run slurmd -C on a node with that setting enabled to generate the 
new NodeName parameters, and replace the old entries in the overall slurm.conf 
with the updated values.
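
A minimal sketch of that workflow (the node name and topology figures below 
are invented for illustration, not taken from a real node):

      # on a compute node, after enabling the setting and restarting slurmd:
      slurmd -C
      # prints something along the lines of (hypothetical values):
      #   NodeName=node001 CPUs=48 Boards=1 SocketsPerBoard=6 CoresPerSocket=8 ThreadsPerCore=1 RealMemory=257000
      # paste that line over the old NodeName entry in slurm.conf, then
      # restart/reconfigure the Slurm daemons so the new layout takes effect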

[1] https://slurm.schedmd.com/slurm.conf.html#OPT_l3cache_as_socket

On Mar 13, 2022, at 1:43 PM, vicentesmith <vicentesm...@protonmail.com> wrote:



Hello,
I'm performing some tests (CPU-only systems) to compare a pure MPI setup with 
a hybrid MPI+OpenMP setup. The system is running Open MPI v4.1.2, so a job 
submission reads either:
      mpirun -np 48 foo.exe
or
      export OMP_NUM_THREADS=8
      mpirun -np 6 foo.exe
On our system, the latter runs slightly faster (about 5 to 10%), though any 
performance gain or loss will depend on the system and the application.
On the same system and for the same application, the first SLURM script reads:
      #!/bin/bash
      #SBATCH --job-name=***
      #SBATCH --output=*
      #SBATCH --ntasks=48
      mpirun foo.exe
This script runs fine. For the hybrid job, the script reads:
      #!/bin/bash
      #SBATCH --job-name=***hybrid
      #SBATCH --output=***
      #SBATCH --ntasks=6
      #SBATCH --cpus-per-task=8
      export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
      mpirun foo.exe
However, this runs much slower, and it seems to slow down further as the run 
progresses. Something is clearly not working correctly in the hybrid case. My 
only explanation is that the threads are not being placed correctly (by this I 
mean that the 8 threads of a task are not assigned to cores sharing the same 
L3 cache). Open MPI is supposed to choose sensible defaults, but I was 
wondering whether I might need to recompile Open MPI with some extra flags or 
modify the SLURM script somehow; a guess at what that might look like is 
sketched below.
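For instance, would something along these lines be the right direction? (The 
mapping flags are standard Open MPI 4.x mpirun options, but whether one rank 
per L3 domain with 8 cores each actually matches our node layout is just a 
guess on my part.)
      # inside the same batch script, in place of the plain mpirun line:
      export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
      export OMP_PROC_BIND=close     # keep a rank's OpenMP threads on its own cores
      export OMP_PLACES=cores
      # one rank per L3 cache domain, 8 cores per rank, and print the pinning:
      mpirun -np 6 --map-by ppr:1:l3cache:pe=8 --report-bindings foo.exe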
Thanks.
