Hi Paul,

On 10/02/2022 14:33, Paul Brunk wrote:

> Now we see a problem in which the OOM killer is in some cases
> predictably killing job steps that don't seem to deserve it.  In some
> cases these are job scripts and input files which ran fine before our
> Slurm upgrade.  More details follow, but that's the issue in a
> nutshell.

I'm not sure if this is the case, but it might help to keep in mind the 
difference between mpirun and srun.

With srun, you let Slurm create the tasks with the appropriate memory/CPU 
limits, and each MPI rank runs directly inside one of those tasks.
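
For example, in a batch script like this (a minimal sketch; ./my_mpi_app 
and the resource numbers are placeholders, and it assumes your cluster 
enforces memory through the cgroup task plugin):

    #!/bin/bash
    #SBATCH --ntasks=8
    #SBATCH --mem-per-cpu=2G
    # srun launches all 8 ranks as Slurm tasks, so each rank gets the
    # same per-task memory/CPU limits derived from the job request.
    srun ./my_mpi_app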

With mpirun, you usually let your MPI distribution start one task per node, 
which spawns the MPI process manager, which in turn starts the actual MPI 
ranks.
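
Contrast that with the mpirun case (same placeholders as above):

    #!/bin/bash
    #SBATCH --ntasks=8
    #SBATCH --mem-per-cpu=2G
    # mpirun starts one launcher process per node (orted for Open MPI,
    # hydra for MPICH); the ranks are forked as children of that
    # launcher, so they inherit whatever limits that single task got
    # instead of each getting its own per-task limit.
    mpirun ./my_mpi_app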

You might very well end up with different memory limits per process, which 
could be the cause of your OOM issue, especially if not all MPI ranks use 
the same amount of memory.
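
One way to see where the ranks actually land is to print the cgroup of 
each process under both launchers (a rough sketch; OMPI_COMM_WORLD_RANK 
assumes your distribution is Open MPI):

    # Under srun, every rank should report its own step/task cgroup.
    srun --ntasks=4 bash -c 'echo "rank $SLURM_PROCID: $(head -1 /proc/self/cgroup)"'
    # Under mpirun (Open MPI), ranks typically share the launcher's cgroup.
    mpirun -np 4 bash -c 'echo "rank $OMPI_COMM_WORLD_RANK: $(head -1 /proc/self/cgroup)"'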

Ward
