We share our 28-core GPU nodes with non-GPU jobs through a set of 'any' 
partitions. The 'any' partitions have a setting of MaxCPUsPerNode=12, and the 
GPU partitions have a setting of MaxCPUsPerNode=16. That's more or less 
documented in the slurm.conf documentation under "MaxCPUsPerNode".
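A rough sketch of what that looks like in slurm.conf (node and partition names 
here are placeholders, not our actual config):

    # 28-core node with 4 GPUs
    NodeName=node01 CPUs=28 Gres=gpu:4 State=UNKNOWN
    # Non-GPU jobs land in the 'any' partition, capped at 12 cores per node
    PartitionName=any Nodes=node01 MaxCPUsPerNode=12
    # GPU jobs use the gpu partition and can take the other 16 cores
    PartitionName=gpu Nodes=node01 MaxCPUsPerNode=16

Since 12 + 16 = 28, the two partitions between them can never oversubscribe 
the node's cores.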

From: slurm-users <slurm-users-boun...@lists.schedmd.com> on behalf of Ahmad 
Khalifa <underoath...@gmail.com>
Reply-To: Slurm User Community List <slurm-users@lists.schedmd.com>
Date: Wednesday, September 30, 2020 at 3:13 PM
To: "slurm-users@lists.schedmd.com" <slurm-users@lists.schedmd.com>
Subject: [slurm-users] Running gpu and cpu jobs on the same node


I have a machine with 4 RTX 2080 Ti cards and a Core i9. I submit jobs to it 
through MPI PMI2 (from Relion).

If I use 5 MPI processes with 4 threads each, then I'm basically using all 4 
GPUs and 20 threads of my CPU.
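
My submission looks roughly like this (a sketch; the original wrapper script 
isn't shown here, and the partition name is just an example):

    #!/bin/bash
    #SBATCH --partition=gpu      # example GPU partition name
    #SBATCH --ntasks=5           # 5 MPI ranks (Relion master + 4 workers)
    #SBATCH --cpus-per-task=4    # 4 threads per rank -> 20 CPU threads total
    #SBATCH --gres=gpu:4         # all 4 RTX 2080 Ti cards
    srun --mpi=pmi2 relion_refine_mpi ...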

My question is: my current configuration allows submitting jobs to the same 
node under a different partition, but I'm not sure whether a job submitted 
with #SBATCH --partition=cpu will only use the remaining 2 cores (4 threads), 
or whether it will share resources with my GPU job.

Thanks.

