We have overlapping partitions for GPU work and some kinds of non-GPU work (both large-memory and regular-memory jobs).
For 28-core nodes with 2 GPUs, we have:

    PartitionName=gpu             MaxCPUsPerNode=16 … Nodes=gpunode[001-004]
    PartitionName=any-interactive MaxCPUsPerNode=12 … Nodes=node[001-040],gpunode[001-004]
    PartitionName=bigmem          MaxCPUsPerNode=12 … Nodes=gpunode[001-003]
    PartitionName=hugemem         MaxCPUsPerNode=12 … Nodes=gpunode004

Worst case, non-GPU jobs could reserve up to 24 of the 28 cores on a GPU node, but only for a limited time (our any-interactive partition has a 2-hour time limit). In practice, it has let us use a lot of otherwise idle CPU capacity on the GPU nodes for short test runs. (A sketch adapting this approach to the 112-core node in the question follows below the quoted message.)

From: slurm-users <slurm-users-boun...@lists.schedmd.com>
Date: Wednesday, December 16, 2020 at 1:04 PM
To: Slurm User Community List <slurm-users@lists.schedmd.com>
Subject: [slurm-users] using resources effectively?

Hi,

Say I have a Slurm node with 1 x GPU and 112 x CPU cores, and:

1) there is a job running on the node using the GPU and 20 x CPU cores
2) there is a job waiting in the queue asking for 1 x GPU and 20 x CPU cores

Is it possible to:

a) let a new job asking for 0 x GPU and 20 x CPU cores (safe for the queued GPU job) start immediately; and
b) let a new job asking for 0 x GPU and 100 x CPU cores (not safe for the queued GPU job) wait in the queue?

Or c) is it doable to put the node into two Slurm partitions, 56 CPU cores to a "cpu" partition and 56 CPU cores to a "gpu" partition, for example?

Thank you in advance for any suggestions / tips.

Best,
Weijun

===========
Weijun Gao
Computational Research Support Specialist
Department of Psychology, University of Toronto Scarborough
1265 Military Trail, Room SW416
Toronto, ON M1C 1M2
E-mail: weijun....@utoronto.ca
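For the setup in the quoted question (one node with 1 GPU and 112 cores), a minimal sketch of the same overlapping-partition idea might look like the slurm.conf fragment below. The node name, partition names, the 56/56 core split (mirroring option c), and the 2-hour limit on the CPU-only partition are illustrative assumptions, not tested configuration; it also assumes a consumable-resource select plugin such as cons_tres so CPU-only and GPU jobs can share the node.

    # Core-level scheduling so jobs from both partitions can share a node (assumption).
    SelectType=select/cons_tres
    SelectTypeParameters=CR_Core

    # Hypothetical node definition; a matching gres.conf entry for the GPU is also needed.
    NodeName=gpunode01 CPUs=112 Gres=gpu:1 State=UNKNOWN

    # GPU jobs may use at most 56 of the 112 cores on this node.
    PartitionName=gpu Nodes=gpunode01 MaxCPUsPerNode=56 State=UP
    # CPU-only jobs are capped at 56 cores in total on this node, and the short
    # time limit means those cores free up quickly for queued GPU work.
    PartitionName=cpu Nodes=gpunode01 MaxCPUsPerNode=56 MaxTime=02:00:00 State=UP

Because MaxCPUsPerNode limits all jobs from a partition combined, CPU-only jobs can never hold more than 56 cores on the node, so a queued GPU job always has at least 56 cores (plus the GPU) available. Note that with this split a 100-core CPU-only request (case b) would exceed the cpu partition's per-node cap and could never start on this node, rather than waiting in the queue. Submission would then look something like this (script names are placeholders):

    sbatch --partition=cpu --ntasks=1 --cpus-per-task=20 cpu_test.sh
    sbatch --partition=gpu --gres=gpu:1 --ntasks=1 --cpus-per-task=20 gpu_job.sh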