To: Slurm User Community List
Subject: Re: [slurm-users] Node is not allocating all CPUs
No, the user is submitting four jobs, each requesting 1/4 of the memory and 1/4
of the CPUs (i.e. 8 out of 32). But even though there are 32 physical cores,
Slurm only shows 16 as trackable resources:
From scontrol:
…
Could the number of trackable resources be different from the number of
actual CPUs?
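For reference, they can legitimately differ: at default settings the controller schedules from the node definition in slurm.conf, not from what the hardware reports, so the configured (trackable) CPU count can disagree with slurmd -C. A quick check against the node record (a sketch using standard scontrol fields; node020 is the node named later in this thread):

scontrol show node node020 | grep -E "CPUTot|CPUAlloc|CfgTRES"

If CPUTot/CfgTRES report 16 while slurmd -C reports 32, the node definition in slurm.conf is the place to look.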
David Guertin
From: slurm-users on behalf of Sarlo, Jeffrey S
Sent: Wednesday, April 6, 2022 10:30 AM
To: Slurm User Community List
Subject: Re: [slurm-users] Node is not allocating all CPUs
Thanks. That shows 32 cores, as expected:
# /cm/shared/apps/slurm/19.05.8/sbin/slurmd -C
NodeName=node020 CPUs=32 Boards=1 SocketsPerBoard=2 CoresPerSocket=16 ThreadsPerCore=1 RealMemory=257600
UpTime=0-22:39:36
But I can't figure out why Slurm is still only allocating 16.
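For completeness, one way to confirm what the four jobs were actually given on that node (a sketch; -w filters by node and %C is squeue's CPU-count column):

squeue -w node020 -o "%.10i %.9P %.8T %.4C %R"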
David Guertin
From: slurm-users on behalf of Brian Andrus
Sent: Tuesday, April 5, 2022 6:14 PM
To: slurm-users@lists.schedmd.com
Subject: Re: [slurm-users] Node is not allocating all CPUs
You want to see what is output on the node itself when you run:
slurmd -C
Brian Andrus
On 4/5/2022 2:11 PM, Guertin, David S. wrote:
We've added a new GPU node to our cluster with 32 cores. It contains 2 16-core
sockets, and hyperthreading is turned off, so the total is 32 cores. But jobs
are only being allowed to use 16 cores.
Here's the relevant line from slurm.conf:
NodeName=node020 CoresPerSocket=16 RealMemory=257600 Thr
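The line is cut off above, but one detail worth checking (an assumption, not something confirmed in this excerpt): if the definition does not include Sockets=2, Slurm derives the node's CPU count as Boards * Sockets * CoresPerSocket * ThreadsPerCore, with each unspecified factor defaulting to 1, which here gives 1 * 1 * 16 * 1 = 16 and would match the symptom. A node line consistent with the slurmd -C output would look roughly like:

NodeName=node020 Sockets=2 CoresPerSocket=16 ThreadsPerCore=1 RealMemory=257600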