Your nodes are hyperthreaded (ThreadsPerCore=2). Slurm always allocates _all
threads_ associated with a selected core to jobs. So you're being assigned
both threads on core N.
On our development-partition nodes we configure the threads as cores, e.g.
NodeName=moria CPUs=16 Boards=1 SocketsP
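For reference, a complete node definition of that shape might look like the following — the numbers here are assumptions for illustration (a node with 16 hardware threads presented to Slurm as 16 independent cores), not the real topology of the node above:

    NodeName=moria CPUs=16 Boards=1 SocketsPerBoard=1 CoresPerSocket=16 ThreadsPerCore=1

With ThreadsPerCore=1, Slurm treats each hardware thread as a schedulable core, so a 1-CPU job consumes a single thread instead of a whole physical core and its sibling thread.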
Hi All,
We configured Slurm on a server with 8 GPUs and 16 CPUs and want to use
Slurm to schedule both CPU and GPU jobs. We observed unexpected
behavior: although there are 16 CPUs, Slurm only schedules 8 jobs to
run, even when some of the jobs are not asking for any GPU. If I inspect detailed
infor