On Saturday, 17 February 2018 2:16:35 AM AEDT david martin wrote:
> NodeName=obelix CPUs=64 RealMemory=48 CoresPerSocket=16
> ThreadsPerCore=1 state=UNKNOWN
> >sinfo -Nl
>
> sinfo: error: NodeNames=obelix CPUs=64 doesn't match
> Sockets*CoresPerSocket*ThreadsPerCore (16), resetting CPUs
Log in to the compute node and run 'slurmd -C' to get Slurm's viewpoint:
e.g.
[root@cwc001 ~]# slurmd -C
NodeName=cwc001 CPUs=12 Boards=1 SocketsPerBoard=2 CoresPerSocket=6
ThreadsPerCore=1 RealMemory=36138 TmpDisk=92680
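The line that slurmd -C prints uses the same keywords as a node definition in
slurm.conf, so (as a rough sketch, taking the cwc001 sample above and dropping
the fields you don't need) the corresponding entry could look like:

NodeName=cwc001 Boards=1 SocketsPerBoard=2 CoresPerSocket=6 ThreadsPerCore=1
RealMemory=36138 State=UNKNOWN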
From: slurm-users on behalf of david martin
I have included in slurm.conf the following (based on the web configurator).
I have 64 cpus, not 63.
NodeName=obelix CPUs=64 RealMemory=48 CoresPerSocket=16
ThreadsPerCore=1 state=UNKNOWN
>sinfo -Nl
sinfo: error: NodeNames=obelix CPUs=64 doesn't match
Sockets*CoresPerSocket*ThreadsPerCore
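The arithmetic behind that error: Sockets is not set, so it is presumably
taken as 1, and 1 x 16 x 1 = 16, which does not match the declared CPUs=64,
hence the reset. A sketch of an internally consistent line for 64 cpus as
4 sockets of 16 cores (an assumption; check what slurmd -C reports) would be:

NodeName=obelix Sockets=4 CoresPerSocket=16 ThreadsPerCore=1 RealMemory=48
State=UNKNOWN

Note also that RealMemory is specified in megabytes, so ~480Gb of memory
would be on the order of RealMemory=480000 rather than 48.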
On 16.02.2018 at 15:28, david martin wrote:
> I have a single physical server with:
>  * 63 cpus (each cpu has 16 cores)
>  * 480Gb total memory
>
> NodeNAME= Sockets=1 CoresPerSocket=16 ThreadsPerCore=1 Procs=63
> REALMEMORY=48
> This configuration will not work. What should it be?
Hi,

I have a single physical server with:

 * 63 cpus (each cpu has 16 cores)
 * 480Gb total memory

NodeNAME= Sockets=1 CoresPerSocket=16 ThreadsPerCore=1 Procs=63
REALMEMORY=48

This configuration will not work. What should it be?
Hi Ole,
On 16/02/18 22:23, Ole Holm Nielsen wrote:
> Question: Is it safer to wait for 17.11.4 where the issue will
> presumably be solved?
I don't think the commit has been backported to 17.11.x to date.
It's in master (for 18.08) here:
commit 4a16541bf0e005e1984afd4201b97df482e269ee
Author: T
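One way to check whether it has reached a release branch (assuming a clone of
the SchedMD Slurm git repository and their slurm-17.11 branch naming) is:

git branch -r --contains 4a16541bf0e005e1984afd4201b97df482e269ee

If slurm-17.11 does not show up in the output, the fix is only in master.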
We're planning to upgrade Slurm 17.02 to 17.11 soon, so it's important
for us to test the slurmdbd and database upgrade before doing the actual
upgrade.
I've made a *successful* dry run of the database migration from 17.02 to
17.11 on an offlined compute node running CentOS 7
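In outline, such a dry run amounts to something like the following (the
database name slurm_acct_db and the MariaDB/MySQL client tools are
assumptions; the test node needs its own slurmdbd.conf pointing at the
local copy):

# on the production database host: dump the accounting database
mysqldump slurm_acct_db > slurm_acct_db.sql

# on the offlined test node, with the 17.11 packages installed:
mysql -e 'CREATE DATABASE slurm_acct_db'
mysql slurm_acct_db < slurm_acct_db.sql

# run the new slurmdbd in the foreground with verbose logging
# and watch it convert the database schema
slurmdbd -D -vvv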