Re: [slurm-users] Include some cores of the head node to a partition

2018-04-28 Thread Mahmood Naderan
[root@rocks7 ~]# slurmd -C
slurmd: Considering each NUMA node as a socket
NodeName=rocks7 CPUs=32 Boards=1 SocketsPerBoard=4 CoresPerSocket=8 ThreadsPerCore=1 RealMemory=64261 UpTime=15-21:30:53
[root@rocks7 ~]# scontrol show node rocks7
NodeName=rocks7 Arch=x86_64 CoresPerSocket=1 CPUAlloc=0 CP
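The mismatch in the output above is the crux of the thread: `slurmd -C` detects CoresPerSocket=8, while `scontrol show node` reports CoresPerSocket=1, which suggests the NodeName entry in slurm.conf understates the real hardware. A sketch of what a corrected entry might look like, copying the values from the `slurmd -C` output above (the file path and State value are assumptions):

```shell
# /etc/slurm/slurm.conf -- sketch only; copy the exact values that
# "slurmd -C" prints on the node so the daemon's view matches the config.
NodeName=rocks7 CPUs=32 Boards=1 SocketsPerBoard=4 CoresPerSocket=8 \
    ThreadsPerCore=1 RealMemory=64261 State=UNKNOWN
```

After editing, the controller needs to reread the file (e.g. `scontrol reconfigure` or a slurmctld restart) before the new definition takes effect.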

Re: [slurm-users] Include some cores of the head node to a partition

2018-04-28 Thread Chris Samuel
On Sunday, 29 April 2018 2:34:09 AM AEST Mahmood Naderan wrote:
> [root@rocks7 ~]# sinfo --list-reasons
> REASON               USER   TIMESTAMP            NODELIST
> Low socket*core*thre root   2018-04-19T16:46:39  rocks7

slurmd thinks that "rocks7" doesn't have enough hardware resources to m
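When a node has been drained for "Low socket*core*thread count", fixing the NodeName definition in slurm.conf is not enough on its own; the node also has to be returned to service. A possible sequence, using standard scontrol/sinfo commands with the node name from this thread (the order and the expectation that the reason list clears are assumptions about this particular cluster):

```shell
# After correcting the NodeName=rocks7 line in slurm.conf:
scontrol reconfigure                          # have slurmctld reread slurm.conf
scontrol update NodeName=rocks7 State=RESUME  # clear the drained state
sinfo --list-reasons                          # the rocks7 entry should be gone
scontrol show node rocks7                     # verify CoresPerSocket now matches
```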

Re: [slurm-users] Include some cores of the head node to a partition

2018-04-28 Thread Mahmood Naderan
[root@rocks7 ~]# sinfo --list-reasons
REASON               USER   TIMESTAMP            NODELIST
Low socket*core*thre root   2018-04-19T16:46:39  rocks7

Regards,
Mahmood

On Sat, Apr 28, 2018 at 6:01 PM, Chris Samuel wrote:
> On Saturday, 28 April 2018 7:58:08 PM AEST Mahmood Naderan wrote

Re: [slurm-users] Include some cores of the head node to a partition

2018-04-28 Thread Chris Samuel
On Saturday, 28 April 2018 7:58:08 PM AEST Mahmood Naderan wrote:
> I see that the state of the frontend is Drained. Is that the default
> state?

Probably not. What does "sinfo --list-reasons" say?

-- 
Chris Samuel : http://www.csamuel.org/ : Melbourne, VIC

Re: [slurm-users] Include some cores of the head node to a partition

2018-04-28 Thread Mahmood Naderan
Hi again,
I see that the state of the frontend is Drained. Is that the default state?
The following line

PartitionName=OTHERS AllowAccounts=em1 Nodes=compute-0-[2-3],rocks7

should include all core numbers of all nodes. The computes are set to idle, but the frontend is drained.

Regards,
Mahmood
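For reference, a partition that spans the compute nodes plus the head node might be written like this in slurm.conf. Only the AllowAccounts and Nodes values come from the thread; the Default, MaxTime, and State settings are assumptions filled in for a complete example:

```shell
# slurm.conf -- sketch; account and node names as given in the thread.
PartitionName=OTHERS AllowAccounts=em1 Nodes=compute-0-[2-3],rocks7 \
    Default=NO MaxTime=INFINITE State=UP
```

Note that the partition line only names the nodes; the per-node core counts come from the corresponding NodeName definitions, which is why a wrong NodeName entry drains the frontend even when the partition itself looks correct.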