Thanks, Geert, for your response. I was able to start slurmd by changing the
ProctrackType setting to proctrack/linuxproc.
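
For anyone who hits the same error, the one-line change in slurm.conf (the path
is the one shown in my original message below) was:

    # was: ProctrackType=proctrack/cgroup
    ProctrackType=proctrack/linuxproc

followed by restarting the daemon with systemctl restart slurmd.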


Thanks,
Yogesh Aggarwal

-----Original Message-----
From: slurm-users [mailto:slurm-users-boun...@lists.schedmd.com] On Behalf Of Geert Geurts
Sent: Wednesday, February 28, 2018 3:29 PM
To: Slurm User Community List
Subject: EXT: Re: [slurm-users] slurm-17.11.3-2 - Redhat Linux 7.2 - Not able to start slurmd.service

I think you'll be fine after uncommenting CgroupAutomount=yes in the
cgroup.conf file.
Could you try it like that?
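
Concretely, that means removing the leading # from this line in your
cgroup.conf paste:

    CgroupAutomount=yes

With that set, slurmd mounts any cgroup subsystems it needs (freezer included)
at startup instead of aborting when one is missing.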

Regards,
Geert


________________________________
From: Yogesh Aggarwal <yogesh.aggar...@alkermes.com>
Sent: Wednesday, February 28, 2018 20:09
To: slurm-users@lists.schedmd.com
Subject: [slurm-users] slurm-17.11.3-2 - Redhat Linux 7.2 - Not able to start slurmd.service

Hi All,

I am trying to install slurm-17.11.3-2 on a Redhat Linux 7.2 system. I have
completed the installation and configuration but am not able to start
slurmd.service. Below are the logs from /var/log/slurmd.log. I have also pasted
slurm.conf, cgroup.conf and cgroup_allowed_devices_file.conf for quick
reference. I am able to start the slurmctld service successfully. Can anyone
please guide me to the particular setting that is causing this issue?

[2018-02-28T12:55:34.973] Message aggregation disabled
[2018-02-28T12:55:34.974] error: cgroup namespace 'freezer' not mounted. aborting
[2018-02-28T12:55:34.974] error: unable to create freezer cgroup namespace
[2018-02-28T12:55:34.974] error: Couldn't load specified plugin name for proctrack/cgroup: Plugin init() callback failed
[2018-02-28T12:55:34.974] error: cannot create proctrack context for proctrack/cgroup
[2018-02-28T12:55:34.974] error: slurmd initialization failed
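
For reference, whether the freezer controller is mounted can be checked
directly (assuming the stock RHEL 7 cgroup layout under /sys/fs/cgroup); the
errors above indicate it is absent on this node:

    mount -t cgroup | grep freezer
    ls -d /sys/fs/cgroup/freezer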


[root@montecarlo01 etc]# cat /usr/local/etc/slurm.conf | grep -v ^#
ControlMachine=montecarlo01
ControlAddr=<IP Address removed from email.>
GresTypes=gpu
MpiDefault=none
ProctrackType=proctrack/cgroup
ReturnToService=1
SlurmctldPidFile=/var/run/slurmctld.pid
SlurmdPidFile=/var/run/slurmd.pid
SlurmdSpoolDir=/var/spool/slurmd
SlurmUser=slurm
StateSaveLocation=/var/spool/slurmctld
SwitchType=switch/none
TaskPlugin=task/cgroup
FastSchedule=1
SchedulerType=sched/backfill
SelectType=select/linear
AccountingStorageType=accounting_storage/none
ClusterName=montecarlo01
JobAcctGatherType=jobacct_gather/linux
SlurmctldLogFile=/var/log/slurmctld.log
SlurmdLogFile=/var/log/slurmd.log
NodeName=montecarlo01 NodeAddr=<IP Address removed from email.> Sockets=1 CoresPerSocket=8 ThreadsPerCore=2 Gres=gpu:8 State=UNKNOWN
PartitionName=AMBER_GPU Nodes=montecarlo01 Default=YES MaxTime=INFINITE State=UP
[root@montecarlo01 etc]#


[root@montecarlo01 slurm]# cat /usr/local/etc/slurm/cgroup.conf 
#CgroupMountpoint="/sys/fs/cgroup"
#CgroupAutomount=yes
#CgroupReleaseAgentDir="/usr/local/etc/slurm/cgroup"
#AllowedDevicesFile="/usr/local/etc/slurm/cgroup_allowed_devices_file.conf"
ConstrainCores=no
TaskAffinity=no
ConstrainRAMSpace=no
ConstrainSwapSpace=no
ConstrainDevices=no
AllowedRamSpace=no
AllowedSwapSpace=no
MaxRAMPercent=100
MaxSwapPercent=100
MinRAMSpace=30
[root@montecarlo01 slurm]#

[root@montecarlo01 slurm]# cat /usr/local/etc/slurm/cgroup_allowed_devices_file.conf
/dev/null
/dev/urandom
/dev/zero
/dev/sda*
/dev/cpu/*/*
/dev/pts/*
[root@montecarlo01 slurm]#



Thanks,
Yogesh Aggarwal


