Hi, all
OS Version: RHEL 7.6
SLURM Version: slurm 18.08.6
I defined the gpu resource as follows:
[test@ohpc137pbsop-c001 ~]$ scontrol show config |grep TaskPlugin
TaskPlugin = task/cgroup
TaskPluginParam = (null type)
[test@ohpc137pbsop-c001 ~]$
[test@ohpc137pbs
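For reference, a minimal cgroup.conf sketch of the settings commonly paired with TaskPlugin=task/cgroup (the values below are assumptions, not taken from this cluster):
CgroupAutomount=yes
ConstrainCores=yes       # confine tasks to their allocated cores
ConstrainDevices=yes     # restrict access to GRES devices per job
ConstrainRAMSpace=yes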
Thank you. I now have a deeper understanding of this topic.
It looks like there is no problem when --cpu-bind is used without the verbose (-v) mode.
[test@ohpc137pbsop-sms ~]$ srun --nodes=1-1 --ntasks=6 --cpu-bind=cores cat /proc/self/status | grep Cpus_allowed_list
Cpus_allowed_list: 0-1,12,24-25,36
Cpus
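For what it's worth, one way to see where each of the six tasks lands is to combine a verbose binding report with a per-task grep (same node and task counts as above, otherwise placeholders):
[test@ohpc137pbsop-sms ~]$ srun --nodes=1-1 --ntasks=6 --cpu-bind=verbose,cores \
    grep Cpus_allowed_list /proc/self/status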
Hi, All
I checked 'Example 16' of the CPU Management User and Administrator Guide.
However, the following messages were output.
task/cgroup: task[1] not enough Core objects (4 < 6), disabling affinity
task/cgroup: task[3] not enough Core objects (4 < 6), disabling affinity
task/cgroup: task[4] no
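If I read the message correctly, the step was handed fewer core objects (4) than it has tasks (6), so task/cgroup disables affinity rather than share cores. A sketch of a request that reserves one core per task, assuming the node has at least 6 idle cores (everything else is a placeholder):
$ srun --nodes=1-1 --ntasks=6 --ntasks-per-core=1 --cpu-bind=cores \
    grep Cpus_allowed_list /proc/self/status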
Hi, all
"man scontrol" says:
Do not display information about hidden partitions, their jobs and job steps.
By default, neither partitions that are configured as hidden nor those
partitions unavailable to user's group will be displayed (i.e. this is the
default behavior).
'hide' option is no
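A quick way to compare the two behaviours, assuming the standard scontrol options (-a/--all also lists hidden partitions, --hide is the default described above):
$ scontrol --all show partition | grep PartitionName
$ scontrol --hide show partition | grep PartitionName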
Hi, all
Sorry, it's my mistake. Please forget about it.
Regards,
Tomo
-----Original Message-----
From: Uemoto, Tomoki/上本 友樹
Sent: Monday, October 21, 2019 11:54 AM
To: slurm-users@lists.schedmd.com
Subject: Not all job steps are displayed even if 'allsteps' is specified.
Hi, all
Is the sstat --allsteps option working?
$ sstat --version
slurm 18.08.6
$
I ran the following job script as a test.
$ cat sleep_60.sh
#!/bin/bash
#SBATCH -J sleep_60
#SBATCH -o job.%j.out
srun sleep 60 &
srun sleep 60 &
srun sleep 60
$
$ sbatch sleep_60.sh
Submitted b
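For reference, the query I would expect to list every running step of that job (the job id and format fields are placeholders):
$ sstat --allsteps -j <jobid> --format=JobID,AveCPU,MaxRSS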
Hi, All
I don't understand in which cases to use the --delay-boot option.
Should I check it as follows?
1. sbatch --delay-boot=
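My understanding, hedged, from the sbatch man page: --delay-boot only matters together with a feature that could require a node reboot, and its value is how long the job may stay pending before Slurm is allowed to reboot nodes to satisfy it. A sketch with made-up values:
$ sbatch --delay-boot=30 -C <feature_requiring_reboot> job.sh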
m] On Behalf Of
Chris Samuel
Sent: Friday, October 04, 2019 2:38 PM
To: slurm-users@lists.schedmd.com
Subject: Re: [slurm-users] ReqGRES value is not valid
On 3/10/19 10:23 pm, Uemoto, Tomoki wrote:
> I don't know why the returned value of ReqGres is 0.
Which version of Slurm are you on?
Also t
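In case it helps, these are the sacct fields I would check, assuming 18.08 still exposes the GRES columns next to TRES (the job id is a placeholder):
$ sacct -j <jobid> --format=JobID,ReqGRES,AllocGRES,AllocTRES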
Hi, all
I want to configure generic consumable resources (gpu) and confirm that the
resources are assigned to jobs on each node.
I applied the following settings.
o gres.conf
Name=gpu File=/dev/tty[0-3] CPUs=[0-24]
Name=gpu File=/dev/tty[4-7] CPUs=[25-47]
o slurm.conf
TaskPlugin=task/af
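A sketch of the pieces that, as far as I know, normally accompany that gres.conf, plus a quick check that a job really receives the GRES. The node names, GPU counts and the CUDA_VISIBLE_DEVICES check are assumptions for a real-GPU setup; with /dev/tty used as fake GPUs only the allocation itself can be verified:
o slurm.conf (sketch)
GresTypes=gpu
NodeName=c00[1-2] Gres=gpu:8 ...
o verification (placeholders)
$ scontrol show node c001 | grep -i gres
$ srun --gres=gpu:1 env | grep CUDA_VISIBLE_DEVICES
$ sacct -j <jobid> --format=JobID,AllocGRES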
     JobID  Partition    JobName ReqCPUFreqMin ReqCPUFreqMax ReqCPUFreqGov
---------- ---------- ---------- ------------- ------------- -------------
        56     normal   sleep_60       Unknown       Unknown       Unknown
$
-----Original Message-----
From: slurm-users [mailto:slurm-users-boun...@lists.schedmd.com] On Behalf Of
Uemoto, Tomoki
Hi all,
I am checking the --cpu-freq option of the sbatch command.
The CPU frequency of the target node is as follows.
# cpupower frequency-info
analyzing CPU 0:
driver: acpi-cpufreq
CPUs which run at the same hardware frequency: 0
CPUs which need to have their frequency coord
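For reference, the --cpu-freq forms I would try first (frequencies are in kHz; the numbers and the job script name are placeholders), and the sacct fields that record what was requested:
$ sbatch --cpu-freq=Performance job.sh
$ sbatch --cpu-freq=2000000-2400000 job.sh
$ sacct -j <jobid> --format=JobID,ReqCPUFreqMin,ReqCPUFreqMax,ReqCPUFreqGov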
cated node.
@Tomo
So, if you want to use the broadwell node, just use
-C broadwell
best
Marcus
On 10/2/19 8:23 AM, Loris Bennett wrote:
> Hi Tomo,
>
> "Uemoto, Tomoki" writes:
>
>> Hi, all
>> I'm working with slurm 18.08.6 on RHEL 7.6
>>mana
Hi, all
I'm working with slurm 18.08.6 on RHEL 7.6.
manager : 1 node
computes: 2 nodes (c001: haswell, c002: broadwell)
I am checking the --batch option of the sbatch command.
The following Features were set for testing.
# scontrol update nodename=c001 Features=haswell
# scontrol update nodename=
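For anyone reading this thread later, the two options being compared, as I understand them: -C/--constraint restricts the whole allocation to nodes with the feature, while --batch restricts only the node that runs the batch script itself. The job script name is a placeholder:
$ sbatch -C broadwell job.sh
$ sbatch --batch=broadwell job.sh
$ scontrol show node c002 | grep -i features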