Thanks to everyone who replied! It's working now.
I had to make a number of changes:
1. Set the env vars in the TaskProlog so that they are exported to the
job/task. (I had just assumed that even though the Prolog is run under
root, and not the user id of the job, the variables it set would still
end up passed along to the job's environment.)
Your prolog script is run by/as the same user as slurmd, so any
environment variables you set there will not be available to the job
being run.
See: https://slurm.schedmd.com/prolog_epilog.html for info.
Brian Andrus
On 2/12/2021 1:27 PM, mercan wrote:
Hi;
Prolog and TaskProlog are different parameters and scripts. ...
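For reference, a minimal slurm.conf sketch showing the two hooks side by
side; the script paths are placeholders, not taken from this thread:

# Prolog runs as the slurmd user (typically root) on the node before the
# job starts; its shell environment is not inherited by the job.
Prolog=/etc/slurm/prolog.sh
# TaskProlog runs as the job's user before each task; lines it prints in
# the form "export NAME=value" are added to the task's environment.
TaskProlog=/etc/slurm/taskprolog.sh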
Hi;
Prolog and TaskProlog are different parameters and scripts. You should
use the TaskProlog script to set env. variables.
Regards;
Ahmet M.
On 13.02.2021 00:12, Herc Silverstein wrote:
Hi,
I have a prolog script that is being run via the slurm.conf Prolog=
setting. I've verified that it's being executed on the compute node. ...
In our taskprolog file we have something like
#!/bin/sh
echo export SCRATCHDIR=/scratch/${SLURM_JOBID}
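slurmstepd reads the TaskProlog's standard output, so an "export NAME=value"
line like the one above lands in each task's environment. A quick sanity
check from a job script might look like this (a sketch, not from the
original post):

#!/bin/bash
#SBATCH --job-name=check-scratchdir
# The TaskProlog runs before each task launched by srun, so SCRATCHDIR
# should be visible here.
srun bash -c 'echo "SCRATCHDIR on $(hostname) is $SCRATCHDIR"'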
From: slurm-users on behalf of Herc Silverstein
Sent: Friday, February 12, 2021 3:12 PM
To: slurm-us...@schedmd.com
Subject: [slurm-users] prolog not passing ...
Hi,
I have a prolog script that is being run via the slurm.conf Prolog=
setting. I've verified that it's being executed on the compute node.
My problem is that I cannot get environment variables that I set in this
prolog to be set/seen in the job. For example the prolog:
#!/bin/bash
...
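To illustrate the difference being discussed, a small sketch; the variable
name is invented for the example:

# In the Prolog (run under slurmd, in a separate process) this export
# only affects the prolog shell itself, so the job never sees it:
export MY_SCRATCH=/scratch/$SLURM_JOB_ID

# In the TaskProlog the same effect is achieved by printing the
# assignment, which slurmstepd then injects into the task environment:
echo "export MY_SCRATCH=/scratch/$SLURM_JOB_ID"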
On Fri, Feb 12, 2021 at 09:47:56AM +0100, Ole Holm Nielsen wrote:
>
> Could you kindly say where you have found documentation of the
> DefaultCpusPerGpu (or DefCpusPerGpu?) parameter.
Humph, I shouldn't have written the message from memory. It's actually
DefCpuPerGPU (singular).
> I'm unable to ...
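For anyone finding this later: DefCpuPerGPU is a partition option in
slurm.conf; a sketch with made-up partition and node names:

# Jobs in this partition that request GPUs but no explicit CPU count
# are allocated 8 CPUs per GPU by default.
PartitionName=gpu Nodes=gpu[01-04] DefCpuPerGPU=8 State=UP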
Thanks Guys!
All information is valuable. I'll look up our setting and try to tune our Slurm
cluster to get higher performance.
Best,
Kota
露崎 浩太 (Kota Tsuyuzaki)
kota.tsuyuzaki...@hco.ntt.co.jp
NTT Software Innovation Center
Distributed Processing Platform Technology Project
0422-59-2837
Hi Thomas,
Indeed, even on my cluster, the CPU ID does not match the physical CPU
assigned to the job:
# scontrol show job 24115206_399 -d
JobId=24115684 ArrayJobId=24115206 ArrayTaskId=399 JobName=s10
JOB_GRES=(null)
Nodes=spartan-bm096 CPU_IDs=50 Mem=4000 GRES=
[root@spartan-bm096 ~]# c
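One way to compare Slurm's abstract CPU_IDs with what the job actually got
is to check the CPU affinity of one of the job's processes on the node; a
sketch, where <pid> is a placeholder for any PID belonging to the job:

# Abstract CPU IDs as Slurm reports them:
scontrol show job 24115684 -d | grep CPU_IDs
# Physical CPUs the job's processes are actually confined to:
grep Cpus_allowed_list /proc/<pid>/status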
On 2/12/21 9:24 AM, Ansgar Esztermann-Kirchner wrote:
After scouring the docs once more, I've noticed DefaultCpusPerGpu,
which seems to be exactly what I was looking for: jobs request a
number of GPUs, but no CPUs; and Slurm will assign an appropriate
number of CPUs. The only disadvantage is the ...
On Mon, Feb 08, 2021 at 12:36:06PM +0100, Ansgar Esztermann-Kirchner wrote:
> Of course, one could use different partitions for different nodes, and
> then submit individual jobs with CPU requests tailored to one such
> partition, but I'd prefer a more flexible approach where a given job
> could r
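With DefCpuPerGPU set on the partition, a submission can ask for GPUs only
and let Slurm fill in the CPU count; a hedged example (script name and
numbers are arbitrary):

# Requests 2 GPUs; with DefCpuPerGPU=8, Slurm allocates 16 CPUs
# without the job specifying a CPU count itself.
sbatch --partition=gpu --gpus=2 job.sh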