Re: [slurm-users] prolog not passing env var to job

2021-02-12 Thread Herc Silverstein
Thanks to everyone who replied! It's working now. I had to make a number of changes: 1. set the env vars in the TaskProlog so that they are exported to the job/task (I had just assumed that even though the Prolog is run as root rather than as the job's user id, the variables would still end up passed to the job) …
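
A minimal sketch of the TaskProlog mechanism described above (script path and the second variable are placeholders, not the poster's actual files): lines a TaskProlog prints to stdout in the form "export NAME=value" are added to the task's environment.

    #!/bin/bash
    # Example TaskProlog, referenced from slurm.conf as TaskProlog=/etc/slurm/taskprolog.sh
    # Runs as the job's user for each task; "export NAME=value" lines written
    # to stdout are injected into the task's environment.
    echo "export SCRATCHDIR=/scratch/${SLURM_JOB_ID}"
    echo "export MY_TOOL_HOME=/opt/mytool"   # placeholder variable for illustration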

Re: [slurm-users] prolog not passing env var to job

2021-02-12 Thread Brian Andrus
Your prolog script is run by/as the same user as slurmd, so any environment variables you set there will not be available to the job being run. See https://slurm.schedmd.com/prolog_epilog.html for more info. Brian Andrus On 2/12/2021 1:27 PM, mercan wrote: Hi; Prolog and TaskProlog are different …
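
The distinction being drawn, as a slurm.conf sketch (script paths are examples):

    # slurm.conf (excerpt; script paths are examples)
    Prolog=/etc/slurm/prolog.sh          # run by slurmd as the slurmd user (typically root);
                                         # its environment is NOT inherited by the job
    TaskProlog=/etc/slurm/taskprolog.sh  # run as the job's user for each task; "export NAME=value"
                                         # lines on its stdout are added to the task environment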

Re: [slurm-users] prolog not passing env var to job

2021-02-12 Thread mercan
Hi; Prolog and TaskProlog are different parameters and scripts. You should use the TaskProlog script to set env. variables. Regards; Ahmet M. On 13.02.2021 00:12, Herc Silverstein wrote: Hi, I have a prolog script that is being run via the slurm.conf Prolog= setting. I've verified …

Re: [slurm-users] prolog not passing env var to job

2021-02-12 Thread Sarlo, Jeffrey S
In our taskprolog file we have something like:
#!/bin/sh
echo export SCRATCHDIR=/scratch/${SLURM_JOBID}
From: slurm-users on behalf of Herc Silverstein Sent: Friday, February 12, 2021 3:12 PM To: slurm-us...@schedmd.com Subject: [slurm-users] prolog not passing env var to job …
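
A quick check that the variable actually reaches the job (the --wrap command is just an example; the job id in the output file name is whatever sbatch reports):

    $ sbatch --wrap='echo "SCRATCHDIR is $SCRATCHDIR"'
    $ cat slurm-<jobid>.out
    SCRATCHDIR is /scratch/<jobid>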

[slurm-users] prolog not passing env var to job

2021-02-12 Thread Herc Silverstein
Hi, I have a prolog script that is being run via the slurm.conf Prolog= setting.  I've verified that it's being executed on the compute node.  My problem is that I cannot get environment variables that I set in this prolog to be set/seen in the job. For example the prolog: #!/bin/bash ...
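
The pattern being described, roughly (the poster's actual script is truncated above; the variable name below is a placeholder). As the replies explain, an export in the Prolog only affects the prolog's own process, which runs as the slurmd user, so the job never sees it:

    #!/bin/bash
    # Prolog script pointed to by Prolog= in slurm.conf; runs as the slurmd user (root),
    # not as the job's user, and in a separate process from the job.
    export MY_VAR=/some/path   # placeholder: visible only inside this script, never in the job
    exit 0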

Re: [slurm-users] Job flexibility with cons_tres

2021-02-12 Thread Ansgar Esztermann-Kirchner
On Fri, Feb 12, 2021 at 09:47:56AM +0100, Ole Holm Nielsen wrote: > Could you kindly say where you have found documentation of the DefaultCpusPerGpu (or DefCpusPerGpu?) parameter. Humph, I shouldn't have written the message from memory. It's actually DefCpuPerGPU (singular). > I'm unable to …
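
A sketch of how DefCpuPerGPU is used (partition name, node names, and numbers are examples, not from this cluster):

    # slurm.conf (excerpt)
    PartitionName=gpu Nodes=gpunode[01-10] DefCpuPerGPU=8 ...

    # A job can then request only GPUs and let Slurm pick the CPU count:
    #   2 GPUs x DefCpuPerGPU=8  ->  16 CPUs allocated
    $ sbatch --partition=gpu --gres=gpu:2 job.sh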

Re: [slurm-users] Rate Limiting of RPC calls

2021-02-12 Thread Kota Tsuyuzaki
Thanks, guys! All information is valuable. I'll look up our settings and try to tune our Slurm cluster to get higher performance. Best, Kota. Kota Tsuyuzaki (露崎 浩太), kota.tsuyuzaki...@hco.ntt.co.jp, NTT Software Innovation Center, Distributed Processing Platform Technology Project, 0422-59-2837

Re: [slurm-users] [EXT] How to determine (on the ControlMachine) which cores/gpus are assigned to a job?

2021-02-12 Thread Sean Crosby
Hi Thomas, Indeed, even on my cluster, the CPU ID does not match the physical CPU assigned to the job:
# scontrol show job 24115206_399 -d
JobId=24115684 ArrayJobId=24115206 ArrayTaskId=399 JobName=s10
JOB_GRES=(null)
Nodes=spartan-bm096 CPU_IDs=50 Mem=4000 GRES=
[root@spartan-bm096 ~]# c…
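
One way to see the CPUs a job is actually confined to on the node (assuming task/cgroup with cgroup v1; the path layout differs under cgroup v2) is to read the job's cpuset, e.g.:

    # on the compute node; <uid> is the job owner's uid
    cat /sys/fs/cgroup/cpuset/slurm/uid_<uid>/job_24115684/cpuset.cpus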

Re: [slurm-users] Job flexibility with cons_tres

2021-02-12 Thread Ole Holm Nielsen
On 2/12/21 9:24 AM, Ansgar Esztermann-Kirchner wrote: After scouring the docs once more, I've noticed DefaultCpusPerGpu, which seems to be exactly what I was looking for: jobs request a number of GPUs, but no CPUs; and Slurm will assign an appropriate number of CPUs. The only disadvantage is the …

Re: [slurm-users] Job flexibility with cons_tres

2021-02-12 Thread Ansgar Esztermann-Kirchner
On Mon, Feb 08, 2021 at 12:36:06PM +0100, Ansgar Esztermann-Kirchner wrote: > Of course, one could use different partitions for different nodes, and then submit individual jobs with CPU requests tailored to one such partition, but I'd prefer a more flexible approach where a given job could run …