On 17/03/2023 13:11, William Brown wrote:
We create the temporary directories using SLURM_JOB_ID, and that works
fine with Job Arrays so far as I can see. Don't you have a problem
if a user has multiple jobs on the same node?
William
Our users just have /work/$username, anything below that
We create the temporary directories using SLURM_JOB_ID, and that works
fine with Job Arrays so far as I can see. Don't you have a problem
if a user has multiple jobs on the same node?
William
On Fri, 17 Mar 2023 at 11:17, Timo Rothenpieler wrote:
>
> Hello!
>
> I'm currently facing a bit of an
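For illustration, the per-job temporary directory approach described above can be set up in a Slurm prolog roughly like this; the base path, ownership handling, and cleanup are assumptions, not anyone's actual configuration:

#!/bin/bash
# Prolog sketch (assumption: run by slurmd as root when a job starts).
# SLURM_JOB_ID is unique per job, so two jobs from the same user on the
# same node get separate directories; job arrays work too, since every
# array task carries its own job id.
TMPBASE=/local/scratch                      # assumed node-local filesystem
JOBTMP="${TMPBASE}/job_${SLURM_JOB_ID}"
mkdir -p "$JOBTMP"
chown "$SLURM_JOB_USER" "$JOBTMP"
chmod 700 "$JOBTMP"
# A matching epilog would remove $JOBTMP when the job finishes.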
Date: Fri, 5 Mar 2021 11:56:05 +0100
From: Ole Holm Nielsen
Subject: Re: [slurm-users] Get original script of a job
I put this line in my job-control file (written in bash) to capture the
original as part of the run:
cp $0 $RUNDIR/$SLURM_JOB_NAME
The $0 gives the full path to the working copy of the script, so it
expands to this for example:
/fs/slurm/var/spool/job67842/slurm_script
It depends on t
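A self-contained sketch of that trick, assuming the copy should land in the submit directory:

#!/bin/bash
#SBATCH --job-name=capture_demo
# $0 is the spooled copy that slurmd executes (e.g. the
# /fs/slurm/var/spool/job67842/slurm_script path above), so copying it
# preserves the script exactly as it looked at submission time.
RUNDIR="$SLURM_SUBMIT_DIR"                  # assumption: keep the copy here
cp "$0" "$RUNDIR/$SLURM_JOB_NAME"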
Hi,
On 5/03/2021 11:29, Alberto Morillas, Angelines wrote:
> I know that when I send a job, with scontrol I can get the path and the
> name of the script used to send this job, but normally the users change
> their scripts and sometimes everything is wrong after that, so is there any
> possibility to rep
On 05-03-2021 11:29, Alberto Morillas, Angelines wrote:
I would like to know if it will be possible to get the script that was
used to send a job.
I know that when I send a job, with scontrol I can get the path and the
name of the script used to send this job, but normally the users change
their
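For what it's worth, recent Slurm versions can also hand the script back directly; a sketch with a made-up job id (the sacct variant only works if the script is stored in the accounting database):

# While the job is still known to slurmctld (pending or running):
scontrol write batch_script 67842 -        # "-" writes the script to stdout
# For finished jobs, if AccountingStoreFlags=job_script is configured:
sacct -j 67842 --batch-script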
Okay ... obviously an auto-complete error that I failed to check: Please
ignore and accept my apologies.
> On Dec 16, 2019, at 7:03 AM, Wiegand, Paul wrote:
>
> unlock stokes-arcc
> get stokes-arcc
>
On 15/11/2019 17.06, Miguel Oliveira wrote:
Thanks! Nice code and just what I was needing! A few wrinkles:
a) on reading the Gres from scontrol for each job, on my version this appears in a
TRES record rather than as an individual Gres. Possibly a version/configuration issue.
b) converting pid2id from /proc/<pid>/cg
Janne Blomqvist writes:
> On 14/11/2019 20.41, Prentice Bisbal wrote:
>> Is there any way to see how much a job used the GPU(s) on a cluster
>> using sacct or any other slurm command?
>>
>
> We have created
> https://github.com/AaltoScienceIT/ansible-role-sacct_gpu/ as a quick
> hack to put GPU utilization stats into the comment field
Thanks! Nice code and just what I was needing! A few wrinkles:
a) on reading the Gres from scontrol for each job, on my version this appears in a
TRES record rather than as an individual Gres. Possibly a version/configuration issue.
b) converting pid2id from /proc/<pid>/cgroup is problematic on array jobs.
Again many
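A rough sketch of the pid-to-jobid lookup being discussed, assuming a cgroup v1 layout where the path contains a job_<id> component; array tasks carry their own raw job ids, which is presumably where wrinkle b) comes from:

#!/bin/bash
# pid2jobid: print the Slurm job id owning a given PID, if any.
pid2jobid() {
    local pid=$1
    # cgroup paths typically look like .../slurm/uid_1000/job_123456/step_0/...
    grep -o 'job_[0-9]*' "/proc/${pid}/cgroup" | head -n 1 | cut -d_ -f2
}
pid2jobid 12345        # hypothetical PID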
On 14/11/2019 20.41, Prentice Bisbal wrote:
> Is there any way to see how much a job used the GPU(s) on a cluster
> using sacct or any other slurm command?
>
We have created
https://github.com/AaltoScienceIT/ansible-role-sacct_gpu/ as a quick
hack to put GPU utilization stats into the comment field.
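Once something along those lines has written the utilization into the job's comment, it can be read back with sacct; a sketch with a made-up job id (the Comment field needs a Slurm version that stores job comments in accounting):

sacct -j 1234567 --format=JobID,JobName,Elapsed,Comment%40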
Do you mean akin to what some would consider "CPU efficiency" on a CPU job?
"How much... used" is a little vague.
From: slurm-users on behalf of Prentice Bisbal
Sent: Thursday, November 14, 2019 13:41
To: Slurm User Community List
Subject: [slurm-users]
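For comparison, that "CPU efficiency" number for a finished job is what the seff contrib tool reports (job id hypothetical):

seff 1234567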
Hi Jeff,
Quite close:
$ sinfo --Format=nodehost,statelong
Cheers,
--
Kilian
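If the short format string reads more naturally, a node-oriented equivalent would be something like:

sinfo -N -o "%n %T"        # one line per node: hostname and long state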
I use
alias sn='sinfo -Nle -o "%.20n %.15C %.8O %.7t" | uniq'
and then it's just
[root@machine]# sn
cheers
L.
--
"The antidote to apocalypticism is *apocalyptic civics*. Apocalyptic civics
is the insistence that we cannot ignore the truth, nor should we panic
about it. It is a shared consc
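For anyone decoding that format string, the same alias with the fields spelled out (the trailing uniq collapses the duplicate lines sinfo -N prints for nodes that sit in more than one partition):

#   %.20n  node hostname       %.15C  CPUs allocated/idle/other/total
#   %.8O   CPU load            %.7t   node state, compact form
alias sn='sinfo -Nle -o "%.20n %.15C %.8O %.7t" | uniq'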