We are pleased to announce the availability of the Slurm 23.11 release.
To highlight some new features in 23.11:
- Substantially overhauled the SlurmDBD association management code. For
clusters updated to 23.11, account and user additions or removals are
significantly faster than in prior releases.
OK, I understand that syncing users to the Slurm database is not a built-in
task, but it could be added outside of Slurm :-)
With regard to the QoS or Partition QoS settings, I've tried several
settings and configurations, but I was not able to configure a QoS at the
partition level.
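For reference, the way a partition QoS is supposed to be wired up is roughly
this (a sketch; 'part_qos' is a made-up QoS name, and AccountingStorageEnforce
must include 'qos' for the limits to actually take effect):

$ sacctmgr add qos part_qos
$ sacctmgr modify qos part_qos set MaxTRESPerUser=cpu=16

# slurm.conf
PartitionName=debug Nodes=node[01-04] QOS=part_qos
AccountingStorageEnforce=limits,qos

An 'scontrol reconfigure' is needed after editing slurm.conf.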
Hi,
On 21/11/2023 13:52, Arsene Marian Alain wrote:
But how can a user write to or access the hidden directory .1809 if he doesn't
have read/write permission on the main directory 1809?
Because it works as a namespace. On my side:
$ ls -alh /local/6000523/
total 0
drwx------ 3 root root 33 Nov
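One way to see the namespace from the node itself (a sketch; 12345 stands in
for any PID belonging to the job) is to enter the mount namespace of a job
process with nsenter:

# nsenter -t 12345 -m ls -alh /tmp

Inside that namespace /tmp resolves to the job's private directory under
BasePath; outside it you only see the root-owned skeleton shown above.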
Hi,
From the perspective of the job, those directories are mapped to /tmp (and
others, depending on your job_container.conf). There's no need for the user to
be aware of the basepath that is specified in the conf file.
You can easily verify it is working by writing files to /tmp from a new slurm
job.
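Something like this should do it (a sketch using only standard tools):

$ srun --pty bash
$ touch /tmp/hello
$ findmnt /tmp

findmnt should show /tmp bind-mounted from the per-job directory under
BasePath, and the file vanishes when the job ends.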
Hello Alain,
maybe I'm missing the point, but from my understanding the
job_container/tmpfs plugin uses the directory under BasePath to store
its data, used to create the bind mounts for the users. The folder
itself is not meant to be used by others.
The folders in the hidden directory with us
Hello Alain,
as an alternative to job_container/tmpfs, you may also try your luck
with the 'auto_tmpdir' SPANK plugin:
https://github.com/University-of-Delaware-IT-RCI/auto_tmpdir
We've been using that on our small HPC cluster (Slurm 22.05) and
it does what it's supposed to. One thing
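In case it helps anyone: SPANK plugins such as auto_tmpdir are enabled through
plugstack.conf. A minimal sketch (the .so path is an assumption; check where
your build installs it):

# /etc/slurm/plugstack.conf
required /usr/lib64/slurm/auto_tmpdir.so

The options the plugin accepts are documented in the README of the GitHub
repository linked above.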
Thanks Sean. I've tried using slurm prolog/epilog scripts but without any
success. That's why I decided to look for other solutions and
job_container/tmpfs plugin seemed like a good alternative.
From: slurm-users On behalf of Sean McGrath
Sent: Tuesday, 21 November 2023 12:57
To:
Hi Ward,
You're right.
[root@node01 scratch]# pwd
/scratch
[root@node01 scratch]# ll
total 0
drwx------ 3 root root 30 nov 21 13:41 1809
[root@node01 scratch]# ls -la 1809/
total 0
drwx------ 3 root root 30 nov 21 13:41 .
drwxrwxrwt. 3 root root 18 nov 21 13:41 ..
drwx------ 2 thais root
Hi Arsene,
On 21/11/2023 10:58, Arsene Marian Alain wrote:
I just give my Basepath=/scratch (a local directory for each node that is already mounted
with 1777 permissions) in job_container.conf. The plugin automatically generates for each
job a directory with the "JOB_ID", for example: /scrat
Would a prolog script, https://slurm.schedmd.com/prolog_epilog.html, do what
you need? Sorry if you have already considered that and I missed it.
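For the record, a minimal sketch of that approach (the /scratch location is an
assumption; SLURM_JOB_ID and SLURM_JOB_USER are exported to the Prolog/Epilog
environment per the page above):

#!/bin/bash
# Prolog (slurm.conf: Prolog=/etc/slurm/prolog.sh), runs as root before the job
mkdir -p /scratch/${SLURM_JOB_ID}
chown ${SLURM_JOB_USER}: /scratch/${SLURM_JOB_ID}
chmod 700 /scratch/${SLURM_JOB_ID}

#!/bin/bash
# Epilog (slurm.conf: Epilog=/etc/slurm/epilog.sh), runs as root after the job
rm -rf /scratch/${SLURM_JOB_ID}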
---
Sean McGrath
Senior Systems Administrator, IT Services
From: slurm-users on behalf of Arsene Marian Alain
Sent
Hello Brian,
Thanks for your answer. With the job_container/tmpfs plugin I don't really
create the directory manually.
I just give my Basepath=/scratch (a local directory for each node that is
already mounted with 1777 permissions) in job_container.conf. The plugin
automatically generates for each job a directory with the "JOB_ID".
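For completeness, the relevant configuration is only a few lines (a sketch;
BasePath has to exist on every compute node, or AutoBasePath=true lets slurmd
create it):

# job_container.conf
BasePath=/scratch
AutoBasePath=false

# slurm.conf
JobContainerType=job_container/tmpfs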