Hi, I've been looking into writing applications that can efficiently query
and interact with the SLURM job queue and accounting system. For this
reason my initial instinct was to write a Rust application that uses the
libslurm ABI to avoid inefficiently spawning subprocesses. However, a quick
look …
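For comparison, the subprocess approach being avoided would look roughly like
this (a sketch; assumes Slurm 21.08 or newer for the --json flags, and jq is
used purely for illustration):

    # Query the queue and accounting data by spawning CLI tools; every call
    # forks a process and re-parses the JSON output.
    squeue --json | jq '.jobs[] | {job_id, job_state, user_name}'
    sacct --starttime=now-1hours --json | jq '.jobs[].job_id'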
On Tuesday, 29 November 2022 at 08:44:48,
Mark Holliman wrote:
I mentioned Fedora 9 and CentOS 9 (Stream) simply because they tend
to be compatible, and something that works on them is likely to work
on Rocky9.
RHEL 8.x is based on Fedora 28. RHEL 9.x is based on Fedora 34 via CentOS
Stream 9.
Can sview display job history? By default, it appears only to show running
jobs. Could it display something like the last 6 hours of job history? This
is something my users could really benefit from. Some users even claim to
need it.
--
Chase Schuette (Pronouns: He/Him/His) | Caterpillar
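sview itself only shows the current queue, but if accounting is enabled
(slurmdbd) the history being asked about can be pulled from the database;
something along these lines is a reasonable sketch:

    # List all jobs that started within the last 6 hours, for all users
    sacct --allusers --starttime=now-6hours \
          --format=JobID,User,JobName,Partition,State,Start,Elapsed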
Hi Ole,
On my system that doesn't show me any MaxTRESPU info, which is how I've
implemented user limits. E.g.:
% showuserlimits -q normal
scontrol -o show assoc_mgr users=pacey account=local qos=normal flags=QOS
Slurm share information:
Account   User   RawShares   NormShares
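MaxTRESPU lives on the QOS rather than on the association, which is why the
association-oriented output above doesn't show it; a sketch of querying the
QOS directly (assuming it is named normal, as above):

    # Per-user TRES limits attached to the QOS itself
    sacctmgr show qos normal format=Name,MaxTRESPU
    # Or dump the scheduler's live view of all QOS records
    scontrol show assoc_mgr flags=QOS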
Hi Mike,
That sounds great! It seems to me that "showuserlimits -q <qos>" would
also print the QOS information, but maybe this is not what you are
after? Have you tried this -q option, or should the script perhaps be
generalized to cover your needs?
/Ole
On 29-11-2022 14:39, Pacey, Mike wrote:
Hi Ole (and Jeffrey),
Thanks for the pointer - those are some very useful scripts. I couldn't get
showslurmlimits or showslurmjobs to show quite what I was after (they weren't
showing me memory usage). However, they pointed me in the right direction - the
scontrol command. I can run the following: …
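Presumably the command is of the same shape as the one quoted earlier in the
thread; as a sketch (the user name is a placeholder and the grep pattern is
purely illustrative):

    # Dump the live association/QOS records for one user and pick out
    # the TRES usage and limit fields (memory, CPUs, ...)
    scontrol -o show assoc_mgr users=<user> flags=QOS | grep -io 'tres[^ ]*'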
Never mind, I found the problem. The rebuilt nodes were still listed in my
other cluster config (running Slurm 19), and hence that cluster's controller
was sending them status-check messages which they couldn't respond to. Tidying
up the config made the messages disappear.
Hello,
I've just finished building and installing Slurm 22.05.6 from source on a head
node and a couple of workers. I installed the same RPMs on all the nodes, and
the slurmdbd, slurmctld, and slurmd daemons have all come online and appear
healthy (test jobs can be submitted to partitions and succeed) …
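For reference, the usual way to produce those RPMs from a release tarball is a
single rpmbuild pass (a sketch; the version and output paths will vary):

    # Build binary RPMs straight from the tarball (uses the bundled slurm.spec)
    rpmbuild -ta slurm-22.05.6.tar.bz2
    # Then install the resulting packages on every node, e.g.:
    rpm -ivh ~/rpmbuild/RPMS/x86_64/slurm-*.rpm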
Hi everybody,
I have a code that runs on n MPI processes.
MPI process 0 is multithreaded (using OpenMP); all the other MPI processes are
single-threaded.
I don't want to reserve several threads for every process, only for the first
one. Is it possible to specify this in a Slurm batch job?
Thanks in advance.
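One way to express this is a heterogeneous job, so that rank 0 gets its own
CPU allocation; here is a minimal sketch (the thread and task counts and the
program name my_mpi_app are placeholders, and the MPI library must support
launching a single step across het-job components):

    #!/bin/bash
    # Component 0: the one multithreaded MPI rank
    #SBATCH --ntasks=1 --cpus-per-task=8
    #SBATCH hetjob
    # Component 1: the remaining single-threaded MPI ranks
    #SBATCH --ntasks=15 --cpus-per-task=1

    export OMP_NUM_THREADS=8   # OpenMP threads; only rank 0 actually spawns them
    srun --het-group=0,1 ./my_mpi_app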
Hi Mark,
I'm glad you found the solution! I recommend using cgroups
(proctrack/cgroup); see the manual pages cgroup.conf and slurm.conf, and
https://wiki.fysik.dtu.dk/Niflheim_system/Slurm_configuration/#cgroup-configuration
/Ole
On 29-11-2022 09:44, Mark Holliman wrote:
Thanks for replying …
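The relevant settings end up split across the two files; a minimal sketch
along the lines of the page linked above (the exact values are illustrative):

    # slurm.conf
    ProctrackType=proctrack/cgroup
    TaskPlugin=task/affinity,task/cgroup

    # cgroup.conf
    ConstrainCores=yes
    ConstrainRAMSpace=yes
    ConstrainDevices=yes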
Ole,
Thanks for replying. I installed a fresh version of RockyLinux 9.1. I mentioned
Fedora 9 and CentOS 9 (Stream) simply because they tend to be compatible, and
something that works on them is likely to work on Rocky9.
I am building the RPMs from the tarball, following instructions near identical …