Thanks a lot.
I will stick to OpenHPC.
Thanks a lot!
On Tuesday, 3 October, 2023 at 05:12:02 pm GST, Renfro, Michael wrote:
I’d probably default to OpenHPC just for the community around it, but I’ll also
note that TrinityX might not have had any commits in their GitHub for an
18-mo
And weirdly enough, it has now stopped working again after I did the power save experimentation described in the other thread.
That is really strange. At the highest verbosity level, the logs just say:
slurmdbd: debug: REQUEST_PERSIST_INIT: CLUSTER:cluster VERSION:9984 UID:1457 IP:192.168.2.254
I'm experimenting with Slurm power save and I have several questions. I'm following the guidance from https://slurm.schedmd.com/power_save.html and the great presentation from our own https://slurm.schedmd.com/SLUG23/DTU-SLUG23.pdf
I am running Slurm 23.02.3.
1) I'm not sure I fully understand Reco
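For context, the kind of slurm.conf settings that page covers, and that I am looking at, is roughly the sketch below (script paths, times and node names are placeholders, not actual values from my cluster):

    SuspendProgram=/usr/local/sbin/node_suspend.sh   # site script that powers nodes down (placeholder path)
    ResumeProgram=/usr/local/sbin/node_resume.sh     # site script that powers nodes back up (placeholder path)
    SuspendTime=600          # seconds a node must be idle before it becomes eligible for power down
    SuspendTimeout=60        # max seconds between the suspend request and the node being down
    ResumeTimeout=300        # max seconds between the resume request and the node being usable again
    SuspendExcNodes=login01  # nodes that must never be powered down (placeholder name)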
Hello,
I did an upgrade of Slurm this week (20.11 to 21.08.8), and while everything seems to be working with the srun and sbatch commands, here is what I get when I try to launch jobs from the drmaa library:
python: /usr/local/lib/slurm/auth_munge.so: Incompatible Slurm plugin version (21.08.8)
pyth
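For completeness, by "launch jobs from the drmaa library" I mean the usual drmaa-python pattern, roughly the sketch below (the command is only a placeholder):

    import drmaa  # drmaa-python bindings on top of libdrmaa / slurm-drmaa

    with drmaa.Session() as s:              # open a DRMAA session
        jt = s.createJobTemplate()
        jt.remoteCommand = '/bin/hostname'  # placeholder command
        job_id = s.runJob(jt)               # submit the job and get its id back
        print('submitted job', job_id)
        s.deleteJobTemplate(jt)             # release the template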
Thank you for your response.
Just to clarify: we do specify the node weight in the node definition lines; I was just wondering if there is a way to be more detailed in our weight assignments.
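For example, as far as I understand, nodes with the lowest Weight are allocated first, so something along these lines (node names and specs are only placeholders) is what we are aiming for, with the older machines filled before the newer ones:

    # older nodes: lowest Weight, so they are picked first (placeholder names/specs)
    NodeName=old[01-10] CPUs=16 RealMemory=64000 Weight=10
    # newer nodes: higher Weight, only used once the older ones are busy
    NodeName=new[01-10] CPUs=64 RealMemory=256000 Weight=100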
Here is our configuration right now:
---
# COMPUTE NODES
On Wednesday, 4 October 2023 at 06:03, Kratz, Zach wrote:
> We use an interactive node that will randomly select from our list of
> computing nodes to complete the job. We would like to find a way to select
> from our list of old nodes first, before using the newer ones. We tried using
> weigh