Just for the data bank: the EPEL el9 source RPM for slurm v22 builds “just
fine” on Alma 8. The issue turned out to be a bug/feature in the
slurm_addto_id_char_list-test check (make check-TESTS), where the check fails if
the user executing the build has a supplemental group with a space in the group name.
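If anyone wants to check whether their build user is affected, a quick sketch (the SRPM filename in the comment is an example, not a real release):

```shell
# Flag any supplemental group whose name contains a space. `id -Gn`
# separates names with spaces, so it cannot distinguish a space *inside*
# a name; iterate over the numeric GIDs and resolve each name instead.
for gid in $(id -G); do
  name=$(getent group "$gid" | cut -d: -f1)
  case "$name" in
    *" "*) echo "offending group: '$name' (gid $gid)" ;;
  esac
done

# If one turns up, a workaround is to skip the %check stage when
# rebuilding (filename is an example):
#   rpmbuild --nocheck --rebuild slurm-22.05.6-1.el9.src.rpm
```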
Yeah, our spec is based off of their spec with our own additional
features plugged in.
-Paul Edmon-
On 12/2/22 2:12 PM, David Thompson wrote:
Hi Paul, thanks for passing that along. The error I saw was coming from the
rpmbuild %check stage in the el9/fc38 builds, which your .spec file doesn’t run
(likewise the spec file included in the schedmd tarball). Certainly one way to
avoid failing a check is to not run it.
Regardless, I apprec
I successfully built it for Rocky straight from the tgz file, as usual,
with rpmbuild -ta.
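A minimal sketch of that workflow, assuming a release tarball from SchedMD (the version number is an example):

```shell
# Build binary RPMs straight from the release tarball; the spec file
# bundled inside the tarball drives the build, and the resulting RPMs
# land under ~/rpmbuild/RPMS/<arch>/.
VER=22.05.6                      # example version
TARBALL="slurm-${VER}.tar.bz2"   # from https://download.schedmd.com/slurm/

if [ -f "$TARBALL" ]; then
  rpmbuild -ta "$TARBALL"
else
  echo "download $TARBALL first, e.g.: curl -LO https://download.schedmd.com/slurm/$TARBALL"
fi
```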
Brian Andrus
On 12/2/2022 9:21 AM, David Thompson wrote:
Hi folks, I’m working on getting Slurm v22 RPMs built for our Alma 8
Slurm cluster. We would like to be able to use the sbatch --prefer
option, which isn’t present in the current EPEL el8 rpms (version 20.11.9).
Nousheen,
When a node is not responding, the first place to start is to ensure that the
node is up and that slurmd is running. It looks like you have confirmed that with
your output from the command “scontrol show slurmd”, so that is a good start.
After verifying that slurmd is running, the next step w
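The usual commands for that kind of check might look like this (the node name is an example):

```shell
NODE=node104   # example; substitute the unresponsive node's name

# Controller-side view: node state, the recorded Reason, and which
# nodes are currently down or drained.
if command -v scontrol >/dev/null 2>&1; then
  scontrol show node "$NODE"
  sinfo -R
else
  echo "run these where the Slurm client tools are installed"
fi

# On the node itself, check the daemon and its logs:
#   systemctl status slurmd
#   journalctl -u slurmd --since "1 hour ago"
#   slurmd -C      # prints the hardware slurmd detects; compare to slurm.conf
```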
Yup, here is the spec we use that works for CentOS 7, Rocky 8, and Alma 8.
-Paul Edmon-
On 12/2/22 12:21 PM, David Thompson wrote:
Hi folks, I'm working on getting Slurm v22 RPMs built for our Alma 8 Slurm
cluster. We would like to be able to use the sbatch --prefer option, which isn't
present in the current EPEL el8 rpms (version 20.11.9). Rebuilding from either
the el9 or fc38 SRPM fails on a protocol test in the testsuite.
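For anyone following along, --prefer (added in Slurm 22.05) is a soft version of --constraint: the scheduler tries nodes carrying the named feature first but will still run the job elsewhere. A usage sketch; the feature name "bigmem" and the script are made up:

```shell
# Write an example batch script; "bigmem" is a hypothetical node Feature.
cat > job.sh <<'EOF'
#!/bin/bash
#SBATCH --job-name=prefer-demo
#SBATCH --prefer=bigmem     # soft preference: fall back to other nodes if busy
#SBATCH --time=00:05:00
hostname
EOF

# Submit it (compare --constraint=bigmem, which would hard-require the feature):
command -v sbatch >/dev/null 2>&1 && sbatch job.sh || echo "sbatch not available here"
```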
Dear Ole,
Thank you so much for your response. I have now adjusted RealMemory in
slurm.conf, which was previously left at its default. Your insight was really
helpful. Now, when I submit the job, it runs on three nodes, but one
node (104) is not responding. The details of some commands are
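For reference, RealMemory is a per-node setting in slurm.conf, given in MiB; `slurmd -C` on each node prints the value it actually detects. A hypothetical fragment (node names and sizes are examples only):

```
# slurm.conf fragment -- illustrative values only.
# Set RealMemory at or slightly below what `slurmd -C` reports on the
# node, or the node can be drained with reason "Low RealMemory".
NodeName=node[101-104] CPUs=32 RealMemory=128000 State=UNKNOWN
PartitionName=batch Nodes=node[101-104] Default=YES State=UP
```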