I completely missed that, thank you!
-Dj
Laura Hild via slurm-users wrote:
PropagateResourceLimitsExcept won't do it?
Sarlo, Jeffrey S wrote:
You might look at the PropagateResourceLimits and PropagateResourceLimitsExcept
settings in slurm.conf
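For example, the sort of thing to try (just a sketch; which limits to list depends on what you want jobs to inherit from the submit host):

  # slurm.conf -- propagate the submit host's ulimits except the listed ones:
  PropagateResourceLimitsExcept=MEMLOCK

  # ...or stop propagating them entirely (the two parameters are mutually
  # exclusive, so set only one of them):
  #PropagateResourceLimits=NONE

Either way, push the change out with an "scontrol reconfigure" afterwards.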
s/cgroup/memory/slurm_*/uid_$(id -u)/job_*/memory.limit_in_bytes
in both shells?
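Roughly something like this from inside an salloc/srun shell on each cluster (the path assumes the usual /sys/fs/cgroup mount point and a cgroup v1 memory controller):

  # cgroup v1 (e.g. the Rocky 8 cluster) -- job memory limit in bytes:
  cat /sys/fs/cgroup/memory/slurm*/uid_$(id -u)/job_${SLURM_JOB_ID}/memory.limit_in_bytes

  # On a cgroup v2 node (the Rocky 9 default) the file is called memory.max
  # instead; /proc/self/cgroup shows which cgroup the shell actually landed in.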
Regards,
Hemann
On 5/14/24 20:38, Dj Merrill via slurm-users wrote:
I'm running into a strange issue and I'm hoping another set of brains
looking at this might help. I would appreciate any feedback.
I have
x-x86-64.so.2 (0x14a9d8306000)
-Dj
On 5/14/24 15:25, Feng Zhang via slurm-users wrote:
Looks more like a runtime environment issue.
Check the binaries:
ldd /mnt/local/ollama/ollama
on both clusters; comparing the output may give some hints.
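Something like this makes the comparison easy (the node names are just placeholders, and the load addresses are stripped because they change between runs):

  ssh cluster1-node 'ldd /mnt/local/ollama/ollama' | awk '{print $1, $3}' > /tmp/ldd_c1
  ssh cluster2-node 'ldd /mnt/local/ollama/ollama' | awk '{print $1, $3}' > /tmp/ldd_c2
  diff /tmp/ldd_c1 /tmp/ldd_c2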
Best,
Feng
On Tue, May 14, 2024 at
I'm running into a strange issue and I'm hoping another set of brains
looking at this might help. I would appreciate any feedback.
I have two Slurm Clusters. The first cluster is running Slurm 21.08.8
on Rocky Linux 8.9 machines. The second cluster is running Slurm
23.11.6 on Rocky Linux 9.
Thank you Carsten. I'll take a closer look at the QOS limit approach.
If I'm understanding the documentation correctly, partition limits (non-QOS) are set via the slurm.conf file, and although there are options for limiting the max number of nodes for a person and the max CPUs per node, ther
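For anyone following along, this is the sort of thing I'm planning to test (the QOS name and numbers are just examples, and the limits are only enforced if AccountingStorageEnforce includes at least "limits"):

  # Create a QOS and cap per-user usage:
  sacctmgr add qos interactive
  sacctmgr modify qos name=interactive set MaxTRESPerUser=cpu=64 MaxJobsPerUser=4

  # Then tie it to the partition in slurm.conf:
  #   PartitionName=interactive Nodes=node[01-04] QOS=interactive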
Hi all,
I'm relatively new to Slurm and my Internet searches so far have turned
up lots of examples from the client perspective, but not from the admin
perspective on how to set this up, and I'm hoping someone can point us
in the right direction. This should be pretty simple... :-)
We have