Hi Alex,
Thanks a lot. I suspected it was something trivial.
ubuntu@ip-172-31-12-211:~$ scontrol show config | grep -i defmem
DefMemPerNode = UNLIMITED
Specifying `sbatch --mem=1M job.sh` works. I will probably set a
default value in slurm.conf (just tried; that also helps).
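In case it is useful to anyone else, the slurm.conf change amounts to something
like this (1024 MB is only an example value; anything other than UNLIMITED works):

    # slurm.conf: default memory (in MB) given to jobs that do not pass --mem
    DefMemPerNode=1024

followed by `scontrol reconfigure` on the controller so the new default takes effect.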
Hi,
I am setting up SLURM on a single shared-memory machine and found the
following blog post:
http://rolk.github.io/2015/04/20/slurm-cluster
The main suggestion is to use cgroups to partition the resources. Are
there any other suggested changes that differ from the standard cluster
setup?
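As far as I understand it, the cgroup-based setup in that post boils down to
something like the following (parameter names from the standard slurm.conf and
cgroup.conf man pages; the values are only a sketch, not a tested recipe):

    # slurm.conf (single shared-memory node)
    SelectType=select/cons_res          # select/cons_tres on newer SLURM releases
    SelectTypeParameters=CR_Core_Memory
    ProctrackType=proctrack/cgroup
    TaskPlugin=task/cgroup

    # cgroup.conf
    ConstrainCores=yes
    ConstrainRAMSpace=yes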
Hello,
I would like to know if it is normal not to see the "slurmd" daemon in the
systemd services tree. I ran "systemd-analyze plot > /tmp/plot.txt" and then
searched for "slurmd" in that file, but no match was found. I mention it
because I would like to know whether it is a SLURM problem or a systemd
problem.
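For reference, the checks I would expect to reveal the unit, if the package had
installed one (assuming it would be named slurmd.service), are:

    systemctl list-unit-files | grep -i slurm
    systemctl status slurmd
    systemctl is-enabled slurmd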
*Ack*, you're correct. This was a recent, hastily added feature to control
budgets on our system, and I need to re-evaluate our approach. As you
mentioned, the QOS GrpTRESMins limits appear to be the only NoDecay option.
Managing them will be challenging on our system because they will be numerous.
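For the record, the direction I am looking at is roughly the following; the QOS
name and the CPU-minute figure are invented, but GrpTRESMins and the NoDecay
flag are the documented sacctmgr settings:

    # one budget QOS per project (name is hypothetical)
    sacctmgr add qos budget_projA
    sacctmgr modify qos budget_projA set GrpTRESMins=cpu=1000000 Flags=NoDecay
    # attach the QOS to the corresponding account
    sacctmgr modify account projA set qos+=budget_projA

Multiplying that by the number of projects we have is what makes it unwieldy.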
Hi,
I installed a cluster with 10 nodes and I'd like to try compiling a very
large code base using all the nodes. The context is as follows:
- my code base is in C++, I use gcc.
- configuration is done with CMake
- compilation is driven by ninja (a build tool similar to make)
I can srun ninja and