Hi Alex,
Thanks a lot. I suspected it was something trivial.
ubuntu@ip-172-31-12-211:~$ scontrol show config | grep -i defmem
DefMemPerNode = UNLIMITED
Specifying `sbatch --mem=1M job.sh` works. I will probably set a
default value in slurm.conf (just tried; that also helps).
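For anyone who hits the same thing later, a minimal sketch of what
that default can look like in slurm.conf (the values are placeholders,
in megabytes):

# Default memory a job gets when it does not pass --mem; without
# this, DefMemPerNode stays UNLIMITED and a single job is allocated
# the node's entire memory.
DefMemPerNode=8192
# Alternatively, scale the default with the number of allocated CPUs:
# DefMemPerCPU=2048

followed by `scontrol reconfigure` so the daemons pick up the change.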
Hi,
Your job does not request a specific amount of memory, so it gets the
default request, and I believe that default is all the RAM in the node.
Try something like:
$ scontrol show config | grep -i defmem
DefMemPerNode = 64000
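If yours comes back as UNLIMITED, either configure a default there or
have each job request what it needs, for example (the size is only
illustrative):

$ sbatch --mem=8G job.sh

so the rest of the node's memory stays available for other jobs.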
Regards,
Alex
On Mon, Nov 23, 2020 at 12:33 PM Jan wrote:
Hi,
I am having issues getting Slurm to run multiple jobs in parallel on
the same machine.
Most of our jobs are either (relatively) low on CPU and high on memory
(data processing) or low on memory and high on CPU (simulations). The
server we have is generally big enough (256 GB memory; 16 cores).
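For concreteness, this is the kind of pair I would like to have
running side by side (script names and sizes are only illustrative):

$ sbatch --cpus-per-task=2  --mem=200G process_data.sh
$ sbatch --cpus-per-task=12 --mem=16G  run_simulation.sh

Together that is 14 cores and 216 GB, so both should fit on the node
at once.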