Hi Mahmood,

It seems your compute node is configured with this limit:

virtual memory          (kbytes, -v) 72089600

So when the batch job tries to raise the limit (ulimit -v 82089600) above the hard limit enforced on the node (72089600), the shell rejects it with "Operation not permitted", exactly as you discovered. An unprivileged process may lower its hard limit, but it can never raise it.
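You can reproduce this directly on the node (a sketch; the numbers are taken from your ulimit -a output below):

$ ulimit -H -v              # print the hard limit for virtual memory (kbytes)
72089600
$ ulimit -v 82089600        # above the hard limit: rejected
bash: ulimit: virtual memory: cannot modify limit: Operation not permitted
$ ulimit -v 72089600        # at or below the hard limit: allowed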

You may want to reconfigure your compute nodes' limits, for example by setting the virtual memory limit to "unlimited" in your configuration. Note that if the nodes have only a small amount of RAM plus swap space, you might then encounter Out Of Memory errors instead...
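One common way to do this (a sketch, assuming your nodes apply pam_limits via /etc/security/limits.conf; your cluster may well manage limits elsewhere):

# /etc/security/limits.conf on each compute node
# "as" is the address-space (virtual memory) limit, i.e. what ulimit -v controls
*    soft    as    unlimited
*    hard    as    unlimited

If slurmd is started by systemd, limits.conf typically does not apply to it; the systemd equivalent is LimitAS=infinity in the slurmd unit file (followed by a daemon restart), since the batch step starts as a child of slurmd.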

/Ole

On 15-04-2018 19:21, Mahmood Naderan wrote:
Hi,
The user can run "ulimit -v VALUE" on the frontend. However, when I
put that command in a Slurm script, it fails with "Operation not
permitted"!

[hamid@rocks7 case1_source2]$ ulimit -v 82089600
[hamid@rocks7 case1_source2]$ cat slurm_script.sh
#!/bin/bash
#SBATCH --job-name=hvacSteadyFoam
#SBATCH --output=hvac.log
#SBATCH --ntasks=32
#SBATCH --time=100:00:00
#SBATCH --mem=64000M
ulimit -v 82089600
ulimit -a
mpirun hvacSteadyFoam -parallel
[hamid@rocks7 case1_source2]$ sbatch slurm_script.sh
Submitted batch job 50
[hamid@rocks7 case1_source2]$ cat hvac.log
/var/spool/slurmd/job00050/slurm_script: line 11: ulimit: virtual
memory: cannot modify limit: Operation not permitted
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 256712
max locked memory       (kbytes, -l) unlimited
max memory size         (kbytes, -m) 65536000
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 4096
virtual memory          (kbytes, -v) 72089600
file locks                      (-x) unlimited
[hamid@rocks7 case1_source2]$