Checking the status of the nodes, I see
RealMemory=120705 AllocMem=1024 FreeMem=309 Sockets=32 Boards=1
The question is: why is FreeMem so low while AllocMem is far less than
RealMemory?
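One way to compare Slurm's view with the OS view on such a node (a sketch; the node name gpu045 is taken from the transcript further down, and the assumption that FreeMem mirrors the kernel's "free" figure, which excludes the page cache, may depend on the Slurm version):

# Slurm's accounting for the node: RealMemory is the configured total,
# AllocMem is what running jobs have been granted, FreeMem is what the
# OS on the node currently reports as free
scontrol show node gpu045 | grep -Eo '(RealMemory|AllocMem|FreeMem)=[0-9]+'

# On the node itself: "free" is often small because Linux keeps file
# data in the page cache; "available" is the more meaningful number
free -m

If "available" in free -m is large while "free" is small, the low FreeMem value is likely just cached file data that the kernel reclaims on demand, not memory actually held by jobs.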
Regards,
Mahmood
On Mon, Dec 16, 2019 at 10:12 AM Kraus, Sebastian
<sebastian.kr...@tu-berlin.de> wrote:
Hi Mahmood,
>> will it reserve (look for) 200GB of memory for the job? Or is this the
>> hard limit of the memory required by the job?
No, this indicates the amount of resident/real memory requested per node.
Your job will only be runnable on nodes that offer at least 200 GB of main
memory (summed over all sockets of the node).
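A quick way to check which nodes can ever satisfy such a request, and what submitting it looks like (a sketch; the job script name is a placeholder):

# List configured memory (MB) per node; only nodes whose RealMemory
# is at least 200 GB can ever satisfy --mem=200G
sinfo -N -o "%N %m"

# Request 200 GB of real memory on one node; the job stays pending
# until a node with enough configured memory becomes free
sbatch -N1 --mem=200G jobscript.sh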
|  No running processes found                                           |
+------------------------------------------------------------------------+
[user@gpu045 ~]$ exit
exit
[user@login005 ~]$ scancel 7052366
[user@login005 ~]$
On 12/13/19 11:48 AM, Kraus, Sebastian wrote:
> Dear Valantis,
> thanks for the explanation.
Straße des 17. Juni 135
10623 Berlin
Tel.: +49 30 314 22263
Fax: +49 30 314 29309
Email: sebastian.kr...@tu-berlin.de
From: Chrysovalantis Paschoulas
Sent: Friday, December 13, 2019 13:05
To: Kraus, Sebastian
Subject: Re: [slurm-users] srun: job steps and generic resources
Dear all,
I am facing the following nasty problem.
I usually start interactive batch jobs via:
srun -ppartition -N1 -n4 --time=00:30:00 --mem=1G -Jjobname --pty /bin/bash -il
Then, explicitly starting a job step within such a session via:
srun -l hostname
works fine.
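For illustration, the working two-step pattern looks like this (partition and node names here are placeholders, and task output order may vary):

# Allocation plus interactive shell: srun creates the job allocation
# and launches a pseudo-terminal step running bash
srun -p mypartition -N1 -n4 --time=00:30:00 --mem=1G --pty /bin/bash -il

# Inside the session SLURM_JOB_ID is set, so a further srun launches
# a step of the existing job instead of creating a new allocation;
# -l prefixes each output line with the task rank
srun -l hostname
0: node001
1: node001
2: node001
3: node001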
But, as soon as I add a generic