On Friday, 11 May 2018 11:15:49 PM AEST Mahmood Naderan wrote:
> Excuse me... I see the output of squeue which says
> 170 IACTIVE bash mahmood PD 0:00 1 (AssocGrpMemLimit)
>
> I don't understand why the memory limit has been reached.
That's based on what your job requests, not what it actually uses.
I don't understand why the memory limit has been reached. I cannot see
the memory usage of a running job from the sacct command. However, using
"top" on the compute node, I see 6 cores in use.
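For reference, a few commands that can show what the pending job requested versus what the association allows (a sketch; the job ID is taken from the squeue output above, and the format fields are a selection that may need adjusting on your cluster):

```shell
# Show the memory and CPUs the pending job requested
scontrol show job 170 | grep -E 'MinMemory|NumCPUs'

# Show the group TRES (including memory) limits on the user's association
sacctmgr show assoc where user=mahmood format=User,Account,GrpTRES%30

# For a *running* job, per-step memory usage comes from sstat, not sacct
sstat -j <jobid> --format=JobID,MaxRSS,AveRSS
```

AssocGrpMemLimit means the *requested* memory of this job, added to what the association's jobs already hold, would exceed the GrpTRES memory limit, so actual usage shown by top is not what matters here.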
On Friday, 16 March 2018 2:44:09 AM AEDT Mahmood Naderan wrote:
> That is not what I want. I want a total upper limit for time usage of
> a user.
I'm not sure Slurm can do that for you by setting limits on a single user,
I'm afraid! You can set GrpWall on an association or QoS, but that will
govern the maximum total wall time consumed by all jobs under that
association or QoS combined.
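For completeness, a sketch of what setting GrpWall looks like (the 10-hour value and the QoS name "normal" are placeholders, not taken from the thread):

```shell
# GrpWall on the user's association: caps the total wall time that all
# jobs run under that association may consume combined
sacctmgr modify user name=mahmood set GrpWall=10:00:00

# Or on a QoS, which then applies to every job running under that QoS
sacctmgr modify qos normal set GrpWall=10:00:00
```

Once the accumulated wall time reaches the GrpWall value, further jobs are held pending rather than killed.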
OK thanks. I see that.
My issue with that option is described below. Assume a 2-core program
runs for about 40 minutes, so I set
sacctmgr modify user name=mahmood set MaxWall=00:50:00
which means a 50-minute wall-clock limit. That seems to be a per-job
limit. Therefore, if the user submits several such jobs, each one gets
its own 50-minute allowance and the total usage is still unbounded.
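To make the concern concrete: MaxWall only bounds each job individually, so aggregate usage scales with the number of jobs submitted. A trivial illustration (plain shell arithmetic, nothing Slurm-specific; the job count is made up):

```shell
#!/bin/sh
# Each job stays comfortably under the 50-minute MaxWall...
per_job_minutes=40
jobs_submitted=6
# ...yet the aggregate wall time is not limited by MaxWall at all.
total=$((per_job_minutes * jobs_submitted))
echo "total minutes used: $total"   # 240, far beyond 50
```

This is exactly the gap that GrpWall (an aggregate limit) is meant to close, where MaxWall (a per-job limit) cannot.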
Mahmood Naderan writes:
> Hi,
> Among the many control commands and options, I want to retrieve the
> limits which have been set for users. But I cannot find the correct
> command, e.g. sacctmgr, sreport, ...
>
> For example, I ran this command
>
> # sacctmgr modify user name=mahmood set MaxWall=00:10
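The limits that sacctmgr modify sets can be read back with sacctmgr show; a sketch (the format fields are one possible selection, adjust to taste):

```shell
# List the limits stored on a user's association(s)
sacctmgr show assoc where user=mahmood \
    format=Cluster,Account,User,MaxWall,GrpWall,MaxJobs,GrpTRES

# Or dump everything sacctmgr knows about the user, associations included
sacctmgr show user mahmood withassoc
```

sreport is for usage reporting, not limits, which is why it does not help here.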