> You may need *SelectTypeParameters=CR_CPU_Memory* or the like.
>
> Also, you may want to use systemd-cgtop to at least confirm jobs are
> indeed running in cgroups.
>
> Sincerely,
> S. Zhang
>
> On Fri, Jun 23, 2023, 12:07 Boris Yazlovitsky wrote:
>
>> it's still not constraining the job's memory.
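(For reference: --mem can only be enforced if memory is also a consumable resource in slurm.conf. A minimal sketch, assuming the select/cons_tres plugin; the exact choice is an assumption, adapt to your site:

    # slurm.conf -- sketch, assuming select/cons_tres
    SelectType=select/cons_tres
    SelectTypeParameters=CR_CPU_Memory   # makes memory a consumable resource

With that in place, running systemd-cgtop on a compute node while a job runs should show the job's cgroup and its memory usage.)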
> We have the following configured and it seems to be working ok.
>
>
>
> CgroupAutomount=yes
> ConstrainCores=yes
> ConstrainDevices=yes
> ConstrainRAMSpace=yes
>
> Vlad.
>
>
>
> *From:* slurm-users *On Behalf Of *Boris Yazlovitsky
> *Sent:* Thursday, June 22, 2023 5:40 PM
> *To:* Slurm User Community List
> *Subject:* Re: [slurm-users] [EXT] --mem is not limiting the job's mem
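(One thing the cgroup.conf lines above don't show: ConstrainRAMSpace only takes effect if slurm.conf also routes tasks through the cgroup plugins. A sketch of the usual companion settings; these are an assumption about this site, not quoted in the thread:

    # slurm.conf -- companion settings (assumed, not shown in this thread)
    ProctrackType=proctrack/cgroup
    TaskPlugin=task/cgroup,task/affinity
)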
thank you!
-b
On Thu, Jun 22, 2023 at 4:02 PM Ozeryan, Vladimir <vladimir.ozer...@jhuapl.edu> wrote:
> --mem=5G. Should allocate 5G of memory per node.
>
> Are your cgroups configured?
>
>
>
> *From:* slurm-users *On Behalf Of *Boris Yazlovitsky
Running slurm 22.03.02 on Ubuntu 22.04 server.
Jobs submitted with --mem=5g are able to allocate an unlimited amount of
memory.
How can I limit, at the job-submission level, how much memory a job can grab?
thanks, and best regards!
Boris
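(A minimal batch script that exercises the limit Boris describes; stress-ng is just one way to try to exceed the request, and the filenames are hypothetical:

    #!/bin/bash
    #SBATCH --mem=5G       # per-node request that should become the cgroup limit
    #SBATCH --time=00:05:00
    # Try to allocate more than requested; with working cgroup limits the
    # step should be OOM-killed rather than succeed.
    srun stress-ng --vm 1 --vm-bytes 8G --vm-keep --timeout 60s
)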
I sent this a while ago - don't know if it got to the mailing list:
I'm running slurm 23.02.0 on Ubuntu 14.04.
When a batch job is submitted, I get this message in the error file:
slurmstepd: error: common_file_write_content: unable to write 1 bytes to
cgroup /sys/fs/cgroup/memory/slurm/uid_1000
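(The v1-style path in that error is itself a clue: on a system booted with unified cgroups, /sys/fs/cgroup/memory/... does not exist as a writable hierarchy. A quick check, plus the cgroup.conf knob Slurm 23.02 offers for v2; that this is the cause here is an assumption:

    # Reports cgroup2fs on a cgroup-v2 (unified) system, tmpfs on v1/hybrid
    stat -fc %T /sys/fs/cgroup/
    # If it is v2, cgroup.conf can select the matching plugin:
    #   CgroupPlugin=cgroup/v2
)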