Hello
Good afternoon, I have a query. Currently in our cluster we have different
partitions:

1 partition called slims with 48 GB of RAM
1 partition called general with 192 GB of RAM
1 partition called largemem with 768 GB of RAM

Is it possible to restrict access to the largemem partition, and to keep
tasks that request less than 192 GB of RAM off those nodes?
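For the access part of the question, one common approach is to gate the partition in slurm.conf with AllowGroups or AllowAccounts. A minimal sketch follows; the node range, group name, and account name are hypothetical, not taken from this cluster:

```
# slurm.conf fragment (hypothetical names):
# only members of the Unix group "bigmem-users" may submit to largemem
PartitionName=largemem Nodes=bigmem[01-02] AllowGroups=bigmem-users

# alternatively, gate by Slurm bank account instead of Unix group:
# PartitionName=largemem Nodes=bigmem[01-02] AllowAccounts=bigmem_acct
```

After editing slurm.conf, `scontrol reconfigure` picks up the change. Note this only controls who may use the partition, not how much memory their jobs request.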
> In the early versions of 18.08, prior to 18.08.04, there was a bug with
> weights not working. Once we got past 18.08.04, weights worked for us.
>
> Jeff
>
> University of Houston - HPC
> 0-00:00:00 AllowGroups=ALL PriorityJobFactor=1 PriorityTier=1
> DisableRootJobs=NO RootOnly=NO Hidden=NO Shared=NO GraceTime=0
> PreemptMode=OFF ReqResv=NO DefMemPerCPU=2000 AllowAccounts=ALL AllowQos=ALL
> LLN=NO MaxCPUsPerNode=16 QoS=gpu ExclusiveUser=NO OverSubscribe=NO
> OverTime
Hi All,

Thanks all for your posts.

Reading the Slurm documentation and other sites like Niflheim
https://wiki.fysik.dtu.dk/niflheim/Slurm_configuration#node-weight (Ole
Holm Nielsen), the "Weight" parameter assigns a value to each node, and
with this you can control which nodes the scheduler fills first. But I ha
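As a sketch of how Weight could be applied here (node names and memory sizes below are illustrative, based on the 192 GB / 768 GB figures mentioned in this thread): Slurm allocates lower-weight nodes first, so giving the large-memory nodes a higher weight keeps small jobs off them whenever smaller nodes are free.

```
# slurm.conf fragment (hypothetical node names):
# lower Weight = allocated first, so jobs land on the 192 GB
# nodes before the scheduler falls back to the 768 GB ones
NodeName=node[01-10]   RealMemory=192000 Weight=10
NodeName=bigmem[01-02] RealMemory=768000 Weight=100
```

This is a soft preference, not a hard restriction: when all low-weight nodes are busy, small jobs can still land on the high-weight nodes.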
Hi all,

Currently we have two types of nodes, one with 192 GB and another with
768 GB of RAM. We need to disallow, on the 768 GB nodes, tasks that
request less than 192 GB, to avoid underutilization of resources. This
is because we have other nodes that can run those smaller tasks.
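One way to enforce a hard memory floor like this is Slurm's job_submit.lua plugin, which can reject a job at submission time. A sketch follows; the partition name "largemem" and the 192 GB threshold come from this thread, but the plugin is untested here and the memory field name (`pn_min_memory` vs. `min_mem_per_node`) varies between Slurm versions, so check yours:

```lua
-- job_submit.lua sketch: reject small-memory jobs aimed at largemem.
-- Threshold is in MB; 192 GB floor assumed from the thread.
local MIN_MEM_MB = 192 * 1024

function slurm_job_submit(job_desc, part_list, submit_uid)
    if job_desc.partition == "largemem" then
        -- field may be min_mem_per_node in newer Slurm releases
        local mem = job_desc.pn_min_memory or 0
        if mem < MIN_MEM_MB then
            slurm.log_user("largemem requires at least 192 GB per node")
            return slurm.ERROR
        end
    end
    return slurm.SUCCESS
end

function slurm_job_modify(job_desc, job_rec, part_list, modify_uid)
    return slurm.SUCCESS
end
```

Enable it with `JobSubmitPlugins=lua` in slurm.conf and place the script in the same directory as slurm.conf on the controller.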
Hi.
> On 8/3/19 12:37 AM, Sistemas NLHPC wrote:
>
> Hi all,
>
> Currently we have two types of nodes, one with 192GB and another with
> 768GB of RAM, it is required that in nodes of 768 GB it is not allowed to
> execute tasks with less than 192GB, to avoid underutilization of resources.