[slurm-users] Query for minimum memory required in partition

2020-12-16 Thread Sistemas NLHPC
Hello, good afternoon. I have a query: currently in our cluster we have different partitions: one partition called slims with 48 GB of RAM, one called general with 192 GB of RAM, and one called largemem with 768 GB of RAM. Is it possible to restrict access to the largemem partition and for tas…
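Slurm's partition definition has no "minimum memory" parameter, so one common way to enforce a floor like this (a sketch, not a solution confirmed in the thread) is a `job_submit.lua` plugin that rejects jobs submitted to `largemem` with too small a memory request. The partition name `largemem` comes from the message; the 192 GB threshold and the simplified flag handling are assumptions.

```lua
-- job_submit.lua: reject small-memory jobs on the largemem partition.
-- Sketch only; assumes memory was requested per node (--mem). Slurm sets
-- a high bit in pn_min_memory when --mem-per-cpu is used instead, which
-- a production version would also have to handle.

local MIN_MEM_MB = 192 * 1024  -- assumed threshold: jobs below this belong elsewhere

function slurm_job_submit(job_desc, part_list, submit_uid)
    if job_desc.partition == "largemem" and
       job_desc.pn_min_memory ~= nil and
       job_desc.pn_min_memory < MIN_MEM_MB then
        slurm.log_user("largemem requires at least 192 GB per node; " ..
                       "use the general or slims partition instead")
        return slurm.ERROR
    end
    return slurm.SUCCESS
end

function slurm_job_modify(job_desc, job_rec, part_list, modify_uid)
    return slurm.SUCCESS
end
```

The plugin is enabled with `JobSubmitPlugins=lua` in slurm.conf; the script lives next to slurm.conf on the controller.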

Re: [slurm-users] Slurm configuration, Weight Parameter

2019-12-05 Thread Sistemas NLHPC
…in the early versions of 18.08, prior to 18.08.04, there was a bug with weights not working. Once we got past 18.08.04, weights worked for us. -- Jeff, University of Houston - HPC

Re: [slurm-users] Slurm configuration, Weight Parameter

2019-12-03 Thread Sistemas NLHPC
0-00:00:00 AllowGroups=ALL PriorityJobFactor=1 PriorityTier=1 DisableRootJobs=NO RootOnly=NO Hidden=NO Shared=NO GraceTime=0 PreemptMode=OFF ReqResv=NO DefMemPerCPU=2000 AllowAccounts=ALL AllowQos=ALL LLN=NO MaxCPUsPerNode=16 QoS=gpu ExclusiveUser=NO OverSubscribe=NO OverTime…

Re: [slurm-users] Slurm configuration, Weight Parameter

2019-11-29 Thread Sistemas NLHPC
Hi all, thanks for your posts. Reading the Slurm documentation and other sites such as Niflheim https://wiki.fysik.dtu.dk/niflheim/Slurm_configuration#node-weight (Ole Holm Nielsen), the "Weight" parameter assigns a value to each node; with this you can control which nodes are preferred for scheduling. But I ha…
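As the Niflheim page linked above describes, Weight is set on the node definitions in slurm.conf, and Slurm allocates the lowest-weight eligible nodes first. A minimal sketch (node names and memory sizes are illustrative, not taken from the thread):

```
# slurm.conf (fragment): prefer small-memory nodes for jobs that fit there.
# Slurm schedules onto eligible nodes with the LOWEST Weight first, so the
# big-memory nodes are used only when the small ones cannot satisfy a job.
NodeName=cn[001-032] RealMemory=48000  Weight=10   # tried first
NodeName=bf[001-008] RealMemory=192000 Weight=100  # used only when needed
```

Note that Weight only expresses a preference; it does not forbid a small job from landing on a big node when the small nodes are full.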

[slurm-users] Slurm configuration, Weight Parameter

2019-11-21 Thread Sistemas NLHPC
Hi all, currently we have two types of nodes, one with 3 GB and another with 2 GB of RAM. It is required that on the 3 GB nodes, tasks requesting less than 2 GB are not allowed to run, to avoid underutilization of resources. This is because we have nodes that can fulfill the condition of executing tasks…
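One approach sometimes used for this kind of floor (not confirmed as the poster's solution) is to attach a QOS with a minimum-TRES rule to the big-memory partition, so jobs requesting too little memory are refused there. This sketch assumes accounting is enabled with limit enforcement and that your Slurm version's sacctmgr supports MinTRESPerJob; the names `bigmem` and `big3g` are illustrative:

```
# Create a QOS that refuses jobs requesting less than 2 GB of memory
# (assumes AccountingStorageEnforce includes "limits,qos"):
#   sacctmgr add qos bigmem MinTRESPerJob=mem=2G
#
# slurm.conf (fragment): bind that QOS to the 3 GB-node partition so the
# minimum is enforced for every job submitted there.
PartitionName=big3g Nodes=n3g[01-10] QoS=bigmem State=UP
```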

Re: [slurm-users] Slurm configuration

2019-10-29 Thread Sistemas NLHPC
Hi. On 8/3/19 12:37 AM, Sistemas NLHPC wrote: > Hi all, currently we have two types of nodes, one with 192 GB and another with 768 GB of RAM. It is required that on the 768 GB nodes it is not allowed to execute tasks with less than 192 GB, to avoid underutilization o…

[slurm-users] Slurm configuration

2019-08-02 Thread Sistemas NLHPC
Hi all, currently we have two types of nodes, one with 192 GB and another with 768 GB of RAM. It is required that on the 768 GB nodes, tasks requesting less than 192 GB are not allowed to run, to avoid underutilization of resources. This is because we have nodes that can fulfill the condition of executi…