We have weights and priority/multifactor.
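(A minimal slurm.conf sketch of that kind of setup; the node names and values here are placeholders, not our production config:

PriorityType=priority/multifactor
PriorityWeightFairshare=10000
PriorityWeightAge=1000
NodeName=small[001-010] RealMemory=2000 Weight=1
NodeName=big[001-010] RealMemory=3000 Weight=10

Slurm allocates the lowest-weight nodes that satisfy a job's request first, so small jobs fill the 2GB nodes before touching the 3GB ones.)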
Jeff
From: Sistemas NLHPC [mailto:siste...@nlhpc.cl]
Sent: Thursday, December 05, 2019 12:01 PM
To: Sarlo, Jeffrey S; Slurm User Community List
Subject: Re: [slurm-users] Slurm configuration, Weight Parameter
Thanks, Jeff!
We upgraded Slurm to 18.08.4.
> On Behalf Of Sistemas NLHPC
> Sent: Tuesday, December 03, 2019 12:33 PM
> To: Slurm User Community List
> Subject: Re: [slurm-users] Slurm configuration, Weight Parameter
Hi Renfro,
I am testing this configuration, keeping the test configuration as clean as
possible:
NodeName=devcn050 RealMemory=3007 Features=3007MB Weight=200 State=idle
Sockets=2 CoresPerSocket=1
NodeName=devcn002 RealMemory=3007 Features=3007MB Weight=1 State=idle
Sockets=2 CoresPerSocket=1
NodeNam
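(To double-check the weight each node actually picked up, something like this should show it:

sinfo -N -o "%N %m %w"
scontrol show node devcn002 | grep -i Weight

where %N is the node name, %m the memory and %w the scheduling weight.)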
We’ve been using that weighting scheme for a year or so, and it works as
expected. Not sure how Slurm would react to multiple NodeName=DEFAULT lines
like you have, but here’s our node settings and a subset of our partition
settings.
In our environment, we’d often have lots of idle cores on GPU
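(Since the full listing doesn't fit here, a made-up sketch of the general shape only, not our literal settings:

NodeName=cpu[001-040] Weight=10 ...
NodeName=gpu[001-008] Weight=100 Gres=gpu:2 ...
PartitionName=batch Nodes=cpu[001-040],gpu[001-008] Default=YES

With the GPU nodes weighted higher, CPU-only jobs only spill onto them once the plain CPU nodes are busy, leaving the GPU nodes' cores free for GPU jobs.)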
Hi All,
Thanks, everyone, for your posts.
Reading the Slurm documentation and other sites like Niflheim
https://wiki.fysik.dtu.dk/niflheim/Slurm_configuration#node-weight (Ole
Holm Nielsen), the "Weight" parameter assigns a value to each node, with
which you can give nodes a scheduling priority. But I ha
On 23/11/19 9:14 am, Chris Samuel wrote:
My gut instinct (and I've never tried this) is to make the 3GB nodes be
in a separate partition that is guarded by AllowQos=3GB and have a QOS
called "3GB" that uses MinTRESPerJob to require jobs to ask for more
than 2GB of RAM to be allowed into the QOS.
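(An untested sketch of that idea; the partition and QOS settings below are placeholders:

sacctmgr add qos 3GB MinTRESPerJob=mem=2049
PartitionName=bigmem Nodes=devcn050 AllowQos=3GB

Jobs would then have to be submitted with --qos=3GB and request more than 2GB of memory to run in that partition.)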
On 21/11/19 7:25 am, Sistemas NLHPC wrote:
Currently we have two types of nodes, one with 3GB and another with 2GB
of RAM. On the 3GB nodes, jobs requesting less than 2GB should not be
allowed to run, to avoid underutilization of resources.
Can't you just set the usage priority to be higher for the 2GB machines?
This way, if the requested memory is less than 2GB those machines will
be used first, and larger jobs skip to the higher memory machines.
On 11/21/19 9:44 AM, Jim Prewett wrote:
Hi Sistemas,
I could be mistaken, but I don't think there is a way to require jobs on
the 3GB nodes to request more than 2GB!
https://slurm.schedmd.com/slurm.conf.html states this: "Note that if a job
allocation request can not be satisfied using the nodes with the lowest
weight, the set o
Hi Daniel,
I have tried this configuration, but it has not given me the expected results.
Is there any other option to do this, or does something else need to be
configured to use the Weight parameter?
Thanks in advance.
Regards,
On Mon, Aug 5, 2019 at 5:35, Daniel Letai () wrote:
> Hi.
>
> On
Hi.
On 8/3/19 12:37 AM, Sistemas NLHPC wrote:
Hi all,
Currently we have two types of nodes, one with 192GB and another with
768GB of RAM; it is required that on the 768GB nodes it is not allowed
to execute tasks
Hi NLHPC employee,
Sistemas NLHPC writes:
> Hi all,
>
> Currently we have two types of nodes, one with 192GB and another with
> 768GB of RAM. On the 768GB nodes, jobs requesting less than 192GB should
> not be allowed to run, to avoid underutilization of resources.
>
> This, beca
Hi,
Have you checked the documentation of MinMemory in slurm.conf for the node definition?
Best,
Andreas
> On 02.08.2019 at 23:53, Sistemas NLHPC wrote:
>
> Hi all,
>
> Currently we have two types of nodes, one with 192GB and another with 768GB
> of RAM, it is required that in nodes of 768 GB it is
Thanks, Ahmet. In the settings it mentions servername and servertype, so at
first glance it seemed that some relatively elaborate communication with a
license server was involved. I'll see whether, with some scripting around the
options you mentioned, I can find a suitable solution.
Greetings, Pim
On Tue
Hi;
As far as I know, Slurm is not able to communicate with the Reprise
License Manager or any other license manager. Slurm just sums the
licenses used according to the -L parameter of the jobs, and subtracts
this sum from the total license count, which is given by using
"sacctmgr add/modi
I’m assuming you have LDAP and Slurm already working on all your nodes, and
want to restrict access to two of the nodes based on Unix group membership,
while letting all users access the rest of the nodes.
If that’s the case, you should be able to put the two towers into a separate
partition
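(A hedged sketch with made-up node, partition and group names:

PartitionName=towers Nodes=tower[01-02] AllowGroups=towergrp State=UP
PartitionName=general Nodes=node[001-020] Default=YES State=UP

AllowGroups takes Unix group names, so as long as the group resolves via LDAP on the slurmctld host, only its members can run jobs in that partition.)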
Dear all,
Can someone help me configure several machines under Slurm with
different rights per user group?
Or can someone point me to a tutorial that explains these different
points?
Regards,
Jean-Sébastien
On Mon, Jan 28, 2019 at 17:52, Jean-Sébastien Lerat wrote:
> Hi,
>
> I have two to