To: slurm-users@lists.schedmd.com
Sent: Friday, April 19, 2019 11:27:08 AM
Subject: Re: [slurm-users] Increasing job priority based on resources requested.
Ryan,
I certainly understand your point of view, but yes, this is definitely
what I want. We only have a few large memory nodes, so we want jobs that
request a lot of memory to have higher priority so they get assigned to
those large memory nodes ahead of lower-memory jobs which could run
anywhere. Otherwise, the large-memory nodes would wait idle if only
low-mem jobs are in the queue.
cheers,
P
>> - Original Message -
>> From: "Prentice Bisbal"
>> To: slurm-users@lists.schedmd.com
>> Sent: Friday, April 19, 2019 11:27:08 AM
>> Subject: Re: [slurm-users] Increasing job priority based on resources requested.
Hi,

if you want to affect priority, you can create additional partitions
that contain nodes of a certain type, like bigmem, ibnet, etc. and set a
priority boost of your choosing. Jobs that require certain features or
exceed predefined thresholds can then be filtered and assigned to the
appropriate partition. This way, jobs run on the smallest suitable
nodes they fit in, and the larger or more feature-rich nodes
have a kind of soft reservation either for large jobs or for busy times.
Cheers,
Chris
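A minimal slurm.conf sketch of the partition approach described above.
The node names, counts, and memory sizes here are placeholders, not
values from any poster's cluster:

```
# Hypothetical layout -- names and sizes are examples only.
# Regular nodes go in the default partition; the large-memory nodes get
# their own partition with a higher PriorityJobFactor, so jobs submitted
# to it accrue a priority boost under the multifactor priority plugin.
NodeName=node[01-16]   RealMemory=128000
NodeName=bigmem[01-02] RealMemory=1024000

PartitionName=normal Nodes=node[01-16]   Default=YES PriorityJobFactor=1
PartitionName=bigmem Nodes=bigmem[01-02] PriorityJobFactor=10
```

Jobs routed to the bigmem partition then rank ahead of equivalent jobs
in the normal partition when the scheduler computes priorities.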
- Original Message -
From: "Prentice Bisbal"
To: slurm-users@lists.schedmd.com
Sent: Friday, April 19, 2019 11:27:08 AM
Subject: Re: [slurm-users] Increasing job priority based on resources requested.
Hi;
We use the node Weight parameter to do that. When you set the high-mem
nodes with a high weight and the low-mem nodes with a low weight, Slurm
will select the lowest-weight nodes that have enough memory for the
requested job. So, if there are free low-mem nodes, the high-mem nodes
will stay free. At our cluster, low mem
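A minimal slurm.conf sketch of the Weight approach described above. The
node names and memory sizes are made-up examples:

```
# Hypothetical slurm.conf fragment -- names and sizes are examples only.
# Slurm allocates jobs to the eligible nodes with the LOWEST Weight, so
# low-memory nodes fill up first and the high-memory nodes stay free for
# jobs whose memory request actually needs them.
NodeName=node[01-16]   RealMemory=128000  Weight=10
NodeName=bigmem[01-02] RealMemory=1024000 Weight=100
```

Note this steers placement rather than queue priority: it does not make
big-memory jobs jump the queue, it just keeps small jobs off the big
nodes while small nodes are available.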
- Original Message -
From: "Prentice Bisbal"
To: slurm-users@lists.schedmd.com
Sent: Friday, April 19, 2019 11:27:08 AM
Subject: Re: [slurm-users] Increasing job priority based on resources requested.
Ryan,
I certainly understand your point of view, but yes, this is definitely
what I want. We only have a few large memory nodes, so we want jobs that
request a lot of memory to have higher priority so they get assigned to
those large memory nodes ahead of lower-memory jobs which could run
anywhere.
This is not an official answer really, but I’ve always just considered this to
be the way that the scheduler works. It wants to get work completed, so it will
have a bias toward doing what is possible vs. not (can’t use 239GB of RAM on a
128GB node). And really, is a higher priority what you want?
Slurm-users,
Is there a way to increase a job's priority based on the resources or
constraints it has requested?
For example, we have a very heterogeneous cluster here: Some nodes only
have 1 Gb Ethernet, some have 10 Gb Ethernet, and others have DDR IB. In
addition, we have some large-memory nodes.