On 10/29/19 12:42 PM, Igor Feghali wrote:
fairshare is being calculated for the entire cluster and not per partition.
That's correct - jobs can request multiple partitions (and will run in
the first one that becomes available to service them).
All the best,
Chris
--
Chris Samuel : http://www.csamuel.or
We have a situation where Slurm is cancelling jobs due to the error "job
dependency can't be satisfied". However, all of the job dependencies are
completing successfully: sacct shows them as successful, and scontrol
shows that they all have exit codes of 0.
Subsequently, once we have dete
hi there
i'm pretty new to slurm and trying to learn my way through its many
configuration options. my config looks like:
SchedulerType=sched/backfill
PriorityType=priority/multifactor
PriorityWeightAge=1
PriorityWeightFairshare=1
PriorityWeightJobSize=5000
PriorityWeightPartition=1
Priorit
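For context on how those weights interact: the multifactor plugin computes each job's priority as a weighted sum of normalized factors, where each factor is a value between 0.0 and 1.0. A minimal sketch of that arithmetic in Python - the weights are the ones from the config above, but the factor values are purely illustrative, not something real Slurm code would compute:

```python
# Sketch of Slurm's multifactor priority calculation: each factor is
# normalized to [0.0, 1.0], then scaled by its PriorityWeight* setting.

weights = {
    "age": 1,          # PriorityWeightAge
    "fairshare": 1,    # PriorityWeightFairshare
    "jobsize": 5000,   # PriorityWeightJobSize
    "partition": 1,    # PriorityWeightPartition
}

# Illustrative factor values (made up for this sketch).
factors = {
    "age": 0.5,        # job has waited half of the maximum age window
    "fairshare": 0.25,
    "jobsize": 0.1,
    "partition": 1.0,
}

priority = int(sum(weights[k] * factors[k] for k in weights))
print(priority)  # → 501
```

Note that with PriorityWeightJobSize several thousand times larger than the other weights, job size will dominate the scheduling order; the other factors become tie-breakers, which may or may not be what the config intends.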
I prefer building packages.
I did have to extract and modify the .spec file to accommodate some of
the changes, as well as set up the build environment, in order to complete the build.
Brian
On 10/29/2019 8:11 AM, Christopher Benjamin Coffey wrote:
Brian, I've actually just started attempting to build slurm 19 on cent
Brian, I've actually just started attempting to build slurm 19 on centos 8
yesterday. As you say, there are packages missing now from repos like:
rpmbuild -ta slurm-19.05.3-2.tar.bz2 --define '%_with_lua 1' --define
'%_with_x11 1'
warning: Macro expanded in comment on line 22: %_prefix path
Hi Marcus, yes we are talking about the jobacct_gather/cgroup plugin. Yes, if
you want cgroups you need:
ProctrackType=proctrack/cgroup
TaskPlugin=task/cgroup,task/affinity
But that doesn't mean you have to run the jobacct_gather/cgroup plugin; you
have the option to use jobacct_gather/linux in
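Put together, a slurm.conf fragment matching that advice might look like this (a sketch, not a complete configuration):

```
# Use cgroups for process tracking and task containment...
ProctrackType=proctrack/cgroup
TaskPlugin=task/cgroup,task/affinity
# ...while still gathering accounting data from /proc rather than cgroups
JobAcctGatherType=jobacct_gather/linux
```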
Thanks for the tip, Mark; it looks quite useful for my purposes.
I am setting this up for a small, intimate group, so abuse is a non-issue, and I
want everyone to be responsible for maintenance.
Oytun Peksel
oytun.pek...@semcon.com
Mobile +46739205917
-----Original Message-----
From: slurm-users
Hi Daniel
I have tried this configuration, but it has not given me the results I expected.
Is there any other option for doing this, or does something else need to be
configured in order to use the weight parameter?
Thanks in advance.
Regards,
On Mon, Aug 5, 2019 at 5:35, Daniel Letai ()
wrote:
> Hi.
>
>
> On
#SBATCH -L [ here I would like to calculate it like (perl -e "print int($SLURM_NTASKS
* 0.422 * 5)" ) ]
I think a calculation like this needs to take place in job_submit.lua.
So you decide on a way for users to signal that they want this behavior
(a partition, a GRES?), and that script calculat
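For what it's worth, the Slurm-specific obstacle is that #SBATCH lines are parsed before the job runs, so no shell or Perl expansion ever happens there - which is why a job_submit.lua plugin (or a submission wrapper script) is where this logic belongs. The arithmetic itself is trivial; here it is sketched in Python, with the 0.422 and 5 multipliers taken from the original post and the function name made up for illustration:

```python
def licenses_needed(ntasks: int) -> int:
    """Hypothetical helper: derive a license count from the task count,
    using the multipliers from the original post (0.422 per task, times 5).
    int() truncates toward zero, matching Perl's int()."""
    return int(ntasks * 0.422 * 5)

print(licenses_needed(2))   # 2 tasks -> int(4.22) -> 4
print(licenses_needed(10))  # 10 tasks -> int(21.1) -> 21
```

In a real job_submit.lua the same multiplication would be written in Lua, setting the job's license request from its task count; alternatively, a wrapper script can compute the value and pass it via `sbatch -L` on the command line, with no plugin required.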
Hi,
I would like to use a sbatch script with a license request using -L. However I
would like to calculate the license necessary from the cpus that is being
requested.
Ex:
#SBATCH -N 1
#SBATCH --ntasks=2
#SBATCH -L [ here I would like to calculate it like (perl -e "int
$SLURM_NTASKS * 0.
Hi all!
I think I solved the problem.
The system is an openSUSE Leap 15 installation, and slurm comes from the
distribution repository. By default, a slurm.epilog.clean script is installed which kills
everything that belongs to the user when a job finishes - including other
jobs, SSH sessions, and so on. I do not
I do want them to have both, just not at the same time. I'm looking for
a slightly more automated way to either turn the priority on the QoS
down when there's a reservation, or force them to 'use up' the priority
resources within the reservation.
Sounds like we'll have to solve it with some scr