Dunedin School of Medicine
sam.hawarden(at)otago.ac.nz

From: slurm-users on behalf of Carlos Fenoy
Sent: Thursday, 20 December 2018 04:59
To: Slurm User Community List
Subject: Re: [slurm-users] requesting resources and afterwards launch an array
of calculations
thank you very much Carlos for the info,
regards
Alfredo
Sent from BlueMail
On 19 December 2018 at 13:36, Carlos Fenoy wrote:
Hi Alfredo,
You can have a look at using https://github.com/eth-cscs/GREASY . It was
developed before array jobs were supported in Slurm and it will do exactly
what you want.
Regards,
Carlos
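For context, GREASY works from a plain task file, one independent shell command per line, and farms those lines out over the workers in the allocation. A minimal sketch, assuming a hypothetical per-window script `run_window.sh` (the file name and commands are illustrative, not from the thread):

```shell
# tasks.txt (hypothetical contents): one independent command per line;
# GREASY schedules these lines across the allocated workers.
./run_window.sh 1
./run_window.sh 2
./run_window.sh 3
```

Inside a Slurm allocation the task file would then be launched as `greasy tasks.txt`.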
On Wed, Dec 19, 2018 at 3:33 PM Alfredo Quevedo wrote:
> thank you Michael for the feedback, my scenario is the following: [...]
Thank you Aaron for the reply,
Specifically I am trying to run what in chemistry is known as an
Umbrella Sampling simulation, in which independent simulation windows are
run. The total number of windows for the whole simulation is 104, but
allocating 104 cores to perform the simulation would [...]
Literal job arrays are built into Slurm:
https://slurm.schedmd.com/job_array.html
Yes, and the best way to describe these is "job generators".
That is, you submit one and it sits in the pending queue, while
the array elements kind of "bud" off the parent job. Each of
the array jobs is a full, independent job.
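To make the "job generator" idea concrete, here is a minimal sketch of an array submission script. The window file naming is a made-up convention, not from the thread; Slurm sets SLURM_ARRAY_TASK_ID per element, and the index is defaulted here so the script can also be dry-run outside Slurm:

```shell
#!/bin/bash
#SBATCH --array=1-104%30    # 104 elements, at most 30 running at once
#SBATCH --ntasks=1          # each element is a single-task job

# Slurm sets SLURM_ARRAY_TASK_ID for each array element; default it
# to 1 so the script can be dry-run locally outside Slurm.
task_id="${SLURM_ARRAY_TASK_ID:-1}"

# Hypothetical naming convention: one input file per umbrella window.
input_file=$(printf 'window_%03d.in' "$task_id")
echo "processing ${input_file}"
```

Submitted with `sbatch`, the pending parent shows in `squeue` as a single entry while elements bud off it as slots open.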
Alfredo,
I’m assuming the resources are used initially in some sort of tightly-coupled
parallel task, or at least some workload where all the tasks finish at about
the same time. I’m wondering, and also assuming, that the tasks you’re looking
to run afterwards as part of an array are less tightly coupled [...]
thank you Michael for the feedback, my scenario is the following: I want
to run a job array of (let's say) 30 jobs. So I set the Slurm input as
follows:
#SBATCH --array=1-104%30
#SBATCH --ntasks=1
however only 4 jobs within the array are launched at a time due to the
allowed max number of jobs [...]
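When fewer elements run than the %30 throttle allows, the binding limit usually sits elsewhere: a QOS, association, or partition job limit rather than the array spec itself. Two hedged diagnostic commands, assuming accounting (slurmdbd) is configured on the cluster:

```shell
# Cluster-wide cap on job array sizes (MaxArraySize in slurm.conf):
scontrol show config | grep -i maxarraysize

# Per-user running/submit limits attached to each QOS:
sacctmgr show qos format=Name%20,MaxJobsPU,MaxSubmitPU
```

The exact fields reported vary by Slurm version; check with your site admin which QOS your jobs land in.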
Alternatively, if you wanted to allocate a set of CPUs for a parallel task, and
then run a set of single-CPU tasks in the same job, something like:
#!/bin/bash
#SBATCH --ntasks=30
srun --ntasks=${SLURM_NTASKS} [...]
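The cut-off srun line points at the standard pattern for this: take one allocation sized for the concurrency you want, then fan out single-task job steps inside it and wait. A sketch, assuming a hypothetical per-window worker `run_window.sh` (on Slurm releases before 21.08, replace `--exact` with the step-level `--exclusive`):

```shell
#!/bin/bash
#SBATCH --ntasks=30         # 30 single-CPU slots in one allocation

# Launch 104 single-task job steps; roughly 30 run at once, and the
# rest retry inside the job until a slot frees up.
for i in $(seq 1 104); do
    srun --ntasks=1 --exact ./run_window.sh "$i" &   # hypothetical worker
done
wait    # return only once every step has finished
```

Compared with a job array, this holds all 30 CPUs for the lifetime of the job, so it fits best when the steps have similar runtimes.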
Dear slurm users,
I would like to know if it is possible to prepare a Slurm submission
script in a way that initially CPU resources are requested (let's say 30
CPUs), and afterwards, the assigned resources are used to launch an
array of 30 single-CPU jobs? I would greatly appreciate any help.