Hello,

Slurm can behave much like what you were used to on SGE. It's not the default, but it's very commonly used. The feature is called Consumable Resources: instead of allocating whole nodes, jobs allocate resources on a node, such as cores or memory. It's enabled in slurm.conf (and takes effect after restarting or reconfiguring the controller):

SelectType=select/cons_res
SelectTypeParameters=CR_Core_Memory

https://slurm.schedmd.com/cons_res.html
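
With cons_res enabled, a batch script that requests a single core lets Slurm pack jobs onto nodes, so 24 one-core jobs would all run at once on your 3 x 8-core cluster. A minimal sketch (the script name, memory request and argument handling are placeholders for your setup):

#!/bin/bash
#SBATCH --job-name=myjob
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=1
#SBATCH --mem=1G

# Process the file passed as the first argument to sbatch
./script.sh "$1"

Submitted in a loop, e.g.:

for f in *.dat; do sbatch job.sh "$f"; done

each job takes one core and Slurm fills the nodes.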

MPI serves another purpose that does not really fit your model. The solution to your issue is Consumable Resources, but you can also take a look at Job Arrays to group all the jobs of your loop into a single submission; a sketch follows below.
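
For example, assuming your filenames can be listed one per line in a text file (files.txt here is a placeholder), a single array submission replaces the whole qsub loop:

#!/bin/bash
#SBATCH --array=1-24
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=1

# Each array task picks the line of files.txt matching its index
FILE=$(sed -n "${SLURM_ARRAY_TASK_ID}p" files.txt)
./script.sh "$FILE"

https://slurm.schedmd.com/job_array.html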

Regards,

Thomas HAMEL


On 19/06/2018 23:33, Anson Abraham wrote:
Hi,
Relatively new to Slurm.  I've been using Sun GridEngine mostly.
I have a cluster of 3 machines, each with 8 cores. In SGE I allocate the PE slots per machine, so if I submit 24 jobs it runs all 24 (because each job uses 1 core). However, if I submit jobs in Slurm through sbatch I can only get it to run 3 jobs at a time, even when I define the cpus_per_task. I was told to use OpenMPI for this.
I'm not familiar with OpenMPI, so I did an apt install of libopenmpi-dev.

Do I have to loop through my job submission with mpirun and run an sbatch outside of it?
Again, I'm still new to this, and with SGE it was pretty straightforward; all I had to do was:
 loop through files
    qsub -N {name of job} script.sh {filename}

Not sure how I would do that here.

