Hi,

You can use the "--dependency=afterok:jobid:jobid ..." option of sbatch to ensure the newly submitted job waits until all older jobs have finished. Simply put, you can submit the new job even while the older jobs are running; the new job will not start before the old jobs are finished.
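
A minimal sketch of that pattern (the script names job_a.sh, job_b.sh, and cleanup.sh are hypothetical placeholders; "--parsable" makes sbatch print just the job ID):

```shell
# Submit the batch of jobs and capture their job IDs
jid_a=$(sbatch --parsable job_a.sh)
jid_b=$(sbatch --parsable job_b.sh)

# The cleanup job starts only after BOTH jobs complete successfully;
# if either fails, the dependency can never be satisfied and the
# cleanup job stays pending.
sbatch --dependency=afterok:${jid_a}:${jid_b} cleanup.sh
```

If the goal is only to prevent the same job from running twice at once, "--dependency=singleton" together with a fixed "--job-name" is also worth a look: it lets only one job with that name and user run at a time.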

Regards,

Ahmet M.


On 22.10.2019 11:53, Florian Lohoff wrote:
Hi,
i am using slurm in a single node job batching system. Slurm is perfect
for that case and has worked flawlessly for a couple of years. Lately i was
shuffling jobs around so that jobs which take much longer run only
daily, and other jobs run more frequently.

A question i had was: is there a possibility to lock jobs so they do not
run multiple times? Or better: i have a list of jobs with heavy
dependencies, and i'd like to run this job list again once all
of them have completed.

So i could create a lock and a cleanup job which removes that
lock and depends on all other jobs i queue in this batch.

Currently i have something like this in my cron scripts: it
looks at the job queue and, if it finds matching jobs, does
not queue new ones.

        # Still jobs running? Then don't queue new ones.
        if squeue -l | grep -Eq "osm.*RUNNING"; then
                exit 0
        fi

So i run the cron job a lot more often than i can process all of the
data. This feels a bit like a hack.

Flo
