
CRC Forcing Jobs to Run in Sequence via SLURM

Occasionally you may need your jobs to run in a particular sequence. This document describes how to force a submitted job to wait for another job to complete before it executes.

Use the --dependency=afterok feature

To make a job wait (block) until another job has completed successfully, use the SLURM parameter --dependency. When you submit your first job, you will receive a jobID number. You can then use this jobID in subsequent sbatch commands to force your jobs to run in sequence by specifying the option --dependency=afterok:jobID as follows:

sbatch --dependency=afterok:jobID myJobScript.slurm
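
For example, with a hypothetical job ID of 123456 and hypothetical script names, a two-job chain might look like this:

sbatch firstJob.slurm
# prints: Submitted batch job 123456
sbatch --dependency=afterok:123456 secondJob.slurm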

Your job script must be the last parameter; sbatch myJobScript.slurm --dependency=afterok:jobID does not work. Order matters.

With this option, the second job is placed in a 'Dependency' state until the first job completes successfully. Submitting multiple jobs in an automated fashion requires some scripting (e.g. bash, tcsh, perl) to run the sbatch command in a loop while capturing the jobID of each job in a variable, which is then used as the jobID in the next call to sbatch. The full details of such a script are beyond the scope of this document, but a minimal sketch is shown below.
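
Below is a minimal bash sketch, assuming hypothetical job scripts named step1.slurm, step2.slurm, and step3.slurm; the --parsable option makes sbatch print only the job ID so that it can be captured in a shell variable:

#!/bin/bash
# Minimal sketch: submit a chain of jobs where each step waits for the
# previous step to finish successfully (afterok).
# The script names step1.slurm ... step3.slurm are hypothetical placeholders.

# --parsable makes sbatch print only the job ID instead of
# "Submitted batch job <ID>", so it can be captured directly.
jobid=$(sbatch --parsable step1.slurm)

for script in step2.slurm step3.slurm; do
    # Each new job depends on the successful completion of the previous one.
    jobid=$(sbatch --parsable --dependency=afterok:${jobid} ${script})
done

echo "Final job in the chain: ${jobid}"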

It is also possible to submit what are called "singleton" jobs, defined as:

This job can begin execution after any previously launched jobs sharing the same job name and user have terminated.

sbatch --dependency=singleton myJobScript.slurm
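
For example, submitting several jobs that share a hypothetical job name of pipeline (set here on the command line, though it can also be set with #SBATCH --job-name inside the script) ensures that only one of them runs at a time:

sbatch --job-name=pipeline --dependency=singleton step1.slurm
sbatch --job-name=pipeline --dependency=singleton step2.slurm
sbatch --job-name=pipeline --dependency=singleton step3.slurm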


