
CRC Using an Interactive Partition via SLURM


Introduction

The interactive partition is a higher-priority partition intended for debugging sessions and interactive jobs. Jobs in this partition may use at most four nodes and run for at most 30 minutes. The partition provides a mechanism for debugging your code: you are given an interactive command line prompt on a compute node, where you can run your job interactively. There is usually little or no wait time to get access to this partition. The partition is not intended for running short compute jobs; it is for debugging sessions and for jobs that require interactive execution. The remainder of this document describes how to use this partition.

Some systems do not have an interactive partition

Not all systems have an interactive partition. Use sinfo to check whether the system you are on has one. If it does not, you can still submit interactive jobs to one of the available partitions, though you may experience a long wait before the job begins.
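
For example, you can list the partitions with sinfo and look for one named interactive. The output below is illustrative only; partition names, time limits, and node lists will differ from system to system:

user@login3:~> sinfo --summarize
PARTITION    AVAIL  TIMELIMIT   NODES(A/I/O/T)  NODELIST
interactive     up      30:00   0/4/0/4         c[001-004]
commons         up 1-00:00:00   120/8/2/130     c[005-134]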

Submitting an Interactive Job

The usual way to submit jobs to a partition is with a SLURM batch script. To access the interactive partition, however, you must use the srun command line (rather than a SLURM batch script) in combination with the --pty argument, as illustrated in the following example:

srun --partition=interactive --pty --export=ALL --ntasks=1 --time=00:30:00 /bin/bash

The srun options in this example have the following meanings; most of them correspond to #SBATCH lines you will recognize from your SLURM batch scripts:

Option                     Description
--partition=interactive    Select the interactive partition.
--pty                      Run the task in pseudo-terminal mode so that you are given an interactive shell prompt.
--export=ALL               Export all environment variables to the job.
--ntasks=1                 Request one processor core.
--time=00:30:00            Request the maximum 30-minute run time allowed in this partition.

You can specify any SLURM batch script argument on the srun command line. Once this interactive job is submitted and begins to execute, you will receive a command line prompt on a compute node where you can manually run your code for debugging purposes as follows:

user@login3:~> srun --partition=interactive --pty --export=ALL --ntasks=1 --time=00:30:00 /bin/bash
user@compute6:~> /path/to/myprogram

In the example above, note that the hostname in the command prompt changed from a login node (login3) to a compute node (compute6). At this point you have a command prompt on an actual compute node.
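
A typical debugging session might then look like the following. The module and program names here are hypothetical; substitute your own. When you are finished, type exit to end the session and release the allocation:

user@compute6:~> module load GCC
user@compute6:~> gdb /path/to/myprogram
user@compute6:~> exit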

Do not use a SLURM batch script to access the interactive partition. Use the srun command line as illustrated above along with any SLURM arguments you would normally use in a batch script. If you try to use a SLURM batch script, the job submission will not work.

Submitting an MPI Interactive Job

If you need to submit a multiprocessor job to the interactive partition, you do so in the same way as described above, except that you request a larger number of processor cores and/or nodes, as in the following example:

srun --partition=interactive --pty --export=ALL --ntasks=24 --time=00:30:00 /bin/bash

Running this command gives you 24 processor cores in the interactive partition. You will be given an interactive prompt on one of the allocated nodes, where you can use srun to launch a multiprocessor job for debugging. Even though your command line prompt is on a single node, srun coordinates with SLURM so that your MPI code has access to all of the processors allocated to your job.
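
For example, once you have the prompt on a compute node, you can launch your MPI program across the entire allocation with srun. The program and module names below are hypothetical; substitute your own:

user@compute6:~> module load OpenMPI
user@compute6:~> srun --ntasks=24 /path/to/my_mpi_program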


