multithreading, scheduling, slurm, job-scheduling

Sbatch resource allocation for a multi-threaded application


After reading the very informative post, HPC cluster: select the number of CPUs and threads in SLURM sbatch, I understand the difference between the --ntasks, --cpus-per-task and --ntasks-per-node flags for sbatch.

My multi-threaded 'shared-memory' application is the main command of my job. Sample script below:

#!/bin/bash

#SBATCH -J test_job 
#SBATCH -t 01:00:00
#SBATCH -A project
#SBATCH -D /opt/
#SBATCH --ntasks=16
#SBATCH --nodes=1

cd model/

./a.out -parasol 16 -f input.dat

My questions are:


  1. What is the difference between the following combinations w.r.t. sbatch?

--ntasks 1 --cpus-per-task 16

--ntasks 16 --nodes 1


  2. Which of the above is the best/correct way to allocate resources for a multi-threaded (shared-memory) application?

Solution

  • There will be no difference between --ntasks 1 --cpus-per-task 16 and --ntasks 16 --nodes 1 in terms of CPU allocation. Both will end up with 16 CPUs on the same node.

    There will be a difference, though, in the environment set up by Slurm for your job, notably the values of $SLURM_NTASKS and $SLURM_CPUS_PER_TASK. This can have an impact on your job if ./a.out uses those variables to decide how many threads to spawn, or if ./a.out uses the value of $OMP_NUM_THREADS and that variable is set automatically by Slurm in the prolog to be equal to $SLURM_CPUS_PER_TASK (some clusters are configured that way). The sketches after this answer illustrate both points.

    There will also be a difference in CPU placement if you use the --distribution option or other related options.
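A quick way to check both points above is to submit a short test job like the one sketched below (the job name and echo lines are illustrative, not from the original script), once with --ntasks=1 --cpus-per-task=16 and once with --ntasks=16 --nodes=1, and compare the output:

#!/bin/bash

#SBATCH -J env_check
#SBATCH -t 00:05:00
#SBATCH -A project
#SBATCH --nodes=1

# Pass either "--ntasks=1 --cpus-per-task=16" or "--ntasks=16" on the
# sbatch command line so the same script can be reused for both cases.

# What Slurm actually allocated for this job:
scontrol show job "$SLURM_JOB_ID" | grep -E 'NumNodes|NumCPUs|NumTasks'

# The variables the application might read; SLURM_CPUS_PER_TASK is only
# defined when --cpus-per-task was requested explicitly:
echo "SLURM_NTASKS=${SLURM_NTASKS:-unset}"
echo "SLURM_CPUS_PER_TASK=${SLURM_CPUS_PER_TASK:-unset}"
echo "OMP_NUM_THREADS=${OMP_NUM_THREADS:-unset}"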
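And here is a sketch of the original script adapted to the --ntasks=1 --cpus-per-task=16 variant, with the thread count derived from the allocation instead of being hard-coded. It assumes, as in the original script, that -parasol sets the number of threads ./a.out spawns; the OMP_NUM_THREADS export only matters if ./a.out also reads that variable.

#!/bin/bash

#SBATCH -J test_job
#SBATCH -t 01:00:00
#SBATCH -A project
#SBATCH -D /opt/
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=16

cd model/

# Tie the thread count to the allocation so the two cannot drift apart;
# fall back to 1 if --cpus-per-task was not requested.
export OMP_NUM_THREADS="${SLURM_CPUS_PER_TASK:-1}"

./a.out -parasol "$OMP_NUM_THREADS" -f input.dat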