Slurm commands


Note: the BIOINF community is supposed to use the -p bigmem queue.

Sun Grid Engine (what we use on Marvin) to Slurm command conversion:

https://srcc.stanford.edu/sge-slurm-conversion
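A quick sketch of the most common equivalents (standard Slurm commands; see the link above for the full conversion table):

qsub myscript.sh   ->   sbatch myscript.sh     # submit a batch script
qstat              ->   squeue                 # list queued/running jobs
qdel <jobid>       ->   scancel <jobid>        # cancel a job
qsub -I            ->   srun --pty /bin/bash   # interactive shell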


The following are commands you can run at the command line, or #SBATCH directives you can put in your shell script, to achieve certain functionality.
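A script containing #SBATCH directives is submitted with sbatch (my_job.sh is a placeholder name):

sbatch my_job.sh    # submit; Slurm prints the assigned job ID
squeue -u $USER     # check the status of your own jobs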

request_48_thread_1.3TBRAM
#!/bin/bash -l  # note: the -l is essential now
#SBATCH -J fly_pilon   #jobname
#SBATCH -N 1     #node
#SBATCH --ntasks-per-node=48
#SBATCH --threads-per-core=2
#SBATCH -p bigmem
#SBATCH --nodelist=kennedy150  # this is the specific node. This one has 1.5TB RAM
#SBATCH --mem=1350GB
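Before pinning a job to a named node like this, you can confirm the node's resources with a standard Slurm query (output fields include RealMemory and CPUTot):

scontrol show node kennedy150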


test_conda_activate
#!/bin/bash -l
#SBATCH -J conda_test   #jobname
#SBATCH -N 1     #node
#SBATCH --ntasks-per-node=1
#SBATCH -p bigmem    # bigmem is for the BIOINF community
#SBATCH --mail-type=END     # email at the end of the job
#SBATCH --mail-user=your_username@st-andrews.ac.uk      # your email address (Slurm does not expand $USER inside #SBATCH lines, so spell it out)

cd /gpfs1/home/$USER/

pyv="$(python -V 2>&1)"

echo "$pyv"

# conda to activate the software

echo $PATH

conda activate spades

pyv="$(python -V 2>&1)"

echo "$pyv"

conda deactivate

conda activate python27

pyv="$(python2 -V 2>&1)"

echo "$pyv"

12threads_bigMem_30G_RAM

#!/bin/bash -l  # essential
#SBATCH -J trimmo   #jobname
#SBATCH -N 1     #node
#SBATCH --ntasks-per-node=12
#SBATCH --threads-per-core=2
#SBATCH -p bigmem
#SBATCH --mem=30GB
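Inside the job step, the thread count passed to the tool should match the allocation; Slurm exports it as SLURM_NTASKS_PER_NODE when --ntasks-per-node is set. A sketch of a trimmomatic call (the conda environment name, input files, and adapter settings are illustrative, not from the original):

conda activate trimmomatic     # assumes such an environment exists
trimmomatic PE -threads "$SLURM_NTASKS_PER_NODE" \
    reads_R1.fastq.gz reads_R2.fastq.gz \
    out_R1_paired.fq.gz out_R1_unpaired.fq.gz \
    out_R2_paired.fq.gz out_R2_unpaired.fq.gz \
    ILLUMINACLIP:adapters.fa:2:30:10 MINLEN:36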


Request an interactive job with one GPU:

srun --gres=gpu:1 -N 1 -p singlenode --pty /bin/bash
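Once the interactive shell starts on the compute node, you can confirm the GPU allocation (assuming NVIDIA hardware on the node):

nvidia-smi                    # list the GPU(s) visible to the job
echo $CUDA_VISIBLE_DEVICES    # Slurm limits the job to its allocated GPU(s)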