Page created by PeterThorpe.
Latest revision as of 12:25, 6 May 2020
Slurm commands
Note: the BIOINF community is supposed to use the -p bigmem queue.
Sun Grid Engine (what we use on Marvin) to Slurm command conversion:
https://srcc.stanford.edu/sge-slurm-conversion
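For quick reference, some common equivalents from that conversion guide (written here as comments; the mappings are approximate, and exact options depend on your site's configuration):

```shell
# SGE command          ->  Slurm equivalent
# qsub script.sh       ->  sbatch script.sh       # submit a batch job
# qstat                ->  squeue                 # list queued/running jobs
# qdel <job_id>        ->  scancel <job_id>       # cancel a job
# qhost                ->  sinfo                  # show node/partition state
#
# In job scripts, "#$ -N name" becomes "#SBATCH -J name",
# and "#$ -pe smp 8" roughly maps to "#SBATCH --ntasks-per-node=8".
```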
The following are commands you can run at the command line, or #SBATCH directives you can put in your shell script, to achieve certain functionality.
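A few everyday Slurm commands for working with the scripts below (the job ID and script name are placeholders, not real examples from this cluster):

```shell
sbatch my_job.sh     # submit a batch script; prints the job ID
squeue -u $USER      # show your queued and running jobs
scancel 123456       # cancel job 123456
sinfo -p bigmem      # check the state of the bigmem partition
sacct -j 123456      # accounting info (runtime, exit code) for a finished job
```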
request_48_thread_1.3TBRAM

#!/bin/bash -l                    # note: the -l is essential now
#SBATCH -J fly_pilon              # job name
#SBATCH -N 1                      # number of nodes
#SBATCH --ntasks-per-node=48
#SBATCH --threads-per-core=2
#SBATCH -p bigmem
#SBATCH --nodelist=kennedy150     # request this specific node; it has 1.5TB RAM
#SBATCH --mem=1350GB
test_conda_activate

#!/bin/bash -l
#SBATCH -J conda_test                       # job name
#SBATCH -N 1                                # number of nodes
#SBATCH --tasks-per-node=1
#SBATCH -p bigmem                           # bigmem is for the BIOINF community
#SBATCH --mail-type=END                     # email at the end of the job
#SBATCH --mail-user=$USER@st-andrews.ac.uk  # your email address
cd /gpfs1/home/$USER/
pyv="$(python -V 2>&1)"
echo "$pyv"
# conda activate the software
echo $PATH
conda activate spades
pyv="$(python -V 2>&1)"
echo "$pyv"
conda deactivate
conda activate python27
pyv="$(python2 -V 2>&1)"
echo "$pyv"
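To run the script above, save it (e.g. as test_conda_activate.sh; the filename is just an example) and submit it with sbatch. Output goes to a slurm-&lt;jobid&gt;.out file in the directory you submitted from:

```shell
sbatch test_conda_activate.sh   # prints "Submitted batch job <jobid>"
squeue -u $USER                 # watch the job start and finish
cat slurm-<jobid>.out           # stdout/stderr from the job
```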
12threads_bigmem_30GB_RAM

#!/bin/bash -l                # the -l is essential
#SBATCH -J trimmo             # job name
#SBATCH -N 1                  # number of nodes
#SBATCH --ntasks-per-node=12
#SBATCH --threads-per-core=2
#SBATCH -p bigmem
#SBATCH --mem=30GB
Request an interactive job with one GPU:
srun --gres=gpu:1 -N 1 -p singlenode --pty /bin/bash
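Once the interactive shell starts on the GPU node, you can confirm the allocation (this assumes an NVIDIA GPU with drivers installed, which is not stated on this page):

```shell
nvidia-smi                   # list the GPU(s) visible to this job
echo $CUDA_VISIBLE_DEVICES   # Slurm sets this to the allocated GPU index(es)
exit                         # leave the interactive session, releasing the GPU
```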