<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
		<id>http://stab.st-andrews.ac.uk/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=PeterThorpe</id>
		<title>wiki - User contributions [en]</title>
		<link rel="self" type="application/atom+xml" href="http://stab.st-andrews.ac.uk/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=PeterThorpe"/>
		<link rel="alternate" type="text/html" href="http://stab.st-andrews.ac.uk/wiki/index.php?title=Special:Contributions/PeterThorpe"/>
		<updated>2026-04-16T08:33:05Z</updated>
		<subtitle>User contributions</subtitle>
		<generator>MediaWiki 1.30.0</generator>

	<entry>
		<id>http://stab.st-andrews.ac.uk/wiki/index.php?title=Singularity_with_grid_engine&amp;diff=3510</id>
		<title>Singularity with grid engine</title>
		<link rel="alternate" type="text/html" href="http://stab.st-andrews.ac.uk/wiki/index.php?title=Singularity_with_grid_engine&amp;diff=3510"/>
				<updated>2022-01-21T09:21:45Z</updated>
		
		<summary type="html">&lt;p&gt;PeterThorpe: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;We have been working hard in the background to allow users to run Docker images. &lt;br /&gt;
Good news: you can now do this via Singularity (don&amp;#039;t worry, this is all installed and should just work). &lt;br /&gt;
Try Conda to install your package of interest first. If it is not on Conda, you can try the following:&lt;br /&gt;
&lt;br /&gt;
Please remember, you must run this through qsub and not directly on the head node. &lt;br /&gt;
&lt;br /&gt;
Full Singularity documentation is here: &lt;br /&gt;
 https://www.sylabs.io/guides/3.2/user-guide/&lt;br /&gt;
&lt;br /&gt;
Where to get the images from? Search here: &lt;br /&gt;
 https://hub.docker.com/&lt;br /&gt;
If you search for funannotate you will see the name: nextgenusfs/funannotate&lt;br /&gt;
this is what you want.&lt;br /&gt;
&lt;br /&gt;
To download the image, simply type:&lt;br /&gt;
 singularity pull docker://name_of_image&lt;br /&gt;
 e.g.  singularity pull docker://nextgenusfs/funannotate&lt;br /&gt;
&lt;br /&gt;
Be slightly careful about versions. The command may not actually pull down the latest; see below, along with how to do this on Rocky Linux 8.&lt;br /&gt;
My download command (which pulled down an old version): &lt;br /&gt;
 singularity pull docker://broadinstitute/gatk  (did not pull the latest - check versions)&lt;br /&gt;
&lt;br /&gt;
What did pull down the correct version (Red Hat 6.9, running on CentOS 7):&lt;br /&gt;
&lt;br /&gt;
 singularity pull docker://broadinstitute/gatk:latest&lt;br /&gt;
&lt;br /&gt;
Now trying on Rocky Linux 8. This worked:&lt;br /&gt;
&lt;br /&gt;
 singularity pull docker://broadinstitute/gatk&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Please, do not run this on the head node, this must be run through the qsub system. &lt;br /&gt;
&lt;br /&gt;
Example of how to run via qsub:&lt;br /&gt;
 qsub -l singularity -b y singularity run /full_path_to/ubuntu.sif /full_path_to/test_script.sh&lt;br /&gt;
 Replace ubuntu.sif with whatever image you are trying to run.&lt;br /&gt;
&lt;br /&gt;
Let&amp;#039;s go through that command in more depth:&lt;br /&gt;
 qsub -l singularity -b y singularity run&lt;br /&gt;
&lt;br /&gt;
This is a special command so that Singularity runs on a server that supports it; you don&amp;#039;t need to alter it, just copy it. &lt;br /&gt;
  /full_path_to/ubuntu.sif &lt;br /&gt;
this is the image you downloaded for the software you are interested in&lt;br /&gt;
&lt;br /&gt;
  /full_path_to/test_script.sh&lt;br /&gt;
 this needs to contain the commands you want to run&lt;br /&gt;
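As a minimal sketch of what test_script.sh might contain (the paths and the tool line are placeholders, not real files on the cluster; replace them with your own full paths):

```shell
#!/bin/bash
# Hypothetical test_script.sh. Full paths are required because the job
# does not start in the directory you submitted from.
set -euo pipefail

# cd to your working directory first, using the full path, e.g.:
# cd /full_path_to/working_dir

echo "job running on $(hostname)"

# Then the actual commands, again with full paths, e.g.:
# /full_path_to/tool --in /full_path_to/input.fa --out /full_path_to/out.txt
```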
&lt;br /&gt;
&lt;br /&gt;
Example 2, running the image with qsub:&lt;br /&gt;
 qsub -pe multi 8 -l singularity -b y singularity run /full_path/funannotate_latest.sif /full_path/fun_singularity.sh&lt;br /&gt;
The shell script must contain the full path of the current working directory, as cd /full_path/   &lt;br /&gt;
 putting a #$ -cwd directive in your shell scripts will not work!&lt;br /&gt;
 cd /full_path/&lt;br /&gt;
 -pe multi 8     this asks for 8 cores, just as normal. &lt;br /&gt;
&lt;br /&gt;
please note that in the shell script:&lt;br /&gt;
 full paths to everything are required!&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Notes for Admin:&lt;br /&gt;
To add another node with singularity on:&lt;br /&gt;
&lt;br /&gt;
 qconf -me &amp;lt;nodename&amp;gt;&lt;br /&gt;
&lt;br /&gt;
On the complex_values line remove NONE if present, and add &amp;quot;singularity=TRUE&amp;quot;&lt;br /&gt;
Followed guide here: https://blogs.univa.com/2019/01/using-univa-grid-engine-with-singularity/&lt;br /&gt;
Singularity is now a requestable resource: use &amp;quot;-l singularity&amp;quot; to make sure you get a node with Singularity installed.&lt;/div&gt;</summary>
		<author><name>PeterThorpe</name></author>	</entry>

	<entry>
		<id>http://stab.st-andrews.ac.uk/wiki/index.php?title=Conda&amp;diff=3509</id>
		<title>Conda</title>
		<link rel="alternate" type="text/html" href="http://stab.st-andrews.ac.uk/wiki/index.php?title=Conda&amp;diff=3509"/>
				<updated>2020-07-28T08:56:26Z</updated>
		
		<summary type="html">&lt;p&gt;PeterThorpe: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=  conda =&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;This is the future&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Conda is installed on Marvin on a per-user basis, so you have total control over it.  &lt;br /&gt;
Log in and from the command line type (only ever do this once or you will get error messages):&lt;br /&gt;
 &lt;br /&gt;
 install-bioconda&lt;br /&gt;
 &lt;br /&gt;
Either log out  and back in, or type:&lt;br /&gt;
 source ~/.bashrc&lt;br /&gt;
&lt;br /&gt;
This will make a user-specific version of conda available to you.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;To find the package you want, search here:&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
 https://bioconda.github.io/conda-recipe_index.html&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;or do a Google search&amp;#039;&amp;#039;&amp;#039; (as there are multiple channels!) for:&lt;br /&gt;
 conda name_of_tool_you_want&lt;br /&gt;
 &lt;br /&gt;
We strongly advise using environments. You can do this in many ways; please see the link for more details (here you can specify exact versions etc.):&lt;br /&gt;
 https://conda.io/docs/user-guide/tasks/manage-environments.html &lt;br /&gt;
&lt;br /&gt;
The easiest usage would be, e.g.:  &lt;br /&gt;
 &lt;br /&gt;
 conda create -n NAME_OF_ENV PACKAGE_TO_INSTALL&lt;br /&gt;
&lt;br /&gt;
This is an example for a tool called roary:&lt;br /&gt;
 &lt;br /&gt;
 conda create -n roaryENV roary&lt;br /&gt;
&lt;br /&gt;
once it has installed, you can activate the environment by typing:&lt;br /&gt;
 &lt;br /&gt;
 conda activate roaryENV&lt;br /&gt;
&lt;br /&gt;
As a lot of tools have dependencies, all the dependencies should be installed during this process. It is a good idea to keep them locked up in their own env, so they don&amp;#039;t interfere with the dependencies and specific versions required for other things you have installed. &lt;br /&gt;
&lt;br /&gt;
To get the latest version&lt;br /&gt;
 &lt;br /&gt;
 conda update roary&lt;br /&gt;
 &lt;br /&gt;
 &lt;br /&gt;
You are now ready to use this package.&lt;br /&gt;
 &lt;br /&gt;
 conda deactivate        to leave this environment.&lt;br /&gt;
 &lt;br /&gt;
If you want to install a specific version of a tool (use the equals sign and the version you want):&lt;br /&gt;
 &lt;br /&gt;
 conda create -n samtools1.3 samtools=1.3&lt;br /&gt;
&lt;br /&gt;
or &lt;br /&gt;
 &lt;br /&gt;
 conda create -n python27 python=2.7&lt;br /&gt;
&lt;br /&gt;
Once you have this version of python installed, you can easily use (after you have activated the python env):&lt;br /&gt;
 pip install biopython   (or whatever you require)&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;conda and perl&amp;#039;&amp;#039;&amp;#039; &lt;br /&gt;
Firstly, perl is a pain to look after. The way the cluster has been historically set up, all perl versions and modules are different on all the nodes. The new cluster will not be like this. Therefore, we cannot install modules for every user on all the nodes. What we recommend is that you have your own perl version using conda:&lt;br /&gt;
 conda create -n perlEnv perl&lt;br /&gt;
&lt;br /&gt;
 conda activate perlEnv&lt;br /&gt;
&lt;br /&gt;
then install the modules you want&lt;br /&gt;
&lt;br /&gt;
 http://www.cpan.org/modules/INSTALL.html&lt;br /&gt;
&lt;br /&gt;
 cpan App::cpanminus&lt;br /&gt;
 &lt;br /&gt;
 cpanm Term::ReadKey&lt;br /&gt;
&lt;br /&gt;
Bioperl is difficult to install. Fact. But someone has put this in conda. So let&amp;#039;s use that:&lt;br /&gt;
&lt;br /&gt;
 conda create -n bioperlEnv perl-bioperl&lt;br /&gt;
 &lt;br /&gt;
 &lt;br /&gt;
To list all the environments you have created (you will forget the envs after a while, so name them well!!):&lt;br /&gt;
 &lt;br /&gt;
 conda info --envs&lt;br /&gt;
&lt;br /&gt;
To list all the dependencies you have within an env&lt;br /&gt;
 &lt;br /&gt;
 conda list -n &amp;#039;&amp;#039;envname&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
For example, trinity comes with jellyfish, which can be difficult to install. Some of these packages are therefore a treasure chest of useful tools, and can save many hours of pain in the installation process.  &lt;br /&gt;
&lt;br /&gt;
To install multiple tools in one env:&lt;br /&gt;
 conda create -n python36_bioperl python=3.6 perl-bioperl mummer scipy numpy biopython matplotlib&lt;br /&gt;
&lt;br /&gt;
You can activate multiple conda envs; they take precedence in reverse order of activation, as the latest activated comes first in your PATH: &lt;br /&gt;
 conda activate trinityenv&lt;br /&gt;
 conda activate python27&lt;br /&gt;
 conda activate python36&lt;br /&gt;
In this example, a plain python command will use the python3 version, as this was the last env to be activated; if you want python 2 you have to specify python2. The perl will be the one in the big python36 env created above, but jellyfish will come from the trinity env. Make sense? &lt;br /&gt;
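The precedence rule can be illustrated with plain directories standing in for conda envs (envA, envB and mytool are made-up names, not real packages):

```shell
# Whichever directory is prepended to PATH last wins any name clash,
# just like the most recently activated conda env.
demo=$(mktemp -d)
mkdir -p "$demo/envA/bin" "$demo/envB/bin"
printf '#!/bin/sh\necho A\n' > "$demo/envA/bin/mytool"
printf '#!/bin/sh\necho B\n' > "$demo/envB/bin/mytool"
chmod +x "$demo/envA/bin/mytool" "$demo/envB/bin/mytool"

PATH="$demo/envA/bin:$PATH"   # "activate" envA
PATH="$demo/envB/bin:$PATH"   # "activate" envB afterwards
mytool                        # prints B: the env activated last comes first in PATH
```

The same logic applies to conda: each activate prepends its env's bin directory to PATH, so the most recent activation shadows earlier ones for any shared command name.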
&lt;br /&gt;
ENJOY!!!&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
installing packages: https://bioconda.github.io/conda-recipe_index.html&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;conda and samtools problems&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
Basically, sometimes this gets installed from the wrong channel. This may happen with other tools, but we haven&amp;#039;t come across it yet. &lt;br /&gt;
&lt;br /&gt;
The problem will look like this if you try to use samtools:&lt;br /&gt;
 samtools: error while loading shared libraries: libcrypto.so.1.0.0: cannot open shared object file: No such file or directory&lt;br /&gt;
&lt;br /&gt;
The solution:&lt;br /&gt;
 install from the bioconda channel: -c bioconda &lt;br /&gt;
 conda install -c bioconda samtools&lt;br /&gt;
&lt;br /&gt;
Specifically, the problem was found with unicycler. Remove the old failed unicycler install:&lt;br /&gt;
 conda remove --name unicyclerENV --all &lt;br /&gt;
&lt;br /&gt;
Remake the env from the bioconda channel:&lt;br /&gt;
 conda create -n unicyclerENV -c bioconda unicycler&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To update the base conda, if required:&lt;br /&gt;
&lt;br /&gt;
 conda update -n base conda&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;conda hanging on solving environment&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
Can you try this please, then try reinstalling the beastie package.&lt;br /&gt;
 &lt;br /&gt;
 conda config --remove channels conda-forge&lt;br /&gt;
 conda config --add channels conda-forge&lt;br /&gt;
&lt;br /&gt;
If this still doesn&amp;#039;t work, try:&lt;br /&gt;
 conda config --set channel_priority strict&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For more samtools problems, try:&lt;br /&gt;
https://github.com/bioconda/bioconda-recipes/issues/12100&lt;br /&gt;
samtools: error while loading shared libraries: libcrypto.so.1.0.0: cannot open shared object file: No such file or directory&lt;br /&gt;
 conda install -c bioconda samtools openssl=1.0&lt;/div&gt;</summary>
		<author><name>PeterThorpe</name></author>	</entry>

	<entry>
		<id>http://stab.st-andrews.ac.uk/wiki/index.php?title=Firewall_and_iptables&amp;diff=3508</id>
		<title>Firewall and iptables</title>
		<link rel="alternate" type="text/html" href="http://stab.st-andrews.ac.uk/wiki/index.php?title=Firewall_and_iptables&amp;diff=3508"/>
				<updated>2020-07-22T18:21:05Z</updated>
		
		<summary type="html">&lt;p&gt;PeterThorpe: Created page with &amp;quot;firstly the iptables was accidentally wiped when trying to mount marvin to kennedy. SGE failed and so did ldap authentication from the nodes on new users.    solution for ldap...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Firstly, the iptables rules were accidentally wiped when trying to mount marvin on kennedy. SGE failed, and so did LDAP authentication from the nodes for new users. &lt;br /&gt;
&lt;br /&gt;
 Solution for LDAP&lt;br /&gt;
Fixed. LDAP is not ldap; it is smbldap, which listens on port 1544. Open up port 1544 (see below on how to open up ports).&lt;br /&gt;
&lt;br /&gt;
Restart LDAP:&lt;br /&gt;
 service slapd restart&lt;br /&gt;
&lt;br /&gt;
How to turn the firewall on and off and save to iptables: &lt;br /&gt;
https://www.cyberciti.biz/faq/turn-on-turn-off-firewall-in-linux/&lt;br /&gt;
&lt;br /&gt;
 /etc/init.d/iptables save&lt;br /&gt;
 /etc/init.d/iptables stop&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
SGE stopped working: &lt;br /&gt;
 Got it working; it was on port 6444&lt;br /&gt;
&lt;br /&gt;
To open firewall ports, log in as root, but ssh as root with -X:&lt;br /&gt;
 system-config-firewall &lt;br /&gt;
&lt;br /&gt;
&amp;quot;Other ports&amp;quot; on the left - open the ones you want open.&lt;/div&gt;</summary>
		<author><name>PeterThorpe</name></author>	</entry>

	<entry>
		<id>http://stab.st-andrews.ac.uk/wiki/index.php?title=Main_Page&amp;diff=3507</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="http://stab.st-andrews.ac.uk/wiki/index.php?title=Main_Page&amp;diff=3507"/>
				<updated>2020-07-22T18:14:33Z</updated>
		
		<summary type="html">&lt;p&gt;PeterThorpe: /* Cluster Administration */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;KENNEDY HPC for Bioinf community &amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
* [[Kennedy manual]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Usage of Cluster=&lt;br /&gt;
* [[Cluster Manual]]&lt;br /&gt;
* [[Kennedy manual]]&lt;br /&gt;
* [[Why a Queue Manager?]]&lt;br /&gt;
* [[Available Software]]&lt;br /&gt;
* [[how to use the cluster training course]]&lt;br /&gt;
* [[windows network connect]]&lt;br /&gt;
&lt;br /&gt;
= Documented Programs =&lt;br /&gt;
&lt;br /&gt;
The following can be seen as extra notes on these programs&amp;#039; usage on the marvin cluster, with an emphasis on example use-cases. Most, if not all, will have their own websites, with more detailed manuals and further information.&lt;br /&gt;
&lt;br /&gt;
{|style=&amp;quot;width:85%&amp;quot;&lt;br /&gt;
|* [[abacas]]&lt;br /&gt;
|* [[albacore]]&lt;br /&gt;
|* [[ariba]]&lt;br /&gt;
|* [[aspera]]&lt;br /&gt;
|* [[assembly-stats]]&lt;br /&gt;
|* [[augustus]]&lt;br /&gt;
|-&lt;br /&gt;
|* [[BamQC]]&lt;br /&gt;
|* [[bamtools]]&lt;br /&gt;
|* [[banjo]]&lt;br /&gt;
|* [[bcftools]]&lt;br /&gt;
|* [[bedtools]]&lt;br /&gt;
|* [[bgenie]]&lt;br /&gt;
|-&lt;br /&gt;
|* [[BLAST]]&lt;br /&gt;
|* [[Blat]]&lt;br /&gt;
|* [[blast2go: b2g4pipe]]&lt;br /&gt;
|* [[bowtie]]&lt;br /&gt;
|* [[bowtie2]]&lt;br /&gt;
|* [[bwa]]&lt;br /&gt;
|-&lt;br /&gt;
|* [[BUSCO]]&lt;br /&gt;
|* [[CAFE]]&lt;br /&gt;
|* [[canu]]&lt;br /&gt;
|* [[cd-hit]]&lt;br /&gt;
|* [[cegma]]&lt;br /&gt;
|* [[clustal]]&lt;br /&gt;
|-&lt;br /&gt;
|* [[cramtools]]&lt;br /&gt;
|* [[conda]]&lt;br /&gt;
|* [[deeptools]]&lt;br /&gt;
|* [[detonate]]&lt;br /&gt;
|* [[diamond]]&lt;br /&gt;
|* [[ea-utils]]&lt;br /&gt;
|* [[ensembl]]&lt;br /&gt;
|-&lt;br /&gt;
|* [[ETE]]&lt;br /&gt;
|* [[FASTQC and MultiQC]]&lt;br /&gt;
|* [[Archaeopteryx and Forester]]&lt;br /&gt;
|* [[GapFiller]]&lt;br /&gt;
|* [[GenomeTools]]&lt;br /&gt;
|* [[gubbins]]&lt;br /&gt;
|-&lt;br /&gt;
|* [[JBrowse]]&lt;br /&gt;
|* [[kallisto]]&lt;br /&gt;
|* [[kentUtils]]&lt;br /&gt;
|* [[last]]&lt;br /&gt;
|* [[lastz]]&lt;br /&gt;
|* [[macs2]]&lt;br /&gt;
|-&lt;br /&gt;
|* [[Mash]]&lt;br /&gt;
|* [[mega]]&lt;br /&gt;
|* [[meryl]]&lt;br /&gt;
|* [[MUMmer]]&lt;br /&gt;
|* [[NanoSim]]&lt;br /&gt;
|* [[nseq]]&lt;br /&gt;
|-&lt;br /&gt;
|* [[OrthoFinder]]&lt;br /&gt;
|* [[PASA]]&lt;br /&gt;
|* [[perl]]&lt;br /&gt;
|* [[PGAP]]&lt;br /&gt;
|* [[picard-tools]]&lt;br /&gt;
|* [[poRe]]&lt;br /&gt;
|-&lt;br /&gt;
|* [[poretools]]&lt;br /&gt;
|* [[prokka]]&lt;br /&gt;
|* [[pyrad]]&lt;br /&gt;
|* [[python]]&lt;br /&gt;
|* [[qualimap]]&lt;br /&gt;
|* [[quast]]&lt;br /&gt;
|-&lt;br /&gt;
|* [[qiime2]]&lt;br /&gt;
|* [[R]]&lt;br /&gt;
|* [[RAxML]]&lt;br /&gt;
|* [[Repeatmasker]]&lt;br /&gt;
|* [[Repeatmodeler]]&lt;br /&gt;
|* [[rnammer]]&lt;br /&gt;
|-&lt;br /&gt;
|* [[roary]]&lt;br /&gt;
|* [[RSeQC]]&lt;br /&gt;
|* [[samtools]]&lt;br /&gt;
|* [[Satsuma]]&lt;br /&gt;
|* [[sickle]]&lt;br /&gt;
|* [[SPAdes]]&lt;br /&gt;
|-&lt;br /&gt;
|* [[squid]]&lt;br /&gt;
|* [[sra-tools]]&lt;br /&gt;
|* [[srst2]]&lt;br /&gt;
|* [[SSPACE]]&lt;br /&gt;
|* [[stacks]]&lt;br /&gt;
|* [[Thor]]&lt;br /&gt;
|-&lt;br /&gt;
|* [[Tophat]]&lt;br /&gt;
|* [[trimmomatic]]&lt;br /&gt;
|* [[Trinity]]&lt;br /&gt;
|* [[t-coffee]]&lt;br /&gt;
|* [[Unicycler]]&lt;br /&gt;
|* [[velvet]]&lt;br /&gt;
|-&lt;br /&gt;
|* [[ViennaRNA]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
= Queue Manager Tips =&lt;br /&gt;
A cluster is a shared resource with different users running different types of analyses. Nearly all clusters use a piece of software called a queue manager to fairly share out the resource. The queue manager on marvin is called Grid Engine, and it has several commands available, all beginning with &amp;#039;&amp;#039;&amp;#039;q&amp;#039;&amp;#039;&amp;#039; and with &amp;#039;&amp;#039;&amp;#039;qsub&amp;#039;&amp;#039;&amp;#039; being the most commonly used as it submits a command via a jobscript to be processed. Here are some tips:&lt;br /&gt;
* [[Queue Manager Tips]]&lt;br /&gt;
* [[Queue Manager : shell script command]]&lt;br /&gt;
* [[Queue Manager emailing when jobs run]]&lt;br /&gt;
* [[General Command-line Tips]]&lt;br /&gt;
* [[DRMAA for further Gridengine automation]]&lt;br /&gt;
&lt;br /&gt;
= Data Examples =&lt;br /&gt;
* [[Two Eel Scaffolds]]&lt;br /&gt;
&lt;br /&gt;
= Procedures =&lt;br /&gt;
(a short sequence of tasks with a certain short-term goal; often a simple script)&lt;br /&gt;
* [[Calculating coverage]]&lt;br /&gt;
* [[MinION Coverage sensitivity analysis]]&lt;br /&gt;
&lt;br /&gt;
= Navigating genomic data websites=&lt;br /&gt;
* [[Patric]]&lt;br /&gt;
* [[NCBI]]&lt;br /&gt;
* [[IGSR/1000 Genomes]]&lt;br /&gt;
&lt;br /&gt;
= Explanations=&lt;br /&gt;
* [[ITUcourse]]&lt;br /&gt;
* [[VCF]]&lt;br /&gt;
* [[Maximum Likelihood]]&lt;br /&gt;
* [[SNP Analysis and phylogenetics]]&lt;br /&gt;
* [[Normalization]]&lt;br /&gt;
&lt;br /&gt;
= Pipelines =&lt;br /&gt;
(Workflows with specific end-goals)&lt;br /&gt;
* [[Trinity_Protocol]]&lt;br /&gt;
* [[STAR BEAST]]&lt;br /&gt;
* [[callSNPs.py]]&lt;br /&gt;
* [[pairwiseCallSNPs]]&lt;br /&gt;
* [[mapping.py]]&lt;br /&gt;
* [[Edgen RNAseq]]&lt;br /&gt;
* [[Miseq Prokaryote FASTQ analysis]]&lt;br /&gt;
* [[snpcallphylo]]&lt;br /&gt;
* [[Bottlenose dolphin population genomic analysis]]&lt;br /&gt;
* [[ChIP-Seq Top2 in Yeast]]&lt;br /&gt;
* [[ChIP-Seq Top2 in Yeast 12.09.2017]]&lt;br /&gt;
* [[ChIP-Seq Top2 in Yeast 07.11.2017]]&lt;br /&gt;
* [[Bisulfite Sequencing]]&lt;br /&gt;
* [[microRNA and Salmo Salar]]&lt;br /&gt;
&lt;br /&gt;
=Protocols=&lt;br /&gt;
(Extensive workflows with several possible end goals)&lt;br /&gt;
* [[Synthetic Long reads]]&lt;br /&gt;
* [[MinION (Oxford Nanopore)]]&lt;br /&gt;
* [[MinKNOW folders and log files]]&lt;br /&gt;
* [[Research Data Management]]&lt;br /&gt;
* [[MicroRNAs]]&lt;br /&gt;
&lt;br /&gt;
= Tech Reviews =&lt;br /&gt;
* [[SWATH-MS Data Analysis]]&lt;br /&gt;
&lt;br /&gt;
= Cluster Administration =&lt;br /&gt;
* [[StABDMIN]]&lt;br /&gt;
* [[Hardware Issues]]&lt;br /&gt;
* [[marvin and IPMI (remote hardware control)]]&lt;br /&gt;
* [[restart a node]]&lt;br /&gt;
* [[mounting drives]]&lt;br /&gt;
* [[Admin Tips]]&lt;br /&gt;
* [[RedHat]]&lt;br /&gt;
* [[Globus_gridftp]]&lt;br /&gt;
* [[Galaxy Setup]]&lt;br /&gt;
* [[Son of Gridengine]]&lt;br /&gt;
* [[Blas Libraries]]&lt;br /&gt;
* [[CMake]]&lt;br /&gt;
* [[conda bioconda]]&lt;br /&gt;
* [[Users and Groups]]&lt;br /&gt;
* [[Installing software on marvin]]&lt;br /&gt;
* [[emailing]]&lt;br /&gt;
* [[biotime machine]]&lt;br /&gt;
* [[SCAN-pc laptop]]&lt;br /&gt;
* [[node1 issues]]&lt;br /&gt;
* [[6TB storage expansion]]&lt;br /&gt;
* [[PIs storage sacrifice]]&lt;br /&gt;
* [[SAN relocation task]]&lt;br /&gt;
* [[Home directories max-out incident 28.11.2016]]&lt;br /&gt;
* [[Frontend Restart]]&lt;br /&gt;
* [[environment-modules]]&lt;br /&gt;
* [[H: drive on cluster]]&lt;br /&gt;
* [[Incident: Can&amp;#039;t connect to BerkeleyDB]]&lt;br /&gt;
* [[Bioinformatics Wordpress Site]]&lt;br /&gt;
* [[Backups]]&lt;br /&gt;
* [[users disk usage]]&lt;br /&gt;
* [[Updating BLAST databases]]&lt;br /&gt;
* [[Python DRMAA]]&lt;br /&gt;
* [[message of the day]]&lt;br /&gt;
* [[SAN disconnect incident 10.01.2017]]&lt;br /&gt;
* [[Memory repair glitch 16.02.2017]]&lt;br /&gt;
* [[node9 network failure incident 16-20.03.2017]]&lt;br /&gt;
* [[Incorrect rebooting of marvin 19.09.2017]]&lt;br /&gt;
* [[ansible]]&lt;br /&gt;
* [[webstie and word press]]&lt;br /&gt;
* [[allow user access to other peoples data]]&lt;br /&gt;
* [[RAM and RAM slots]]&lt;br /&gt;
* [[ldap is not ldap]]&lt;br /&gt;
* [[reset a password]]&lt;br /&gt;
* [[sending emails from command line examples]]&lt;br /&gt;
* [[disk management after shelf disk failure]]&lt;br /&gt;
* [[firewall and iptables]]&lt;br /&gt;
&lt;br /&gt;
= Courses =&lt;br /&gt;
&lt;br /&gt;
==I2U4BGA==&lt;br /&gt;
* [[Original schedule]]&lt;br /&gt;
* [[New schedule]]&lt;br /&gt;
* [[Actual schedule]]&lt;br /&gt;
* [[Course itself]]&lt;br /&gt;
* [[Biolinux Source course]]&lt;br /&gt;
* [[Directory Organization Exercise]]&lt;br /&gt;
* [[Glossary]]&lt;br /&gt;
* [[Key Bindings]]&lt;br /&gt;
* [[one-liners]]&lt;br /&gt;
* [[Cheatsheets]]&lt;br /&gt;
* [[Links]]&lt;br /&gt;
* [[pandoc modified manual]]&lt;br /&gt;
* [[Command Line Exercises]]&lt;br /&gt;
&lt;br /&gt;
= hdi2u =&lt;br /&gt;
&lt;br /&gt;
The half-day linux course held on 20th April. Modified version of I2U4BGA.&lt;br /&gt;
&lt;br /&gt;
* [[hdi2u_intro]]&lt;br /&gt;
* [[hdi2u_commandbased_exercises]]&lt;br /&gt;
* [[hdi2u_dirorg_exercise]]&lt;br /&gt;
* [[hdi2u_rendertotsv_exercise]]&lt;br /&gt;
&lt;br /&gt;
= RNAseq for DGE =&lt;br /&gt;
* [[Theoretical background]]&lt;br /&gt;
* [[Quality Control and Preprocessing]]&lt;br /&gt;
* [[Mapping to Reference]]&lt;br /&gt;
* [[Mapping Quality Exercise]]&lt;br /&gt;
* [[Key Aspects of using R]]&lt;br /&gt;
* [[Estimating Gene Count Exercise]]&lt;br /&gt;
* [[Differential Expression Exercise]]&lt;br /&gt;
* [[Functional Analysis Exercise]]&lt;br /&gt;
&lt;br /&gt;
= Introduction to Unix 2017 =&lt;br /&gt;
* [[Introduction_to_Unix_2017]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Templates==&lt;br /&gt;
* [[edgenl2g]]&lt;/div&gt;</summary>
		<author><name>PeterThorpe</name></author>	</entry>

	<entry>
		<id>http://stab.st-andrews.ac.uk/wiki/index.php?title=Users_and_Groups&amp;diff=3506</id>
		<title>Users and Groups</title>
		<link rel="alternate" type="text/html" href="http://stab.st-andrews.ac.uk/wiki/index.php?title=Users_and_Groups&amp;diff=3506"/>
				<updated>2020-07-22T09:32:22Z</updated>
		
		<summary type="html">&lt;p&gt;PeterThorpe: /* Usage: How to add a new user */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction=&lt;br /&gt;
&lt;br /&gt;
Some, though not all, of the tips here are for setting up users and groups.&lt;br /&gt;
&lt;br /&gt;
The tool of choice is smbldap.&lt;br /&gt;
&lt;br /&gt;
= Usage: How to add a new user =&lt;br /&gt;
==Users==&lt;br /&gt;
&lt;br /&gt;
It may now be required to disable the firewall when creating new accounts. Make sure you turn it back on afterwards:&lt;br /&gt;
https://www.cyberciti.biz/faq/turn-on-turn-off-firewall-in-linux/&lt;br /&gt;
&lt;br /&gt;
 /etc/init.d/iptables save&lt;br /&gt;
 /etc/init.d/iptables stop&lt;br /&gt;
&lt;br /&gt;
* To create a new user(s)&lt;br /&gt;
Root has a script in bin/creasu.sh, so as root:&lt;br /&gt;
 sh bin/creasu.sh &amp;lt;user&amp;gt; &amp;lt;user1&amp;gt; &amp;lt;user2&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If this fails, go to the admin page which talks about LDAP.&lt;br /&gt;
Manually running the commands from the script worked for me when this failed:&lt;br /&gt;
&lt;br /&gt;
 # (only if needed - perl errors) service slapd restart&lt;br /&gt;
&lt;br /&gt;
 NU=test06&lt;br /&gt;
 smbldap-groupadd -a $NU&lt;br /&gt;
 smbldap-useradd -g $NU -a $NU&lt;br /&gt;
 smbldap-passwd $NU&lt;br /&gt;
 bash_files=/etc/skel&lt;br /&gt;
 basepath=/storage/home/users&lt;br /&gt;
 path=$basepath/$NU&lt;br /&gt;
 echo $path&lt;br /&gt;
 cd $basepath&lt;br /&gt;
 cp -r $bash_files/.{m,n,b,g}* $NU&lt;br /&gt;
 chown -R $NU:$NU $path&lt;br /&gt;
 smbldap-groupadd -a $NU&lt;br /&gt;
 chown -R $NU:$NU $path&lt;br /&gt;
 chmod 0701 $NU&lt;br /&gt;
 chcon &amp;#039;unconfined_u:object_r:user_home_dir_t:s0&amp;#039; $path&lt;br /&gt;
&lt;br /&gt;
This will create the groups, accounts, home folder and all relevant files in the new home folder. &lt;br /&gt;
Then you need to set up passwords (a password prompt will appear) with:&lt;br /&gt;
 smbldap-passwd &amp;lt;user&amp;gt;&lt;br /&gt;
for each of the users.&lt;br /&gt;
&lt;br /&gt;
Then setup an ssh key for logging into the nodes by doing the following: &lt;br /&gt;
&lt;br /&gt;
As the root user, log in as each user via&lt;br /&gt;
 su - &amp;lt;newuserid&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and execute&lt;br /&gt;
 ssh-keygen&lt;br /&gt;
&lt;br /&gt;
and just accept all the suggestions, keep accepting them as they are ...&lt;br /&gt;
.ssh/id_rsa and .ssh/id_rsa.pub then get created.&lt;br /&gt;
&lt;br /&gt;
then&lt;br /&gt;
 cp .ssh/id_rsa.pub .ssh/authorized_keys&lt;br /&gt;
&lt;br /&gt;
and&lt;br /&gt;
 chmod 600 .ssh/authorized_keys&lt;br /&gt;
&lt;br /&gt;
Then ssh node1 should log in to node1 without a password (no need to test the other nodes).&lt;br /&gt;
&lt;br /&gt;
Then tell the user to change their password by doing:&lt;br /&gt;
 passwd&lt;br /&gt;
&lt;br /&gt;
==Groups==&lt;br /&gt;
* To create a new group  (we don&amp;#039;t have groups YET!)&lt;br /&gt;
 smbldap-groupadd -a &amp;lt;newgrpname&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* To add users to a certain group (note that this seems to take some time to propagate, as well as only working on fresh logins)&lt;br /&gt;
 smbldap-groupmod -m &amp;lt;list,of,users&amp;gt; &amp;lt;targetgroup&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As root, turn the firewall back on:&lt;br /&gt;
&lt;br /&gt;
 /etc/init.d/iptables start&lt;br /&gt;
&lt;br /&gt;
= change a password =&lt;br /&gt;
&lt;br /&gt;
When a user forgets their password and asks for a new one, as root:&lt;br /&gt;
 smbldap-passwd &amp;lt;user&amp;gt;&lt;/div&gt;</summary>
		<author><name>PeterThorpe</name></author>	</entry>

	<entry>
		<id>http://stab.st-andrews.ac.uk/wiki/index.php?title=Disk_management_after_shelf_disk_failure&amp;diff=3505</id>
		<title>Disk management after shelf disk failure</title>
		<link rel="alternate" type="text/html" href="http://stab.st-andrews.ac.uk/wiki/index.php?title=Disk_management_after_shelf_disk_failure&amp;diff=3505"/>
				<updated>2020-06-19T11:15:11Z</updated>
		
		<summary type="html">&lt;p&gt;PeterThorpe: Created page with &amp;quot;  The software for your RAID controller was LSI (now Broadcom) MegaCli, I&amp;#039;ve not been able to find any evidence that there&amp;#039;s a Redhat package but I have found a download. The...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&lt;br /&gt;
The software for your RAID controller was LSI (now Broadcom) MegaCli. I&amp;#039;ve not been able to find any evidence that there&amp;#039;s a Redhat package, but I have found a download. The URL below should get you a zip file that contains an rpm and installers for other operating systems.&lt;br /&gt;
&lt;br /&gt;
https://docs.broadcom.com/docs/12351587&lt;br /&gt;
&lt;br /&gt;
If you can install the RPM, you should be able to display the current status of the array to confirm it&amp;#039;s safe to remove the failed or failing disk, and if necessary configure the new disks. The following commands should show the status of any RAID volumes and physical disks; if you can pipe the output into a couple of files and send it over, I can advise on what needs to be done.&lt;br /&gt;
&lt;br /&gt;
 /opt/MegaRAID/MegaCli/MegaCli64 -LDInfo -Lall -aall&lt;br /&gt;
 /opt/MegaRAID/MegaCli/MegaCli64 -PDList -aall&lt;br /&gt;
&lt;br /&gt;
The first command should produce something like this; we&amp;#039;ll be able to tell if the array is degraded or not.&lt;br /&gt;
&lt;br /&gt;
&amp;gt; Adapter 0 -- Virtual Drive Information:&lt;br /&gt;
&amp;gt; Virtual Drive: 0 (Target Id: 0)&lt;br /&gt;
&amp;gt; Name                :&lt;br /&gt;
&amp;gt; RAID Level          : Primary-1, Secondary-3, RAID Level Qualifier-0&lt;br /&gt;
&amp;gt; Size                : 203.25 GB&lt;br /&gt;
&amp;gt; Sector Size         : 512&lt;br /&gt;
&amp;gt; Mirror Data         : 203.25 GB&lt;br /&gt;
&amp;gt; State               : Optimal&lt;br /&gt;
&amp;gt; Strip Size          : 64 KB&lt;br /&gt;
&amp;gt; Number Of Drives per span:2&lt;br /&gt;
&amp;gt; Span Depth          : 3&lt;br /&gt;
&amp;gt; Default Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU&lt;br /&gt;
&amp;gt; Current Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU&lt;br /&gt;
&amp;gt; Default Access Policy: Read/Write&lt;br /&gt;
&amp;gt; Current Access Policy: Read/Write&lt;br /&gt;
&amp;gt; Disk Cache Policy   : Disk&amp;#039;s Default&lt;br /&gt;
&amp;gt; Encryption Type     : None&lt;br /&gt;
&amp;gt; Is VD Cached: No&lt;br /&gt;
&lt;br /&gt;
The second command will produce a lot of output for each disk. The &amp;#039;Firmware State&amp;#039; line should show how each disk is configured; most should be &amp;#039;Online&amp;#039; or &amp;#039;Hotspare&amp;#039;. Once the new disks are added we&amp;#039;ll need to re-run this; they&amp;#039;ll probably be listed as &amp;#039;Unconfigured (Good)&amp;#039;. With some information about the disk position, the command to configure the disks will be something like the following. I&amp;#039;ll confirm when we know what the variables are.&lt;br /&gt;
&lt;br /&gt;
/opt/MegaRAID/MegaCli/MegaCli64 -PDHSP -PhysDrv\[$ENCLOSURE:$SLOT\] -a$ARRAY&lt;br /&gt;
&lt;br /&gt;
Then if the array is degraded it should automatically start to rebuild using one or more of the new disks.&lt;br /&gt;
&lt;br /&gt;
If it&amp;#039;s still rebuilding this should show the progress:&lt;br /&gt;
&lt;br /&gt;
/opt/MegaRAID/MegaCli/MegaCli64 -PDRbld -ShowProg -PhysDrv\[8:13\] -a0&lt;br /&gt;
&lt;br /&gt;
I added one additional disk to each tray; they should be in Enclosure 8 Slot 14 and Enclosure 9 Slot 13. Confirm that with the output from PDList; if that&amp;#039;s correct, this should mark them as hotspares.&lt;br /&gt;
&lt;br /&gt;
 /opt/MegaRAID/MegaCli/MegaCli64 -PDHSP -Set -PhysDrv\[8:14\] -a0&lt;br /&gt;
 /opt/MegaRAID/MegaCli/MegaCli64 -PDHSP -Set -PhysDrv\[9:13\] -a0&lt;br /&gt;
&lt;br /&gt;
Confirm by running PDList again, their Firmware State should have updated from Unconfigured Good to Configured Hotspare or something similar.&lt;br /&gt;
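The PDList check above can be scripted. A minimal sketch (the helper name is invented, not part of MegaCli) that pairs each slot number with its firmware state so the change is easy to confirm at a glance:&lt;br /&gt;

```shell
# Invented helper, not part of MegaCli: pair each slot number with its
# firmware state, reading PDList output from stdin.
firmware_states() {
    grep -E 'Slot Number|Firmware state' | paste - -
}
```

Pipe the controller output through it, e.g. /opt/MegaRAID/MegaCli/MegaCli64 -PDList -aALL | firmware_states&lt;br /&gt;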
&lt;br /&gt;
&lt;br /&gt;
From the verbose MegaCli -h … all the options in some kind of order&lt;br /&gt;
&amp;gt; MegaCli -PDHSP {-Set [-Dedicated [-ArrayN|-Array0,1,2...]] [-EnclAffinity] [-nonRevertible]}&lt;br /&gt;
&amp;gt;     |-Rmv -PhysDrv[E0:S0,E1:S1,...] -aN|-a0,1,2|-aALL&lt;/div&gt;</summary>
		<author><name>PeterThorpe</name></author>	</entry>

	<entry>
		<id>http://stab.st-andrews.ac.uk/wiki/index.php?title=Main_Page&amp;diff=3504</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="http://stab.st-andrews.ac.uk/wiki/index.php?title=Main_Page&amp;diff=3504"/>
				<updated>2020-06-19T11:12:28Z</updated>
		
		<summary type="html">&lt;p&gt;PeterThorpe: /* Cluster Administration */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;KENNEDY HPC for Bioinf community &amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
* [[Kennedy manual]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Usage of Cluster=&lt;br /&gt;
* [[Cluster Manual]]&lt;br /&gt;
* [[Kennedy manual]]&lt;br /&gt;
* [[Why a Queue Manager?]]&lt;br /&gt;
* [[Available Software]]&lt;br /&gt;
* [[how to use the cluster training course]]&lt;br /&gt;
* [[windows network connect]]&lt;br /&gt;
&lt;br /&gt;
= Documented Programs =&lt;br /&gt;
&lt;br /&gt;
The following can be seen as extra notes on the usage of these programs on the marvin cluster, with an emphasis on example use-cases. Most, if not all, have their own websites with more detailed manuals and further information.&lt;br /&gt;
&lt;br /&gt;
{|style=&amp;quot;width:85%&amp;quot;&lt;br /&gt;
|* [[abacas]]&lt;br /&gt;
|* [[albacore]]&lt;br /&gt;
|* [[ariba]]&lt;br /&gt;
|* [[aspera]]&lt;br /&gt;
|* [[assembly-stats]]&lt;br /&gt;
|* [[augustus]]&lt;br /&gt;
|-&lt;br /&gt;
|* [[BamQC]]&lt;br /&gt;
|* [[bamtools]]&lt;br /&gt;
|* [[banjo]]&lt;br /&gt;
|* [[bcftools]]&lt;br /&gt;
|* [[bedtools]]&lt;br /&gt;
|* [[bgenie]]&lt;br /&gt;
|-&lt;br /&gt;
|* [[BLAST]]&lt;br /&gt;
|* [[Blat]]&lt;br /&gt;
|* [[blast2go: b2g4pipe]]&lt;br /&gt;
|* [[bowtie]]&lt;br /&gt;
|* [[bowtie2]]&lt;br /&gt;
|* [[bwa]]&lt;br /&gt;
|-&lt;br /&gt;
|* [[BUSCO]]&lt;br /&gt;
|* [[CAFE]]&lt;br /&gt;
|* [[canu]]&lt;br /&gt;
|* [[cd-hit]]&lt;br /&gt;
|* [[cegma]]&lt;br /&gt;
|* [[clustal]]&lt;br /&gt;
|-&lt;br /&gt;
|* [[cramtools]]&lt;br /&gt;
|* [[conda]]&lt;br /&gt;
|* [[deeptools]]&lt;br /&gt;
|* [[detonate]]&lt;br /&gt;
|* [[diamond]]&lt;br /&gt;
|* [[ea-utils]]&lt;br /&gt;
|* [[ensembl]]&lt;br /&gt;
|-&lt;br /&gt;
|* [[ETE]]&lt;br /&gt;
|* [[FASTQC and MultiQC]]&lt;br /&gt;
|* [[Archaeopteryx and Forester]]&lt;br /&gt;
|* [[GapFiller]]&lt;br /&gt;
|* [[GenomeTools]]&lt;br /&gt;
|* [[gubbins]]&lt;br /&gt;
|-&lt;br /&gt;
|* [[JBrowse]]&lt;br /&gt;
|* [[kallisto]]&lt;br /&gt;
|* [[kentUtils]]&lt;br /&gt;
|* [[last]]&lt;br /&gt;
|* [[lastz]]&lt;br /&gt;
|* [[macs2]]&lt;br /&gt;
|-&lt;br /&gt;
|* [[Mash]]&lt;br /&gt;
|* [[mega]]&lt;br /&gt;
|* [[meryl]]&lt;br /&gt;
|* [[MUMmer]]&lt;br /&gt;
|* [[NanoSim]]&lt;br /&gt;
|* [[nseq]]&lt;br /&gt;
|-&lt;br /&gt;
|* [[OrthoFinder]]&lt;br /&gt;
|* [[PASA]]&lt;br /&gt;
|* [[perl]]&lt;br /&gt;
|* [[PGAP]]&lt;br /&gt;
|* [[picard-tools]]&lt;br /&gt;
|* [[poRe]]&lt;br /&gt;
|-&lt;br /&gt;
|* [[poretools]]&lt;br /&gt;
|* [[prokka]]&lt;br /&gt;
|* [[pyrad]]&lt;br /&gt;
|* [[python]]&lt;br /&gt;
|* [[qualimap]]&lt;br /&gt;
|* [[quast]]&lt;br /&gt;
|-&lt;br /&gt;
|* [[qiime2]]&lt;br /&gt;
|* [[R]]&lt;br /&gt;
|* [[RAxML]]&lt;br /&gt;
|* [[Repeatmasker]]&lt;br /&gt;
|* [[Repeatmodeler]]&lt;br /&gt;
|* [[rnammer]]&lt;br /&gt;
|-&lt;br /&gt;
|* [[roary]]&lt;br /&gt;
|* [[RSeQC]]&lt;br /&gt;
|* [[samtools]]&lt;br /&gt;
|* [[Satsuma]]&lt;br /&gt;
|* [[sickle]]&lt;br /&gt;
|* [[SPAdes]]&lt;br /&gt;
|-&lt;br /&gt;
|* [[squid]]&lt;br /&gt;
|* [[sra-tools]]&lt;br /&gt;
|* [[srst2]]&lt;br /&gt;
|* [[SSPACE]]&lt;br /&gt;
|* [[stacks]]&lt;br /&gt;
|* [[Thor]]&lt;br /&gt;
|-&lt;br /&gt;
|* [[Tophat]]&lt;br /&gt;
|* [[trimmomatic]]&lt;br /&gt;
|* [[Trinity]]&lt;br /&gt;
|* [[t-coffee]]&lt;br /&gt;
|* [[Unicycler]]&lt;br /&gt;
|* [[velvet]]&lt;br /&gt;
|-&lt;br /&gt;
|* [[ViennaRNA]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
= Queue Manager Tips =&lt;br /&gt;
A cluster is a shared resource, with different users running different types of analyses. Nearly all clusters use a piece of software called a queue manager to share out the resource fairly. The queue manager on marvin is called Grid Engine; it has several commands, all beginning with &amp;#039;&amp;#039;&amp;#039;q&amp;#039;&amp;#039;&amp;#039;, with &amp;#039;&amp;#039;&amp;#039;qsub&amp;#039;&amp;#039;&amp;#039; the most commonly used, as it submits a command via a jobscript to be processed. Here are some tips:&lt;br /&gt;
* [[Queue Manager Tips]]&lt;br /&gt;
* [[Queue Manager : shell script command]]&lt;br /&gt;
* [[Queue Manager emailing when jobs run]]&lt;br /&gt;
* [[General Command-line Tips]]&lt;br /&gt;
* [[DRMAA for further Gridengine automation]]&lt;br /&gt;
&lt;br /&gt;
= Data Examples =&lt;br /&gt;
* [[Two Eel Scaffolds]]&lt;br /&gt;
&lt;br /&gt;
= Procedures =&lt;br /&gt;
(a short sequence of tasks with a specific short-term goal, often a simple script)&lt;br /&gt;
* [[Calculating coverage]]&lt;br /&gt;
* [[MinION Coverage sensitivity analysis]]&lt;br /&gt;
&lt;br /&gt;
= Navigating genomic data websites=&lt;br /&gt;
* [[Patric]]&lt;br /&gt;
* [[NCBI]]&lt;br /&gt;
* [[IGSR/1000 Genomes]]&lt;br /&gt;
&lt;br /&gt;
= Explanations=&lt;br /&gt;
* [[ITUcourse]]&lt;br /&gt;
* [[VCF]]&lt;br /&gt;
* [[Maximum Likelihood]]&lt;br /&gt;
* [[SNP Analysis and phylogenetics]]&lt;br /&gt;
* [[Normalization]]&lt;br /&gt;
&lt;br /&gt;
= Pipelines =&lt;br /&gt;
(Workflows with specific end goals)&lt;br /&gt;
* [[Trinity_Protocol]]&lt;br /&gt;
* [[STAR BEAST]]&lt;br /&gt;
* [[callSNPs.py]]&lt;br /&gt;
* [[pairwiseCallSNPs]]&lt;br /&gt;
* [[mapping.py]]&lt;br /&gt;
* [[Edgen RNAseq]]&lt;br /&gt;
* [[Miseq Prokaryote FASTQ analysis]]&lt;br /&gt;
* [[snpcallphylo]]&lt;br /&gt;
* [[Bottlenose dolphin population genomic analysis]]&lt;br /&gt;
* [[ChIP-Seq Top2 in Yeast]]&lt;br /&gt;
* [[ChIP-Seq Top2 in Yeast 12.09.2017]]&lt;br /&gt;
* [[ChIP-Seq Top2 in Yeast 07.11.2017]]&lt;br /&gt;
* [[Bisulfite Sequencing]]&lt;br /&gt;
* [[microRNA and Salmo Salar]]&lt;br /&gt;
&lt;br /&gt;
=Protocols=&lt;br /&gt;
(Extensive workflows with several possible end goals)&lt;br /&gt;
* [[Synthetic Long reads]]&lt;br /&gt;
* [[MinION (Oxford Nanopore)]]&lt;br /&gt;
* [[MinKNOW folders and log files]]&lt;br /&gt;
* [[Research Data Management]]&lt;br /&gt;
* [[MicroRNAs]]&lt;br /&gt;
&lt;br /&gt;
= Tech Reviews =&lt;br /&gt;
* [[SWATH-MS Data Analysis]]&lt;br /&gt;
&lt;br /&gt;
= Cluster Administration =&lt;br /&gt;
* [[StABDMIN]]&lt;br /&gt;
* [[Hardware Issues]]&lt;br /&gt;
* [[marvin and IPMI (remote hardware control)]]&lt;br /&gt;
* [[restart a node]]&lt;br /&gt;
* [[mounting drives]]&lt;br /&gt;
* [[Admin Tips]]&lt;br /&gt;
* [[RedHat]]&lt;br /&gt;
* [[Globus_gridftp]]&lt;br /&gt;
* [[Galaxy Setup]]&lt;br /&gt;
* [[Son of Gridengine]]&lt;br /&gt;
* [[Blas Libraries]]&lt;br /&gt;
* [[CMake]]&lt;br /&gt;
* [[conda bioconda]]&lt;br /&gt;
* [[Users and Groups]]&lt;br /&gt;
* [[Installing software on marvin]]&lt;br /&gt;
* [[emailing]]&lt;br /&gt;
* [[biotime machine]]&lt;br /&gt;
* [[SCAN-pc laptop]]&lt;br /&gt;
* [[node1 issues]]&lt;br /&gt;
* [[6TB storage expansion]]&lt;br /&gt;
* [[PIs storage sacrifice]]&lt;br /&gt;
* [[SAN relocation task]]&lt;br /&gt;
* [[Home directories max-out incident 28.11.2016]]&lt;br /&gt;
* [[Frontend Restart]]&lt;br /&gt;
* [[environment-modules]]&lt;br /&gt;
* [[H: drive on cluster]]&lt;br /&gt;
* [[Incident: Can&amp;#039;t connect to BerkeleyDB]]&lt;br /&gt;
* [[Bioinformatics Wordpress Site]]&lt;br /&gt;
* [[Backups]]&lt;br /&gt;
* [[users disk usage]]&lt;br /&gt;
* [[Updating BLAST databases]]&lt;br /&gt;
* [[Python DRMAA]]&lt;br /&gt;
* [[message of the day]]&lt;br /&gt;
* [[SAN disconnect incident 10.01.2017]]&lt;br /&gt;
* [[Memory repair glitch 16.02.2017]]&lt;br /&gt;
* [[node9 network failure incident 16-20.03.2017]]&lt;br /&gt;
* [[Incorrect rebooting of marvin 19.09.2017]]&lt;br /&gt;
* [[ansible]]&lt;br /&gt;
* [[webstie and word press]]&lt;br /&gt;
* [[allow user access to other peoples data]]&lt;br /&gt;
* [[RAM and RAM slots]]&lt;br /&gt;
* [[ldap is not ldap]]&lt;br /&gt;
* [[reset a password]]&lt;br /&gt;
* [[sending emails from command line examples]]&lt;br /&gt;
* [[disk management after shelf disk failure]]&lt;br /&gt;
&lt;br /&gt;
= Courses =&lt;br /&gt;
&lt;br /&gt;
==I2U4BGA==&lt;br /&gt;
* [[Original schedule]]&lt;br /&gt;
* [[New schedule]]&lt;br /&gt;
* [[Actual schedule]]&lt;br /&gt;
* [[Course itself]]&lt;br /&gt;
* [[Biolinux Source course]]&lt;br /&gt;
* [[Directory Organization Exercise]]&lt;br /&gt;
* [[Glossary]]&lt;br /&gt;
* [[Key Bindings]]&lt;br /&gt;
* [[one-liners]]&lt;br /&gt;
* [[Cheatsheets]]&lt;br /&gt;
* [[Links]]&lt;br /&gt;
* [[pandoc modified manual]]&lt;br /&gt;
* [[Command Line Exercises]]&lt;br /&gt;
&lt;br /&gt;
= hdi2u =&lt;br /&gt;
&lt;br /&gt;
The half-day Linux course held on 20th April; a modified version of I2U4BGA.&lt;br /&gt;
&lt;br /&gt;
* [[hdi2u_intro]]&lt;br /&gt;
* [[hdi2u_commandbased_exercises]]&lt;br /&gt;
* [[hdi2u_dirorg_exercise]]&lt;br /&gt;
* [[hdi2u_rendertotsv_exercise]]&lt;br /&gt;
&lt;br /&gt;
= RNAseq for DGE =&lt;br /&gt;
* [[Theoretical background]]&lt;br /&gt;
* [[Quality Control and Preprocessing]]&lt;br /&gt;
* [[Mapping to Reference]]&lt;br /&gt;
* [[Mapping Quality Exercise]]&lt;br /&gt;
* [[Key Aspects of using R]]&lt;br /&gt;
* [[Estimating Gene Count Exercise]]&lt;br /&gt;
* [[Differential Expression Exercise]]&lt;br /&gt;
* [[Functional Analysis Exercise]]&lt;br /&gt;
&lt;br /&gt;
= Introduction to Unix 2017 =&lt;br /&gt;
* [[Introduction_to_Unix_2017]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Templates==&lt;br /&gt;
* [[edgenl2g]]&lt;/div&gt;</summary>
		<author><name>PeterThorpe</name></author>	</entry>

	<entry>
		<id>http://stab.st-andrews.ac.uk/wiki/index.php?title=Users_and_Groups&amp;diff=3503</id>
		<title>Users and Groups</title>
		<link rel="alternate" type="text/html" href="http://stab.st-andrews.ac.uk/wiki/index.php?title=Users_and_Groups&amp;diff=3503"/>
				<updated>2020-06-18T12:03:29Z</updated>
		
		<summary type="html">&lt;p&gt;PeterThorpe: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction=&lt;br /&gt;
&lt;br /&gt;
Some, though not all, of the tips here are for setting up users and groups.&lt;br /&gt;
&lt;br /&gt;
The tool of choice is smbldap.&lt;br /&gt;
&lt;br /&gt;
= Usage: How to add a new user =&lt;br /&gt;
==Users==&lt;br /&gt;
* To create new user(s)&lt;br /&gt;
Root has a script in bin/creasu.sh, so as root:&lt;br /&gt;
 sh bin/creasu.sh &amp;lt;user&amp;gt; &amp;lt;user1&amp;gt; &amp;lt;user2&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If this line fails, go to the admin page, which talks about ldap.&lt;br /&gt;
Manually running the commands from the script worked for me when the script failed.&lt;br /&gt;
&lt;br /&gt;
 # (only if needed - perl errors) service restart slapd&lt;br /&gt;
&lt;br /&gt;
 NU=test06&lt;br /&gt;
 smbldap-groupadd -a $NU&lt;br /&gt;
 smbldap-useradd -g $NU -a $NU&lt;br /&gt;
 smbldap-passwd $NU&lt;br /&gt;
 bash_files=/etc/skel&lt;br /&gt;
 basepath=/storage/home/users&lt;br /&gt;
 path=$basepath/$NU&lt;br /&gt;
 echo $path&lt;br /&gt;
 cd $basepath&lt;br /&gt;
 cp -r $bash_files/.{m,n,b,g}* $NU&lt;br /&gt;
 chown -R $NU:$NU $path&lt;br /&gt;
 smbldap-groupadd -a $NU&lt;br /&gt;
 chown -R $NU:$NU $path&lt;br /&gt;
 chmod 0701 $NU&lt;br /&gt;
 chcon &amp;#039;unconfined_u:object_r:user_home_dir_t:s0&amp;#039; $path&lt;br /&gt;
&lt;br /&gt;
This will create the group, account, and home folder, and copy all relevant files into the new home folder. &lt;br /&gt;
Then set up passwords (a password prompt will appear) with:&lt;br /&gt;
 smbldap-passwd &amp;lt;user&amp;gt;&lt;br /&gt;
for each of the users.&lt;br /&gt;
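The home-directory part of the manual steps above can be wrapped in one function. A rough sketch, assuming the same /etc/skel source and the chmod 0701 policy from the script; the function name is invented, and the chown -R step is left to the real script since it must run as root:&lt;br /&gt;

```shell
# Invented wrapper around the home-directory steps: copy the skeleton
# dotfiles into the new home and restrict the directory to its owner.
populate_home() {
    user="$1"; skel="$2"; basepath="$3"
    path="$basepath/$user"
    mkdir -p "$path"
    cp -r "$skel"/.[mnbg]* "$path"   # the script copies .{m,n,b,g}* from /etc/skel
    chmod 0701 "$path"               # the real script then runs chown -R $user:$user
}
```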
&lt;br /&gt;
Then setup an ssh key for logging into the nodes by doing the following: &lt;br /&gt;
&lt;br /&gt;
As root, log in as the new user via&lt;br /&gt;
 su - &amp;lt;newuserid&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and execute&lt;br /&gt;
 ssh-keygen&lt;br /&gt;
&lt;br /&gt;
and just accept all the suggestions, keeping the defaults as they are ...&lt;br /&gt;
.ssh/id_rsa and .ssh/id_rsa.pub then get created.&lt;br /&gt;
&lt;br /&gt;
then&lt;br /&gt;
 cp .ssh/id_rsa.pub .ssh/authorized_keys&lt;br /&gt;
&lt;br /&gt;
and&lt;br /&gt;
 chmod 600 .ssh/authorized_keys&lt;br /&gt;
&lt;br /&gt;
Then ssh node1 should log in to node1 without a password (no need to test the other nodes).&lt;br /&gt;
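The key steps above amount to the following. A minimal sketch (the function name is invented) that assumes ssh-keygen has already created id_rsa.pub in the .ssh directory of the new user:&lt;br /&gt;

```shell
# Invented helper for the key steps: authorise the freshly generated key
# for password-less logins to the nodes.
authorise_own_key() {
    home_dir="$1"
    cp "$home_dir/.ssh/id_rsa.pub" "$home_dir/.ssh/authorized_keys"
    chmod 600 "$home_dir/.ssh/authorized_keys"   # sshd refuses group/other-writable keys
}
```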
&lt;br /&gt;
Then tell the user to change their password by doing:&lt;br /&gt;
 passwd&lt;br /&gt;
&lt;br /&gt;
==Groups==&lt;br /&gt;
* To create a new group (we don&amp;#039;t have groups YET!)&lt;br /&gt;
 smbldap-groupadd -a &amp;lt;newgrpname&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* To add users to a certain group (note that this seems to take some time to propagate, as well as only working on fresh logins)&lt;br /&gt;
 smbldap-groupmod -m &amp;lt;list,of,users&amp;gt; &amp;lt;targetgroup&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= change a password =&lt;br /&gt;
&lt;br /&gt;
When a user forgets their password and asks for a new one, as root:&lt;br /&gt;
 smbldap-passwd &amp;lt;user&amp;gt;&lt;/div&gt;</summary>
		<author><name>PeterThorpe</name></author>	</entry>

	<entry>
		<id>http://stab.st-andrews.ac.uk/wiki/index.php?title=Main_Page&amp;diff=3502</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="http://stab.st-andrews.ac.uk/wiki/index.php?title=Main_Page&amp;diff=3502"/>
				<updated>2020-06-15T08:40:27Z</updated>
		
		<summary type="html">&lt;p&gt;PeterThorpe: /* Cluster Administration */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;KENNEDY HPC for Bioinf community &amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
* [[Kennedy manual]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Usage of Cluster=&lt;br /&gt;
* [[Cluster Manual]]&lt;br /&gt;
* [[Kennedy manual]]&lt;br /&gt;
* [[Why a Queue Manager?]]&lt;br /&gt;
* [[Available Software]]&lt;br /&gt;
* [[how to use the cluster training course]]&lt;br /&gt;
* [[windows network connect]]&lt;br /&gt;
&lt;br /&gt;
= Documented Programs =&lt;br /&gt;
&lt;br /&gt;
The following can be seen as extra notes on the usage of these programs on the marvin cluster, with an emphasis on example use-cases. Most, if not all, have their own websites with more detailed manuals and further information.&lt;br /&gt;
&lt;br /&gt;
{|style=&amp;quot;width:85%&amp;quot;&lt;br /&gt;
|* [[abacas]]&lt;br /&gt;
|* [[albacore]]&lt;br /&gt;
|* [[ariba]]&lt;br /&gt;
|* [[aspera]]&lt;br /&gt;
|* [[assembly-stats]]&lt;br /&gt;
|* [[augustus]]&lt;br /&gt;
|-&lt;br /&gt;
|* [[BamQC]]&lt;br /&gt;
|* [[bamtools]]&lt;br /&gt;
|* [[banjo]]&lt;br /&gt;
|* [[bcftools]]&lt;br /&gt;
|* [[bedtools]]&lt;br /&gt;
|* [[bgenie]]&lt;br /&gt;
|-&lt;br /&gt;
|* [[BLAST]]&lt;br /&gt;
|* [[Blat]]&lt;br /&gt;
|* [[blast2go: b2g4pipe]]&lt;br /&gt;
|* [[bowtie]]&lt;br /&gt;
|* [[bowtie2]]&lt;br /&gt;
|* [[bwa]]&lt;br /&gt;
|-&lt;br /&gt;
|* [[BUSCO]]&lt;br /&gt;
|* [[CAFE]]&lt;br /&gt;
|* [[canu]]&lt;br /&gt;
|* [[cd-hit]]&lt;br /&gt;
|* [[cegma]]&lt;br /&gt;
|* [[clustal]]&lt;br /&gt;
|-&lt;br /&gt;
|* [[cramtools]]&lt;br /&gt;
|* [[conda]]&lt;br /&gt;
|* [[deeptools]]&lt;br /&gt;
|* [[detonate]]&lt;br /&gt;
|* [[diamond]]&lt;br /&gt;
|* [[ea-utils]]&lt;br /&gt;
|* [[ensembl]]&lt;br /&gt;
|-&lt;br /&gt;
|* [[ETE]]&lt;br /&gt;
|* [[FASTQC and MultiQC]]&lt;br /&gt;
|* [[Archaeopteryx and Forester]]&lt;br /&gt;
|* [[GapFiller]]&lt;br /&gt;
|* [[GenomeTools]]&lt;br /&gt;
|* [[gubbins]]&lt;br /&gt;
|-&lt;br /&gt;
|* [[JBrowse]]&lt;br /&gt;
|* [[kallisto]]&lt;br /&gt;
|* [[kentUtils]]&lt;br /&gt;
|* [[last]]&lt;br /&gt;
|* [[lastz]]&lt;br /&gt;
|* [[macs2]]&lt;br /&gt;
|-&lt;br /&gt;
|* [[Mash]]&lt;br /&gt;
|* [[mega]]&lt;br /&gt;
|* [[meryl]]&lt;br /&gt;
|* [[MUMmer]]&lt;br /&gt;
|* [[NanoSim]]&lt;br /&gt;
|* [[nseq]]&lt;br /&gt;
|-&lt;br /&gt;
|* [[OrthoFinder]]&lt;br /&gt;
|* [[PASA]]&lt;br /&gt;
|* [[perl]]&lt;br /&gt;
|* [[PGAP]]&lt;br /&gt;
|* [[picard-tools]]&lt;br /&gt;
|* [[poRe]]&lt;br /&gt;
|-&lt;br /&gt;
|* [[poretools]]&lt;br /&gt;
|* [[prokka]]&lt;br /&gt;
|* [[pyrad]]&lt;br /&gt;
|* [[python]]&lt;br /&gt;
|* [[qualimap]]&lt;br /&gt;
|* [[quast]]&lt;br /&gt;
|-&lt;br /&gt;
|* [[qiime2]]&lt;br /&gt;
|* [[R]]&lt;br /&gt;
|* [[RAxML]]&lt;br /&gt;
|* [[Repeatmasker]]&lt;br /&gt;
|* [[Repeatmodeler]]&lt;br /&gt;
|* [[rnammer]]&lt;br /&gt;
|-&lt;br /&gt;
|* [[roary]]&lt;br /&gt;
|* [[RSeQC]]&lt;br /&gt;
|* [[samtools]]&lt;br /&gt;
|* [[Satsuma]]&lt;br /&gt;
|* [[sickle]]&lt;br /&gt;
|* [[SPAdes]]&lt;br /&gt;
|-&lt;br /&gt;
|* [[squid]]&lt;br /&gt;
|* [[sra-tools]]&lt;br /&gt;
|* [[srst2]]&lt;br /&gt;
|* [[SSPACE]]&lt;br /&gt;
|* [[stacks]]&lt;br /&gt;
|* [[Thor]]&lt;br /&gt;
|-&lt;br /&gt;
|* [[Tophat]]&lt;br /&gt;
|* [[trimmomatic]]&lt;br /&gt;
|* [[Trinity]]&lt;br /&gt;
|* [[t-coffee]]&lt;br /&gt;
|* [[Unicycler]]&lt;br /&gt;
|* [[velvet]]&lt;br /&gt;
|-&lt;br /&gt;
|* [[ViennaRNA]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
= Queue Manager Tips =&lt;br /&gt;
A cluster is a shared resource, with different users running different types of analyses. Nearly all clusters use a piece of software called a queue manager to share out the resource fairly. The queue manager on marvin is called Grid Engine; it has several commands, all beginning with &amp;#039;&amp;#039;&amp;#039;q&amp;#039;&amp;#039;&amp;#039;, with &amp;#039;&amp;#039;&amp;#039;qsub&amp;#039;&amp;#039;&amp;#039; the most commonly used, as it submits a command via a jobscript to be processed. Here are some tips:&lt;br /&gt;
* [[Queue Manager Tips]]&lt;br /&gt;
* [[Queue Manager : shell script command]]&lt;br /&gt;
* [[Queue Manager emailing when jobs run]]&lt;br /&gt;
* [[General Command-line Tips]]&lt;br /&gt;
* [[DRMAA for further Gridengine automation]]&lt;br /&gt;
&lt;br /&gt;
= Data Examples =&lt;br /&gt;
* [[Two Eel Scaffolds]]&lt;br /&gt;
&lt;br /&gt;
= Procedures =&lt;br /&gt;
(a short sequence of tasks with a specific short-term goal, often a simple script)&lt;br /&gt;
* [[Calculating coverage]]&lt;br /&gt;
* [[MinION Coverage sensitivity analysis]]&lt;br /&gt;
&lt;br /&gt;
= Navigating genomic data websites=&lt;br /&gt;
* [[Patric]]&lt;br /&gt;
* [[NCBI]]&lt;br /&gt;
* [[IGSR/1000 Genomes]]&lt;br /&gt;
&lt;br /&gt;
= Explanations=&lt;br /&gt;
* [[ITUcourse]]&lt;br /&gt;
* [[VCF]]&lt;br /&gt;
* [[Maximum Likelihood]]&lt;br /&gt;
* [[SNP Analysis and phylogenetics]]&lt;br /&gt;
* [[Normalization]]&lt;br /&gt;
&lt;br /&gt;
= Pipelines =&lt;br /&gt;
(Workflows with specific end goals)&lt;br /&gt;
* [[Trinity_Protocol]]&lt;br /&gt;
* [[STAR BEAST]]&lt;br /&gt;
* [[callSNPs.py]]&lt;br /&gt;
* [[pairwiseCallSNPs]]&lt;br /&gt;
* [[mapping.py]]&lt;br /&gt;
* [[Edgen RNAseq]]&lt;br /&gt;
* [[Miseq Prokaryote FASTQ analysis]]&lt;br /&gt;
* [[snpcallphylo]]&lt;br /&gt;
* [[Bottlenose dolphin population genomic analysis]]&lt;br /&gt;
* [[ChIP-Seq Top2 in Yeast]]&lt;br /&gt;
* [[ChIP-Seq Top2 in Yeast 12.09.2017]]&lt;br /&gt;
* [[ChIP-Seq Top2 in Yeast 07.11.2017]]&lt;br /&gt;
* [[Bisulfite Sequencing]]&lt;br /&gt;
* [[microRNA and Salmo Salar]]&lt;br /&gt;
&lt;br /&gt;
=Protocols=&lt;br /&gt;
(Extensive workflows with several possible end goals)&lt;br /&gt;
* [[Synthetic Long reads]]&lt;br /&gt;
* [[MinION (Oxford Nanopore)]]&lt;br /&gt;
* [[MinKNOW folders and log files]]&lt;br /&gt;
* [[Research Data Management]]&lt;br /&gt;
* [[MicroRNAs]]&lt;br /&gt;
&lt;br /&gt;
= Tech Reviews =&lt;br /&gt;
* [[SWATH-MS Data Analysis]]&lt;br /&gt;
&lt;br /&gt;
= Cluster Administration =&lt;br /&gt;
* [[StABDMIN]]&lt;br /&gt;
* [[Hardware Issues]]&lt;br /&gt;
* [[marvin and IPMI (remote hardware control)]]&lt;br /&gt;
* [[restart a node]]&lt;br /&gt;
* [[mounting drives]]&lt;br /&gt;
* [[Admin Tips]]&lt;br /&gt;
* [[RedHat]]&lt;br /&gt;
* [[Globus_gridftp]]&lt;br /&gt;
* [[Galaxy Setup]]&lt;br /&gt;
* [[Son of Gridengine]]&lt;br /&gt;
* [[Blas Libraries]]&lt;br /&gt;
* [[CMake]]&lt;br /&gt;
* [[conda bioconda]]&lt;br /&gt;
* [[Users and Groups]]&lt;br /&gt;
* [[Installing software on marvin]]&lt;br /&gt;
* [[emailing]]&lt;br /&gt;
* [[biotime machine]]&lt;br /&gt;
* [[SCAN-pc laptop]]&lt;br /&gt;
* [[node1 issues]]&lt;br /&gt;
* [[6TB storage expansion]]&lt;br /&gt;
* [[PIs storage sacrifice]]&lt;br /&gt;
* [[SAN relocation task]]&lt;br /&gt;
* [[Home directories max-out incident 28.11.2016]]&lt;br /&gt;
* [[Frontend Restart]]&lt;br /&gt;
* [[environment-modules]]&lt;br /&gt;
* [[H: drive on cluster]]&lt;br /&gt;
* [[Incident: Can&amp;#039;t connect to BerkeleyDB]]&lt;br /&gt;
* [[Bioinformatics Wordpress Site]]&lt;br /&gt;
* [[Backups]]&lt;br /&gt;
* [[users disk usage]]&lt;br /&gt;
* [[Updating BLAST databases]]&lt;br /&gt;
* [[Python DRMAA]]&lt;br /&gt;
* [[message of the day]]&lt;br /&gt;
* [[SAN disconnect incident 10.01.2017]]&lt;br /&gt;
* [[Memory repair glitch 16.02.2017]]&lt;br /&gt;
* [[node9 network failure incident 16-20.03.2017]]&lt;br /&gt;
* [[Incorrect rebooting of marvin 19.09.2017]]&lt;br /&gt;
* [[ansible]]&lt;br /&gt;
* [[webstie and word press]]&lt;br /&gt;
* [[allow user access to other peoples data]]&lt;br /&gt;
* [[RAM and RAM slots]]&lt;br /&gt;
* [[ldap is not ldap]]&lt;br /&gt;
* [[reset a password]]&lt;br /&gt;
* [[sending emails from command line examples]]&lt;br /&gt;
&lt;br /&gt;
= Courses =&lt;br /&gt;
&lt;br /&gt;
==I2U4BGA==&lt;br /&gt;
* [[Original schedule]]&lt;br /&gt;
* [[New schedule]]&lt;br /&gt;
* [[Actual schedule]]&lt;br /&gt;
* [[Course itself]]&lt;br /&gt;
* [[Biolinux Source course]]&lt;br /&gt;
* [[Directory Organization Exercise]]&lt;br /&gt;
* [[Glossary]]&lt;br /&gt;
* [[Key Bindings]]&lt;br /&gt;
* [[one-liners]]&lt;br /&gt;
* [[Cheatsheets]]&lt;br /&gt;
* [[Links]]&lt;br /&gt;
* [[pandoc modified manual]]&lt;br /&gt;
* [[Command Line Exercises]]&lt;br /&gt;
&lt;br /&gt;
= hdi2u =&lt;br /&gt;
&lt;br /&gt;
The half-day Linux course held on 20th April; a modified version of I2U4BGA.&lt;br /&gt;
&lt;br /&gt;
* [[hdi2u_intro]]&lt;br /&gt;
* [[hdi2u_commandbased_exercises]]&lt;br /&gt;
* [[hdi2u_dirorg_exercise]]&lt;br /&gt;
* [[hdi2u_rendertotsv_exercise]]&lt;br /&gt;
&lt;br /&gt;
= RNAseq for DGE =&lt;br /&gt;
* [[Theoretical background]]&lt;br /&gt;
* [[Quality Control and Preprocessing]]&lt;br /&gt;
* [[Mapping to Reference]]&lt;br /&gt;
* [[Mapping Quality Exercise]]&lt;br /&gt;
* [[Key Aspects of using R]]&lt;br /&gt;
* [[Estimating Gene Count Exercise]]&lt;br /&gt;
* [[Differential Expression Exercise]]&lt;br /&gt;
* [[Functional Analysis Exercise]]&lt;br /&gt;
&lt;br /&gt;
= Introduction to Unix 2017 =&lt;br /&gt;
* [[Introduction_to_Unix_2017]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Templates==&lt;br /&gt;
* [[edgenl2g]]&lt;/div&gt;</summary>
		<author><name>PeterThorpe</name></author>	</entry>

	<entry>
		<id>http://stab.st-andrews.ac.uk/wiki/index.php?title=Main_Page&amp;diff=3501</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="http://stab.st-andrews.ac.uk/wiki/index.php?title=Main_Page&amp;diff=3501"/>
				<updated>2020-06-15T08:40:01Z</updated>
		
		<summary type="html">&lt;p&gt;PeterThorpe: /* Cluster Administration */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;KENNEDY HPC for Bioinf community &amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
* [[Kennedy manual]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Usage of Cluster=&lt;br /&gt;
* [[Cluster Manual]]&lt;br /&gt;
* [[Kennedy manual]]&lt;br /&gt;
* [[Why a Queue Manager?]]&lt;br /&gt;
* [[Available Software]]&lt;br /&gt;
* [[how to use the cluster training course]]&lt;br /&gt;
* [[windows network connect]]&lt;br /&gt;
&lt;br /&gt;
= Documented Programs =&lt;br /&gt;
&lt;br /&gt;
The following can be seen as extra notes on the usage of these programs on the marvin cluster, with an emphasis on example use-cases. Most, if not all, have their own websites with more detailed manuals and further information.&lt;br /&gt;
&lt;br /&gt;
{|style=&amp;quot;width:85%&amp;quot;&lt;br /&gt;
|* [[abacas]]&lt;br /&gt;
|* [[albacore]]&lt;br /&gt;
|* [[ariba]]&lt;br /&gt;
|* [[aspera]]&lt;br /&gt;
|* [[assembly-stats]]&lt;br /&gt;
|* [[augustus]]&lt;br /&gt;
|-&lt;br /&gt;
|* [[BamQC]]&lt;br /&gt;
|* [[bamtools]]&lt;br /&gt;
|* [[banjo]]&lt;br /&gt;
|* [[bcftools]]&lt;br /&gt;
|* [[bedtools]]&lt;br /&gt;
|* [[bgenie]]&lt;br /&gt;
|-&lt;br /&gt;
|* [[BLAST]]&lt;br /&gt;
|* [[Blat]]&lt;br /&gt;
|* [[blast2go: b2g4pipe]]&lt;br /&gt;
|* [[bowtie]]&lt;br /&gt;
|* [[bowtie2]]&lt;br /&gt;
|* [[bwa]]&lt;br /&gt;
|-&lt;br /&gt;
|* [[BUSCO]]&lt;br /&gt;
|* [[CAFE]]&lt;br /&gt;
|* [[canu]]&lt;br /&gt;
|* [[cd-hit]]&lt;br /&gt;
|* [[cegma]]&lt;br /&gt;
|* [[clustal]]&lt;br /&gt;
|-&lt;br /&gt;
|* [[cramtools]]&lt;br /&gt;
|* [[conda]]&lt;br /&gt;
|* [[deeptools]]&lt;br /&gt;
|* [[detonate]]&lt;br /&gt;
|* [[diamond]]&lt;br /&gt;
|* [[ea-utils]]&lt;br /&gt;
|* [[ensembl]]&lt;br /&gt;
|-&lt;br /&gt;
|* [[ETE]]&lt;br /&gt;
|* [[FASTQC and MultiQC]]&lt;br /&gt;
|* [[Archaeopteryx and Forester]]&lt;br /&gt;
|* [[GapFiller]]&lt;br /&gt;
|* [[GenomeTools]]&lt;br /&gt;
|* [[gubbins]]&lt;br /&gt;
|-&lt;br /&gt;
|* [[JBrowse]]&lt;br /&gt;
|* [[kallisto]]&lt;br /&gt;
|* [[kentUtils]]&lt;br /&gt;
|* [[last]]&lt;br /&gt;
|* [[lastz]]&lt;br /&gt;
|* [[macs2]]&lt;br /&gt;
|-&lt;br /&gt;
|* [[Mash]]&lt;br /&gt;
|* [[mega]]&lt;br /&gt;
|* [[meryl]]&lt;br /&gt;
|* [[MUMmer]]&lt;br /&gt;
|* [[NanoSim]]&lt;br /&gt;
|* [[nseq]]&lt;br /&gt;
|-&lt;br /&gt;
|* [[OrthoFinder]]&lt;br /&gt;
|* [[PASA]]&lt;br /&gt;
|* [[perl]]&lt;br /&gt;
|* [[PGAP]]&lt;br /&gt;
|* [[picard-tools]]&lt;br /&gt;
|* [[poRe]]&lt;br /&gt;
|-&lt;br /&gt;
|* [[poretools]]&lt;br /&gt;
|* [[prokka]]&lt;br /&gt;
|* [[pyrad]]&lt;br /&gt;
|* [[python]]&lt;br /&gt;
|* [[qualimap]]&lt;br /&gt;
|* [[quast]]&lt;br /&gt;
|-&lt;br /&gt;
|* [[qiime2]]&lt;br /&gt;
|* [[R]]&lt;br /&gt;
|* [[RAxML]]&lt;br /&gt;
|* [[Repeatmasker]]&lt;br /&gt;
|* [[Repeatmodeler]]&lt;br /&gt;
|* [[rnammer]]&lt;br /&gt;
|-&lt;br /&gt;
|* [[roary]]&lt;br /&gt;
|* [[RSeQC]]&lt;br /&gt;
|* [[samtools]]&lt;br /&gt;
|* [[Satsuma]]&lt;br /&gt;
|* [[sickle]]&lt;br /&gt;
|* [[SPAdes]]&lt;br /&gt;
|-&lt;br /&gt;
|* [[squid]]&lt;br /&gt;
|* [[sra-tools]]&lt;br /&gt;
|* [[srst2]]&lt;br /&gt;
|* [[SSPACE]]&lt;br /&gt;
|* [[stacks]]&lt;br /&gt;
|* [[Thor]]&lt;br /&gt;
|-&lt;br /&gt;
|* [[Tophat]]&lt;br /&gt;
|* [[trimmomatic]]&lt;br /&gt;
|* [[Trinity]]&lt;br /&gt;
|* [[t-coffee]]&lt;br /&gt;
|* [[Unicycler]]&lt;br /&gt;
|* [[velvet]]&lt;br /&gt;
|-&lt;br /&gt;
|* [[ViennaRNA]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
= Queue Manager Tips =&lt;br /&gt;
A cluster is a shared resource, with different users running different types of analyses. Nearly all clusters use a piece of software called a queue manager to share out the resource fairly. The queue manager on marvin is called Grid Engine; it has several commands, all beginning with &amp;#039;&amp;#039;&amp;#039;q&amp;#039;&amp;#039;&amp;#039;, with &amp;#039;&amp;#039;&amp;#039;qsub&amp;#039;&amp;#039;&amp;#039; the most commonly used, as it submits a command via a jobscript to be processed. Here are some tips:&lt;br /&gt;
* [[Queue Manager Tips]]&lt;br /&gt;
* [[Queue Manager : shell script command]]&lt;br /&gt;
* [[Queue Manager emailing when jobs run]]&lt;br /&gt;
* [[General Command-line Tips]]&lt;br /&gt;
* [[DRMAA for further Gridengine automation]]&lt;br /&gt;
&lt;br /&gt;
= Data Examples =&lt;br /&gt;
* [[Two Eel Scaffolds]]&lt;br /&gt;
&lt;br /&gt;
= Procedures =&lt;br /&gt;
(a short sequence of tasks with a specific short-term goal, often a simple script)&lt;br /&gt;
* [[Calculating coverage]]&lt;br /&gt;
* [[MinION Coverage sensitivity analysis]]&lt;br /&gt;
&lt;br /&gt;
= Navigating genomic data websites=&lt;br /&gt;
* [[Patric]]&lt;br /&gt;
* [[NCBI]]&lt;br /&gt;
* [[IGSR/1000 Genomes]]&lt;br /&gt;
&lt;br /&gt;
= Explanations=&lt;br /&gt;
* [[ITUcourse]]&lt;br /&gt;
* [[VCF]]&lt;br /&gt;
* [[Maximum Likelihood]]&lt;br /&gt;
* [[SNP Analysis and phylogenetics]]&lt;br /&gt;
* [[Normalization]]&lt;br /&gt;
&lt;br /&gt;
= Pipelines =&lt;br /&gt;
(Workflows with specific end goals)&lt;br /&gt;
* [[Trinity_Protocol]]&lt;br /&gt;
* [[STAR BEAST]]&lt;br /&gt;
* [[callSNPs.py]]&lt;br /&gt;
* [[pairwiseCallSNPs]]&lt;br /&gt;
* [[mapping.py]]&lt;br /&gt;
* [[Edgen RNAseq]]&lt;br /&gt;
* [[Miseq Prokaryote FASTQ analysis]]&lt;br /&gt;
* [[snpcallphylo]]&lt;br /&gt;
* [[Bottlenose dolphin population genomic analysis]]&lt;br /&gt;
* [[ChIP-Seq Top2 in Yeast]]&lt;br /&gt;
* [[ChIP-Seq Top2 in Yeast 12.09.2017]]&lt;br /&gt;
* [[ChIP-Seq Top2 in Yeast 07.11.2017]]&lt;br /&gt;
* [[Bisulfite Sequencing]]&lt;br /&gt;
* [[microRNA and Salmo Salar]]&lt;br /&gt;
&lt;br /&gt;
=Protocols=&lt;br /&gt;
(Extensive workflows with several possible end goals)&lt;br /&gt;
* [[Synthetic Long reads]]&lt;br /&gt;
* [[MinION (Oxford Nanopore)]]&lt;br /&gt;
* [[MinKNOW folders and log files]]&lt;br /&gt;
* [[Research Data Management]]&lt;br /&gt;
* [[MicroRNAs]]&lt;br /&gt;
&lt;br /&gt;
= Tech Reviews =&lt;br /&gt;
* [[SWATH-MS Data Analysis]]&lt;br /&gt;
&lt;br /&gt;
= Cluster Administration =&lt;br /&gt;
* [[StABDMIN]]&lt;br /&gt;
* [[Hardware Issues]]&lt;br /&gt;
* [[marvin and IPMI (remote hardware control)]]&lt;br /&gt;
* [[restart a node]]&lt;br /&gt;
* [[mounting drives]]&lt;br /&gt;
* [[Admin Tips]]&lt;br /&gt;
* [[RedHat]]&lt;br /&gt;
* [[Globus_gridftp]]&lt;br /&gt;
* [[Galaxy Setup]]&lt;br /&gt;
* [[Son of Gridengine]]&lt;br /&gt;
* [[Blas Libraries]]&lt;br /&gt;
* [[CMake]]&lt;br /&gt;
* [[conda bioconda]]&lt;br /&gt;
* [[Users and Groups  add a new user]]&lt;br /&gt;
* [[Installing software on marvin]]&lt;br /&gt;
* [[emailing]]&lt;br /&gt;
* [[biotime machine]]&lt;br /&gt;
* [[SCAN-pc laptop]]&lt;br /&gt;
* [[node1 issues]]&lt;br /&gt;
* [[6TB storage expansion]]&lt;br /&gt;
* [[PIs storage sacrifice]]&lt;br /&gt;
* [[SAN relocation task]]&lt;br /&gt;
* [[Home directories max-out incident 28.11.2016]]&lt;br /&gt;
* [[Frontend Restart]]&lt;br /&gt;
* [[environment-modules]]&lt;br /&gt;
* [[H: drive on cluster]]&lt;br /&gt;
* [[Incident: Can&amp;#039;t connect to BerkeleyDB]]&lt;br /&gt;
* [[Bioinformatics Wordpress Site]]&lt;br /&gt;
* [[Backups]]&lt;br /&gt;
* [[users disk usage]]&lt;br /&gt;
* [[Updating BLAST databases]]&lt;br /&gt;
* [[Python DRMAA]]&lt;br /&gt;
* [[message of the day]]&lt;br /&gt;
* [[SAN disconnect incident 10.01.2017]]&lt;br /&gt;
* [[Memory repair glitch 16.02.2017]]&lt;br /&gt;
* [[node9 network failure incident 16-20.03.2017]]&lt;br /&gt;
* [[Incorrect rebooting of marvin 19.09.2017]]&lt;br /&gt;
* [[ansible]]&lt;br /&gt;
* [[webstie and word press]]&lt;br /&gt;
* [[allow user access to other peoples data]]&lt;br /&gt;
* [[RAM and RAM slots]]&lt;br /&gt;
* [[ldap is not ldap]]&lt;br /&gt;
* [[reset a password]]&lt;br /&gt;
* [[sending emails from command line examples]]&lt;br /&gt;
&lt;br /&gt;
= Courses =&lt;br /&gt;
&lt;br /&gt;
==I2U4BGA==&lt;br /&gt;
* [[Original schedule]]&lt;br /&gt;
* [[New schedule]]&lt;br /&gt;
* [[Actual schedule]]&lt;br /&gt;
* [[Course itself]]&lt;br /&gt;
* [[Biolinux Source course]]&lt;br /&gt;
* [[Directory Organization Exercise]]&lt;br /&gt;
* [[Glossary]]&lt;br /&gt;
* [[Key Bindings]]&lt;br /&gt;
* [[one-liners]]&lt;br /&gt;
* [[Cheatsheets]]&lt;br /&gt;
* [[Links]]&lt;br /&gt;
* [[pandoc modified manual]]&lt;br /&gt;
* [[Command Line Exercises]]&lt;br /&gt;
&lt;br /&gt;
= hdi2u =&lt;br /&gt;
&lt;br /&gt;
The half-day linux course held on 20th April. Modified version of I2U4BGA.&lt;br /&gt;
&lt;br /&gt;
* [[hdi2u_intro]]&lt;br /&gt;
* [[hdi2u_commandbased_exercises]]&lt;br /&gt;
* [[hdi2u_dirorg_exercise]]&lt;br /&gt;
* [[hdi2u_rendertotsv_exercise]]&lt;br /&gt;
&lt;br /&gt;
= RNAseq for DGE =&lt;br /&gt;
* [[Theoretical background]]&lt;br /&gt;
* [[Quality Control and Preprocessing]]&lt;br /&gt;
* [[Mapping to Reference]]&lt;br /&gt;
* [[Mapping Quality Exercise]]&lt;br /&gt;
* [[Key Aspects of using R]]&lt;br /&gt;
* [[Estimating Gene Count Exercise]]&lt;br /&gt;
* [[Differential Expression Exercise]]&lt;br /&gt;
* [[Functional Analysis Exercise]]&lt;br /&gt;
&lt;br /&gt;
= Introduction to Unix 2017 =&lt;br /&gt;
* [[Introduction_to_Unix_2017]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Templates==&lt;br /&gt;
* [[edgenl2g]]&lt;/div&gt;</summary>
		<author><name>PeterThorpe</name></author>	</entry>

	<entry>
		<id>http://stab.st-andrews.ac.uk/wiki/index.php?title=Frontend_Restart&amp;diff=3500</id>
		<title>Frontend Restart</title>
		<link rel="alternate" type="text/html" href="http://stab.st-andrews.ac.uk/wiki/index.php?title=Frontend_Restart&amp;diff=3500"/>
				<updated>2020-05-20T19:28:16Z</updated>
		
		<summary type="html">&lt;p&gt;PeterThorpe: /* sungrid engine not running on the centos 7 nodes */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction =&lt;br /&gt;
&lt;br /&gt;
This is a highly critical operation: although most of the possible pitfalls are documented here, there is plenty of scope for new ones which could render the entire cluster inaccessible. This would not mean lasting damage, but rather delays on the order of days, possibly even over a week.&lt;br /&gt;
&lt;br /&gt;
Because of various precautions and checks the process usually requires 2-3 hours.&lt;br /&gt;
&lt;br /&gt;
Given these aspects, the question may be asked: why do it? The answer is to update the software, specifically, the kernel software.&lt;br /&gt;
&lt;br /&gt;
Short process:&lt;br /&gt;
* Give plenty of warning&lt;br /&gt;
* Make sure the STORAGE line in fstab is commented out&lt;br /&gt;
* shutdown all the nodes&lt;br /&gt;
* Double check the STORAGE line in fstab is commented out&lt;br /&gt;
* reboot Marvin&lt;br /&gt;
* re-mount the STORAGE partition&lt;br /&gt;
* comment out the STORAGE line in fstab&lt;br /&gt;
* bring the nodes back up with ipmiconfig&lt;br /&gt;
&lt;br /&gt;
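The short process above can be sketched as a single shell sequence. This is a dry-run sketch only: the node names, node count, and the pattern used to match the STORAGE line in fstab are assumptions, and pwonck is the local power-on script documented below.

```shell
#!/bin/sh
# Dry-run sketch of the restart sequence; every command is only printed.
# Remove DRYRUN=echo to execute for real (as root, on marvin).
restart_sequence() {
    DRYRUN=echo
    $DRYRUN sed -i '/STORAGE/s/^[^#]/#&/' /etc/fstab   # comment out the STORAGE line
    for n in 1 2 3; do                                 # node count is an assumption
        $DRYRUN ssh "node$n" shutdown now              # shut down every node
    done
    $DRYRUN grep '#.*STORAGE' /etc/fstab               # double-check it is commented
    $DRYRUN reboot                                     # reboot marvin itself
    # ... once marvin is back on line:
    $DRYRUN vgchange -a y STORAGE                      # re-activate the volume group
    $DRYRUN mount /storage                             # re-mount the STORAGE partition
    for n in 1 2 3; do
        $DRYRUN pwonck "$n"                            # bring the nodes back up
        $DRYRUN sleep 300                              # staggered, 5 minutes apart
    done
}
```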
= Measures =&lt;br /&gt;
&lt;br /&gt;
== Bring all nodes down  before restart ==&lt;br /&gt;
First edit &amp;#039;&amp;#039;/etc/fstab&amp;#039;&amp;#039;: the lines after the &amp;quot;The Three vital LVs&amp;quot; comment must be commented out.&lt;br /&gt;
&lt;br /&gt;
To bring a node down, log into the node via ssh and, as super user, run&lt;br /&gt;
 shutdown now&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This is possibly the most useful measure. Primarily, it is due to the nodes using marvin to keep various filesystems mounted, and the havoc they experience when marvin stops doing this. NFS4 stale filehandles then appear and are hard to get rid of. This measure is not immediately obvious, because all the nodes are updated on a rolling basis and often do not need to be switched off.&lt;br /&gt;
&lt;br /&gt;
And then, when marvin is back up, and once its filesystems are verified, the nodes may be brought back up. It is best not to bring them all up at once, so that they don&amp;#039;t all grab the central filesystem at the same time. They may be brought up, say, 5 minutes apart. This is simply another precaution.&lt;br /&gt;
&lt;br /&gt;
Of course this seems like quite a lot of extra work, but it is worth it in terms of saving later debugging time. It is closely related to the necessary manual mounting of the STORAGE volume documented below.&lt;br /&gt;
&lt;br /&gt;
Nodes can be restarted with&lt;br /&gt;
 pwonck &amp;lt;node number&amp;gt;&lt;br /&gt;
or &lt;br /&gt;
 pwcycck &amp;lt;node number&amp;gt; &lt;br /&gt;
as root.&lt;br /&gt;
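The staggered bring-up can be wrapped in a small helper; a sketch only, assuming pwonck is the power-on script named above and the node numbers are as on this cluster.

```shell
# Bring nodes back up roughly 5 minutes apart, as root on marvin.
# Sketch: pwonck is the local power-on script mentioned above.
bring_up_nodes() {
    for n in "$@"; do
        echo "powering on node $n"
        pwonck "$n" || echo "pwonck failed for node $n" >&2
        sleep 300    # wait 5 minutes before touching the next node
    done
}
# usage (as root):  bring_up_nodes 1 2 3 4 5
```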
&lt;br /&gt;
== Try to get console access to the frontend ==&lt;br /&gt;
&lt;br /&gt;
This can be solved with IPMI. IPMI IP is 138.251.13.220, username is ADMIN and the password is in the red folder. &lt;br /&gt;
&lt;br /&gt;
There are various options:&lt;br /&gt;
&lt;br /&gt;
* via the &amp;#039;&amp;#039;&amp;#039;ipmiconfig&amp;#039;&amp;#039;&amp;#039; tool, this is command line only.&lt;br /&gt;
* via the IPMIView tool, GUI.&lt;br /&gt;
* via the IPMI device&amp;#039;s webserver&lt;br /&gt;
* via the SOL (part of ipmiconfig)&lt;br /&gt;
&lt;br /&gt;
SOL is closest to being at the terminal, with the added advantage of being able to use linux screen&amp;#039;s history capability to record a session. Unfortunately, it seldom works. The webserver and the IPMIView tool have an alternative console program using java, termed &amp;quot;KVM&amp;quot;. This uses the IcedTea jnlp environment, but recently it has been demanding keys (only for marvin) and will probably not work.&lt;br /&gt;
&lt;br /&gt;
Also, many of these tools are rather old, which is often not a problem if the tool&amp;#039;s function is simple. For example, &amp;quot;power on&amp;quot; and &amp;quot;power off&amp;quot; are simple functions. The terminal program is not a simple function, and it may require an old version of java to run (e.g. version 6).&lt;br /&gt;
&lt;br /&gt;
== startup services==&lt;br /&gt;
&lt;br /&gt;
All important startup services are launched automatically; however, it is a good idea to verify this using the &amp;quot;chkconfig&amp;quot; command, e.g. to check the runlevels (runlevel 5 being the most important) of the network time daemon (ntpd):&lt;br /&gt;
&lt;br /&gt;
 chkconfig --list ntpd&lt;br /&gt;
&lt;br /&gt;
To enable it for automatic startup: &lt;br /&gt;
&lt;br /&gt;
 chkconfig --level 35 ntpd on&lt;br /&gt;
&lt;br /&gt;
which also ensures it launches even when on the minimal level 3.&lt;br /&gt;
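The same check can be looped over every critical service; a minimal sketch, assuming chkconfig is available and that the service names in the usage line match this cluster.

```shell
# Verify that a service is enabled for runlevels 3 and 5 (sketch; the
# service names in the usage line below are assumptions).
check_service() {
    if chkconfig --list "$1" 2>/dev/null | grep -q '3:on.*5:on'; then
        echo "$1: ok"
    else
        echo "$1: NOT enabled for runlevels 3/5" >&2
        return 1
    fi
}
# usage:  for s in ntpd nfs sgemaster.p6444; do check_service "$s"; done
```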
&lt;br /&gt;
= The main critical issue =&lt;br /&gt;
&lt;br /&gt;
All in all, restarting marvin simply means typing &amp;quot;reboot&amp;quot;, and it will power down and then power up. This happens smoothly on the nodes, for example, and fortunately means that no BIOS interaction (key presses) is required (however, the baseboard event logger sometimes fills up and may disrupt this by asking for a key press, bringing the boot-up process to a total stop).&lt;br /&gt;
&lt;br /&gt;
Something else, however, causes an interruption in the boot-up of the frontend: the automatic mounting of the STORAGE filesystem, which holds all the users&amp;#039; home directories. This is a networked storage system, but the marvin system configures it under LVM (the Logical Volume Management system), and when one is ready the other one isn&amp;#039;t, which stalls the automatic procedure. Manual intervention is therefore required. One can detect this happening by running &amp;#039;&amp;#039;&amp;#039;vgdisplay&amp;#039;&amp;#039;&amp;#039; and noticing that the STORAGE volume is unavailable.&lt;br /&gt;
&lt;br /&gt;
Because this discrepancy halts the boot-up process, the mount must be performed manually, and the mount directive in &amp;#039;&amp;#039;&amp;#039;/etc/fstab&amp;#039;&amp;#039;&amp;#039; should always be commented out. In any case, the manual command is very simple: one just performs &amp;quot;reboot&amp;quot; on marvin, and once it is back on line (which could take as long as 10 minutes), the following command should be invoked:&lt;br /&gt;
&lt;br /&gt;
 vgchange -a y STORAGE&lt;br /&gt;
&lt;br /&gt;
One should check via &amp;quot;vgdisplay&amp;quot; that STORAGE is now available; one can then uncomment the appropriate line in &amp;#039;&amp;#039;&amp;#039;/etc/fstab&amp;#039;&amp;#039;&amp;#039; and run&lt;br /&gt;
&lt;br /&gt;
 mount /storage&lt;br /&gt;
&lt;br /&gt;
Of course this should then be followed by the commenting out (once again) of the storage line in &amp;#039;&amp;#039;&amp;#039;/etc/fstab&amp;#039;&amp;#039;&amp;#039;.&lt;br /&gt;
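This manual procedure can be wrapped with a couple of checks; a sketch assuming the standard LVM tools (vgchange, vgdisplay) on marvin.

```shell
# Manually activate and mount STORAGE after a frontend reboot (sketch).
mount_storage() {
    vgchange -a y STORAGE || { echo "vgchange failed" >&2; return 1; }
    # confirm the volume group really is available before mounting
    if ! vgdisplay STORAGE 2>/dev/null | grep -q 'available'; then
        echo "STORAGE still unavailable" >&2
        return 1
    fi
    mount /storage && echo "/storage mounted"
    # afterwards, re-comment the STORAGE line in /etc/fstab by hand
}
```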
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
NOTE: if you get many seemingly unfixable stale file handle problems, the following sequence has worked:&lt;br /&gt;
&lt;br /&gt;
# on head node&lt;br /&gt;
 showmount -e marvin&lt;br /&gt;
 exportfs -f&lt;br /&gt;
 vgchange -a y STORAGE&lt;br /&gt;
 mount /storage&lt;br /&gt;
&lt;br /&gt;
# on the slaves&lt;br /&gt;
 showmount -e marvin&lt;br /&gt;
 service nfs restart&lt;br /&gt;
 service nfs restart&lt;br /&gt;
 mount -a&lt;br /&gt;
 df -h&lt;br /&gt;
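The head-node and node-side steps above can be combined into one helper run from marvin. A sketch only: the node names and the doubled nfs restart are taken from the notes above.

```shell
# Recover from widespread stale NFS file handles (sketch, run as root on marvin).
recover_stale_handles() {
    showmount -e marvin            # sanity check: what is exported?
    exportfs -f                    # flush the NFS export table
    vgchange -a y STORAGE
    mount /storage
    for n in "$@"; do              # then fix up each node over ssh
        ssh "node$n" 'service nfs restart; service nfs restart; mount -a; df -h'
        echo "node $n remounted"
    done
}
# usage:  recover_stale_handles 1 2 3
```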
&lt;br /&gt;
= Provisos =&lt;br /&gt;
&lt;br /&gt;
Restarting marvin is a major operation, as all running jobs are lost.&lt;br /&gt;
&lt;br /&gt;
It is therefore necessary to advise all users well in advance, as to when it might happen.&lt;br /&gt;
&lt;br /&gt;
= sungrid engine not running on the centos 7 nodes=&lt;br /&gt;
Restart the services. cd into the init directory:&lt;br /&gt;
&lt;br /&gt;
 cd /etc/init.d&lt;br /&gt;
&lt;br /&gt;
http://www.softpanorama.org/HPC/Grid_engine/Troubleshooting/starting_and_killing_sge_daemons.shtml&lt;br /&gt;
&lt;br /&gt;
 cd /etc/init.d&lt;br /&gt;
&lt;br /&gt;
 ./sgeexecd.p6444 stop&lt;br /&gt;
   Shutting down Grid Engine execution daemon&lt;br /&gt;
 ./sgemaster.p6444 stop&lt;br /&gt;
   shutting down Grid Engine qmaster&lt;br /&gt;
&lt;br /&gt;
 service sgeexecd.p6444 stop &amp;amp;&amp;amp; service sgeexecd.p6444 start&lt;br /&gt;
&lt;br /&gt;
If it fails due to a &amp;quot;shepherd of job&amp;quot; error, try&lt;br /&gt;
&lt;br /&gt;
 ./sgeexecd.p6444 softstop  # does not kill shepherd jobs&lt;br /&gt;
&lt;br /&gt;
In our case this did not work.&lt;br /&gt;
&lt;br /&gt;
also useful:&lt;br /&gt;
&lt;br /&gt;
https://www.linuxquestions.org/questions/linux-newbie-8/installing-gridengine-in-centos-7-a-4175596488-print/&lt;br /&gt;
&lt;br /&gt;
This does work: just run the start script directly:&lt;br /&gt;
 /etc/init.d/sgeexecd.p6444&lt;br /&gt;
&lt;br /&gt;
 chkconfig sgeexecd.p6444 on&lt;/div&gt;</summary>
		<author><name>PeterThorpe</name></author>	</entry>

	<entry>
		<id>http://stab.st-andrews.ac.uk/wiki/index.php?title=Frontend_Restart&amp;diff=3497</id>
		<title>Frontend Restart</title>
		<link rel="alternate" type="text/html" href="http://stab.st-andrews.ac.uk/wiki/index.php?title=Frontend_Restart&amp;diff=3497"/>
				<updated>2020-05-19T18:05:53Z</updated>
		
		<summary type="html">&lt;p&gt;PeterThorpe: /* sungrid engine not running on the centos 7 nodes */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction =&lt;br /&gt;
&lt;br /&gt;
This is a highly critical operation: although most of the possible pitfalls are documented here, there is plenty of scope for new ones which could render the entire cluster inaccessible. This would not mean lasting damage, but rather delays on the order of days, possibly even over a week.&lt;br /&gt;
&lt;br /&gt;
Because of various precautions and checks the process usually requires 2-3 hours.&lt;br /&gt;
&lt;br /&gt;
Given these aspects, the question may be asked: why do it? The answer is to update the software, specifically, the kernel software.&lt;br /&gt;
&lt;br /&gt;
Short process:&lt;br /&gt;
* Give plenty of warning&lt;br /&gt;
* Make sure the STORAGE line in fstab is commented out&lt;br /&gt;
* shutdown all the nodes&lt;br /&gt;
* Double check the STORAGE line in fstab is commented out&lt;br /&gt;
* reboot Marvin&lt;br /&gt;
* re-mount the STORAGE partition&lt;br /&gt;
* comment out the STORAGE line in fstab&lt;br /&gt;
* bring the nodes back up with ipmiconfig&lt;br /&gt;
&lt;br /&gt;
= Measures =&lt;br /&gt;
&lt;br /&gt;
== Bring all nodes down  before restart ==&lt;br /&gt;
First edit &amp;#039;&amp;#039;/etc/fstab&amp;#039;&amp;#039;: the lines after the &amp;quot;The Three vital LVs&amp;quot; comment must be commented out.&lt;br /&gt;
&lt;br /&gt;
To bring a node down, log into the node via ssh and, as super user, run&lt;br /&gt;
 shutdown now&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This is possibly the most useful measure. Primarily, it is due to the nodes using marvin to keep various filesystems mounted, and the havoc they experience when marvin stops doing this. NFS4 stale filehandles then appear and are hard to get rid of. This measure is not immediately obvious, because all the nodes are updated on a rolling basis and often do not need to be switched off.&lt;br /&gt;
&lt;br /&gt;
And then, when marvin is back up, and once its filesystems are verified, the nodes may be brought back up. It is best not to bring them all up at once, so that they don&amp;#039;t all grab the central filesystem at the same time. They may be brought up, say, 5 minutes apart. This is simply another precaution.&lt;br /&gt;
&lt;br /&gt;
Of course this seems like quite a lot of extra work, but it is worth it in terms of saving later debugging time. It is closely related to the necessary manual mounting of the STORAGE volume documented below.&lt;br /&gt;
&lt;br /&gt;
Nodes can be restarted with&lt;br /&gt;
 pwonck &amp;lt;node number&amp;gt;&lt;br /&gt;
or &lt;br /&gt;
 pwcycck &amp;lt;node number&amp;gt; &lt;br /&gt;
as root.&lt;br /&gt;
&lt;br /&gt;
== Try to get console access to the frontend ==&lt;br /&gt;
&lt;br /&gt;
This can be solved with IPMI. IPMI IP is 138.251.13.220, username is ADMIN and the password is in the red folder. &lt;br /&gt;
&lt;br /&gt;
There are various options:&lt;br /&gt;
&lt;br /&gt;
* via the &amp;#039;&amp;#039;&amp;#039;ipmiconfig&amp;#039;&amp;#039;&amp;#039; tool, this is command line only.&lt;br /&gt;
* via the IPMIView tool, GUI.&lt;br /&gt;
* via the IPMI device&amp;#039;s webserver&lt;br /&gt;
* via the SOL (part of ipmiconfig)&lt;br /&gt;
&lt;br /&gt;
SOL is closest to being at the terminal, with the added advantage of being able to use linux screen&amp;#039;s history capability to record a session. Unfortunately, it seldom works. The webserver and the IPMIView tool have an alternative console program using java, termed &amp;quot;KVM&amp;quot;. This uses the Iced Tea jnlp environment, but recently it has been demanding keys (only for marvin) and will probably not work.&lt;br /&gt;
&lt;br /&gt;
Also, many of these tools are rather old, which is often not a problem if the tool&amp;#039;s function is simple; for example, &amp;quot;power on&amp;quot; and &amp;quot;power off&amp;quot; are simple functions. The terminal program is not a simple function, and sometimes it may require an old version of java to run (i.e. version 6).&lt;br /&gt;
&lt;br /&gt;
== startup services==&lt;br /&gt;
&lt;br /&gt;
All important startup services are launched automatically; however, it is a good idea to verify this using the &amp;quot;chkconfig&amp;quot; command, e.g. to check the runlevels (runlevel 5 being the most important) for the network time daemon (ntpd):&lt;br /&gt;
&lt;br /&gt;
 chkconfig --list ntpd&lt;br /&gt;
&lt;br /&gt;
To enable it for automatic startup: &lt;br /&gt;
&lt;br /&gt;
 chkconfig ntpd on --level 35&lt;br /&gt;
&lt;br /&gt;
which also ensures it launches even when on the minimal level 3.&lt;br /&gt;
&lt;br /&gt;
= The main critical issue =&lt;br /&gt;
&lt;br /&gt;
All in all, restarting marvin simply means typing &amp;quot;reboot&amp;quot;, and it will power down and then power up. This also happens smoothly on the nodes, which is very fortunate, because it means that no BIOS interaction (key presses) is required (however, the baseboard event logger sometimes fills up and may disrupt this by asking for a key press, bringing the boot-up process to a total stop).&lt;br /&gt;
&lt;br /&gt;
Something else, however, causes an interruption in the boot-up of the frontend: the automatic mounting of the STORAGE filesystem, which holds all the users&amp;#039; home directories. This is a networked storage system, but the marvin system configures it under LVM (the Logical Volume Management system), and when one is ready the other one isn&amp;#039;t, which stalls the automatic procedure. Manual intervention is therefore required. One can detect this happening by running &amp;#039;&amp;#039;&amp;#039;vgdisplay&amp;#039;&amp;#039;&amp;#039; and noticing that the STORAGE volume is unavailable.&lt;br /&gt;
&lt;br /&gt;
Because this discrepancy halts the boot-up process, the mounting must be done manually, and the mount directive in &amp;#039;&amp;#039;&amp;#039;/etc/fstab&amp;#039;&amp;#039;&amp;#039; should always be commented out. In any case, the manual command is very simple: one just performs &amp;quot;reboot&amp;quot; on marvin, and once it is back on line (which could take as long as 10 minutes), the following command should be invoked:&lt;br /&gt;
&lt;br /&gt;
 vgchange -a y STORAGE&lt;br /&gt;
&lt;br /&gt;
One should check via &amp;quot;vgdisplay&amp;quot; that STORAGE is now available; one can then uncomment the appropriate line in &amp;#039;&amp;#039;&amp;#039;/etc/fstab&amp;#039;&amp;#039;&amp;#039; and run&lt;br /&gt;
&lt;br /&gt;
 mount /storage&lt;br /&gt;
&lt;br /&gt;
Of course this should then be followed by the commenting out (once again) of the storage line in &amp;#039;&amp;#039;&amp;#039;/etc/fstab&amp;#039;&amp;#039;&amp;#039;.&lt;br /&gt;
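The manual sequence above (activate, verify, mount) can be sketched as one short script. This is a hedged sketch assuming the volume group is named STORAGE and the mountpoint is /storage, as described above; since the real commands need root and the cluster&amp;#039;s LVM setup, DRY_RUN=1 (the default here) prints each step instead of executing it.&lt;br /&gt;

```shell
# Sketch of the manual STORAGE remount sequence described above.
# Assumes volume group STORAGE and mountpoint /storage; DRY_RUN=1
# (the default here) prints each step instead of executing it.
DRY_RUN="${DRY_RUN:-1}"
run() { if [ "$DRY_RUN" = 1 ]; then echo "would run: $*"; else "$@"; fi; }
run vgchange -a y STORAGE   # activate the volume group
run vgdisplay STORAGE       # verify it is now available
run mount /storage          # mount (after uncommenting the fstab line)
```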
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
NOTE: if you get many unfixable stale file handle problems, try the following:&lt;br /&gt;
&lt;br /&gt;
# on head node&lt;br /&gt;
 showmount -e marvin&lt;br /&gt;
 exportfs -f&lt;br /&gt;
 vgchange -a y STORAGE&lt;br /&gt;
 mount /storage&lt;br /&gt;
&lt;br /&gt;
# on the slaves&lt;br /&gt;
 showmount -e marvin&lt;br /&gt;
 service nfs restart&lt;br /&gt;
 service nfs restart&lt;br /&gt;
 mount -a&lt;br /&gt;
 df -h&lt;br /&gt;
&lt;br /&gt;
= Provisos =&lt;br /&gt;
&lt;br /&gt;
Restarting marvin is a major operation, as all running jobs are lost.&lt;br /&gt;
&lt;br /&gt;
It is therefore necessary to advise all users well in advance, as to when it might happen.&lt;br /&gt;
&lt;br /&gt;
= Sun Grid Engine not running on the CentOS 7 nodes =&lt;br /&gt;
Restart the services.&lt;br /&gt;
&lt;br /&gt;
cd into&lt;br /&gt;
 cd /etc/init.d&lt;br /&gt;
&lt;br /&gt;
http://www.softpanorama.org/HPC/Grid_engine/Troubleshooting/starting_and_killing_sge_daemons.shtml&lt;br /&gt;
&lt;br /&gt;
 ./sgeexecd.p6444 stop&lt;br /&gt;
    Shutting down Grid Engine execution daemon&lt;br /&gt;
 ./sgemaster.p6444 stop&lt;br /&gt;
    shutting down Grid Engine qmaster&lt;br /&gt;
&lt;br /&gt;
 service sgeexec.p6444 stop &amp;amp;&amp;amp; service sgeexec.p6444 start&lt;br /&gt;
&lt;br /&gt;
if it fails due to &amp;quot;shepherd of job&amp;quot;&lt;br /&gt;
&lt;br /&gt;
then&lt;br /&gt;
&lt;br /&gt;
 ./sgeexecd.p6444 softstop  # does not kill shepherd jobs&lt;/div&gt;</summary>
		<author><name>PeterThorpe</name></author>	</entry>

	<entry>
		<id>http://stab.st-andrews.ac.uk/wiki/index.php?title=Frontend_Restart&amp;diff=3496</id>
		<title>Frontend Restart</title>
		<link rel="alternate" type="text/html" href="http://stab.st-andrews.ac.uk/wiki/index.php?title=Frontend_Restart&amp;diff=3496"/>
				<updated>2020-05-19T17:39:05Z</updated>
		
		<summary type="html">&lt;p&gt;PeterThorpe: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction =&lt;br /&gt;
&lt;br /&gt;
This is a highly critical operation, and though most of the possible pitfalls are documented here, there is plenty of scope for new ones which could render the entire cluster inaccessible. This does not mean lasting damage, but rather delays on the order of days, maybe even over a week.&lt;br /&gt;
&lt;br /&gt;
Because of various precautions and checks the process usually requires 2-3 hours.&lt;br /&gt;
&lt;br /&gt;
Given these aspects, the question may be asked: why do it? The answer is to update the software, specifically, the kernel software.&lt;br /&gt;
&lt;br /&gt;
Short process:&lt;br /&gt;
* Give plenty of warning&lt;br /&gt;
* Make sure the STORAGE line in fstab is commented out&lt;br /&gt;
* shutdown all the nodes&lt;br /&gt;
* Double check the STORAGE line in fstab is commented out&lt;br /&gt;
* reboot Marvin&lt;br /&gt;
* re-mount the STORAGE partition&lt;br /&gt;
* comment out the STORAGE line in fstab&lt;br /&gt;
* bring the nodes back up with ipmiconfig&lt;br /&gt;
&lt;br /&gt;
= Measures =&lt;br /&gt;
&lt;br /&gt;
== Bring all nodes down  before restart ==&lt;br /&gt;
First, in &amp;#039;&amp;#039;/etc/fstab&amp;#039;&amp;#039;, comment out the lines after &amp;quot;The Three vital LVs&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
To bring a node down, log into it via ssh as superuser and run&lt;br /&gt;
 shutdown now&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This is possibly the most useful measure. Primarily, it is due to the nodes using marvin to keep various filesystems mounted, and the havoc they experience when marvin stops doing this. NFS4 stale filehandles then appear and are hard to get rid of. This measure is not immediately obvious, because all the nodes are updated on a rolling basis and often do not need to be switched off.&lt;br /&gt;
&lt;br /&gt;
And then, when marvin is back up, and once its filesystems are verified, the nodes may be brought back up. It is best not to bring them all up at once, so that they don&amp;#039;t immediately grab the central filesystem at the same time; bring them up, say, 5 minutes apart. This is simply another precaution.&lt;br /&gt;
&lt;br /&gt;
Of course this seems like quite a lot of extra work, but it&amp;#039;s worth it in terms of saving later debugging time. It&amp;#039;s clearly related to the necessary manual mounting of the STORAGE volume documented below.&lt;br /&gt;
&lt;br /&gt;
Nodes can be restarted with&lt;br /&gt;
 pwonck &amp;lt;node number&amp;gt;&lt;br /&gt;
or &lt;br /&gt;
 pwcycck &amp;lt;node number&amp;gt; &lt;br /&gt;
as root.&lt;br /&gt;
&lt;br /&gt;
== Try to get console access to the frontend ==&lt;br /&gt;
&lt;br /&gt;
This can be solved with IPMI. IPMI IP is 138.251.13.220, username is ADMIN and the password is in the red folder. &lt;br /&gt;
&lt;br /&gt;
There are various options:&lt;br /&gt;
&lt;br /&gt;
* via the &amp;#039;&amp;#039;&amp;#039;ipmiconfig&amp;#039;&amp;#039;&amp;#039; tool, this is command line only.&lt;br /&gt;
* via the IPMIView tool, GUI.&lt;br /&gt;
* via the IPMI device&amp;#039;s webserver&lt;br /&gt;
* via the SOL (part of ipmiconfig)&lt;br /&gt;
&lt;br /&gt;
SOL is closest to being at the terminal, with the added advantage of being able to use linux screen&amp;#039;s history capability to record a session. Unfortunately, it seldom works. The webserver and the IPMIView tool have an alternative console program using java, termed &amp;quot;KVM&amp;quot;. This uses the Iced Tea jnlp environment, but recently it has been demanding keys (only for marvin) and will probably not work.&lt;br /&gt;
&lt;br /&gt;
Also, many of these tools are rather old, which is often not a problem if the tool&amp;#039;s function is simple; for example, &amp;quot;power on&amp;quot; and &amp;quot;power off&amp;quot; are simple functions. The terminal program is not a simple function, and sometimes it may require an old version of java to run (i.e. version 6).&lt;br /&gt;
&lt;br /&gt;
== startup services==&lt;br /&gt;
&lt;br /&gt;
All important startup services are launched automatically; however, it is a good idea to verify this using the &amp;quot;chkconfig&amp;quot; command, e.g. to check the runlevels (runlevel 5 being the most important) for the network time daemon (ntpd):&lt;br /&gt;
&lt;br /&gt;
 chkconfig --list ntpd&lt;br /&gt;
&lt;br /&gt;
To enable it for automatic startup: &lt;br /&gt;
&lt;br /&gt;
 chkconfig ntpd on --level 35&lt;br /&gt;
&lt;br /&gt;
which also ensures it launches even when on the minimal level 3.&lt;br /&gt;
&lt;br /&gt;
= The main critical issue =&lt;br /&gt;
&lt;br /&gt;
All in all, restarting marvin simply means typing &amp;quot;reboot&amp;quot;, and it will power down and then power up. This also happens smoothly on the nodes, which is very fortunate, because it means that no BIOS interaction (key presses) is required (however, the baseboard event logger sometimes fills up and may disrupt this by asking for a key press, bringing the boot-up process to a total stop).&lt;br /&gt;
&lt;br /&gt;
Something else, however, causes an interruption in the boot-up of the frontend: the automatic mounting of the STORAGE filesystem, which holds all the users&amp;#039; home directories. This is a networked storage system, but the marvin system configures it under LVM (the Logical Volume Management system), and when one is ready the other one isn&amp;#039;t, which stalls the automatic procedure. Manual intervention is therefore required. One can detect this happening by running &amp;#039;&amp;#039;&amp;#039;vgdisplay&amp;#039;&amp;#039;&amp;#039; and noticing that the STORAGE volume is unavailable.&lt;br /&gt;
&lt;br /&gt;
Because this discrepancy halts the boot-up process, the mounting must be done manually, and the mount directive in &amp;#039;&amp;#039;&amp;#039;/etc/fstab&amp;#039;&amp;#039;&amp;#039; should always be commented out. In any case, the manual command is very simple: one just performs &amp;quot;reboot&amp;quot; on marvin, and once it is back on line (which could take as long as 10 minutes), the following command should be invoked:&lt;br /&gt;
&lt;br /&gt;
 vgchange -a y STORAGE&lt;br /&gt;
&lt;br /&gt;
One should check via &amp;quot;vgdisplay&amp;quot; that STORAGE is now available; one can then uncomment the appropriate line in &amp;#039;&amp;#039;&amp;#039;/etc/fstab&amp;#039;&amp;#039;&amp;#039; and run&lt;br /&gt;
&lt;br /&gt;
 mount /storage&lt;br /&gt;
&lt;br /&gt;
Of course this should then be followed by the commenting out (once again) of the storage line in &amp;#039;&amp;#039;&amp;#039;/etc/fstab&amp;#039;&amp;#039;&amp;#039;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
NOTE: if you get many unfixable stale file handle problems, try the following:&lt;br /&gt;
&lt;br /&gt;
# on head node&lt;br /&gt;
 showmount -e marvin&lt;br /&gt;
 exportfs -f&lt;br /&gt;
 vgchange -a y STORAGE&lt;br /&gt;
 mount /storage&lt;br /&gt;
&lt;br /&gt;
# on the slaves&lt;br /&gt;
 showmount -e marvin&lt;br /&gt;
 service nfs restart&lt;br /&gt;
 service nfs restart&lt;br /&gt;
 mount -a&lt;br /&gt;
 df -h&lt;br /&gt;
&lt;br /&gt;
= Provisos =&lt;br /&gt;
&lt;br /&gt;
Restarting marvin is a major operation, as all running jobs are lost.&lt;br /&gt;
&lt;br /&gt;
It is therefore necessary to advise all users well in advance, as to when it might happen.&lt;br /&gt;
&lt;br /&gt;
= Sun Grid Engine not running on the CentOS 7 nodes =&lt;br /&gt;
Restart the services.&lt;br /&gt;
&lt;br /&gt;
cd into&lt;br /&gt;
 cd /etc/init.d&lt;br /&gt;
&lt;br /&gt;
http://www.softpanorama.org/HPC/Grid_engine/Troubleshooting/starting_and_killing_sge_daemons.shtml&lt;br /&gt;
&lt;br /&gt;
 ./sgeexecd.p6444 stop&lt;br /&gt;
    Shutting down Grid Engine execution daemon&lt;br /&gt;
 ./sgemaster.p6444 stop&lt;br /&gt;
    shutting down Grid Engine qmaster&lt;br /&gt;
&lt;br /&gt;
 service sgeexec.p6444 stop &amp;amp;&amp;amp; service sgeexec.p6444 start&lt;/div&gt;</summary>
		<author><name>PeterThorpe</name></author>	</entry>

	<entry>
		<id>http://stab.st-andrews.ac.uk/wiki/index.php?title=Kennedy_manual&amp;diff=3495</id>
		<title>Kennedy manual</title>
		<link rel="alternate" type="text/html" href="http://stab.st-andrews.ac.uk/wiki/index.php?title=Kennedy_manual&amp;diff=3495"/>
				<updated>2020-05-07T11:48:46Z</updated>
		
		<summary type="html">&lt;p&gt;PeterThorpe: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
*[[quick start]]&lt;br /&gt;
&lt;br /&gt;
#[[creating ssh keys and logging on]]&lt;br /&gt;
#[[logging on to Kennedy]]&lt;br /&gt;
&lt;br /&gt;
*[[pull and push to to and from MARVIN]]&lt;br /&gt;
&lt;br /&gt;
*[[filezilla data transfer]]&lt;br /&gt;
&lt;br /&gt;
*[[slurm commands]]&lt;br /&gt;
&lt;br /&gt;
*[[submit a job and monitor queues]]&lt;br /&gt;
&lt;br /&gt;
*[[samba like connection]]&lt;br /&gt;
&lt;br /&gt;
*[[Conda]]&lt;br /&gt;
 CONDA: http://stab.st-andrews.ac.uk/wiki/index.php/Conda#conda&lt;br /&gt;
 Training: github.com/peterthorpe5/Sys_admin/tree/master/cluster_course&lt;br /&gt;
Note: on Kennedy, you may need to make programs executable before they will run. This is a known bug. Locate where the install has just gone (often in the lib folder), move to that directory and run&lt;br /&gt;
 chmod u+x prog(s)_of_interest&lt;br /&gt;
&lt;br /&gt;
The error that indicates this problem is:&lt;br /&gt;
 libptf77blas.so.3: cannot open shared object file&lt;br /&gt;
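A hedged sketch of the fix above: find every non-executable file under the install directory and mark it executable. INSTALL_DIR here is a stand-in created purely for this demo; point the find command at wherever the install actually went.&lt;br /&gt;

```shell
# Hypothetical illustration of the chmod fix above. INSTALL_DIR is a
# stand-in directory created for this demo -- substitute the real
# location of the freshly installed programs.
INSTALL_DIR="$(mktemp -d)/bin"
mkdir -p "$INSTALL_DIR"
touch "$INSTALL_DIR/prog_of_interest"   # stand-in for the real program
# Mark every file that lacks the user-execute bit as executable:
find "$INSTALL_DIR" -type f ! -perm -u+x -exec chmod u+x {} +
```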
&lt;br /&gt;
==syntax highlighting in Nano==&lt;br /&gt;
do the following&lt;br /&gt;
 cp /gpfs1/apps/kennedy_sys_admin/misc/.nanorc ./&lt;/div&gt;</summary>
		<author><name>PeterThorpe</name></author>	</entry>

	<entry>
		<id>http://stab.st-andrews.ac.uk/wiki/index.php?title=Kennedy_manual&amp;diff=3494</id>
		<title>Kennedy manual</title>
		<link rel="alternate" type="text/html" href="http://stab.st-andrews.ac.uk/wiki/index.php?title=Kennedy_manual&amp;diff=3494"/>
				<updated>2020-05-07T11:47:50Z</updated>
		
		<summary type="html">&lt;p&gt;PeterThorpe: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
*[[quick start]]&lt;br /&gt;
&lt;br /&gt;
#[[creating ssh keys and logging on]]&lt;br /&gt;
#[[logging on to Kennedy]]&lt;br /&gt;
&lt;br /&gt;
*[[pull and push to to and from MARVIN]]&lt;br /&gt;
&lt;br /&gt;
*[[filezilla data transfer]]&lt;br /&gt;
&lt;br /&gt;
*[[slurm commands]]&lt;br /&gt;
&lt;br /&gt;
*[[submit a job and monitor queues]]&lt;br /&gt;
&lt;br /&gt;
*[[samba like connection]]&lt;br /&gt;
&lt;br /&gt;
*[[Conda]]&lt;br /&gt;
 CONDA: http://stab.st-andrews.ac.uk/wiki/index.php/Conda#conda&lt;br /&gt;
 Training: github.com/peterthorpe5/Sys_admin/tree/master/cluster_course&lt;br /&gt;
Note: on Kennedy, you may need to make programs executable before they will run. This is a known bug. Locate where the install has just gone, move to that directory and run chmod u+x prog_of_interest. The error that indicates this problem is:&lt;br /&gt;
 libptf77blas.so.3: cannot open shared object file&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==syntax highlighting in Nano==&lt;br /&gt;
do the following&lt;br /&gt;
 cp /gpfs1/apps/kennedy_sys_admin/misc/.nanorc ./&lt;/div&gt;</summary>
		<author><name>PeterThorpe</name></author>	</entry>

	<entry>
		<id>http://stab.st-andrews.ac.uk/wiki/index.php?title=Kennedy_manual&amp;diff=3493</id>
		<title>Kennedy manual</title>
		<link rel="alternate" type="text/html" href="http://stab.st-andrews.ac.uk/wiki/index.php?title=Kennedy_manual&amp;diff=3493"/>
				<updated>2020-05-06T12:35:06Z</updated>
		
		<summary type="html">&lt;p&gt;PeterThorpe: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
*[[quick start]]&lt;br /&gt;
&lt;br /&gt;
#[[creating ssh keys and logging on]]&lt;br /&gt;
#[[logging on to Kennedy]]&lt;br /&gt;
&lt;br /&gt;
*[[pull and push to to and from MARVIN]]&lt;br /&gt;
&lt;br /&gt;
*[[filezilla data transfer]]&lt;br /&gt;
&lt;br /&gt;
*[[slurm commands]]&lt;br /&gt;
&lt;br /&gt;
*[[submit a job and monitor queues]]&lt;br /&gt;
&lt;br /&gt;
*[[samba like connection]]&lt;br /&gt;
&lt;br /&gt;
*[[Conda]]&lt;br /&gt;
 CONDA: http://stab.st-andrews.ac.uk/wiki/index.php/Conda#conda&lt;br /&gt;
 Training: github.com/peterthorpe5/Sys_admin/tree/master/cluster_course&lt;br /&gt;
Note: on Kennedy, you may need to make programs executable before they will run. This is a known bug. Locate where the install has just gone, move to that directory and run chmod u+x prog_of_interest&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==syntax highlighting in Nano==&lt;br /&gt;
do the following&lt;br /&gt;
 cp /gpfs1/apps/kennedy_sys_admin/misc/.nanorc ./&lt;/div&gt;</summary>
		<author><name>PeterThorpe</name></author>	</entry>

	<entry>
		<id>http://stab.st-andrews.ac.uk/wiki/index.php?title=Kennedy_manual&amp;diff=3492</id>
		<title>Kennedy manual</title>
		<link rel="alternate" type="text/html" href="http://stab.st-andrews.ac.uk/wiki/index.php?title=Kennedy_manual&amp;diff=3492"/>
				<updated>2020-05-06T12:32:54Z</updated>
		
		<summary type="html">&lt;p&gt;PeterThorpe: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
*[[quick start]]&lt;br /&gt;
&lt;br /&gt;
#[[creating ssh keys and logging on]]&lt;br /&gt;
#[[logging on to Kennedy]]&lt;br /&gt;
&lt;br /&gt;
*[[pull and push to to and from MARVIN]]&lt;br /&gt;
&lt;br /&gt;
*[[filezilla data transfer]]&lt;br /&gt;
&lt;br /&gt;
*[[slurm commands]]&lt;br /&gt;
&lt;br /&gt;
*[[submit a job and monitor queues]]&lt;br /&gt;
&lt;br /&gt;
*[[samba like connection]]&lt;br /&gt;
&lt;br /&gt;
*[[Conda]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==syntax highlighting in Nano==&lt;br /&gt;
do the following&lt;br /&gt;
 cp /gpfs1/apps/kennedy_sys_admin/misc/.nanorc ./&lt;/div&gt;</summary>
		<author><name>PeterThorpe</name></author>	</entry>

	<entry>
		<id>http://stab.st-andrews.ac.uk/wiki/index.php?title=Kennedy_manual&amp;diff=3491</id>
		<title>Kennedy manual</title>
		<link rel="alternate" type="text/html" href="http://stab.st-andrews.ac.uk/wiki/index.php?title=Kennedy_manual&amp;diff=3491"/>
				<updated>2020-05-06T12:31:34Z</updated>
		
		<summary type="html">&lt;p&gt;PeterThorpe: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
*[[quick start]]&lt;br /&gt;
&lt;br /&gt;
#[[creating ssh keys and logging on]]&lt;br /&gt;
#[[logging on to Kennedy]]&lt;br /&gt;
&lt;br /&gt;
*[[pull and push to to and from MARVIN]]&lt;br /&gt;
&lt;br /&gt;
*[[filezilla data transfer]]&lt;br /&gt;
&lt;br /&gt;
*[[slurm commands]]&lt;br /&gt;
&lt;br /&gt;
*[[submit a job and monitor queues]]&lt;br /&gt;
&lt;br /&gt;
*[[samba like connection]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==syntax highlighting in Nano==&lt;br /&gt;
do the following&lt;br /&gt;
 cp /gpfs1/apps/kennedy_sys_admin/misc/.nanorc ./&lt;/div&gt;</summary>
		<author><name>PeterThorpe</name></author>	</entry>

	<entry>
		<id>http://stab.st-andrews.ac.uk/wiki/index.php?title=Samba_like_connection&amp;diff=3490</id>
		<title>Samba like connection</title>
		<link rel="alternate" type="text/html" href="http://stab.st-andrews.ac.uk/wiki/index.php?title=Samba_like_connection&amp;diff=3490"/>
				<updated>2020-05-06T12:30:11Z</updated>
		
		<summary type="html">&lt;p&gt;PeterThorpe: Created page with &amp;quot;  == Windows net work connect in a samba like manner ==   http://stab.st-andrews.ac.uk/wiki/index.php/Windows_network_connect  You probably don&amp;#039;t know what samba is. This does...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&lt;br /&gt;
== Windows network connect in a samba like manner ==&lt;br /&gt;
&lt;br /&gt;
 http://stab.st-andrews.ac.uk/wiki/index.php/Windows_network_connect&lt;br /&gt;
&lt;br /&gt;
You probably don&amp;#039;t know what samba is. This doesn&amp;#039;t matter. All you need to know is: THIS IS COOL.&lt;br /&gt;
&lt;br /&gt;
Install these on your Windows or Linux PC. These are prebuilt installers, and all you need to do is double-click.&lt;br /&gt;
 sshfs-win: https://github.com/billziss-gh/sshfs-win  &lt;br /&gt;
 Requires WinFsp (https://github.com/billziss-gh/winfsp/releases/latest ) &lt;br /&gt;
 # to be installed with Cygwin FUSE support ticked in the installer.&lt;br /&gt;
&lt;br /&gt;
Now open Windows Explorer and go to “This PC”.&lt;br /&gt;
From the ribbon click “Map Network Drive”. Map this to a letter of your choice, e.g. k for kennedy&lt;br /&gt;
Enter:&lt;br /&gt;
&lt;br /&gt;
 \\sshfs.k\you@kennedy10&lt;br /&gt;
&lt;br /&gt;
For large file transfers, please use filezilla; I don&amp;#039;t know how stable this is!&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Mount the scratch; you still have to navigate to it, like so:==&lt;br /&gt;
 \\sshfs.k\USERNAME@kennedy10/../../scratch/bioinf/USERNAME&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To map the scratch, see the next slide.&lt;/div&gt;</summary>
		<author><name>PeterThorpe</name></author>	</entry>

	<entry>
		<id>http://stab.st-andrews.ac.uk/wiki/index.php?title=Kennedy_manual&amp;diff=3489</id>
		<title>Kennedy manual</title>
		<link rel="alternate" type="text/html" href="http://stab.st-andrews.ac.uk/wiki/index.php?title=Kennedy_manual&amp;diff=3489"/>
				<updated>2020-05-06T12:28:37Z</updated>
		
		<summary type="html">&lt;p&gt;PeterThorpe: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
*[[quick start]]&lt;br /&gt;
&lt;br /&gt;
#[[creating ssh keys and logging on]]&lt;br /&gt;
#[[logging on to Kennedy]]&lt;br /&gt;
&lt;br /&gt;
*[[pull and push to to and from MARVIN]]&lt;br /&gt;
&lt;br /&gt;
*[[filezilla data transfer]]&lt;br /&gt;
&lt;br /&gt;
*[[slurm commands]]&lt;br /&gt;
&lt;br /&gt;
*[[submit a job and monitor queues]]&lt;br /&gt;
&lt;br /&gt;
*[[samba like connection]]&lt;/div&gt;</summary>
		<author><name>PeterThorpe</name></author>	</entry>

	<entry>
		<id>http://stab.st-andrews.ac.uk/wiki/index.php?title=Submit_a_job_and_monitor_queues&amp;diff=3488</id>
		<title>Submit a job and monitor queues</title>
		<link rel="alternate" type="text/html" href="http://stab.st-andrews.ac.uk/wiki/index.php?title=Submit_a_job_and_monitor_queues&amp;diff=3488"/>
				<updated>2020-05-06T12:27:43Z</updated>
		
		<summary type="html">&lt;p&gt;PeterThorpe: Created page with &amp;quot; == submit a job ==  to submit  a shell which contains are the needed flags in the file.  sbatch spades.sh   ==Show information on the queues:==  smap  squeue  (show the queue...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== submit a job ==&lt;br /&gt;
&lt;br /&gt;
To submit a shell script which contains all the needed flags in the file:&lt;br /&gt;
 sbatch spades.sh&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Show information on the queues:==&lt;br /&gt;
 smap&lt;br /&gt;
 squeue  (show the queue)&lt;br /&gt;
 squeue -p bigmem  (show the queue for bigmem)&lt;br /&gt;
 sview&lt;br /&gt;
&lt;br /&gt;
==Interactive mode (like qrsh):==&lt;br /&gt;
 srun --pty bash -p bigmem           (Bioinf user : use the bigmem q)&lt;/div&gt;</summary>
		<author><name>PeterThorpe</name></author>	</entry>

	<entry>
		<id>http://stab.st-andrews.ac.uk/wiki/index.php?title=Kennedy_manual&amp;diff=3487</id>
		<title>Kennedy manual</title>
		<link rel="alternate" type="text/html" href="http://stab.st-andrews.ac.uk/wiki/index.php?title=Kennedy_manual&amp;diff=3487"/>
				<updated>2020-05-06T12:26:08Z</updated>
		
		<summary type="html">&lt;p&gt;PeterThorpe: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
*[[quick start]]&lt;br /&gt;
&lt;br /&gt;
#[[creating ssh keys and logging on]]&lt;br /&gt;
#[[logging on to Kennedy]]&lt;br /&gt;
&lt;br /&gt;
*[[pull and push to to and from MARVIN]]&lt;br /&gt;
&lt;br /&gt;
*[[filezilla data transfer]]&lt;br /&gt;
&lt;br /&gt;
*[[slurm commands]]&lt;br /&gt;
&lt;br /&gt;
*[[submit a job and monitor queues]]&lt;/div&gt;</summary>
		<author><name>PeterThorpe</name></author>	</entry>

	<entry>
		<id>http://stab.st-andrews.ac.uk/wiki/index.php?title=Slurm_commands&amp;diff=3486</id>
		<title>Slurm commands</title>
		<link rel="alternate" type="text/html" href="http://stab.st-andrews.ac.uk/wiki/index.php?title=Slurm_commands&amp;diff=3486"/>
				<updated>2020-05-06T12:25:14Z</updated>
		
		<summary type="html">&lt;p&gt;PeterThorpe: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&lt;br /&gt;
== slurm commands ==&lt;br /&gt;
&lt;br /&gt;
note the BIOINF community are supposed to use the -p bigmem queue. &lt;br /&gt;
&lt;br /&gt;
Sun Grid Engine (what we use on Marvin) to Slurm command conversion:&lt;br /&gt;
 https://srcc.stanford.edu/sge-slurm-conversion&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following are command line commands, or tags you can put in your shell script (or at the command line) to achieve certain functionality.&lt;br /&gt;
&lt;br /&gt;
 request_48_thread_1.3TBRAM&lt;br /&gt;
 #!/bin/bash -l  &amp;#039;&amp;#039;&amp;#039;# note: the -l is essential now&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
 #SBATCH -J fly_pilon   #jobname&lt;br /&gt;
 #SBATCH -N 1     #node&lt;br /&gt;
 #SBATCH --ntasks-per-node=48&lt;br /&gt;
 #SBATCH --threads-per-core=2&lt;br /&gt;
 #SBATCH -p bigmem&lt;br /&gt;
 #SBATCH --nodelist=kennedy150  # this is the specific node. This one has 1.5TB RAM&lt;br /&gt;
 #SBATCH --mem=1350GB&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 test_conda_activate&lt;br /&gt;
 #!/bin/bash -l&lt;br /&gt;
 #SBATCH -J conda_test   #jobname&lt;br /&gt;
 #SBATCH -N 1     #node&lt;br /&gt;
 #SBATCH --tasks-per-node=1&lt;br /&gt;
 #SBATCH -p bigmem    # bigmem is for the BIOINF community&lt;br /&gt;
 #SBATCH --mail-type=END     # email at the end of the job&lt;br /&gt;
 #SBATCH --mail-user=$USER@st-andrews.ac.uk      # your email address&lt;br /&gt;
&lt;br /&gt;
cd /gpfs1/home/$USER/&lt;br /&gt;
&lt;br /&gt;
pyv=&amp;quot;$(python -V 2&amp;gt;&amp;amp;1)&amp;quot;&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;$pyv&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# conda to activate the software&lt;br /&gt;
&lt;br /&gt;
echo $PATH&lt;br /&gt;
&lt;br /&gt;
conda activate spades&lt;br /&gt;
&lt;br /&gt;
pyv=&amp;quot;$(python -V 2&amp;gt;&amp;amp;1)&amp;quot;&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;$pyv&amp;quot;&lt;br /&gt;
&lt;br /&gt;
conda deactivate &lt;br /&gt;
&lt;br /&gt;
conda activate python27&lt;br /&gt;
&lt;br /&gt;
pyv=&amp;quot;$(python2 -V 2&amp;gt;&amp;amp;1)&amp;quot;&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;$pyv&amp;quot;&lt;br /&gt;
&lt;br /&gt;
 12threads_bigMem_30G_RAM&lt;br /&gt;
&lt;br /&gt;
 #!/bin/bash -l  # essential&lt;br /&gt;
 #SBATCH -J trimmo   #jobname&lt;br /&gt;
 #SBATCH -N 1     #node&lt;br /&gt;
 #SBATCH --ntasks-per-node=12&lt;br /&gt;
 #SBATCH --threads-per-core=2&lt;br /&gt;
 #SBATCH -p bigmem&lt;br /&gt;
 #SBATCH --mem=30GB&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Request an interactive job with one GPU:&lt;br /&gt;
 # srun --gres=gpu:1 -N 1 -p singlenode --pty /bin/bash&lt;/div&gt;</summary>
		<author><name>PeterThorpe</name></author>	</entry>

	<entry>
		<id>http://stab.st-andrews.ac.uk/wiki/index.php?title=File:---C--Users-pjt6-Desktop-Picture1.png&amp;diff=3485</id>
		<title>File:---C--Users-pjt6-Desktop-Picture1.png</title>
		<link rel="alternate" type="text/html" href="http://stab.st-andrews.ac.uk/wiki/index.php?title=File:---C--Users-pjt6-Desktop-Picture1.png&amp;diff=3485"/>
				<updated>2020-05-06T12:22:33Z</updated>
		
		<summary type="html">&lt;p&gt;PeterThorpe: PeterThorpe uploaded a new version of File:---C--Users-pjt6-Desktop-Picture1.png&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>PeterThorpe</name></author>	</entry>

	<entry>
		<id>http://stab.st-andrews.ac.uk/wiki/index.php?title=File:---C--Users-pjt6-Desktop-Picture1.png&amp;diff=3484</id>
		<title>File:---C--Users-pjt6-Desktop-Picture1.png</title>
		<link rel="alternate" type="text/html" href="http://stab.st-andrews.ac.uk/wiki/index.php?title=File:---C--Users-pjt6-Desktop-Picture1.png&amp;diff=3484"/>
				<updated>2020-05-06T12:21:10Z</updated>
		
		<summary type="html">&lt;p&gt;PeterThorpe: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>PeterThorpe</name></author>	</entry>

	<entry>
		<id>http://stab.st-andrews.ac.uk/wiki/index.php?title=Filezilla_data_transfer&amp;diff=3483</id>
		<title>Filezilla data transfer</title>
		<link rel="alternate" type="text/html" href="http://stab.st-andrews.ac.uk/wiki/index.php?title=Filezilla_data_transfer&amp;diff=3483"/>
				<updated>2020-05-06T12:20:24Z</updated>
		
		<summary type="html">&lt;p&gt;PeterThorpe: Created page with &amp;quot;Media:file:///C:/Users/pjt6/Desktop/Picture1.png&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Media:file:///C:/Users/pjt6/Desktop/Picture1.png]]&lt;/div&gt;</summary>
		<author><name>PeterThorpe</name></author>	</entry>

	<entry>
		<id>http://stab.st-andrews.ac.uk/wiki/index.php?title=Kennedy_manual&amp;diff=3482</id>
		<title>Kennedy manual</title>
		<link rel="alternate" type="text/html" href="http://stab.st-andrews.ac.uk/wiki/index.php?title=Kennedy_manual&amp;diff=3482"/>
				<updated>2020-05-06T12:18:59Z</updated>
		
		<summary type="html">&lt;p&gt;PeterThorpe: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
*[[quick start]]&lt;br /&gt;
&lt;br /&gt;
#[[creating ssh keys and logging on]]&lt;br /&gt;
#[[logging on to Kennedy]]&lt;br /&gt;
&lt;br /&gt;
*[[pull and push to to and from MARVIN]]&lt;br /&gt;
&lt;br /&gt;
*[[filezilla data transfer]]&lt;br /&gt;
&lt;br /&gt;
*[[slurm commands]]&lt;/div&gt;</summary>
		<author><name>PeterThorpe</name></author>	</entry>

	<entry>
		<id>http://stab.st-andrews.ac.uk/wiki/index.php?title=Pull_and_push_to_to_and_from_MARVIN&amp;diff=3481</id>
		<title>Pull and push to to and from MARVIN</title>
		<link rel="alternate" type="text/html" href="http://stab.st-andrews.ac.uk/wiki/index.php?title=Pull_and_push_to_to_and_from_MARVIN&amp;diff=3481"/>
				<updated>2020-05-06T12:17:36Z</updated>
		
		<summary type="html">&lt;p&gt;PeterThorpe: Created page with &amp;quot; == Copy data from marvin – or push to marvin for backup ==  users will want to copy data from the backed up area Marvin to the temp work/ scratch space on Kennedy, and visa...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Copy data from marvin – or push to marvin for backup ==&lt;br /&gt;
&lt;br /&gt;
Users will often want to copy data from the backed-up area on Marvin to the temporary work/scratch space on Kennedy, and vice versa. &lt;br /&gt;
We recommend using rsync for this. It is surprisingly fast. &lt;br /&gt;
&lt;br /&gt;
 https://www.tecmint.com/rsync-local-remote-file-synchronization-commands/&lt;br /&gt;
&lt;br /&gt;
 rsync -av $USER@marvin.st-andrews.ac.uk:path/ path/&lt;br /&gt;
&lt;br /&gt;
Type yes at the host-key prompt, then enter your Marvin password.&lt;br /&gt;
&lt;br /&gt;
Push files to Marvin (please look up the options before running):&lt;br /&gt;
&lt;br /&gt;
 rsync -avzhe ssh file_to_transfer $USER@marvin.st-andrews.ac.uk:/path_to/&lt;/div&gt;</summary>
		<author><name>PeterThorpe</name></author>	</entry>

	<entry>
		<id>http://stab.st-andrews.ac.uk/wiki/index.php?title=Kennedy_manual&amp;diff=3480</id>
		<title>Kennedy manual</title>
		<link rel="alternate" type="text/html" href="http://stab.st-andrews.ac.uk/wiki/index.php?title=Kennedy_manual&amp;diff=3480"/>
				<updated>2020-05-06T12:14:45Z</updated>
		
		<summary type="html">&lt;p&gt;PeterThorpe: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
*[[quick start]]&lt;br /&gt;
&lt;br /&gt;
#[[creating ssh keys and logging on]]&lt;br /&gt;
#[[loggin on to Kennedy]]&lt;br /&gt;
&lt;br /&gt;
*[[pull and push to to and from MARVIN]]&lt;br /&gt;
&lt;br /&gt;
*[[slurm commands]]&lt;/div&gt;</summary>
		<author><name>PeterThorpe</name></author>	</entry>

	<entry>
		<id>http://stab.st-andrews.ac.uk/wiki/index.php?title=Mobaxterm_for_Windows&amp;diff=3479</id>
		<title>Mobaxterm for Windows</title>
		<link rel="alternate" type="text/html" href="http://stab.st-andrews.ac.uk/wiki/index.php?title=Mobaxterm_for_Windows&amp;diff=3479"/>
				<updated>2020-05-06T12:13:54Z</updated>
		
		<summary type="html">&lt;p&gt;PeterThorpe: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;If you previously used MobaXterm and want to continue using it, this is how. You can use any SSH client you wish.&lt;br /&gt;
&lt;br /&gt;
NOTE: for me, this was done before we changed the way people log in. It may not work for you until we test it. &lt;br /&gt;
&lt;br /&gt;
On MobaXterm (you may need to change the permissions of the SSH key folder): &lt;br /&gt;
&lt;br /&gt;
 ssh -i path_to_key  username@kennedy.st-andrews.ac.uk&lt;br /&gt;
&lt;br /&gt;
On Windows, MobaXterm requires paths in the form: &lt;br /&gt;
&lt;br /&gt;
 /drives/c/ ….&lt;br /&gt;
&lt;br /&gt;
It is possible to save this session; Google is your friend.&lt;/div&gt;</summary>
		<author><name>PeterThorpe</name></author>	</entry>

	<entry>
		<id>http://stab.st-andrews.ac.uk/wiki/index.php?title=Mobaxterm_for_Windows&amp;diff=3478</id>
		<title>Mobaxterm for Windows</title>
		<link rel="alternate" type="text/html" href="http://stab.st-andrews.ac.uk/wiki/index.php?title=Mobaxterm_for_Windows&amp;diff=3478"/>
				<updated>2020-05-06T12:13:33Z</updated>
		
		<summary type="html">&lt;p&gt;PeterThorpe: Created page with &amp;quot;If you use previously use MOBA and want to continue to use it. This is how. You can use any ssh client you wish  NOTE: for myself, this was done before we changed the way peop...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;If you previously used MobaXterm and want to continue using it, this is how. You can use any SSH client you wish.&lt;br /&gt;
&lt;br /&gt;
NOTE: for me, this was done before we changed the way people log in. It may not work for you until we test it. &lt;br /&gt;
&lt;br /&gt;
On MobaXterm (you may need to change the permissions of the SSH key folder): &lt;br /&gt;
&lt;br /&gt;
ssh -i path_to_key  username@kennedy.st-andrews.ac.uk&lt;br /&gt;
&lt;br /&gt;
On Windows, MobaXterm requires paths in the form: /drives/c/ ….&lt;br /&gt;
&lt;br /&gt;
It is possible to save this session; Google is your friend.&lt;/div&gt;</summary>
		<author><name>PeterThorpe</name></author>	</entry>

	<entry>
		<id>http://stab.st-andrews.ac.uk/wiki/index.php?title=Loggin_on_to_Kennedy&amp;diff=3477</id>
		<title>Loggin on to Kennedy</title>
		<link rel="alternate" type="text/html" href="http://stab.st-andrews.ac.uk/wiki/index.php?title=Loggin_on_to_Kennedy&amp;diff=3477"/>
				<updated>2020-05-06T12:12:42Z</updated>
		
		<summary type="html">&lt;p&gt;PeterThorpe: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Log onto Kennedy using a terminal, e.g. PuTTY on Windows or a terminal on Linux/Mac ==&lt;br /&gt;
&lt;br /&gt;
*[[Mobaxterm for Windows]]&lt;br /&gt;
&lt;br /&gt;
Once you have received a password from the sysadmins, &lt;br /&gt;
&lt;br /&gt;
you should be able to log onto Kennedy from a terminal by doing: &lt;br /&gt;
&lt;br /&gt;
 ssh USERNAME@kennedy.st-andrews.ac.uk&lt;br /&gt;
&lt;br /&gt;
 PASSWORD:&lt;br /&gt;
&lt;br /&gt;
You will then be prompted for your password. &lt;br /&gt;
&lt;br /&gt;
Please change your password on first login, conforming to the university requirements: &lt;br /&gt;
 https://www.st-andrews.ac.uk/it-support/security/password/&lt;br /&gt;
&lt;br /&gt;
A strong password is around 10 characters, including uppercase and lowercase letters, numbers and symbols.&lt;br /&gt;
&lt;br /&gt;
Use the command:&lt;br /&gt;
 passwd&lt;/div&gt;</summary>
		<author><name>PeterThorpe</name></author>	</entry>

	<entry>
		<id>http://stab.st-andrews.ac.uk/wiki/index.php?title=Loggin_on_to_Kennedy&amp;diff=3476</id>
		<title>Loggin on to Kennedy</title>
		<link rel="alternate" type="text/html" href="http://stab.st-andrews.ac.uk/wiki/index.php?title=Loggin_on_to_Kennedy&amp;diff=3476"/>
				<updated>2020-05-06T12:12:26Z</updated>
		
		<summary type="html">&lt;p&gt;PeterThorpe: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Log onto Kennedy using a terminal, e.g. PuTTY on Windows or a terminal on Linux/Mac ==&lt;br /&gt;
&lt;br /&gt;
*[[Mobaxterm for Windows]]&lt;br /&gt;
&lt;br /&gt;
Once you have received a password from the sysadmins, &lt;br /&gt;
&lt;br /&gt;
you should be able to log onto Kennedy from a terminal by doing: &lt;br /&gt;
&lt;br /&gt;
 ssh USERNAME@kennedy.st-andrews.ac.uk&lt;br /&gt;
&lt;br /&gt;
 PASSWORD:&lt;br /&gt;
&lt;br /&gt;
You will then be prompted for your password. &lt;br /&gt;
&lt;br /&gt;
Please change your password on first login, conforming to the university requirements: &lt;br /&gt;
 https://www.st-andrews.ac.uk/it-support/security/password/&lt;br /&gt;
&lt;br /&gt;
A strong password is around 10 characters, including uppercase and lowercase letters, numbers and symbols.&lt;br /&gt;
&lt;br /&gt;
Use the command:&lt;br /&gt;
 passwd&lt;/div&gt;</summary>
		<author><name>PeterThorpe</name></author>	</entry>

	<entry>
		<id>http://stab.st-andrews.ac.uk/wiki/index.php?title=Loggin_on_to_Kennedy&amp;diff=3475</id>
		<title>Loggin on to Kennedy</title>
		<link rel="alternate" type="text/html" href="http://stab.st-andrews.ac.uk/wiki/index.php?title=Loggin_on_to_Kennedy&amp;diff=3475"/>
				<updated>2020-05-06T12:11:25Z</updated>
		
		<summary type="html">&lt;p&gt;PeterThorpe: Created page with &amp;quot; == Log onto Kennedy using a terminal e.g. putty for Windows or terminal for Linux/ MAC ==  When you have received a password from the sys admins.   You should be able to log...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Log onto Kennedy using a terminal, e.g. PuTTY on Windows or a terminal on Linux/Mac ==&lt;br /&gt;
&lt;br /&gt;
Once you have received a password from the sysadmins, &lt;br /&gt;
&lt;br /&gt;
you should be able to log onto Kennedy from a terminal by doing: &lt;br /&gt;
&lt;br /&gt;
 ssh USERNAME@kennedy.st-andrews.ac.uk&lt;br /&gt;
&lt;br /&gt;
 PASSWORD:&lt;br /&gt;
&lt;br /&gt;
You will then be prompted for your password. &lt;br /&gt;
&lt;br /&gt;
Please change your password on first login, conforming to the university requirements: &lt;br /&gt;
 https://www.st-andrews.ac.uk/it-support/security/password/&lt;br /&gt;
&lt;br /&gt;
A strong password is around 10 characters, including uppercase and lowercase letters, numbers and symbols.&lt;br /&gt;
&lt;br /&gt;
Use the command:&lt;br /&gt;
 passwd&lt;/div&gt;</summary>
		<author><name>PeterThorpe</name></author>	</entry>

	<entry>
		<id>http://stab.st-andrews.ac.uk/wiki/index.php?title=Kennedy_manual&amp;diff=3474</id>
		<title>Kennedy manual</title>
		<link rel="alternate" type="text/html" href="http://stab.st-andrews.ac.uk/wiki/index.php?title=Kennedy_manual&amp;diff=3474"/>
				<updated>2020-05-06T12:10:23Z</updated>
		
		<summary type="html">&lt;p&gt;PeterThorpe: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
*[[quick start]]&lt;br /&gt;
&lt;br /&gt;
#[[creating ssh keys and logging on]]&lt;br /&gt;
#[[loggin on to Kennedy]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
*[[slurm commands]]&lt;/div&gt;</summary>
		<author><name>PeterThorpe</name></author>	</entry>

	<entry>
		<id>http://stab.st-andrews.ac.uk/wiki/index.php?title=Windows_users&amp;diff=3473</id>
		<title>Windows users</title>
		<link rel="alternate" type="text/html" href="http://stab.st-andrews.ac.uk/wiki/index.php?title=Windows_users&amp;diff=3473"/>
				<updated>2020-05-06T12:08:24Z</updated>
		
		<summary type="html">&lt;p&gt;PeterThorpe: Created page with &amp;quot; == Windows: Create a public and private key using putty-gen ==   Type putty gen in the window search bar (download and install if needed)  Click on generate and wiggle the mo...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Windows: Create a public and private key using putty-gen ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Type PuTTYgen in the Windows search bar (download and install it if needed).&lt;br /&gt;
&lt;br /&gt;
Click on generate and wiggle the mouse. &lt;br /&gt;
&lt;br /&gt;
Save the private key to: &lt;br /&gt;
 C:\Users\username\.ssh\putty_priv.ppk&lt;br /&gt;
&lt;br /&gt;
(never share your private key)&lt;br /&gt;
&lt;br /&gt;
Send the string in the upper grey part of the PuTTYgen window that says &lt;br /&gt;
 &amp;quot;Public key for pasting into OpenSSH authorized_keys file&amp;quot;. &lt;br /&gt;
 Copy and paste this into a text file or just into the email to the sys admins.&lt;br /&gt;
&lt;br /&gt;
 note for admins:&lt;br /&gt;
The key should be called/renamed authorized_keys when put in the .ssh folder in the $HOME on kennedy – this will be done by the sys admin&lt;/div&gt;</summary>
		<author><name>PeterThorpe</name></author>	</entry>

	<entry>
		<id>http://stab.st-andrews.ac.uk/wiki/index.php?title=Apple_MAC_people_%2B_Linux&amp;diff=3472</id>
		<title>Apple MAC people + Linux</title>
		<link rel="alternate" type="text/html" href="http://stab.st-andrews.ac.uk/wiki/index.php?title=Apple_MAC_people_%2B_Linux&amp;diff=3472"/>
				<updated>2020-05-06T12:05:35Z</updated>
		
		<summary type="html">&lt;p&gt;PeterThorpe: Created page with &amp;quot; == create an ssh key using MAC or Linux platforms ==    open a terminal and enter the command   &amp;quot;ssh-keygen&amp;quot;. Accept all default filenames.   Choose a passphrase when asked f...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== create an ssh key using MAC or Linux platforms ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
open a terminal and enter the command &lt;br /&gt;
 &amp;quot;ssh-keygen&amp;quot;. Accept all default filenames. &lt;br /&gt;
&lt;br /&gt;
Choose a passphrase when asked for one. Then email the sys admins the file &lt;br /&gt;
 .ssh/id_rsa.pub&lt;br /&gt;
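As a non-interactive sketch (the throwaway key path and empty passphrase are for illustration only; for your real key, accept the defaults and choose a passphrase):

```shell
# Generate a throwaway key pair in a temporary directory.
tmpdir=$(mktemp -d)
ssh-keygen -t rsa -f "$tmpdir/id_rsa" -N "" -q

# The .pub file is the public half you would email to the sysadmins;
# the file without .pub is the private key and must never be shared.
ls "$tmpdir/id_rsa.pub"
```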
&lt;br /&gt;
Directories starting with a . are normally hidden, so it might be easiest to first copy that file into your home directory:&lt;br /&gt;
&lt;br /&gt;
 cp ~/.ssh/id_rsa.pub ~/&lt;/div&gt;</summary>
		<author><name>PeterThorpe</name></author>	</entry>

	<entry>
		<id>http://stab.st-andrews.ac.uk/wiki/index.php?title=Creating_ssh_keys_and_logging_on&amp;diff=3471</id>
		<title>Creating ssh keys and logging on</title>
		<link rel="alternate" type="text/html" href="http://stab.st-andrews.ac.uk/wiki/index.php?title=Creating_ssh_keys_and_logging_on&amp;diff=3471"/>
				<updated>2020-05-06T12:05:06Z</updated>
		
		<summary type="html">&lt;p&gt;PeterThorpe: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== the security we use requires ssh keys ==&lt;br /&gt;
&lt;br /&gt;
*[[Apple MAC people + Linux]]&lt;br /&gt;
&lt;br /&gt;
*[[Windows users]]&lt;/div&gt;</summary>
		<author><name>PeterThorpe</name></author>	</entry>

	<entry>
		<id>http://stab.st-andrews.ac.uk/wiki/index.php?title=Creating_ssh_keys_and_logging_on&amp;diff=3470</id>
		<title>Creating ssh keys and logging on</title>
		<link rel="alternate" type="text/html" href="http://stab.st-andrews.ac.uk/wiki/index.php?title=Creating_ssh_keys_and_logging_on&amp;diff=3470"/>
				<updated>2020-05-06T12:03:57Z</updated>
		
		<summary type="html">&lt;p&gt;PeterThorpe: Created page with &amp;quot; == the security we use requires ssh keys ==   &amp;#039;&amp;#039;&amp;#039;Apple MAC people&amp;#039;&amp;#039;&amp;#039;  open a terminal and enter the command   &amp;quot;ssh-keygen&amp;quot;. Accept all default filenames.   Choose a passphras...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== the security we use requires ssh keys ==&lt;br /&gt;
&lt;br /&gt;
 &amp;#039;&amp;#039;&amp;#039;Apple MAC people&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
open a terminal and enter the command &lt;br /&gt;
 &amp;quot;ssh-keygen&amp;quot;. Accept all default filenames. &lt;br /&gt;
&lt;br /&gt;
Choose a passphrase when asked for one. Then email the sys admins the file &lt;br /&gt;
 .ssh/id_rsa.pub&lt;br /&gt;
&lt;br /&gt;
Directories starting with a . are normally hidden, so it might be easiest to first copy that file into your home directory:&lt;br /&gt;
&lt;br /&gt;
 cp ~/.ssh/id_rsa.pub ~/&lt;/div&gt;</summary>
		<author><name>PeterThorpe</name></author>	</entry>

	<entry>
		<id>http://stab.st-andrews.ac.uk/wiki/index.php?title=Kennedy_manual&amp;diff=3469</id>
		<title>Kennedy manual</title>
		<link rel="alternate" type="text/html" href="http://stab.st-andrews.ac.uk/wiki/index.php?title=Kennedy_manual&amp;diff=3469"/>
				<updated>2020-05-06T12:02:07Z</updated>
		
		<summary type="html">&lt;p&gt;PeterThorpe: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
*[[quick start]]&lt;br /&gt;
&lt;br /&gt;
#[[creating ssh keys and logging on]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
*[[slurm commands]]&lt;/div&gt;</summary>
		<author><name>PeterThorpe</name></author>	</entry>

	<entry>
		<id>http://stab.st-andrews.ac.uk/wiki/index.php?title=Quick_start&amp;diff=3468</id>
		<title>Quick start</title>
		<link rel="alternate" type="text/html" href="http://stab.st-andrews.ac.uk/wiki/index.php?title=Quick_start&amp;diff=3468"/>
				<updated>2020-05-06T12:00:58Z</updated>
		
		<summary type="html">&lt;p&gt;PeterThorpe: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== quick start ==&lt;br /&gt;
&lt;br /&gt;
You must either be on the University campus or logged in via the VPN; see &lt;br /&gt;
 https://www.st-andrews.ac.uk/it-support/services/internet/vpn/&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Temporary quick-start info can be found here. This is a work in progress until official documentation is produced.&lt;br /&gt;
&lt;br /&gt;
 https://github.com/peterthorpe5/How_to_use_Kennedy_HPC&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Herbert’s excellent existing documentation&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
Please also see here:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 https://universityofstandrews907.sharepoint.com/sites/chemistry/CurrentStudents/Shared%20Documents/Forms/AllItems.aspx?id=%2Fsites%2Fchemistry%2FCurrentStudents%2FShared%20Documents%2Fug%2FERCF%5FIntroduction%2Epdf&amp;amp;parent=%2Fsites%2Fchemistry%2FCurrentStudents%2FShared%20Documents%2Fug&amp;amp;p=true&amp;amp;originalPath=aHR0cHM6Ly91bml2ZXJzaXR5b2ZzdGFuZHJld3M5MDcuc2hhcmVwb2ludC5jb20vOmI6L3MvY2hlbWlzdHJ5L0N1cnJlbnRTdHVkZW50cy9FY1RrWjZzZ0xfaEZnZzFGY0dUZXRZNEJicHFCWHhvNF9Pc3cwRXF3NGlXNDFRP3J0aW1lPUEyLVl0YW5jMTBn&lt;/div&gt;</summary>
		<author><name>PeterThorpe</name></author>	</entry>

	<entry>
		<id>http://stab.st-andrews.ac.uk/wiki/index.php?title=Slurm_commands&amp;diff=3467</id>
		<title>Slurm commands</title>
		<link rel="alternate" type="text/html" href="http://stab.st-andrews.ac.uk/wiki/index.php?title=Slurm_commands&amp;diff=3467"/>
				<updated>2020-05-06T11:58:07Z</updated>
		
		<summary type="html">&lt;p&gt;PeterThorpe: /* slurm commands */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&lt;br /&gt;
== slurm commands ==&lt;br /&gt;
&lt;br /&gt;
Note: the BIOINF community are expected to use the -p bigmem partition. &lt;br /&gt;
&lt;br /&gt;
Sun Grid Engine (which we use on Marvin) to Slurm command conversion:&lt;br /&gt;
 https://srcc.stanford.edu/sge-slurm-conversion&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following are command-line commands, or directives you can put in your shell script (or give at the command line), to achieve certain functionality.&lt;br /&gt;
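The workflow above can be sketched as writing the directives into a jobscript and handing it to sbatch (the resource numbers here are placeholders; the sbatch call is commented out so the sketch runs anywhere):

```shell
# Write a minimal jobscript containing the #SBATCH directives.
printf '%s\n' \
  '#!/bin/bash -l' \
  '#SBATCH -J example   #jobname' \
  '#SBATCH -N 1     #node' \
  '#SBATCH --ntasks-per-node=4' \
  '#SBATCH -p bigmem' \
  '#SBATCH --mem=8GB' \
  'echo hello from job $SLURM_JOB_ID' > myjob.sh

# sbatch myjob.sh      # submit the jobscript to the queue
# squeue -u $USER      # check its place in the queue
grep -c SBATCH myjob.sh
```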
&lt;br /&gt;
 request_48_thread_1.3TBRAM&lt;br /&gt;
 #!/bin/bash -l  &amp;#039;&amp;#039;&amp;#039;# note: the -l is essential now&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
 #SBATCH -J fly_pilon   #jobname&lt;br /&gt;
 #SBATCH -N 1     #node&lt;br /&gt;
 #SBATCH --ntasks-per-node=48&lt;br /&gt;
 #SBATCH --threads-per-core=2&lt;br /&gt;
 #SBATCH -p bigmem&lt;br /&gt;
 #SBATCH --nodelist=kennedy150  # this is the specific node. This one has 1.5TB RAM&lt;br /&gt;
 #SBATCH --mem=1350GB&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 test_conda_activate&lt;br /&gt;
 #!/bin/bash -l&lt;br /&gt;
 #SBATCH -J conda_test   #jobname&lt;br /&gt;
 #SBATCH -N 1     #node&lt;br /&gt;
 #SBATCH --tasks-per-node=1&lt;br /&gt;
 #SBATCH -p bigmem    # bigmem is for the BIOINF community&lt;br /&gt;
 #SBATCH --mail-type=END     # email at the end of the job&lt;br /&gt;
 #SBATCH --mail-user=$USER@st-andrews.ac.uk      # your email address&lt;br /&gt;
&lt;br /&gt;
cd /gpfs1/home/$USER/&lt;br /&gt;
&lt;br /&gt;
pyv=&amp;quot;$(python -V 2&amp;gt;&amp;amp;1)&amp;quot;&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;$pyv&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# conda to activate the software&lt;br /&gt;
&lt;br /&gt;
echo $PATH&lt;br /&gt;
&lt;br /&gt;
conda activate spades&lt;br /&gt;
&lt;br /&gt;
pyv=&amp;quot;$(python -V 2&amp;gt;&amp;amp;1)&amp;quot;&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;$pyv&amp;quot;&lt;br /&gt;
&lt;br /&gt;
conda deactivate &lt;br /&gt;
&lt;br /&gt;
conda activate python27&lt;br /&gt;
&lt;br /&gt;
pyv=&amp;quot;$(python2 -V 2&amp;gt;&amp;amp;1)&amp;quot;&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;$pyv&amp;quot;&lt;br /&gt;
&lt;br /&gt;
 12threads_bigMem_30G_RAM&lt;br /&gt;
&lt;br /&gt;
 #!/bin/bash -l  # the -l is essential &lt;br /&gt;
 #SBATCH -J trimmo   #jobname&lt;br /&gt;
 #SBATCH -N 1     #node&lt;br /&gt;
 #SBATCH --ntasks-per-node=12&lt;br /&gt;
 #SBATCH --threads-per-core=2&lt;br /&gt;
 #SBATCH -p bigmem&lt;br /&gt;
 #SBATCH --mem=30GB&lt;/div&gt;</summary>
		<author><name>PeterThorpe</name></author>	</entry>

	<entry>
		<id>http://stab.st-andrews.ac.uk/wiki/index.php?title=Slurm_commands&amp;diff=3466</id>
		<title>Slurm commands</title>
		<link rel="alternate" type="text/html" href="http://stab.st-andrews.ac.uk/wiki/index.php?title=Slurm_commands&amp;diff=3466"/>
				<updated>2020-05-06T11:56:44Z</updated>
		
		<summary type="html">&lt;p&gt;PeterThorpe: Created page with &amp;quot;  == slurm commands ==  note the BIOINF community are supposed to use the -p bigmem queue.     The following are command line commands, or tags you can put in your shell scrip...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&lt;br /&gt;
== slurm commands ==&lt;br /&gt;
&lt;br /&gt;
Note: the BIOINF community are expected to use the -p bigmem partition. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following are command-line commands, or directives you can put in your shell script (or give at the command line), to achieve certain functionality.&lt;br /&gt;
&lt;br /&gt;
 request_48_thread_1.3TBRAM&lt;br /&gt;
 #!/bin/bash -l  &amp;#039;&amp;#039;&amp;#039;# note: the -l is essential now&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
 #SBATCH -J fly_pilon   #jobname&lt;br /&gt;
 #SBATCH -N 1     #node&lt;br /&gt;
 #SBATCH --ntasks-per-node=48&lt;br /&gt;
 #SBATCH --threads-per-core=2&lt;br /&gt;
 #SBATCH -p bigmem&lt;br /&gt;
 #SBATCH --nodelist=kennedy150  # this is the specific node. This one has 1.5TB RAM&lt;br /&gt;
 #SBATCH --mem=1350GB&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 test_conda_activate&lt;br /&gt;
 #!/bin/bash -l&lt;br /&gt;
 #SBATCH -J conda_test   #jobname&lt;br /&gt;
 #SBATCH -N 1     #node&lt;br /&gt;
 #SBATCH --tasks-per-node=1&lt;br /&gt;
 #SBATCH -p bigmem    # bigmem is for the BIOINF community&lt;br /&gt;
 #SBATCH --mail-type=END     # email at the end of the job&lt;br /&gt;
 #SBATCH --mail-user=$USER@st-andrews.ac.uk      # your email address&lt;br /&gt;
&lt;br /&gt;
cd /gpfs1/home/$USER/&lt;br /&gt;
&lt;br /&gt;
pyv=&amp;quot;$(python -V 2&amp;gt;&amp;amp;1)&amp;quot;&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;$pyv&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# conda to activate the software&lt;br /&gt;
&lt;br /&gt;
echo $PATH&lt;br /&gt;
&lt;br /&gt;
conda activate spades&lt;br /&gt;
&lt;br /&gt;
pyv=&amp;quot;$(python -V 2&amp;gt;&amp;amp;1)&amp;quot;&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;$pyv&amp;quot;&lt;br /&gt;
&lt;br /&gt;
conda deactivate &lt;br /&gt;
&lt;br /&gt;
conda activate python27&lt;br /&gt;
&lt;br /&gt;
pyv=&amp;quot;$(python2 -V 2&amp;gt;&amp;amp;1)&amp;quot;&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;$pyv&amp;quot;&lt;br /&gt;
&lt;br /&gt;
 12threads_bigMem_30G_RAM&lt;br /&gt;
&lt;br /&gt;
 #!/bin/bash -l  # the -l is essential &lt;br /&gt;
 #SBATCH -J trimmo   #jobname&lt;br /&gt;
 #SBATCH -N 1     #node&lt;br /&gt;
 #SBATCH --ntasks-per-node=12&lt;br /&gt;
 #SBATCH --threads-per-core=2&lt;br /&gt;
 #SBATCH -p bigmem&lt;br /&gt;
 #SBATCH --mem=30GB&lt;/div&gt;</summary>
		<author><name>PeterThorpe</name></author>	</entry>

	<entry>
		<id>http://stab.st-andrews.ac.uk/wiki/index.php?title=Quick_start&amp;diff=3465</id>
		<title>Quick start</title>
		<link rel="alternate" type="text/html" href="http://stab.st-andrews.ac.uk/wiki/index.php?title=Quick_start&amp;diff=3465"/>
				<updated>2020-05-06T11:55:20Z</updated>
		
		<summary type="html">&lt;p&gt;PeterThorpe: Created page with &amp;quot;  Temporary quick start info can be found here. This is a work in progress until something official is made up.   https://github.com/peterthorpe5/How_to_use_Kennedy_HPC&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&lt;br /&gt;
Temporary quick-start info can be found here. This is a work in progress until official documentation is produced.&lt;br /&gt;
&lt;br /&gt;
 https://github.com/peterthorpe5/How_to_use_Kennedy_HPC&lt;/div&gt;</summary>
		<author><name>PeterThorpe</name></author>	</entry>

	<entry>
		<id>http://stab.st-andrews.ac.uk/wiki/index.php?title=Kennedy_manual&amp;diff=3464</id>
		<title>Kennedy manual</title>
		<link rel="alternate" type="text/html" href="http://stab.st-andrews.ac.uk/wiki/index.php?title=Kennedy_manual&amp;diff=3464"/>
				<updated>2020-05-06T11:54:53Z</updated>
		
		<summary type="html">&lt;p&gt;PeterThorpe: Replaced content with &amp;quot; *quick start   *slurm commands&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
*[[quick start]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
*[[slurm commands]]&lt;/div&gt;</summary>
		<author><name>PeterThorpe</name></author>	</entry>

	<entry>
		<id>http://stab.st-andrews.ac.uk/wiki/index.php?title=Main_Page&amp;diff=3463</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="http://stab.st-andrews.ac.uk/wiki/index.php?title=Main_Page&amp;diff=3463"/>
				<updated>2020-05-06T11:52:15Z</updated>
		
		<summary type="html">&lt;p&gt;PeterThorpe: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;KENNEDY HPC for Bioinf community &amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
* [[Kennedy manual]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Usage of Cluster=&lt;br /&gt;
* [[Cluster Manual]]&lt;br /&gt;
* [[Kennedy manual]]&lt;br /&gt;
* [[Why a Queue Manager?]]&lt;br /&gt;
* [[Available Software]]&lt;br /&gt;
* [[how to use the cluster training course]]&lt;br /&gt;
* [[windows network connect]]&lt;br /&gt;
&lt;br /&gt;
= Documented Programs =&lt;br /&gt;
&lt;br /&gt;
The following can be seen as extra notes on these programs&amp;#039; usage on the marvin cluster, with an emphasis on example use-cases. Most, if not all, will have their own websites, with more detailed manuals and further information.&lt;br /&gt;
&lt;br /&gt;
{|style=&amp;quot;width:85%&amp;quot;&lt;br /&gt;
|* [[abacas]]&lt;br /&gt;
|* [[albacore]]&lt;br /&gt;
|* [[ariba]]&lt;br /&gt;
|* [[aspera]]&lt;br /&gt;
|* [[assembly-stats]]&lt;br /&gt;
|* [[augustus]]&lt;br /&gt;
|-&lt;br /&gt;
|* [[BamQC]]&lt;br /&gt;
|* [[bamtools]]&lt;br /&gt;
|* [[banjo]]&lt;br /&gt;
|* [[bcftools]]&lt;br /&gt;
|* [[bedtools]]&lt;br /&gt;
|* [[bgenie]]&lt;br /&gt;
|-&lt;br /&gt;
|* [[BLAST]]&lt;br /&gt;
|* [[Blat]]&lt;br /&gt;
|* [[blast2go: b2g4pipe]]&lt;br /&gt;
|* [[bowtie]]&lt;br /&gt;
|* [[bowtie2]]&lt;br /&gt;
|* [[bwa]]&lt;br /&gt;
|-&lt;br /&gt;
|* [[BUSCO]]&lt;br /&gt;
|* [[CAFE]]&lt;br /&gt;
|* [[canu]]&lt;br /&gt;
|* [[cd-hit]]&lt;br /&gt;
|* [[cegma]]&lt;br /&gt;
|* [[clustal]]&lt;br /&gt;
|-&lt;br /&gt;
|* [[cramtools]]&lt;br /&gt;
|* [[conda]]&lt;br /&gt;
|* [[deeptools]]&lt;br /&gt;
|* [[detonate]]&lt;br /&gt;
|* [[diamond]]&lt;br /&gt;
|* [[ea-utils]]&lt;br /&gt;
|* [[ensembl]]&lt;br /&gt;
|-&lt;br /&gt;
|* [[ETE]]&lt;br /&gt;
|* [[FASTQC and MultiQC]]&lt;br /&gt;
|* [[Archaeopteryx and Forester]]&lt;br /&gt;
|* [[GapFiller]]&lt;br /&gt;
|* [[GenomeTools]]&lt;br /&gt;
|* [[gubbins]]&lt;br /&gt;
|-&lt;br /&gt;
|* [[JBrowse]]&lt;br /&gt;
|* [[kallisto]]&lt;br /&gt;
|* [[kentUtils]]&lt;br /&gt;
|* [[last]]&lt;br /&gt;
|* [[lastz]]&lt;br /&gt;
|* [[macs2]]&lt;br /&gt;
|-&lt;br /&gt;
|* [[Mash]]&lt;br /&gt;
|* [[mega]]&lt;br /&gt;
|* [[meryl]]&lt;br /&gt;
|* [[MUMmer]]&lt;br /&gt;
|* [[NanoSim]]&lt;br /&gt;
|* [[nseq]]&lt;br /&gt;
|-&lt;br /&gt;
|* [[OrthoFinder]]&lt;br /&gt;
|* [[PASA]]&lt;br /&gt;
|* [[perl]]&lt;br /&gt;
|* [[PGAP]]&lt;br /&gt;
|* [[picard-tools]]&lt;br /&gt;
|* [[poRe]]&lt;br /&gt;
|-&lt;br /&gt;
|* [[poretools]]&lt;br /&gt;
|* [[prokka]]&lt;br /&gt;
|* [[pyrad]]&lt;br /&gt;
|* [[python]]&lt;br /&gt;
|* [[qualimap]]&lt;br /&gt;
|* [[quast]]&lt;br /&gt;
|-&lt;br /&gt;
|* [[qiime2]]&lt;br /&gt;
|* [[R]]&lt;br /&gt;
|* [[RAxML]]&lt;br /&gt;
|* [[Repeatmasker]]&lt;br /&gt;
|* [[Repeatmodeler]]&lt;br /&gt;
|* [[rnammer]]&lt;br /&gt;
|-&lt;br /&gt;
|* [[roary]]&lt;br /&gt;
|* [[RSeQC]]&lt;br /&gt;
|* [[samtools]]&lt;br /&gt;
|* [[Satsuma]]&lt;br /&gt;
|* [[sickle]]&lt;br /&gt;
|* [[SPAdes]]&lt;br /&gt;
|-&lt;br /&gt;
|* [[squid]]&lt;br /&gt;
|* [[sra-tools]]&lt;br /&gt;
|* [[srst2]]&lt;br /&gt;
|* [[SSPACE]]&lt;br /&gt;
|* [[stacks]]&lt;br /&gt;
|* [[Thor]]&lt;br /&gt;
|-&lt;br /&gt;
|* [[Tophat]]&lt;br /&gt;
|* [[trimmomatic]]&lt;br /&gt;
|* [[Trinity]]&lt;br /&gt;
|* [[t-coffee]]&lt;br /&gt;
|* [[Unicycler]]&lt;br /&gt;
|* [[velvet]]&lt;br /&gt;
|-&lt;br /&gt;
|* [[ViennaRNA]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
= Queue Manager Tips =&lt;br /&gt;
A cluster is a shared resource with different users running different types of analyses. Nearly all clusters use a piece of software called a queue manager to fairly share out the resource. The queue manager on marvin is called Grid Engine, and it has several commands available, all beginning with &amp;#039;&amp;#039;&amp;#039;q&amp;#039;&amp;#039;&amp;#039; and with &amp;#039;&amp;#039;&amp;#039;qsub&amp;#039;&amp;#039;&amp;#039; being the most commonly used as it submits a command via a jobscript to be processed. Here are some tips:&lt;br /&gt;
* [[Queue Manager Tips]]&lt;br /&gt;
* [[Queue Manager : shell script command]]&lt;br /&gt;
* [[Queue Manager emailing when jobs run]]&lt;br /&gt;
* [[General Command-line Tips]]&lt;br /&gt;
* [[DRMAA for further Gridengine automation]]&lt;br /&gt;
&lt;br /&gt;
= Data Examples =&lt;br /&gt;
* [[Two Eel Scaffolds]]&lt;br /&gt;
&lt;br /&gt;
= Procedures =&lt;br /&gt;
(Short sequences of tasks with a certain short-term goal; often a simple script)&lt;br /&gt;
* [[Calculating coverage]]&lt;br /&gt;
* [[MinION Coverage sensitivity analysis]]&lt;br /&gt;
&lt;br /&gt;
= Navigating genomic data websites=&lt;br /&gt;
* [[Patric]]&lt;br /&gt;
* [[NCBI]]&lt;br /&gt;
* [[IGSR/1000 Genomes]]&lt;br /&gt;
&lt;br /&gt;
= Explanations=&lt;br /&gt;
* [[ITUcourse]]&lt;br /&gt;
* [[VCF]]&lt;br /&gt;
* [[Maximum Likelihood]]&lt;br /&gt;
* [[SNP Analysis and phylogenetics]]&lt;br /&gt;
* [[Normalization]]&lt;br /&gt;
&lt;br /&gt;
= Pipelines =&lt;br /&gt;
(Workflow with specific end-goals)&lt;br /&gt;
* [[Trinity_Protocol]]&lt;br /&gt;
* [[STAR BEAST]]&lt;br /&gt;
* [[callSNPs.py]]&lt;br /&gt;
* [[pairwiseCallSNPs]]&lt;br /&gt;
* [[mapping.py]]&lt;br /&gt;
* [[Edgen RNAseq]]&lt;br /&gt;
* [[Miseq Prokaryote FASTQ analysis]]&lt;br /&gt;
* [[snpcallphylo]]&lt;br /&gt;
* [[Bottlenose dolphin population genomic analysis]]&lt;br /&gt;
* [[ChIP-Seq Top2 in Yeast]]&lt;br /&gt;
* [[ChIP-Seq Top2 in Yeast 12.09.2017]]&lt;br /&gt;
* [[ChIP-Seq Top2 in Yeast 07.11.2017]]&lt;br /&gt;
* [[Bisulfite Sequencing]]&lt;br /&gt;
* [[microRNA and Salmo Salar]]&lt;br /&gt;
&lt;br /&gt;
=Protocols=&lt;br /&gt;
(Extensive workflows with several possible end goals)&lt;br /&gt;
* [[Synthetic Long reads]]&lt;br /&gt;
* [[MinION (Oxford Nanopore)]]&lt;br /&gt;
* [[MinKNOW folders and log files]]&lt;br /&gt;
* [[Research Data Management]]&lt;br /&gt;
* [[MicroRNAs]]&lt;br /&gt;
&lt;br /&gt;
= Tech Reviews =&lt;br /&gt;
* [[SWATH-MS Data Analysis]]&lt;br /&gt;
&lt;br /&gt;
= Cluster Administration =&lt;br /&gt;
* [[StABDMIN]]&lt;br /&gt;
* [[Hardware Issues]]&lt;br /&gt;
* [[marvin and IPMI (remote hardware control)]]&lt;br /&gt;
* [[restart a node]]&lt;br /&gt;
* [[mounting drives]]&lt;br /&gt;
* [[Admin Tips]]&lt;br /&gt;
* [[RedHat]]&lt;br /&gt;
* [[Globus_gridftp]]&lt;br /&gt;
* [[Galaxy Setup]]&lt;br /&gt;
* [[Son of Gridengine]]&lt;br /&gt;
* [[Blas Libraries]]&lt;br /&gt;
* [[CMake]]&lt;br /&gt;
* [[conda bioconda]]&lt;br /&gt;
* [[Users and Groups]]&lt;br /&gt;
* [[Installing software on marvin]]&lt;br /&gt;
* [[emailing]]&lt;br /&gt;
* [[biotime machine]]&lt;br /&gt;
* [[SCAN-pc laptop]]&lt;br /&gt;
* [[node1 issues]]&lt;br /&gt;
* [[6TB storage expansion]]&lt;br /&gt;
* [[PIs storage sacrifice]]&lt;br /&gt;
* [[SAN relocation task]]&lt;br /&gt;
* [[Home directories max-out incident 28.11.2016]]&lt;br /&gt;
* [[Frontend Restart]]&lt;br /&gt;
* [[environment-modules]]&lt;br /&gt;
* [[H: drive on cluster]]&lt;br /&gt;
* [[Incident: Can&amp;#039;t connect to BerkeleyDB]]&lt;br /&gt;
* [[Bioinformatics Wordpress Site]]&lt;br /&gt;
* [[Backups]]&lt;br /&gt;
* [[users disk usage]]&lt;br /&gt;
* [[Updating BLAST databases]]&lt;br /&gt;
* [[Python DRMAA]]&lt;br /&gt;
* [[message of the day]]&lt;br /&gt;
* [[SAN disconnect incident 10.01.2017]]&lt;br /&gt;
* [[Memory repair glitch 16.02.2017]]&lt;br /&gt;
* [[node9 network failure incident 16-20.03.2017]]&lt;br /&gt;
* [[Incorrect rebooting of marvin 19.09.2017]]&lt;br /&gt;
* [[ansible]]&lt;br /&gt;
* [[webstie and word press]]&lt;br /&gt;
* [[allow user access to other peoples data]]&lt;br /&gt;
* [[RAM and RAM slots]]&lt;br /&gt;
* [[ldap is not ldap]]&lt;br /&gt;
* [[reset a password]]&lt;br /&gt;
* [[sending emails from command line examples]]&lt;br /&gt;
&lt;br /&gt;
= Courses =&lt;br /&gt;
&lt;br /&gt;
==I2U4BGA==&lt;br /&gt;
* [[Original schedule]]&lt;br /&gt;
* [[New schedule]]&lt;br /&gt;
* [[Actual schedule]]&lt;br /&gt;
* [[Course itself]]&lt;br /&gt;
* [[Biolinux Source course]]&lt;br /&gt;
* [[Directory Organization Exercise]]&lt;br /&gt;
* [[Glossary]]&lt;br /&gt;
* [[Key Bindings]]&lt;br /&gt;
* [[one-liners]]&lt;br /&gt;
* [[Cheatsheets]]&lt;br /&gt;
* [[Links]]&lt;br /&gt;
* [[pandoc modified manual]]&lt;br /&gt;
* [[Command Line Exercises]]&lt;br /&gt;
&lt;br /&gt;
= hdi2u =&lt;br /&gt;
&lt;br /&gt;
The half-day Linux course, held on 20th April; a modified version of I2U4BGA.&lt;br /&gt;
&lt;br /&gt;
* [[hdi2u_intro]]&lt;br /&gt;
* [[hdi2u_commandbased_exercises]]&lt;br /&gt;
* [[hdi2u_dirorg_exercise]]&lt;br /&gt;
* [[hdi2u_rendertotsv_exercise]]&lt;br /&gt;
&lt;br /&gt;
= RNAseq for DGE =&lt;br /&gt;
* [[Theoretical background]]&lt;br /&gt;
* [[Quality Control and Preprocessing]]&lt;br /&gt;
* [[Mapping to Reference]]&lt;br /&gt;
* [[Mapping Quality Exercise]]&lt;br /&gt;
* [[Key Aspects of using R]]&lt;br /&gt;
* [[Estimating Gene Count Exercise]]&lt;br /&gt;
* [[Differential Expression Exercise]]&lt;br /&gt;
* [[Functional Analysis Exercise]]&lt;br /&gt;
&lt;br /&gt;
= Introduction to Unix 2017 =&lt;br /&gt;
* [[Introduction_to_Unix_2017]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Templates==&lt;br /&gt;
* [[edgenl2g]]&lt;/div&gt;</summary>
		<author><name>PeterThorpe</name></author>	</entry>

	<entry>
		<id>http://stab.st-andrews.ac.uk/wiki/index.php?title=Kennedy_manual&amp;diff=3462</id>
		<title>Kennedy manual</title>
		<link rel="alternate" type="text/html" href="http://stab.st-andrews.ac.uk/wiki/index.php?title=Kennedy_manual&amp;diff=3462"/>
				<updated>2020-05-06T11:48:25Z</updated>
		
		<summary type="html">&lt;p&gt;PeterThorpe: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
====slurm commands====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Temporary quick-start info can be found here. This is a work in progress until official documentation is produced.&lt;br /&gt;
&lt;br /&gt;
 https://github.com/peterthorpe5/How_to_use_Kennedy_HPC&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following are example job scripts, showing #SBATCH directives you can put in your shell script to request particular resources.&lt;br /&gt;
&lt;br /&gt;
 request_48_thread_1.3TBRAM&lt;br /&gt;
 #!/bin/bash -l  &amp;#039;&amp;#039;&amp;#039;# note: the -l is now essential&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
 #SBATCH -J fly_pilon   # job name&lt;br /&gt;
 #SBATCH -N 1     # node&lt;br /&gt;
 #SBATCH --ntasks-per-node=48&lt;br /&gt;
 #SBATCH --threads-per-core=2&lt;br /&gt;
 #SBATCH -p bigmem&lt;br /&gt;
 #SBATCH --nodelist=kennedy150  # request this specific node, which has 1.5TB RAM&lt;br /&gt;
 #SBATCH --mem=1350GB&lt;br /&gt;
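&lt;br /&gt;
A script like the one above can be submitted and monitored with the usual slurm commands (the script name here is just an example):&lt;br /&gt;
 sbatch request_48_thread_1.3TBRAM.sh   # submit: prints the job ID&lt;br /&gt;
 squeue -u $USER                        # list your queued and running jobs&lt;br /&gt;
 scancel 12345                          # cancel job 12345&lt;br /&gt;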
&lt;br /&gt;
&lt;br /&gt;
 test_conda_activate&lt;br /&gt;
 #!/bin/bash -l&lt;br /&gt;
 #SBATCH -J conda_test   # job name&lt;br /&gt;
 #SBATCH -N 1     # node&lt;br /&gt;
 #SBATCH --ntasks-per-node=1&lt;br /&gt;
 #SBATCH -p bigmem    # bigmem is for the BIOINF community&lt;br /&gt;
 #SBATCH --mail-type=END     # email at the end of the job&lt;br /&gt;
 #SBATCH --mail-user=$USER@st-andrews.ac.uk      # your email address (replace $USER with your username: variables are not expanded in #SBATCH lines)&lt;br /&gt;
&lt;br /&gt;
cd /gpfs1/home/$USER/&lt;br /&gt;
&lt;br /&gt;
pyv=&amp;quot;$(python -V 2&amp;gt;&amp;amp;1)&amp;quot;&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;$pyv&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# conda to activate the software&lt;br /&gt;
&lt;br /&gt;
echo $PATH&lt;br /&gt;
&lt;br /&gt;
conda activate spades&lt;br /&gt;
&lt;br /&gt;
pyv=&amp;quot;$(python -V 2&amp;gt;&amp;amp;1)&amp;quot;&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;$pyv&amp;quot;&lt;br /&gt;
&lt;br /&gt;
conda deactivate &lt;br /&gt;
&lt;br /&gt;
conda activate python27&lt;br /&gt;
&lt;br /&gt;
pyv=&amp;quot;$(python2 -V 2&amp;gt;&amp;amp;1)&amp;quot;&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;$pyv&amp;quot;&lt;br /&gt;
&lt;br /&gt;
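If conda activate fails inside a batch job (it relies on the login shell from bash -l sourcing your conda set-up), sourcing the conda profile script explicitly first is a common workaround. The path below is an assumption; adjust it to your own install:&lt;br /&gt;
 source $HOME/miniconda3/etc/profile.d/conda.sh&lt;br /&gt;
 conda activate spades&lt;br /&gt;
&lt;br /&gt;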
 12threads_bigmem_30GB_RAM&lt;br /&gt;
&lt;br /&gt;
 #!/bin/bash -l  # the -l is essential&lt;br /&gt;
 #SBATCH -J trimmo   # job name&lt;br /&gt;
 #SBATCH -N 1     # node&lt;br /&gt;
 #SBATCH --ntasks-per-node=12&lt;br /&gt;
 #SBATCH --threads-per-core=2&lt;br /&gt;
 #SBATCH -p bigmem&lt;br /&gt;
 #SBATCH --mem=30GB&lt;/div&gt;</summary>
		<author><name>PeterThorpe</name></author>	</entry>

	<entry>
		<id>http://stab.st-andrews.ac.uk/wiki/index.php?title=Kennedy_manual&amp;diff=3461</id>
		<title>Kennedy manual</title>
		<link rel="alternate" type="text/html" href="http://stab.st-andrews.ac.uk/wiki/index.php?title=Kennedy_manual&amp;diff=3461"/>
				<updated>2020-05-06T08:07:18Z</updated>
		
		<summary type="html">&lt;p&gt;PeterThorpe: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
Temporary quick-start info can be found here. This is a work in progress until official documentation is produced.&lt;br /&gt;
&lt;br /&gt;
 https://github.com/peterthorpe5/How_to_use_Kennedy_HPC&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following are example job scripts, showing #SBATCH directives you can put in your shell script to request particular resources.&lt;br /&gt;
&lt;br /&gt;
 request_48_thread_1.3TBRAM&lt;br /&gt;
 #!/bin/bash -l  &amp;#039;&amp;#039;&amp;#039;# note: the -l is now essential&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
 #SBATCH -J fly_pilon   # job name&lt;br /&gt;
 #SBATCH -N 1     # node&lt;br /&gt;
 #SBATCH --ntasks-per-node=48&lt;br /&gt;
 #SBATCH --threads-per-core=2&lt;br /&gt;
 #SBATCH -p bigmem&lt;br /&gt;
 #SBATCH --nodelist=kennedy150  # request this specific node, which has 1.5TB RAM&lt;br /&gt;
 #SBATCH --mem=1350GB&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 test_conda_activate&lt;br /&gt;
 #!/bin/bash -l&lt;br /&gt;
 #SBATCH -J conda_test   # job name&lt;br /&gt;
 #SBATCH -N 1     # node&lt;br /&gt;
 #SBATCH --ntasks-per-node=1&lt;br /&gt;
 #SBATCH -p bigmem    # bigmem is for the BIOINF community&lt;br /&gt;
 #SBATCH --mail-type=END     # email at the end of the job&lt;br /&gt;
 #SBATCH --mail-user=$USER@st-andrews.ac.uk      # your email address (replace $USER with your username: variables are not expanded in #SBATCH lines)&lt;br /&gt;
&lt;br /&gt;
cd /gpfs1/home/$USER/&lt;br /&gt;
&lt;br /&gt;
pyv=&amp;quot;$(python -V 2&amp;gt;&amp;amp;1)&amp;quot;&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;$pyv&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# conda to activate the software&lt;br /&gt;
&lt;br /&gt;
echo $PATH&lt;br /&gt;
&lt;br /&gt;
conda activate spades&lt;br /&gt;
&lt;br /&gt;
pyv=&amp;quot;$(python -V 2&amp;gt;&amp;amp;1)&amp;quot;&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;$pyv&amp;quot;&lt;br /&gt;
&lt;br /&gt;
conda deactivate &lt;br /&gt;
&lt;br /&gt;
conda activate python27&lt;br /&gt;
&lt;br /&gt;
pyv=&amp;quot;$(python2 -V 2&amp;gt;&amp;amp;1)&amp;quot;&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;$pyv&amp;quot;&lt;br /&gt;
&lt;br /&gt;
 12threads_bigmem_30GB_RAM&lt;br /&gt;
&lt;br /&gt;
 #!/bin/bash -l  # the -l is essential&lt;br /&gt;
 #SBATCH -J trimmo   # job name&lt;br /&gt;
 #SBATCH -N 1     # node&lt;br /&gt;
 #SBATCH --ntasks-per-node=12&lt;br /&gt;
 #SBATCH --threads-per-core=2&lt;br /&gt;
 #SBATCH -p bigmem&lt;br /&gt;
 #SBATCH --mem=30GB&lt;/div&gt;</summary>
		<author><name>PeterThorpe</name></author>	</entry>

	</feed>