<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
		<id>http://stab.st-andrews.ac.uk/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Jw297</id>
		<title>wiki - User contributions [en]</title>
		<link rel="self" type="application/atom+xml" href="http://stab.st-andrews.ac.uk/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Jw297"/>
		<link rel="alternate" type="text/html" href="http://stab.st-andrews.ac.uk/wiki/index.php?title=Special:Contributions/Jw297"/>
		<updated>2026-04-18T18:07:40Z</updated>
		<subtitle>User contributions</subtitle>
		<generator>MediaWiki 1.30.0</generator>

	<entry>
		<id>http://stab.st-andrews.ac.uk/wiki/index.php?title=Log_files&amp;diff=3411</id>
		<title>Log files</title>
		<link rel="alternate" type="text/html" href="http://stab.st-andrews.ac.uk/wiki/index.php?title=Log_files&amp;diff=3411"/>
				<updated>2019-05-31T08:21:23Z</updated>
		
		<summary type="html">&lt;p&gt;Jw297: Created page with &amp;quot;=Useful log files=  Main useful one is   /var/log/messages  and contains the majority of the useful information the system produces.    There&amp;#039;s some other useful ones dependin...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Useful log files=&lt;br /&gt;
&lt;br /&gt;
The main useful one is&lt;br /&gt;
&lt;br /&gt;
 /var/log/messages&lt;br /&gt;
&lt;br /&gt;
which contains the majority of the useful information the system produces.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
There are some other useful ones depending on what you&amp;#039;re doing.&lt;br /&gt;
&lt;br /&gt;
LDAP stuff? /var/log/ldap&lt;br /&gt;
&lt;br /&gt;
cron stuff? /var/log/cron&lt;br /&gt;
&lt;br /&gt;
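To follow any of these live, or to search one, something like this works (run as root):&lt;br /&gt;
 tail -f /var/log/messages&lt;br /&gt;
 grep sshd /var/log/messages&lt;br /&gt;
&lt;br /&gt;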
Security/access stuff? You&amp;#039;ll want either /var/log/secure or perhaps files in the /var/log/sssd/ folder.&lt;/div&gt;</summary>
		<author><name>Jw297</name></author>	</entry>

	<entry>
		<id>http://stab.st-andrews.ac.uk/wiki/index.php?title=Singularity_with_grid_engine&amp;diff=3408</id>
		<title>Singularity with grid engine</title>
		<link rel="alternate" type="text/html" href="http://stab.st-andrews.ac.uk/wiki/index.php?title=Singularity_with_grid_engine&amp;diff=3408"/>
				<updated>2019-05-28T11:58:53Z</updated>
		
		<summary type="html">&lt;p&gt;Jw297: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;We have been working hard in the background to allow users to run Docker images.&lt;br /&gt;
Good news: you can now do this via Singularity (don’t worry, this is all installed and should just work).&lt;br /&gt;
Try Conda to install your package of interest first. If it is not on Conda, you can try the following:&lt;br /&gt;
&lt;br /&gt;
Please remember, you must run this through qsub and not directly on the head node. &lt;br /&gt;
&lt;br /&gt;
Full Singularity documentation is here:&lt;br /&gt;
 https://www.sylabs.io/guides/3.2/user-guide/&lt;br /&gt;
&lt;br /&gt;
Where to get the images from? Search here:&lt;br /&gt;
 https://hub.docker.com/&lt;br /&gt;
If you search for funannotate you will see the name nextgenusfs/funannotate;&lt;br /&gt;
this is what you want.&lt;br /&gt;
&lt;br /&gt;
To download the image, simply type:&lt;br /&gt;
 singularity pull docker://name_of_image&lt;br /&gt;
 e.g.  singularity pull docker://nextgenusfs/funannotate&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Please do not run this on the head node; it must be run through the qsub system.&lt;br /&gt;
&lt;br /&gt;
Example of how to run via qsub:&lt;br /&gt;
 qsub -l singularity -b y singularity run /full_path_to/ubuntu.sif /full_path_to/test_script.sh&lt;br /&gt;
 replacing ubuntu.sif with whatever image you are trying to run&lt;br /&gt;
&lt;br /&gt;
Let&amp;#039;s go through that command in more depth:&lt;br /&gt;
 qsub -l singularity -b y singularity run&lt;br /&gt;
&lt;br /&gt;
This is a special resource request so Singularity will run on a server that has it installed; you don&amp;#039;t need to alter this, just copy it.&lt;br /&gt;
  /full_path_to/ubuntu.sif &lt;br /&gt;
this is the image you downloaded for the software you are interested in&lt;br /&gt;
&lt;br /&gt;
  ./test_script.sh&lt;br /&gt;
 this needs to contain the commands you want to run&lt;br /&gt;
&lt;br /&gt;
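A minimal test_script.sh might look like this (the path and command here are just placeholders):&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 cd /full_path_to/working_directory&lt;br /&gt;
 echo &amp;quot;running inside the container&amp;quot;&lt;br /&gt;
&lt;br /&gt;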
&lt;br /&gt;
example 2, running the image with qsub:&lt;br /&gt;
 qsub -pe multi 8 -l singularity -b y singularity run /full_path/funannotate_latest.sif /full_path/fun_singularity.sh&lt;br /&gt;
The shell script must contain the full path of the current working directory, i.e.&lt;br /&gt;
 cd /full_path/&lt;br /&gt;
 putting a #!cwd command in your shell scripts will not work!&lt;br /&gt;
 -pe multi 8     this asks for 8 cores, just as normal. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Notes for Admin:&lt;br /&gt;
To add another node with singularity on:&lt;br /&gt;
&lt;br /&gt;
 qconf -me &amp;lt;nodename&amp;gt;&lt;br /&gt;
&lt;br /&gt;
On the complex_values line remove NONE if present, and add &amp;quot;singularity=TRUE&amp;quot;&lt;br /&gt;
We followed the guide here: https://blogs.univa.com/2019/01/using-univa-grid-engine-with-singularity/&lt;br /&gt;
Singularity is now a requestable resource: use &amp;quot;-l singularity&amp;quot; to make sure you get a node with Singularity on it.&lt;/div&gt;</summary>
		<author><name>Jw297</name></author>	</entry>

	<entry>
		<id>http://stab.st-andrews.ac.uk/wiki/index.php?title=Mounting_marvin_remotely&amp;diff=3396</id>
		<title>Mounting marvin remotely</title>
		<link rel="alternate" type="text/html" href="http://stab.st-andrews.ac.uk/wiki/index.php?title=Mounting_marvin_remotely&amp;diff=3396"/>
				<updated>2019-05-23T13:30:25Z</updated>
		
		<summary type="html">&lt;p&gt;Jw297: /* Unix */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Note==&lt;br /&gt;
If you find password issues/Error 13 Access Denied messages, it is likely because your password on marvin is different from the password in the LDAP records on marvin. To reset your LDAP password, get an admin to run &amp;#039;&amp;#039;&amp;#039;smbldap-passwd &amp;lt;username&amp;gt;&amp;#039;&amp;#039;&amp;#039; to reset the password to a known value, then get the user to set their password using &amp;#039;&amp;#039;&amp;#039;smbldap-passwd&amp;#039;&amp;#039;&amp;#039; &amp;#039;&amp;#039;&amp;#039;not&amp;#039;&amp;#039;&amp;#039; &amp;#039;&amp;#039;&amp;#039;passwd&amp;#039;&amp;#039;&amp;#039;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Unix=&lt;br /&gt;
&lt;br /&gt;
Firstly you&amp;#039;ll need to make a .smbcredentials file in the format:&lt;br /&gt;
&lt;br /&gt;
 username=&amp;lt;username&amp;gt;&lt;br /&gt;
 password=**********&lt;br /&gt;
&lt;br /&gt;
Change the permissions:&lt;br /&gt;
 chmod 600 .smbcredentials&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Now you can mount it from the command line:&lt;br /&gt;
 sudo mount -t cifs -o vers=2.0,credentials=/home/jw279/.smbcredentials,uid=1000,gid=1000 --verbose //marvin.st-andrews.ac.uk/jw279 /home/jw279/Marvin/&lt;br /&gt;
&lt;br /&gt;
changing /home/jw279/.smbcredentials to the path of your .smbcredentials file, jw279 in //marvin.st-andrews.ac.uk/jw279 to your username, and /home/jw279/Marvin to the desired mount location.&lt;br /&gt;
&lt;br /&gt;
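To unmount it again later:&lt;br /&gt;
 sudo umount /home/jw279/Marvin/&lt;br /&gt;
&lt;br /&gt;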
Alternatively, add a line to /etc/fstab:&lt;br /&gt;
&lt;br /&gt;
 //marvin.st-andrews.ac.uk/jw279 /home/jw279/Marvin  cifs  vers=2.0,credentials=/home/jw279/.smbcredentials,uid=1000,gid=1000   0   0&lt;br /&gt;
&lt;br /&gt;
replacing jw279 in //marvin.st-andrews.ac.uk/jw279 with your username and /home/jw279/Marvin with the desired mount location.&lt;/div&gt;</summary>
		<author><name>Jw297</name></author>	</entry>

	<entry>
		<id>http://stab.st-andrews.ac.uk/wiki/index.php?title=Singularity_with_grid_engine&amp;diff=3395</id>
		<title>Singularity with grid engine</title>
		<link rel="alternate" type="text/html" href="http://stab.st-andrews.ac.uk/wiki/index.php?title=Singularity_with_grid_engine&amp;diff=3395"/>
				<updated>2019-05-23T11:42:55Z</updated>
		
		<summary type="html">&lt;p&gt;Jw297: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;JW may 2019&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Followed guide here: https://blogs.univa.com/2019/01/using-univa-grid-engine-with-singularity/&lt;br /&gt;
&lt;br /&gt;
Pete installed singularity on phylo&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Now a requestable resource: use &amp;quot;-l singularity&amp;quot; to make sure you get a node with singularity on it.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Example:&lt;br /&gt;
 qsub -l singularity -b y singularity run ubuntu.sif ./test_script.sh&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To add another node with singularity on:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
qconf -me &amp;lt;nodename&amp;gt;&lt;br /&gt;
&lt;br /&gt;
On the complex_values line remove NONE if present, and add &amp;quot;singularity=TRUE&amp;quot;&lt;/div&gt;</summary>
		<author><name>Jw297</name></author>	</entry>

	<entry>
		<id>http://stab.st-andrews.ac.uk/wiki/index.php?title=Singularity_with_grid_engine&amp;diff=3394</id>
		<title>Singularity with grid engine</title>
		<link rel="alternate" type="text/html" href="http://stab.st-andrews.ac.uk/wiki/index.php?title=Singularity_with_grid_engine&amp;diff=3394"/>
				<updated>2019-05-23T11:39:37Z</updated>
		
		<summary type="html">&lt;p&gt;Jw297: Created page with &amp;quot;JW may 2019   Followed guide here: https://blogs.univa.com/2019/01/using-univa-grid-engine-with-singularity/  Pete installed singularity on phylo   now a requestable resource...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;JW may 2019&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Followed guide here: https://blogs.univa.com/2019/01/using-univa-grid-engine-with-singularity/&lt;br /&gt;
&lt;br /&gt;
Pete installed singularity on phylo&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Now a requestable resource: use &amp;quot;-l singularity&amp;quot; to make sure you get a node with singularity on it.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Example:&lt;br /&gt;
 qsub -l singularity -b y singularity run ubuntu.sif ./test_script.sh&lt;/div&gt;</summary>
		<author><name>Jw297</name></author>	</entry>

	<entry>
		<id>http://stab.st-andrews.ac.uk/wiki/index.php?title=Mounting_marvin_remotely&amp;diff=3393</id>
		<title>Mounting marvin remotely</title>
		<link rel="alternate" type="text/html" href="http://stab.st-andrews.ac.uk/wiki/index.php?title=Mounting_marvin_remotely&amp;diff=3393"/>
				<updated>2019-05-23T10:25:51Z</updated>
		
		<summary type="html">&lt;p&gt;Jw297: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Note==&lt;br /&gt;
If you find password issues/Error 13 Access Denied messages, it is likely because your password on marvin is different from the password in the LDAP records on marvin. To reset your LDAP password, get an admin to run &amp;#039;&amp;#039;&amp;#039;smbldap-passwd &amp;lt;username&amp;gt;&amp;#039;&amp;#039;&amp;#039; to reset the password to a known value, then get the user to set their password using &amp;#039;&amp;#039;&amp;#039;smbldap-passwd&amp;#039;&amp;#039;&amp;#039; &amp;#039;&amp;#039;&amp;#039;not&amp;#039;&amp;#039;&amp;#039; &amp;#039;&amp;#039;&amp;#039;passwd&amp;#039;&amp;#039;&amp;#039;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Unix=&lt;br /&gt;
&lt;br /&gt;
Firstly you&amp;#039;ll need to make a .smbcredentials file in the format:&lt;br /&gt;
&lt;br /&gt;
 username=&amp;lt;username&amp;gt;&lt;br /&gt;
 password=**********&lt;br /&gt;
&lt;br /&gt;
Change the permissions:&lt;br /&gt;
 chmod 600 .smbcredentials&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Now you can mount it from the command line:&lt;br /&gt;
 sudo mount -t cifs -o vers=1.0,credentials=/home/jw279/.smbcredentials,uid=1000,gid=1000 --verbose //marvin.st-andrews.ac.uk/jw279 /home/jw279/Marvin/&lt;br /&gt;
&lt;br /&gt;
changing /home/jw279/.smbcredentials to the path of your .smbcredentials file, jw279 in //marvin.st-andrews.ac.uk/jw279 to your username, and /home/jw279/Marvin to the desired mount location.&lt;br /&gt;
&lt;br /&gt;
Alternatively, add a line to /etc/fstab:&lt;br /&gt;
&lt;br /&gt;
 //marvin.st-andrews.ac.uk/jw279 /home/jw279/Marvin  cifs  vers=2.0,credentials=/home/jw279/.smbcredentials,uid=1000,gid=1000   0   0&lt;br /&gt;
&lt;br /&gt;
replacing jw279 in //marvin.st-andrews.ac.uk/jw279 with your username and /home/jw279/Marvin with the desired mount location.&lt;/div&gt;</summary>
		<author><name>Jw297</name></author>	</entry>

	<entry>
		<id>http://stab.st-andrews.ac.uk/wiki/index.php?title=StABDMIN&amp;diff=3392</id>
		<title>StABDMIN</title>
		<link rel="alternate" type="text/html" href="http://stab.st-andrews.ac.uk/wiki/index.php?title=StABDMIN&amp;diff=3392"/>
				<updated>2019-05-13T12:44:09Z</updated>
		
		<summary type="html">&lt;p&gt;Jw297: /* How it works */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=StABDMIN Site=&lt;br /&gt;
&lt;br /&gt;
Currently running at: marvin.st-andrews.ac.uk on port 80.&lt;br /&gt;
&lt;br /&gt;
The StABDMIN site was born of the need for a way to track the users on marvin, the PIs they&amp;#039;re associated with, the funded and unfunded grants they hold, and the data they use.&lt;br /&gt;
&lt;br /&gt;
It was written by Joe in about 2 weeks in 2019, and he apologises sincerely for the lack of comments in the code and absence of tests. It&amp;#039;s still a damn sight better than what was there before (i.e. nothing).&lt;br /&gt;
&lt;br /&gt;
=Keeping it running=&lt;br /&gt;
&lt;br /&gt;
The database is called StABDMIN, the database user is StABDMIN and the password is stored in /etc/mysql/stabdmin.cnf. &lt;br /&gt;
&lt;br /&gt;
The code for the project is in /storage/home/users/StABDMIN. The site is set to store static files (css, js etc) in /var/www/StABDMIN/. &lt;br /&gt;
&lt;br /&gt;
Currently static files are being served using the python package &amp;quot;whitenoise&amp;quot;, but this ought to be done with nginx or apache in the long run. It&amp;#039;s also not served over https currently, as marvin can&amp;#039;t cope with a modern apache (mod_wsgi was compiled with python 2.6).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Restarting the server==&lt;br /&gt;
As root: &lt;br /&gt;
&lt;br /&gt;
*To start the webserver, navigate to /storage/home/users/StABDMIN/StABDMIN&lt;br /&gt;
* run &amp;#039;&amp;#039;&amp;#039;module load python/3.6.4&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
* run &amp;#039;&amp;#039;&amp;#039;sh run_webserver.sh&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
This will run gunicorn using:&lt;br /&gt;
 gunicorn StABDMIN.wsgi:application  --pid StABDMIN.pid -b marvin.st-andrews.ac.uk:80 -n StABDMIN -D&lt;br /&gt;
&lt;br /&gt;
You can stop it with the process id in the StABDMIN.pid file, or by using pkill gunicorn.&lt;br /&gt;
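For example:&lt;br /&gt;
 kill $(cat StABDMIN.pid)&lt;br /&gt;
or&lt;br /&gt;
 pkill gunicorn&lt;br /&gt;
&lt;br /&gt;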
&lt;br /&gt;
==Importing data into the backend==&lt;br /&gt;
&lt;br /&gt;
The script /storage/home/users/StABDMIN/StABDMIN/importUsageFromFiles.py is set in cron to run every day at 5.05am. It relies on the results of the /mnt/system_usage/ scripts written by Peter Thorpe, and loads the data from the files created by those scripts into the system for tracking user data usage and total disk use/free space.&lt;br /&gt;
&lt;br /&gt;
 module load python/3.6.4&lt;br /&gt;
 python3 importUsageFromFiles.py&lt;br /&gt;
&lt;br /&gt;
will load the data in manually. &lt;br /&gt;
&lt;br /&gt;
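The daily cron job will be something along these lines (the exact crontab entry isn&amp;#039;t recorded here, so treat this as a sketch; the environment may also need the python module loaded):&lt;br /&gt;
 5 5 * * * cd /storage/home/users/StABDMIN/StABDMIN &amp;&amp; python3 importUsageFromFiles.py&lt;br /&gt;
&lt;br /&gt;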
To add users/PIs/Grants, you need to use the ADMIN site, accessed from the &amp;quot;Admin&amp;quot; link on the navigation bar of the website.&lt;br /&gt;
&lt;br /&gt;
If a new PI has new Users and a new Grant, the PI &amp;#039;&amp;#039;&amp;#039;must&amp;#039;&amp;#039;&amp;#039; be created first.&lt;br /&gt;
&lt;br /&gt;
If a new bioinformatician joins, assign them the PI &amp;quot;StABU&amp;quot; and &amp;#039;&amp;#039;&amp;#039;add them to the bioinformatician table&amp;#039;&amp;#039;&amp;#039;. This allows tracking of their use within the StABU PI, and also allows them to be assigned as primary bioinformaticians on grants.&lt;br /&gt;
&lt;br /&gt;
=How it works=&lt;br /&gt;
&lt;br /&gt;
The whole thing is based on the python package Django, but uses javascript for loading data into datatables (a javascript package that makes the tables pretty; all the javascript is in the template html file for each page in templates/), using JQuery (js again) ajax calls. The plots are created by D3 (a javascript plotting library), again from data from ajax calls.&lt;br /&gt;
&lt;br /&gt;
There are 5 tables in the database directly used by django. &lt;br /&gt;
&lt;br /&gt;
*Pis stores all the PI information.&lt;br /&gt;
*GrantSubmissions stores the grants information, whether it&amp;#039;s proposals, expired, or funded.&lt;br /&gt;
*Users stores the information on users.&lt;br /&gt;
*help notes stores notes on help given, meaning we can track what we&amp;#039;ve done for people. These can be assigned to users, PIs and/or grants.&lt;br /&gt;
*Bioinformaticians is a way of highlighting user instances as bioinformaticians so we can assign them to grants.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Files and Folders====&lt;br /&gt;
*accounts - has the code of the login side of the site&lt;br /&gt;
*cronTask.sh - attempt to get cron to load the data&lt;br /&gt;
*importUsageFromFiles.py - script to load the data in from /mnt/system-usage/&lt;br /&gt;
*manage.py - core django management script. &lt;br /&gt;
*mysql - password for mysql database&lt;br /&gt;
*run_webserver.sh - script that&amp;#039;ll run a daemonized gunicorn webserver for the website. If it&amp;#039;s down, run this.&lt;br /&gt;
*StABDMIN - the core of the site is here; settings.py and urls.py are the most useful&lt;br /&gt;
*StABDMIN.pid - process id of the gunicorn webserver running the site&lt;br /&gt;
*static - static files, populate using &amp;quot;python3 manage.py collectstatic&amp;quot; to get the static files from the apps together&lt;br /&gt;
*templates - template html files the site serves for the main views. The javascript for the plots is in here.&lt;br /&gt;
*UsersGrants - the core code for the site. All of the functions that actually run the site are in view.py, the urls are managed in urls.py&lt;br /&gt;
*usersImport.yaml - this was the initial import of data to the database. Its structure is likely already outdated due to changes, but it is kept in case the database is lost&lt;br /&gt;
*wsgi.py - manages the wsgi stuff. I think this isn&amp;#039;t needed.&lt;/div&gt;</summary>
		<author><name>Jw297</name></author>	</entry>

	<entry>
		<id>http://stab.st-andrews.ac.uk/wiki/index.php?title=StABDMIN&amp;diff=3391</id>
		<title>StABDMIN</title>
		<link rel="alternate" type="text/html" href="http://stab.st-andrews.ac.uk/wiki/index.php?title=StABDMIN&amp;diff=3391"/>
				<updated>2019-05-13T12:41:59Z</updated>
		
		<summary type="html">&lt;p&gt;Jw297: /* How it works */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=StABDMIN Site=&lt;br /&gt;
&lt;br /&gt;
Currently running at: marvin.st-andrews.ac.uk on port 80.&lt;br /&gt;
&lt;br /&gt;
The StABDMIN site was born of the need for a way to track the users on marvin, the PIs they&amp;#039;re associated with, the funded and unfunded grants they hold, and the data they use.&lt;br /&gt;
&lt;br /&gt;
It was written by Joe in about 2 weeks in 2019, and he apologises sincerely for the lack of comments in the code and absence of tests. It&amp;#039;s still a damn sight better than what was there before (i.e. nothing).&lt;br /&gt;
&lt;br /&gt;
=Keeping it running=&lt;br /&gt;
&lt;br /&gt;
The database is called StABDMIN, the database user is StABDMIN and the password is stored in /etc/mysql/stabdmin.cnf. &lt;br /&gt;
&lt;br /&gt;
The code for the project is in /storage/home/users/StABDMIN. The site is set to store static files (css, js etc) in /var/www/StABDMIN/. &lt;br /&gt;
&lt;br /&gt;
Currently static files are being served using the python package &amp;quot;whitenoise&amp;quot;, but this ought to be done with nginx or apache in the long run. It&amp;#039;s also not served over https currently, as marvin can&amp;#039;t cope with a modern apache (mod_wsgi was compiled with python 2.6).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Restarting the server==&lt;br /&gt;
As root: &lt;br /&gt;
&lt;br /&gt;
*To start the webserver navigate to /storage/home/users/StABDMIN/StABDMIN&lt;br /&gt;
* run &amp;#039;&amp;#039;&amp;#039;module load python/3.6.4&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
* run &amp;#039;&amp;#039;&amp;#039;sh run_webserver.sh&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
And this will run gunicorn using&lt;br /&gt;
 gunicorn StABDMIN.wsgi:application  --pid StABDMIN.pid -b marvin.st-andrews.ac.uk:80 -n StABDMIN -D&lt;br /&gt;
&lt;br /&gt;
You can stop it with the process id in StABDMIN.pid file, and/or by using pkill gunicorn&lt;br /&gt;
&lt;br /&gt;
==Importing data into the backend==&lt;br /&gt;
&lt;br /&gt;
The script /storage/home/users/StABDMIN/StABDMIN/importUsageFromFiles.py is set in cron to run every day at 5.05am. It relies on the results of the /mnt/system_usage/ scripts written by Peter Thorpe, and loads the data from the files created by those scripts into the system for tracking user data usage and total disk use/free space.&lt;br /&gt;
&lt;br /&gt;
 module load python/3.6.4&lt;br /&gt;
 python3 importUsageFromFiles.py&lt;br /&gt;
&lt;br /&gt;
will load the data in manually. &lt;br /&gt;
&lt;br /&gt;
To add users/PIs/Grants, you need to use the ADMIN site, access from the &amp;quot;Admin&amp;quot; link on the navigation bar of the website. &lt;br /&gt;
&lt;br /&gt;
If a new PI has a new Users and a new Grant, the PI &amp;#039;&amp;#039;&amp;#039;must&amp;#039;&amp;#039;&amp;#039; be created first. &lt;br /&gt;
&lt;br /&gt;
If a new bioinformatician joins, assign them the PI &amp;quot;StABU&amp;quot; and &amp;#039;&amp;#039;&amp;#039;add them to the bioinformatician table&amp;#039;&amp;#039;&amp;#039;. This allows tracking of their use within the StABU PI, and also allows them to be assigned as primary bioinformaticians on grants.&lt;br /&gt;
&lt;br /&gt;
=How it works=&lt;br /&gt;
&lt;br /&gt;
The whole thing is based on the python package Django, but uses javascript for loading data into datatables (javascript package that makes the tables pretty), using JQuery (js again) ajax calls. The plots are created by D3 (javascript plotting library), again from data from ajax calls. &lt;br /&gt;
&lt;br /&gt;
There are 5 tables in the database directly used by django. &lt;br /&gt;
&lt;br /&gt;
Pis stores all the PI information. &lt;br /&gt;
GrantSubmissions stores the grants information, whether it&amp;#039;s proposals, expired, or funded. &lt;br /&gt;
Users stores the information on users. &lt;br /&gt;
help notes stores notes on help given, meaning we can track what we&amp;#039;ve done for people. These can be assigned to users, PIs and/or grants. &lt;br /&gt;
Bioinformaticians is a way of highlighting user instances as bioinformaticians so we can assign them to grants.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Files and Folders====&lt;br /&gt;
*accounts - has the code of the login side of the site&lt;br /&gt;
*cronTask.sh - attempt to get cron to load the data&lt;br /&gt;
*importUsageFromFiles.py - script to load the data in from /mnt/system-usage/&lt;br /&gt;
*manage.py - core django management script. &lt;br /&gt;
*mysql - password for mysql database&lt;br /&gt;
*run_webserver.sh - script that&amp;#039;ll run a daemonized gunicorn webserver for running the website. if it&amp;#039;s down, run this. &lt;br /&gt;
*StABDMIN - the core of the site is here; settings.py and urls.py are the most useful&lt;br /&gt;
*StABDMIN.pid - process id of the gunicorn webserver running the site&lt;br /&gt;
*static - static files, populate using &amp;quot;python3 manage.py collectstatic&amp;quot; to get the static files from the apps together&lt;br /&gt;
*templates - template html files the site serves for the main views. The javascript for the plots is in here.&lt;br /&gt;
*UsersGrants - the core code for the site. All of the functions that actually run the site are in view.py, the urls are managed in urls.py&lt;br /&gt;
*usersImport.yaml - this was the initial import of data to the database. Its structure is likely already outdated due to changes, but it is kept in case the database is lost&lt;br /&gt;
*wsgi.py - manages the wsgi stuff. I think this isn&amp;#039;t needed.&lt;/div&gt;</summary>
		<author><name>Jw297</name></author>	</entry>

	<entry>
		<id>http://stab.st-andrews.ac.uk/wiki/index.php?title=StABDMIN&amp;diff=3386</id>
		<title>StABDMIN</title>
		<link rel="alternate" type="text/html" href="http://stab.st-andrews.ac.uk/wiki/index.php?title=StABDMIN&amp;diff=3386"/>
				<updated>2019-05-01T12:46:10Z</updated>
		
		<summary type="html">&lt;p&gt;Jw297: /* Restarting the server */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=StABDMIN Site=&lt;br /&gt;
&lt;br /&gt;
Currently running at: marvin.st-andrews.ac.uk on port 80.&lt;br /&gt;
&lt;br /&gt;
The StABDMIN site was born of the need for a way to track the users on marvin, the PIs they&amp;#039;re associated with, the funded and unfunded grants they hold, and the data they use.&lt;br /&gt;
&lt;br /&gt;
It was written by Joe in about 2 weeks in 2019, and he apologises sincerely for the lack of comments in the code and absence of tests. It&amp;#039;s still a damn sight better than what was there before (i.e. nothing).&lt;br /&gt;
&lt;br /&gt;
=Keeping it running=&lt;br /&gt;
&lt;br /&gt;
The database is called StABDMIN, the database user is StABDMIN and the password is stored in /etc/mysql/stabdmin.cnf. &lt;br /&gt;
&lt;br /&gt;
The code for the project is in /storage/home/users/StABDMIN. The site is set to store static files (css, js etc) in /var/www/StABDMIN/. &lt;br /&gt;
&lt;br /&gt;
Currently static files are being served using the python package &amp;quot;whitenoise&amp;quot;, but this ought to be done with nginx or apache in the long run. It&amp;#039;s also not served over https currently, as marvin can&amp;#039;t cope with a modern apache (mod_wsgi was compiled with python 2.6).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Restarting the server==&lt;br /&gt;
As root: &lt;br /&gt;
&lt;br /&gt;
*To start the webserver navigate to /storage/home/users/StABDMIN/StABDMIN&lt;br /&gt;
* run &amp;#039;&amp;#039;&amp;#039;module load python/3.6.4&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
* run &amp;#039;&amp;#039;&amp;#039;sh run_webserver.sh&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
And this will run gunicorn using&lt;br /&gt;
 gunicorn StABDMIN.wsgi:application  --pid StABDMIN.pid -b marvin.st-andrews.ac.uk:80 -n StABDMIN -D&lt;br /&gt;
&lt;br /&gt;
You can stop it with the process id in StABDMIN.pid file, and/or by using pkill gunicorn&lt;br /&gt;
&lt;br /&gt;
==Importing data into the backend==&lt;br /&gt;
&lt;br /&gt;
The script /storage/home/users/StABDMIN/StABDMIN/importUsageFromFiles.py is set in cron to run every day at 5.05am. It relies on the results of the /mnt/system_usage/ scripts written by Peter Thorpe, and loads the data from the files created by those scripts into the system for tracking user data usage and total disk use/free space.&lt;br /&gt;
&lt;br /&gt;
 module load python/3.6.4&lt;br /&gt;
 python3 importUsageFromFiles.py&lt;br /&gt;
&lt;br /&gt;
will load the data in manually. &lt;br /&gt;
&lt;br /&gt;
To add users/PIs/Grants, you need to use the ADMIN site, access from the &amp;quot;Admin&amp;quot; link on the navigation bar of the website. &lt;br /&gt;
&lt;br /&gt;
If a new PI has a new Users and a new Grant, the PI &amp;#039;&amp;#039;&amp;#039;must&amp;#039;&amp;#039;&amp;#039; be created first. &lt;br /&gt;
&lt;br /&gt;
If a new bioinformatician joins, assign them the PI &amp;quot;StABU&amp;quot; and &amp;#039;&amp;#039;&amp;#039;add them to the bioinformatician table&amp;#039;&amp;#039;&amp;#039;. This allows tracking of their use within the StABU PI, and also allows them to be assigned as primary bioinformaticians on grants.&lt;br /&gt;
&lt;br /&gt;
=How it works=&lt;br /&gt;
&lt;br /&gt;
The whole thing is based on the python package Django, but uses javascript for loading data into datatables (javascript package that makes the tables pretty), using JQuery (js again) ajax calls. The plots are created by D3 (javascript plotting library), again from data from ajax calls. &lt;br /&gt;
&lt;br /&gt;
There are 5 tables in the database directly used by django. &lt;br /&gt;
&lt;br /&gt;
Pis stores all the PI information. &lt;br /&gt;
GrantSubmissions stores the grants information, whether it&amp;#039;s proposals, expired, or funded. &lt;br /&gt;
Users stores the information on users. &lt;br /&gt;
help notes stores notes on help given, meaning we can track what we&amp;#039;ve done for people. These can be assigned to users, PIs and/or grants. &lt;br /&gt;
Bioinformaticians is a way of highlighting user instances as bioinformaticians so we can assign them to grants.&lt;/div&gt;</summary>
		<author><name>Jw297</name></author>	</entry>

	<entry>
		<id>http://stab.st-andrews.ac.uk/wiki/index.php?title=StABDMIN&amp;diff=3385</id>
		<title>StABDMIN</title>
		<link rel="alternate" type="text/html" href="http://stab.st-andrews.ac.uk/wiki/index.php?title=StABDMIN&amp;diff=3385"/>
				<updated>2019-05-01T12:35:24Z</updated>
		
		<summary type="html">&lt;p&gt;Jw297: /* Importing data into the backend */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=StABDMIN Site=&lt;br /&gt;
&lt;br /&gt;
Currently running at: marvin.st-andrews.ac.uk on port 80.&lt;br /&gt;
&lt;br /&gt;
The StABDMIN site was born of the need for a way to track the users on marvin, the PIs they&amp;#039;re associated with, the funded and unfunded grants they hold, and the data they use.&lt;br /&gt;
&lt;br /&gt;
It was written by Joe in about 2 weeks in 2019, and he apologises sincerely for the lack of comments in the code and absence of tests. It&amp;#039;s still a damn sight better than what was there before (i.e. nothing).&lt;br /&gt;
&lt;br /&gt;
=Keeping it running=&lt;br /&gt;
&lt;br /&gt;
The database is called StABDMIN, the database user is StABDMIN and the password is stored in /etc/mysql/stabdmin.cnf. &lt;br /&gt;
&lt;br /&gt;
The code for the project is in /storage/home/users/StABDMIN. The site is set to store static files (css, js etc) in /var/www/StABDMIN/. &lt;br /&gt;
&lt;br /&gt;
Currently static files are being served using the python package &amp;quot;whitenoise&amp;quot;, but this ought to be done with nginx or apache in the long run. It&amp;#039;s also not served over https currently, as marvin can&amp;#039;t cope with a modern apache (mod_wsgi was compiled with python 2.6).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Restarting the server==&lt;br /&gt;
As root: &lt;br /&gt;
&lt;br /&gt;
*To start the webserver navigate to /storage/home/users/StABDMIN/StABDMIN&lt;br /&gt;
* run &amp;#039;&amp;#039;&amp;#039;module load python/3.6.4&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
* run &amp;#039;&amp;#039;&amp;#039;sh run_webserver.sh&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
And this will run gunicorn using&lt;br /&gt;
 gunicorn StABDMIN.wsgi:application  --pid StABDMIN.pid -b marvin.st-andrews.ac.uk:80 -n StABDMIN -D&lt;br /&gt;
&lt;br /&gt;
==Importing data into the backend==&lt;br /&gt;
&lt;br /&gt;
The script /storage/home/users/StABDMIN/StABDMIN/importUsageFromFiles.py is set in cron to run every day at 5.05am. It relies on the results of the /mnt/system_usage/ scripts written by Peter Thorpe, and loads the data from the files created by those scripts into the system for tracking user data usage and total disk use/free space.&lt;br /&gt;
&lt;br /&gt;
 module load python/3.6.4&lt;br /&gt;
 python3 importUsageFromFiles.py&lt;br /&gt;
&lt;br /&gt;
will load the data in manually. &lt;br /&gt;
&lt;br /&gt;
To add users/PIs/Grants, you need to use the ADMIN site, access from the &amp;quot;Admin&amp;quot; link on the navigation bar of the website. &lt;br /&gt;
&lt;br /&gt;
If a new PI has a new Users and a new Grant, the PI &amp;#039;&amp;#039;&amp;#039;must&amp;#039;&amp;#039;&amp;#039; be created first. &lt;br /&gt;
&lt;br /&gt;
If a new bioinformatician joins, assign them the PI &amp;quot;StABU&amp;quot; and &amp;#039;&amp;#039;&amp;#039;add them to the bioinformatician table&amp;#039;&amp;#039;&amp;#039;. This allows tracking of their use within the StABU PI, and also allows them to be assigned as primary bioinformaticians on grants.&lt;br /&gt;
&lt;br /&gt;
=How it works=&lt;br /&gt;
&lt;br /&gt;
The whole thing is based on the python package Django, with javascript on top: data is loaded into datatables (a javascript package that makes the tables pretty) via JQuery ajax calls, and the plots are created by D3 (a javascript plotting library), again from data returned by ajax calls. &lt;br /&gt;
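For illustration, the JSON envelope that datatables expects from such an ajax endpoint can be sketched with the standard library alone (the field names below are hypothetical, not taken from the actual StABDMIN code):

```python
import json

def datatables_payload(rows):
    """Wrap row dicts in the {"data": [...]} envelope datatables expects."""
    return json.dumps({"data": rows})

# Two hypothetical user-usage rows, as a view might return them
payload = datatables_payload([
    {"username": "jw297", "usage_gb": 120},
    {"username": "pt40", "usage_gb": 300},
])
```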
&lt;br /&gt;
There are five tables in the database directly used by Django. &lt;br /&gt;
&lt;br /&gt;
* Pis stores all the PI information.&lt;br /&gt;
* GrantSubmissions stores the grant information, whether proposals, expired, or funded.&lt;br /&gt;
* Users stores the information on users.&lt;br /&gt;
* help notes stores notes on help given, meaning we can track what we&amp;#039;ve done for people. These can be assigned to users, PIs and/or grants.&lt;br /&gt;
* Bioinformaticians is a way of highlighting user instances as bioinformaticians so we can assign them to grants.&lt;/div&gt;</summary>
		<author><name>Jw297</name></author>	</entry>

	<entry>
		<id>http://stab.st-andrews.ac.uk/wiki/index.php?title=StABDMIN&amp;diff=3384</id>
		<title>StABDMIN</title>
		<link rel="alternate" type="text/html" href="http://stab.st-andrews.ac.uk/wiki/index.php?title=StABDMIN&amp;diff=3384"/>
				<updated>2019-05-01T12:34:46Z</updated>
		
		<summary type="html">&lt;p&gt;Jw297: /* Restarting the server */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=StABDMIN Site=&lt;br /&gt;
&lt;br /&gt;
Currently running at: marvin.st-andrews.ac.uk on port 80.&lt;br /&gt;
&lt;br /&gt;
The StABDMIN site was born of the need for a way to track the users on marvin, the PIs they&amp;#039;re associated with, the funded and unfunded grants they hold, and the data they use. &lt;br /&gt;
&lt;br /&gt;
It was written by Joe in about 2 weeks in 2019 and he apologises sincerely for the lack of comments in the code and the absence of tests. It&amp;#039;s still a damn sight better than what was there before (i.e. nothing). &lt;br /&gt;
&lt;br /&gt;
=Keeping it running=&lt;br /&gt;
&lt;br /&gt;
The database is called StABDMIN, the database user is StABDMIN and the password is stored in /etc/mysql/stabdmin.cnf. &lt;br /&gt;
&lt;br /&gt;
The code for the project is in /storage/home/users/StABDMIN. The site is set to store static files (css, js etc) in /var/www/StABDMIN/. &lt;br /&gt;
&lt;br /&gt;
Currently static files are being served using the python package &amp;quot;whitenoise&amp;quot;, but this ought to be done with nginx or apache in the long run. The site is also not currently served over https, as marvin can&amp;#039;t cope with a modern apache (mod_wsgi was compiled with python 2.6). &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Restarting the server==&lt;br /&gt;
As root: &lt;br /&gt;
&lt;br /&gt;
* To start the webserver, navigate to /storage/home/users/StABDMIN/StABDMIN&lt;br /&gt;
* run &amp;#039;&amp;#039;&amp;#039;module load python/3.6.4&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
* run &amp;#039;&amp;#039;&amp;#039;sh run_webserver.sh&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
This will run gunicorn using:&lt;br /&gt;
 gunicorn StABDMIN.wsgi:application  --pid StABDMIN.pid -b marvin.st-andrews.ac.uk:80 -n StABDMIN -D&lt;br /&gt;
&lt;br /&gt;
==Importing data into the backend==&lt;br /&gt;
&lt;br /&gt;
The script /storage/home/users/StABDMIN/StABDMIN/importUsageFromFiles.py is scheduled in cron to run every day at 5.05am. It relies on the output of the /mnt/system_usage/ scripts written by Peter Thorpe, loading the data from the files those scripts create into the system for tracking user data usage and total disk use/free space. &lt;br /&gt;
&lt;br /&gt;
To add users/PIs/Grants, you need to use the ADMIN site, accessed from the &amp;quot;Admin&amp;quot; link on the navigation bar of the website. &lt;br /&gt;
&lt;br /&gt;
If a new PI arrives with new Users and a new Grant, the PI &amp;#039;&amp;#039;&amp;#039;must&amp;#039;&amp;#039;&amp;#039; be created first. &lt;br /&gt;
&lt;br /&gt;
If a new bioinformatician joins, assign them the PI &amp;quot;StABU&amp;quot; and &amp;#039;&amp;#039;&amp;#039;add them to the bioinformatician table&amp;#039;&amp;#039;&amp;#039;. This allows tracking of their use within the StABU PI, and also allows them to be assigned as primary bioinformaticians on grants.&lt;br /&gt;
&lt;br /&gt;
=How it works=&lt;br /&gt;
&lt;br /&gt;
The whole thing is based on the python package Django, with javascript on top: data is loaded into datatables (a javascript package that makes the tables pretty) via JQuery ajax calls, and the plots are created by D3 (a javascript plotting library), again from data returned by ajax calls. &lt;br /&gt;
&lt;br /&gt;
There are five tables in the database directly used by Django. &lt;br /&gt;
&lt;br /&gt;
* Pis stores all the PI information.&lt;br /&gt;
* GrantSubmissions stores the grant information, whether proposals, expired, or funded.&lt;br /&gt;
* Users stores the information on users.&lt;br /&gt;
* help notes stores notes on help given, meaning we can track what we&amp;#039;ve done for people. These can be assigned to users, PIs and/or grants.&lt;br /&gt;
* Bioinformaticians is a way of highlighting user instances as bioinformaticians so we can assign them to grants.&lt;/div&gt;</summary>
		<author><name>Jw297</name></author>	</entry>

	<entry>
		<id>http://stab.st-andrews.ac.uk/wiki/index.php?title=StABDMIN&amp;diff=3383</id>
		<title>StABDMIN</title>
		<link rel="alternate" type="text/html" href="http://stab.st-andrews.ac.uk/wiki/index.php?title=StABDMIN&amp;diff=3383"/>
				<updated>2019-05-01T12:34:42Z</updated>
		
		<summary type="html">&lt;p&gt;Jw297: /* Importing data into the backend */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=StABDMIN Site=&lt;br /&gt;
&lt;br /&gt;
Currently running at: marvin.st-andrews.ac.uk on port 80.&lt;br /&gt;
&lt;br /&gt;
The StABDMIN site was born of the need for a way to track the users on marvin, the PIs they&amp;#039;re associated with, the funded and unfunded grants they hold, and the data they use. &lt;br /&gt;
&lt;br /&gt;
It was written by Joe in about 2 weeks in 2019 and he apologises sincerely for the lack of comments in the code and the absence of tests. It&amp;#039;s still a damn sight better than what was there before (i.e. nothing). &lt;br /&gt;
&lt;br /&gt;
=Keeping it running=&lt;br /&gt;
&lt;br /&gt;
The database is called StABDMIN, the database user is StABDMIN and the password is stored in /etc/mysql/stabdmin.cnf. &lt;br /&gt;
&lt;br /&gt;
The code for the project is in /storage/home/users/StABDMIN. The site is set to store static files (css, js etc) in /var/www/StABDMIN/. &lt;br /&gt;
&lt;br /&gt;
Currently static files are being served using the python package &amp;quot;whitenoise&amp;quot;, but this ought to be done with nginx or apache in the long run. The site is also not currently served over https, as marvin can&amp;#039;t cope with a modern apache (mod_wsgi was compiled with python 2.6). &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Restarting the server==&lt;br /&gt;
As root: &lt;br /&gt;
&lt;br /&gt;
* To start the webserver, navigate to /storage/home/users/StABDMIN/StABDMIN&lt;br /&gt;
* run &amp;#039;&amp;#039;&amp;#039;module load python/3.6.4&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
* run &amp;#039;&amp;#039;&amp;#039;sh run_webserver.sh&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Importing data into the backend==&lt;br /&gt;
&lt;br /&gt;
The script /storage/home/users/StABDMIN/StABDMIN/importUsageFromFiles.py is scheduled in cron to run every day at 5.05am. It relies on the output of the /mnt/system_usage/ scripts written by Peter Thorpe, loading the data from the files those scripts create into the system for tracking user data usage and total disk use/free space. &lt;br /&gt;
&lt;br /&gt;
To add users/PIs/Grants, you need to use the ADMIN site, accessed from the &amp;quot;Admin&amp;quot; link on the navigation bar of the website. &lt;br /&gt;
&lt;br /&gt;
If a new PI arrives with new Users and a new Grant, the PI &amp;#039;&amp;#039;&amp;#039;must&amp;#039;&amp;#039;&amp;#039; be created first. &lt;br /&gt;
&lt;br /&gt;
If a new bioinformatician joins, assign them the PI &amp;quot;StABU&amp;quot; and &amp;#039;&amp;#039;&amp;#039;add them to the bioinformatician table&amp;#039;&amp;#039;&amp;#039;. This allows tracking of their use within the StABU PI, and also allows them to be assigned as primary bioinformaticians on grants.&lt;br /&gt;
&lt;br /&gt;
=How it works=&lt;br /&gt;
&lt;br /&gt;
The whole thing is based on the python package Django, with javascript on top: data is loaded into datatables (a javascript package that makes the tables pretty) via JQuery ajax calls, and the plots are created by D3 (a javascript plotting library), again from data returned by ajax calls. &lt;br /&gt;
&lt;br /&gt;
There are five tables in the database directly used by Django. &lt;br /&gt;
&lt;br /&gt;
* Pis stores all the PI information.&lt;br /&gt;
* GrantSubmissions stores the grant information, whether proposals, expired, or funded.&lt;br /&gt;
* Users stores the information on users.&lt;br /&gt;
* help notes stores notes on help given, meaning we can track what we&amp;#039;ve done for people. These can be assigned to users, PIs and/or grants.&lt;br /&gt;
* Bioinformaticians is a way of highlighting user instances as bioinformaticians so we can assign them to grants.&lt;/div&gt;</summary>
		<author><name>Jw297</name></author>	</entry>

	<entry>
		<id>http://stab.st-andrews.ac.uk/wiki/index.php?title=Main_Page&amp;diff=3382</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="http://stab.st-andrews.ac.uk/wiki/index.php?title=Main_Page&amp;diff=3382"/>
				<updated>2019-05-01T12:29:51Z</updated>
		
		<summary type="html">&lt;p&gt;Jw297: /* Cluster Administration */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Usage of Cluster=&lt;br /&gt;
* [[Cluster Manual]]&lt;br /&gt;
* [[Why a Queue Manager?]]&lt;br /&gt;
* [[Available Software]]&lt;br /&gt;
* [[how to use the cluster training course]]&lt;br /&gt;
&lt;br /&gt;
= Documented Programs =&lt;br /&gt;
&lt;br /&gt;
The following can be seen as extra notes on these programs&amp;#039; usage on the marvin cluster, with an emphasis on example use-cases. Most, if not all, will have their own websites with more detailed manuals and further information.&lt;br /&gt;
&lt;br /&gt;
{|style=&amp;quot;width:85%&amp;quot;&lt;br /&gt;
|* [[abacas]]&lt;br /&gt;
|* [[albacore]]&lt;br /&gt;
|* [[ariba]]&lt;br /&gt;
|* [[aspera]]&lt;br /&gt;
|* [[assembly-stats]]&lt;br /&gt;
|* [[augustus]]&lt;br /&gt;
|-&lt;br /&gt;
|* [[BamQC]]&lt;br /&gt;
|* [[bamtools]]&lt;br /&gt;
|* [[banjo]]&lt;br /&gt;
|* [[bcftools]]&lt;br /&gt;
|* [[bedtools]]&lt;br /&gt;
|* [[bgenie]]&lt;br /&gt;
|-&lt;br /&gt;
|* [[BLAST]]&lt;br /&gt;
|* [[Blat]]&lt;br /&gt;
|* [[blast2go: b2g4pipe]]&lt;br /&gt;
|* [[bowtie]]&lt;br /&gt;
|* [[bowtie2]]&lt;br /&gt;
|* [[bwa]]&lt;br /&gt;
|-&lt;br /&gt;
|* [[BUSCO]]&lt;br /&gt;
|* [[CAFE]]&lt;br /&gt;
|* [[canu]]&lt;br /&gt;
|* [[cd-hit]]&lt;br /&gt;
|* [[cegma]]&lt;br /&gt;
|* [[clustal]]&lt;br /&gt;
|-&lt;br /&gt;
|* [[cramtools]]&lt;br /&gt;
|* [[conda]]&lt;br /&gt;
|* [[deeptools]]&lt;br /&gt;
|* [[detonate]]&lt;br /&gt;
|* [[diamond]]&lt;br /&gt;
|* [[ea-utils]]&lt;br /&gt;
|* [[ensembl]]&lt;br /&gt;
|-&lt;br /&gt;
|* [[ETE]]&lt;br /&gt;
|* [[FASTQC and MultiQC]]&lt;br /&gt;
|* [[Archaeopteryx and Forester]]&lt;br /&gt;
|* [[GapFiller]]&lt;br /&gt;
|* [[GenomeTools]]&lt;br /&gt;
|* [[gubbins]]&lt;br /&gt;
|-&lt;br /&gt;
|* [[JBrowse]]&lt;br /&gt;
|* [[kallisto]]&lt;br /&gt;
|* [[kentUtils]]&lt;br /&gt;
|* [[last]]&lt;br /&gt;
|* [[lastz]]&lt;br /&gt;
|* [[macs2]]&lt;br /&gt;
|-&lt;br /&gt;
|* [[Mash]]&lt;br /&gt;
|* [[mega]]&lt;br /&gt;
|* [[meryl]]&lt;br /&gt;
|* [[MUMmer]]&lt;br /&gt;
|* [[NanoSim]]&lt;br /&gt;
|* [[nseq]]&lt;br /&gt;
|-&lt;br /&gt;
|* [[OrthoFinder]]&lt;br /&gt;
|* [[PASA]]&lt;br /&gt;
|* [[perl]]&lt;br /&gt;
|* [[PGAP]]&lt;br /&gt;
|* [[picard-tools]]&lt;br /&gt;
|* [[poRe]]&lt;br /&gt;
|-&lt;br /&gt;
|* [[poretools]]&lt;br /&gt;
|* [[prokka]]&lt;br /&gt;
|* [[pyrad]]&lt;br /&gt;
|* [[python]]&lt;br /&gt;
|* [[qualimap]]&lt;br /&gt;
|* [[quast]]&lt;br /&gt;
|-&lt;br /&gt;
|* [[qiime2]]&lt;br /&gt;
|* [[R]]&lt;br /&gt;
|* [[RAxML]]&lt;br /&gt;
|* [[Repeatmasker]]&lt;br /&gt;
|* [[Repeatmodeler]]&lt;br /&gt;
|* [[rnammer]]&lt;br /&gt;
|-&lt;br /&gt;
|* [[roary]]&lt;br /&gt;
|* [[RSeQC]]&lt;br /&gt;
|* [[samtools]]&lt;br /&gt;
|* [[Satsuma]]&lt;br /&gt;
|* [[sickle]]&lt;br /&gt;
|* [[SPAdes]]&lt;br /&gt;
|-&lt;br /&gt;
|* [[squid]]&lt;br /&gt;
|* [[sra-tools]]&lt;br /&gt;
|* [[srst2]]&lt;br /&gt;
|* [[SSPACE]]&lt;br /&gt;
|* [[stacks]]&lt;br /&gt;
|* [[Thor]]&lt;br /&gt;
|-&lt;br /&gt;
|* [[Tophat]]&lt;br /&gt;
|* [[trimmomatic]]&lt;br /&gt;
|* [[Trinity]]&lt;br /&gt;
|* [[t-coffee]]&lt;br /&gt;
|* [[Unicycler]]&lt;br /&gt;
|* [[velvet]]&lt;br /&gt;
|-&lt;br /&gt;
|* [[ViennaRNA]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
= Queue Manager Tips =&lt;br /&gt;
A cluster is a shared resource with different users running different types of analyses. Nearly all clusters use a piece of software called a queue manager to share the resource out fairly. The queue manager on marvin is called Grid Engine, and it provides several commands, all beginning with &amp;#039;&amp;#039;&amp;#039;q&amp;#039;&amp;#039;&amp;#039;; &amp;#039;&amp;#039;&amp;#039;qsub&amp;#039;&amp;#039;&amp;#039; is the most commonly used, as it submits a command via a jobscript to be processed. Here are some tips:&lt;br /&gt;
* [[Queue Manager Tips]]&lt;br /&gt;
* [[General Command-line Tips]]&lt;br /&gt;
* [[DRMAA for further Gridengine automation]]&lt;br /&gt;
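As an illustration of the jobscript idea mentioned above, a minimal Grid Engine jobscript might look like the following (a sketch only; the job name and options are illustrative, not marvin-specific):

```shell
#!/bin/bash
# Minimal Grid Engine jobscript (illustrative options, not marvin-specific)
# -N sets the job name; -cwd runs the job from the submission directory
#$ -N example_job
#$ -cwd
host=$(hostname)
echo "running on $host"
```

It would be submitted with something like qsub example_job.sh; the #$ lines are directives read by Grid Engine but ignored by bash.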
&lt;br /&gt;
= Data Examples =&lt;br /&gt;
* [[Two Eel Scaffolds]]&lt;br /&gt;
&lt;br /&gt;
= Procedures =&lt;br /&gt;
(Short sequences of tasks with a short-term goal; often a simple script)&lt;br /&gt;
* [[Calculating coverage]]&lt;br /&gt;
* [[MinION Coverage sensitivity analysis]]&lt;br /&gt;
&lt;br /&gt;
= Navigating genomic data websites=&lt;br /&gt;
* [[Patric]]&lt;br /&gt;
* [[NCBI]]&lt;br /&gt;
* [[IGSR/1000 Genomes]]&lt;br /&gt;
&lt;br /&gt;
= Explanations=&lt;br /&gt;
* [[ITUcourse]]&lt;br /&gt;
* [[VCF]]&lt;br /&gt;
* [[Maximum Likelihood]]&lt;br /&gt;
* [[SNP Analysis and phylogenetics]]&lt;br /&gt;
* [[Normalization]]&lt;br /&gt;
&lt;br /&gt;
= Pipelines =&lt;br /&gt;
(Workflows with specific end-goals)&lt;br /&gt;
* [[Trinity_Protocol]]&lt;br /&gt;
* [[STAR BEAST]]&lt;br /&gt;
* [[callSNPs.py]]&lt;br /&gt;
* [[pairwiseCallSNPs]]&lt;br /&gt;
* [[mapping.py]]&lt;br /&gt;
* [[Edgen RNAseq]]&lt;br /&gt;
* [[Miseq Prokaryote FASTQ analysis]]&lt;br /&gt;
* [[snpcallphylo]]&lt;br /&gt;
* [[Bottlenose dolphin population genomic analysis]]&lt;br /&gt;
* [[ChIP-Seq Top2 in Yeast]]&lt;br /&gt;
* [[ChIP-Seq Top2 in Yeast 12.09.2017]]&lt;br /&gt;
* [[ChIP-Seq Top2 in Yeast 07.11.2017]]&lt;br /&gt;
* [[Bisulfite Sequencing]]&lt;br /&gt;
* [[microRNA and Salmo Salar]]&lt;br /&gt;
&lt;br /&gt;
=Protocols=&lt;br /&gt;
(Extensive workflows with several possible end goals)&lt;br /&gt;
* [[Synthetic Long reads]]&lt;br /&gt;
* [[MinION (Oxford Nanopore)]]&lt;br /&gt;
* [[MinKNOW folders and log files]]&lt;br /&gt;
* [[Research Data Management]]&lt;br /&gt;
* [[MicroRNAs]]&lt;br /&gt;
&lt;br /&gt;
= Tech Reviews =&lt;br /&gt;
* [[SWATH-MS Data Analysis]]&lt;br /&gt;
&lt;br /&gt;
= Cluster Administration =&lt;br /&gt;
* [[StABDMIN]]&lt;br /&gt;
* [[Hardware Issues]]&lt;br /&gt;
* [[marvin and IPMI (remote hardware control)]]&lt;br /&gt;
* [[restart a node]]&lt;br /&gt;
* [[Admin Tips]]&lt;br /&gt;
* [[RedHat]]&lt;br /&gt;
* [[Globus_gridftp]]&lt;br /&gt;
* [[Galaxy Setup]]&lt;br /&gt;
* [[Son of Gridengine]]&lt;br /&gt;
* [[Blas Libraries]]&lt;br /&gt;
* [[CMake]]&lt;br /&gt;
* [[conda bioconda]]&lt;br /&gt;
* [[Users and Groups]]&lt;br /&gt;
* [[Installing software on marvin]]&lt;br /&gt;
* [[emailing]]&lt;br /&gt;
* [[biotime machine]]&lt;br /&gt;
* [[SCAN-pc laptop]]&lt;br /&gt;
* [[node1 issues]]&lt;br /&gt;
* [[6TB storage expansion]]&lt;br /&gt;
* [[PIs storage sacrifice]]&lt;br /&gt;
* [[SAN relocation task]]&lt;br /&gt;
* [[Home directories max-out incident 28.11.2016]]&lt;br /&gt;
* [[Frontend Restart]]&lt;br /&gt;
* [[environment-modules]]&lt;br /&gt;
* [[H: drive on cluster]]&lt;br /&gt;
* [[Incident: Can&amp;#039;t connect to BerkeleyDB]]&lt;br /&gt;
* [[Bioinformatics Wordpress Site]]&lt;br /&gt;
* [[Backups]]&lt;br /&gt;
* [[users disk usage]]&lt;br /&gt;
* [[Updating BLAST databases]]&lt;br /&gt;
* [[Python DRMAA]]&lt;br /&gt;
* [[message of the day]]&lt;br /&gt;
* [[SAN disconnect incident 10.01.2017]]&lt;br /&gt;
* [[Memory repair glitch 16.02.2017]]&lt;br /&gt;
* [[node9 network failure incident 16-20.03.2017]]&lt;br /&gt;
* [[Incorrect rebooting of marvin 19.09.2017]]&lt;br /&gt;
&lt;br /&gt;
= Courses =&lt;br /&gt;
&lt;br /&gt;
==I2U4BGA==&lt;br /&gt;
* [[Original schedule]]&lt;br /&gt;
* [[New schedule]]&lt;br /&gt;
* [[Actual schedule]]&lt;br /&gt;
* [[Course itself]]&lt;br /&gt;
* [[Biolinux Source course]]&lt;br /&gt;
* [[Directory Organization Exercise]]&lt;br /&gt;
* [[Glossary]]&lt;br /&gt;
* [[Key Bindings]]&lt;br /&gt;
* [[one-liners]]&lt;br /&gt;
* [[Cheatsheets]]&lt;br /&gt;
* [[Links]]&lt;br /&gt;
* [[pandoc modified manual]]&lt;br /&gt;
* [[Command Line Exercises]]&lt;br /&gt;
&lt;br /&gt;
= hdi2u =&lt;br /&gt;
&lt;br /&gt;
The half-day Linux course held on 20th April. A modified version of I2U4BGA.&lt;br /&gt;
&lt;br /&gt;
* [[hdi2u_intro]]&lt;br /&gt;
* [[hdi2u_commandbased_exercises]]&lt;br /&gt;
* [[hdi2u_dirorg_exercise]]&lt;br /&gt;
* [[hdi2u_rendertotsv_exercise]]&lt;br /&gt;
&lt;br /&gt;
= RNAseq for DGE =&lt;br /&gt;
* [[Theoretical background]]&lt;br /&gt;
* [[Quality Control and Preprocessing]]&lt;br /&gt;
* [[Mapping to Reference]]&lt;br /&gt;
* [[Mapping Quality Exercise]]&lt;br /&gt;
* [[Key Aspects of using R]]&lt;br /&gt;
* [[Estimating Gene Count Exercise]]&lt;br /&gt;
* [[Differential Expression Exercise]]&lt;br /&gt;
* [[Functional Analysis Exercise]]&lt;br /&gt;
&lt;br /&gt;
= Introduction to Unix 2017 =&lt;br /&gt;
* [[Introduction_to_Unix_2017]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Templates==&lt;br /&gt;
* [[edgenl2g]]&lt;/div&gt;</summary>
		<author><name>Jw297</name></author>	</entry>

	<entry>
		<id>http://stab.st-andrews.ac.uk/wiki/index.php?title=Main_Page&amp;diff=3381</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="http://stab.st-andrews.ac.uk/wiki/index.php?title=Main_Page&amp;diff=3381"/>
				<updated>2019-05-01T12:29:42Z</updated>
		
		<summary type="html">&lt;p&gt;Jw297: /* Cluster Administration */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Usage of Cluster=&lt;br /&gt;
* [[Cluster Manual]]&lt;br /&gt;
* [[Why a Queue Manager?]]&lt;br /&gt;
* [[Available Software]]&lt;br /&gt;
* [[how to use the cluster training course]]&lt;br /&gt;
&lt;br /&gt;
= Documented Programs =&lt;br /&gt;
&lt;br /&gt;
The following can be seen as extra notes on these programs&amp;#039; usage on the marvin cluster, with an emphasis on example use-cases. Most, if not all, will have their own websites with more detailed manuals and further information.&lt;br /&gt;
&lt;br /&gt;
{|style=&amp;quot;width:85%&amp;quot;&lt;br /&gt;
|* [[abacas]]&lt;br /&gt;
|* [[albacore]]&lt;br /&gt;
|* [[ariba]]&lt;br /&gt;
|* [[aspera]]&lt;br /&gt;
|* [[assembly-stats]]&lt;br /&gt;
|* [[augustus]]&lt;br /&gt;
|-&lt;br /&gt;
|* [[BamQC]]&lt;br /&gt;
|* [[bamtools]]&lt;br /&gt;
|* [[banjo]]&lt;br /&gt;
|* [[bcftools]]&lt;br /&gt;
|* [[bedtools]]&lt;br /&gt;
|* [[bgenie]]&lt;br /&gt;
|-&lt;br /&gt;
|* [[BLAST]]&lt;br /&gt;
|* [[Blat]]&lt;br /&gt;
|* [[blast2go: b2g4pipe]]&lt;br /&gt;
|* [[bowtie]]&lt;br /&gt;
|* [[bowtie2]]&lt;br /&gt;
|* [[bwa]]&lt;br /&gt;
|-&lt;br /&gt;
|* [[BUSCO]]&lt;br /&gt;
|* [[CAFE]]&lt;br /&gt;
|* [[canu]]&lt;br /&gt;
|* [[cd-hit]]&lt;br /&gt;
|* [[cegma]]&lt;br /&gt;
|* [[clustal]]&lt;br /&gt;
|-&lt;br /&gt;
|* [[cramtools]]&lt;br /&gt;
|* [[conda]]&lt;br /&gt;
|* [[deeptools]]&lt;br /&gt;
|* [[detonate]]&lt;br /&gt;
|* [[diamond]]&lt;br /&gt;
|* [[ea-utils]]&lt;br /&gt;
|* [[ensembl]]&lt;br /&gt;
|-&lt;br /&gt;
|* [[ETE]]&lt;br /&gt;
|* [[FASTQC and MultiQC]]&lt;br /&gt;
|* [[Archaeopteryx and Forester]]&lt;br /&gt;
|* [[GapFiller]]&lt;br /&gt;
|* [[GenomeTools]]&lt;br /&gt;
|* [[gubbins]]&lt;br /&gt;
|-&lt;br /&gt;
|* [[JBrowse]]&lt;br /&gt;
|* [[kallisto]]&lt;br /&gt;
|* [[kentUtils]]&lt;br /&gt;
|* [[last]]&lt;br /&gt;
|* [[lastz]]&lt;br /&gt;
|* [[macs2]]&lt;br /&gt;
|-&lt;br /&gt;
|* [[Mash]]&lt;br /&gt;
|* [[mega]]&lt;br /&gt;
|* [[meryl]]&lt;br /&gt;
|* [[MUMmer]]&lt;br /&gt;
|* [[NanoSim]]&lt;br /&gt;
|* [[nseq]]&lt;br /&gt;
|-&lt;br /&gt;
|* [[OrthoFinder]]&lt;br /&gt;
|* [[PASA]]&lt;br /&gt;
|* [[perl]]&lt;br /&gt;
|* [[PGAP]]&lt;br /&gt;
|* [[picard-tools]]&lt;br /&gt;
|* [[poRe]]&lt;br /&gt;
|-&lt;br /&gt;
|* [[poretools]]&lt;br /&gt;
|* [[prokka]]&lt;br /&gt;
|* [[pyrad]]&lt;br /&gt;
|* [[python]]&lt;br /&gt;
|* [[qualimap]]&lt;br /&gt;
|* [[quast]]&lt;br /&gt;
|-&lt;br /&gt;
|* [[qiime2]]&lt;br /&gt;
|* [[R]]&lt;br /&gt;
|* [[RAxML]]&lt;br /&gt;
|* [[Repeatmasker]]&lt;br /&gt;
|* [[Repeatmodeler]]&lt;br /&gt;
|* [[rnammer]]&lt;br /&gt;
|-&lt;br /&gt;
|* [[roary]]&lt;br /&gt;
|* [[RSeQC]]&lt;br /&gt;
|* [[samtools]]&lt;br /&gt;
|* [[Satsuma]]&lt;br /&gt;
|* [[sickle]]&lt;br /&gt;
|* [[SPAdes]]&lt;br /&gt;
|-&lt;br /&gt;
|* [[squid]]&lt;br /&gt;
|* [[sra-tools]]&lt;br /&gt;
|* [[srst2]]&lt;br /&gt;
|* [[SSPACE]]&lt;br /&gt;
|* [[stacks]]&lt;br /&gt;
|* [[Thor]]&lt;br /&gt;
|-&lt;br /&gt;
|* [[Tophat]]&lt;br /&gt;
|* [[trimmomatic]]&lt;br /&gt;
|* [[Trinity]]&lt;br /&gt;
|* [[t-coffee]]&lt;br /&gt;
|* [[Unicycler]]&lt;br /&gt;
|* [[velvet]]&lt;br /&gt;
|-&lt;br /&gt;
|* [[ViennaRNA]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
= Queue Manager Tips =&lt;br /&gt;
A cluster is a shared resource with different users running different types of analyses. Nearly all clusters use a piece of software called a queue manager to share the resource out fairly. The queue manager on marvin is called Grid Engine, and it provides several commands, all beginning with &amp;#039;&amp;#039;&amp;#039;q&amp;#039;&amp;#039;&amp;#039;; &amp;#039;&amp;#039;&amp;#039;qsub&amp;#039;&amp;#039;&amp;#039; is the most commonly used, as it submits a command via a jobscript to be processed. Here are some tips:&lt;br /&gt;
* [[Queue Manager Tips]]&lt;br /&gt;
* [[General Command-line Tips]]&lt;br /&gt;
* [[DRMAA for further Gridengine automation]]&lt;br /&gt;
&lt;br /&gt;
= Data Examples =&lt;br /&gt;
* [[Two Eel Scaffolds]]&lt;br /&gt;
&lt;br /&gt;
= Procedures =&lt;br /&gt;
(Short sequences of tasks with a short-term goal; often a simple script)&lt;br /&gt;
* [[Calculating coverage]]&lt;br /&gt;
* [[MinION Coverage sensitivity analysis]]&lt;br /&gt;
&lt;br /&gt;
= Navigating genomic data websites=&lt;br /&gt;
* [[Patric]]&lt;br /&gt;
* [[NCBI]]&lt;br /&gt;
* [[IGSR/1000 Genomes]]&lt;br /&gt;
&lt;br /&gt;
= Explanations=&lt;br /&gt;
* [[ITUcourse]]&lt;br /&gt;
* [[VCF]]&lt;br /&gt;
* [[Maximum Likelihood]]&lt;br /&gt;
* [[SNP Analysis and phylogenetics]]&lt;br /&gt;
* [[Normalization]]&lt;br /&gt;
&lt;br /&gt;
= Pipelines =&lt;br /&gt;
(Workflows with specific end-goals)&lt;br /&gt;
* [[Trinity_Protocol]]&lt;br /&gt;
* [[STAR BEAST]]&lt;br /&gt;
* [[callSNPs.py]]&lt;br /&gt;
* [[pairwiseCallSNPs]]&lt;br /&gt;
* [[mapping.py]]&lt;br /&gt;
* [[Edgen RNAseq]]&lt;br /&gt;
* [[Miseq Prokaryote FASTQ analysis]]&lt;br /&gt;
* [[snpcallphylo]]&lt;br /&gt;
* [[Bottlenose dolphin population genomic analysis]]&lt;br /&gt;
* [[ChIP-Seq Top2 in Yeast]]&lt;br /&gt;
* [[ChIP-Seq Top2 in Yeast 12.09.2017]]&lt;br /&gt;
* [[ChIP-Seq Top2 in Yeast 07.11.2017]]&lt;br /&gt;
* [[Bisulfite Sequencing]]&lt;br /&gt;
* [[microRNA and Salmo Salar]]&lt;br /&gt;
&lt;br /&gt;
=Protocols=&lt;br /&gt;
(Extensive workflows with several possible end goals)&lt;br /&gt;
* [[Synthetic Long reads]]&lt;br /&gt;
* [[MinION (Oxford Nanopore)]]&lt;br /&gt;
* [[MinKNOW folders and log files]]&lt;br /&gt;
* [[Research Data Management]]&lt;br /&gt;
* [[MicroRNAs]]&lt;br /&gt;
&lt;br /&gt;
= Tech Reviews =&lt;br /&gt;
* [[SWATH-MS Data Analysis]]&lt;br /&gt;
&lt;br /&gt;
= Cluster Administration =&lt;br /&gt;
* [[StABDMIN Site]]&lt;br /&gt;
* [[Hardware Issues]]&lt;br /&gt;
* [[marvin and IPMI (remote hardware control)]]&lt;br /&gt;
* [[restart a node]]&lt;br /&gt;
* [[Admin Tips]]&lt;br /&gt;
* [[RedHat]]&lt;br /&gt;
* [[Globus_gridftp]]&lt;br /&gt;
* [[Galaxy Setup]]&lt;br /&gt;
* [[Son of Gridengine]]&lt;br /&gt;
* [[Blas Libraries]]&lt;br /&gt;
* [[CMake]]&lt;br /&gt;
* [[conda bioconda]]&lt;br /&gt;
* [[Users and Groups]]&lt;br /&gt;
* [[Installing software on marvin]]&lt;br /&gt;
* [[emailing]]&lt;br /&gt;
* [[biotime machine]]&lt;br /&gt;
* [[SCAN-pc laptop]]&lt;br /&gt;
* [[node1 issues]]&lt;br /&gt;
* [[6TB storage expansion]]&lt;br /&gt;
* [[PIs storage sacrifice]]&lt;br /&gt;
* [[SAN relocation task]]&lt;br /&gt;
* [[Home directories max-out incident 28.11.2016]]&lt;br /&gt;
* [[Frontend Restart]]&lt;br /&gt;
* [[environment-modules]]&lt;br /&gt;
* [[H: drive on cluster]]&lt;br /&gt;
* [[Incident: Can&amp;#039;t connect to BerkeleyDB]]&lt;br /&gt;
* [[Bioinformatics Wordpress Site]]&lt;br /&gt;
* [[Backups]]&lt;br /&gt;
* [[users disk usage]]&lt;br /&gt;
* [[Updating BLAST databases]]&lt;br /&gt;
* [[Python DRMAA]]&lt;br /&gt;
* [[message of the day]]&lt;br /&gt;
* [[SAN disconnect incident 10.01.2017]]&lt;br /&gt;
* [[Memory repair glitch 16.02.2017]]&lt;br /&gt;
* [[node9 network failure incident 16-20.03.2017]]&lt;br /&gt;
* [[Incorrect rebooting of marvin 19.09.2017]]&lt;br /&gt;
&lt;br /&gt;
= Courses =&lt;br /&gt;
&lt;br /&gt;
==I2U4BGA==&lt;br /&gt;
* [[Original schedule]]&lt;br /&gt;
* [[New schedule]]&lt;br /&gt;
* [[Actual schedule]]&lt;br /&gt;
* [[Course itself]]&lt;br /&gt;
* [[Biolinux Source course]]&lt;br /&gt;
* [[Directory Organization Exercise]]&lt;br /&gt;
* [[Glossary]]&lt;br /&gt;
* [[Key Bindings]]&lt;br /&gt;
* [[one-liners]]&lt;br /&gt;
* [[Cheatsheets]]&lt;br /&gt;
* [[Links]]&lt;br /&gt;
* [[pandoc modified manual]]&lt;br /&gt;
* [[Command Line Exercises]]&lt;br /&gt;
&lt;br /&gt;
= hdi2u =&lt;br /&gt;
&lt;br /&gt;
The half-day Linux course held on 20th April. A modified version of I2U4BGA.&lt;br /&gt;
&lt;br /&gt;
* [[hdi2u_intro]]&lt;br /&gt;
* [[hdi2u_commandbased_exercises]]&lt;br /&gt;
* [[hdi2u_dirorg_exercise]]&lt;br /&gt;
* [[hdi2u_rendertotsv_exercise]]&lt;br /&gt;
&lt;br /&gt;
= RNAseq for DGE =&lt;br /&gt;
* [[Theoretical background]]&lt;br /&gt;
* [[Quality Control and Preprocessing]]&lt;br /&gt;
* [[Mapping to Reference]]&lt;br /&gt;
* [[Mapping Quality Exercise]]&lt;br /&gt;
* [[Key Aspects of using R]]&lt;br /&gt;
* [[Estimating Gene Count Exercise]]&lt;br /&gt;
* [[Differential Expression Exercise]]&lt;br /&gt;
* [[Functional Analysis Exercise]]&lt;br /&gt;
&lt;br /&gt;
= Introduction to Unix 2017 =&lt;br /&gt;
* [[Introduction_to_Unix_2017]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Templates==&lt;br /&gt;
* [[edgenl2g]]&lt;/div&gt;</summary>
		<author><name>Jw297</name></author>	</entry>

	<entry>
		<id>http://stab.st-andrews.ac.uk/wiki/index.php?title=StABDMIN&amp;diff=3380</id>
		<title>StABDMIN</title>
		<link rel="alternate" type="text/html" href="http://stab.st-andrews.ac.uk/wiki/index.php?title=StABDMIN&amp;diff=3380"/>
				<updated>2019-05-01T10:16:55Z</updated>
		
		<summary type="html">&lt;p&gt;Jw297: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=StABDMIN Site=&lt;br /&gt;
&lt;br /&gt;
Currently running at: marvin.st-andrews.ac.uk on port 80.&lt;br /&gt;
&lt;br /&gt;
The StABDMIN site was born of the need for a way to track the users on marvin, the PIs they&amp;#039;re associated with, the funded and unfunded grants they hold, and the data they use. &lt;br /&gt;
&lt;br /&gt;
It was written by Joe in about 2 weeks in 2019 and he apologises sincerely for the lack of comments in the code and the absence of tests. It&amp;#039;s still a damn sight better than what was there before (i.e. nothing). &lt;br /&gt;
&lt;br /&gt;
=Keeping it running=&lt;br /&gt;
&lt;br /&gt;
The database is called StABDMIN, the database user is StABDMIN and the password is stored in /etc/mysql/stabdmin.cnf. &lt;br /&gt;
&lt;br /&gt;
The code for the project is in /storage/home/users/StABDMIN. The site is set to store static files (css, js etc) in /var/www/StABDMIN/. &lt;br /&gt;
&lt;br /&gt;
Currently static files are being served using the python package &amp;quot;whitenoise&amp;quot;, but this ought to be done with nginx or apache in the long run. The site is also not currently served over https, as marvin can&amp;#039;t cope with a modern apache (mod_wsgi was compiled with python 2.6). &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Restarting the server==&lt;br /&gt;
As root: &lt;br /&gt;
&lt;br /&gt;
* To start the webserver, navigate to /storage/home/users/StABDMIN/StABDMIN&lt;br /&gt;
* run &amp;#039;&amp;#039;&amp;#039;module load python/3.6.4&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
* run &amp;#039;&amp;#039;&amp;#039;sh run_webserver.sh&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Importing data into the backend==&lt;br /&gt;
&lt;br /&gt;
The script /storage/home/users/StABDMIN/StABDMIN/importUsageFromFiles.py is scheduled in cron to run every day at 5.05am. It relies on the output of the /mnt/system_usage/ scripts written by Peter Thorpe, loading the data from the files those scripts create into the system for tracking user data usage and total disk use/free space. &lt;br /&gt;
&lt;br /&gt;
To add users/PIs/Grants, you need to use the ADMIN site, accessed from the &amp;quot;Admin&amp;quot; link on the navigation bar of the website. &lt;br /&gt;
&lt;br /&gt;
If a new PI arrives with new Users and a new Grant, the PI &amp;#039;&amp;#039;&amp;#039;must&amp;#039;&amp;#039;&amp;#039; be created first. &lt;br /&gt;
&lt;br /&gt;
If a new bioinformatician joins, assign them the PI &amp;quot;StABU&amp;quot; and &amp;#039;&amp;#039;&amp;#039;add them to the bioinformatician table&amp;#039;&amp;#039;&amp;#039;. This allows tracking of their use within the StABU PI, and also allows them to be assigned as primary bioinformaticians on grants. &lt;br /&gt;
&lt;br /&gt;
This will run gunicorn using:&lt;br /&gt;
 gunicorn StABDMIN.wsgi:application  --pid StABDMIN.pid -b marvin.st-andrews.ac.uk:80 -n StABDMIN -D&lt;br /&gt;
&lt;br /&gt;
=How it works=&lt;br /&gt;
&lt;br /&gt;
The whole thing is based on the python package Django, with javascript on top: data is loaded into datatables (a javascript package that makes the tables pretty) via JQuery ajax calls, and the plots are created by D3 (a javascript plotting library), again from data returned by ajax calls. &lt;br /&gt;
&lt;br /&gt;
There are five tables in the database directly used by Django. &lt;br /&gt;
&lt;br /&gt;
* Pis stores all the PI information.&lt;br /&gt;
* GrantSubmissions stores the grant information, whether proposals, expired, or funded.&lt;br /&gt;
* Users stores the information on users.&lt;br /&gt;
* help notes stores notes on help given, meaning we can track what we&amp;#039;ve done for people. These can be assigned to users, PIs and/or grants.&lt;br /&gt;
* Bioinformaticians is a way of highlighting user instances as bioinformaticians so we can assign them to grants.&lt;/div&gt;</summary>
		<author><name>Jw297</name></author>	</entry>

	<entry>
		<id>http://stab.st-andrews.ac.uk/wiki/index.php?title=StABDMIN&amp;diff=3379</id>
		<title>StABDMIN</title>
		<link rel="alternate" type="text/html" href="http://stab.st-andrews.ac.uk/wiki/index.php?title=StABDMIN&amp;diff=3379"/>
				<updated>2019-05-01T10:01:46Z</updated>
		
		<summary type="html">&lt;p&gt;Jw297: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=StABDMIN Site=&lt;br /&gt;
&lt;br /&gt;
Currently running at: marvin.st-andrews.ac.uk on port 80.&lt;br /&gt;
&lt;br /&gt;
The StABDMIN site was born of the need for a way to track the users on marvin, the PIs they&amp;#039;re associated with, the funded and unfunded grants they hold and the data they use. &lt;br /&gt;
&lt;br /&gt;
It was written by Joe in about 2 weeks in 2019 and he apologises sincerely for the lack of comments in the code, and the absence of tests. It&amp;#039;s still a damn sight better than what was there before (i.e. nothing). &lt;br /&gt;
&lt;br /&gt;
=Keeping it running=&lt;br /&gt;
&lt;br /&gt;
The database is called StABDMIN, the database user is StABDMIN and the password is stored in /etc/mysql/stabdmin.cnf. &lt;br /&gt;
&lt;br /&gt;
The code for the project is in /storage/home/users/StABDMIN. The site is set to store static files (css, js etc) in /var/www/StABDMIN/. &lt;br /&gt;
&lt;br /&gt;
Currently static files are being served using the python package &amp;quot;whitenoise&amp;quot;, but this ought to be done with nginx or apache in the long run. It&amp;#039;s also not served over https currently, as marvin can&amp;#039;t cope with a modern apache (mod_wsgi was compiled with python 2.6). &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Restarting the server==&lt;br /&gt;
As root: &lt;br /&gt;
&lt;br /&gt;
* To start the webserver, navigate to /storage/home/users/StABDMIN/StABDMIN&lt;br /&gt;
* run &amp;#039;&amp;#039;&amp;#039;module load python/3.6.4&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
* run &amp;#039;&amp;#039;&amp;#039;sh run_webserver.sh&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Importing data into the backend==&lt;br /&gt;
&lt;br /&gt;
The script /storage/home/users/StABDMIN/StABDMIN/importUsageFromFiles.py is set in cron to run every day at 5:05am. It relies on the output of the /mnt/system_usage/ scripts written by Peter Thorpe, loading the data from the files those scripts create into the system for tracking user data usage and total disk use/free space. &lt;br /&gt;
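The crontab line itself isn&amp;#039;t reproduced here; as a hedged sketch, the daily 5:05am schedule would look something like the following (the interpreter path is an assumption, not taken from the real crontab; check with crontab -l as root):

```shell
# Illustrative root crontab entry for the daily 5:05am import.
# /usr/bin/python3 is an assumed interpreter path, not the real one.
# m h dom mon dow  command
5 5 * * * /usr/bin/python3 /storage/home/users/StABDMIN/StABDMIN/importUsageFromFiles.py
```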
&lt;br /&gt;
To add users/PIs/Grants, you need to use the ADMIN site, accessed via the &amp;quot;Admin&amp;quot; link on the navigation bar of the website. &lt;br /&gt;
&lt;br /&gt;
If a new PI has new Users and a new Grant, the PI &amp;#039;&amp;#039;&amp;#039;must&amp;#039;&amp;#039;&amp;#039; be created first. &lt;br /&gt;
&lt;br /&gt;
If a new bioinformatician joins, assign them the PI &amp;quot;StABU&amp;quot; and &amp;#039;&amp;#039;&amp;#039;add them to the bioinformatician table&amp;#039;&amp;#039;&amp;#039;. This allows tracking of their use within the StABU PI, and also allows them to be assigned as primary bioinformaticians on grants. &lt;br /&gt;
&lt;br /&gt;
And this will run gunicorn using&lt;br /&gt;
 gunicorn StABDMIN.wsgi:application  --pid StABDMIN.pid -b marvin.st-andrews.ac.uk:80 -n StABDMIN -D&lt;/div&gt;</summary>
		<author><name>Jw297</name></author>	</entry>

	<entry>
		<id>http://stab.st-andrews.ac.uk/wiki/index.php?title=StABDMIN&amp;diff=3378</id>
		<title>StABDMIN</title>
		<link rel="alternate" type="text/html" href="http://stab.st-andrews.ac.uk/wiki/index.php?title=StABDMIN&amp;diff=3378"/>
				<updated>2019-05-01T09:53:33Z</updated>
		
		<summary type="html">&lt;p&gt;Jw297: /* Keeping it running */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=StABDMIN Site=&lt;br /&gt;
&lt;br /&gt;
Currently running at: marvin.st-andrews.ac.uk on port 80.&lt;br /&gt;
&lt;br /&gt;
The StABDMIN site was born of the need for a way to track the users on marvin, the PIs they&amp;#039;re associated with, the funded and unfunded grants they hold and the data they use. &lt;br /&gt;
&lt;br /&gt;
It was written by Joe in about 2 weeks in 2019 and he apologises sincerely for the lack of comments in the code, and the absence of tests. It&amp;#039;s still a damn sight better than what was there before (i.e. nothing). &lt;br /&gt;
&lt;br /&gt;
=Keeping it running=&lt;br /&gt;
&lt;br /&gt;
The database is called StABDMIN, the database user is StABDMIN and the password is stored in /etc/mysql/stabdmin.cnf. &lt;br /&gt;
&lt;br /&gt;
The code for the project is in /storage/home/users/StABDMIN. The site is set to store static files (css, js etc) in /var/www/StABDMIN/. &lt;br /&gt;
&lt;br /&gt;
Currently static files are being served using the python package &amp;quot;whitenoise&amp;quot;, but this ought to be done with nginx or apache in the long run. It&amp;#039;s also not served over https currently, as marvin can&amp;#039;t cope with a modern apache (mod_wsgi was compiled with python 2.6). &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Restarting the server==&lt;br /&gt;
As root: &lt;br /&gt;
&lt;br /&gt;
* To start the webserver, navigate to /storage/home/users/StABDMIN/StABDMIN&lt;br /&gt;
* run &amp;#039;&amp;#039;&amp;#039;module load python/3.6.4&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
* run &amp;#039;&amp;#039;&amp;#039;sh run_webserver.sh&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
And this will run gunicorn using&lt;br /&gt;
 gunicorn StABDMIN.wsgi:application  --pid StABDMIN.pid -b marvin.st-andrews.ac.uk:80 -n StABDMIN -D&lt;/div&gt;</summary>
		<author><name>Jw297</name></author>	</entry>

	<entry>
		<id>http://stab.st-andrews.ac.uk/wiki/index.php?title=StABDMIN&amp;diff=3377</id>
		<title>StABDMIN</title>
		<link rel="alternate" type="text/html" href="http://stab.st-andrews.ac.uk/wiki/index.php?title=StABDMIN&amp;diff=3377"/>
				<updated>2019-05-01T09:50:22Z</updated>
		
		<summary type="html">&lt;p&gt;Jw297: /* Restarting the server */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=StABDMIN Site=&lt;br /&gt;
&lt;br /&gt;
Currently running at: marvin.st-andrews.ac.uk on port 80.&lt;br /&gt;
&lt;br /&gt;
The StABDMIN site was born of the need for a way to track the users on marvin, the PIs they&amp;#039;re associated with, the funded and unfunded grants they hold and the data they use. &lt;br /&gt;
&lt;br /&gt;
It was written by Joe in about 2 weeks in 2019 and he apologises sincerely for the lack of comments in the code, and the absence of tests. It&amp;#039;s still a damn sight better than what was there before (i.e. nothing). &lt;br /&gt;
&lt;br /&gt;
=Keeping it running=&lt;br /&gt;
&lt;br /&gt;
The database is called StABDMIN, the database user is StABDMIN and the password is stored in /etc/mysql/stabdmin.cnf. &lt;br /&gt;
&lt;br /&gt;
The code for the project is in /storage/home/users/StABDMIN. The site is set to store static files (css, js etc) in /var/www/StABDMIN/. &lt;br /&gt;
&lt;br /&gt;
==Restarting the server==&lt;br /&gt;
As root: &lt;br /&gt;
&lt;br /&gt;
* To start the webserver, navigate to /storage/home/users/StABDMIN/StABDMIN&lt;br /&gt;
* run &amp;#039;&amp;#039;&amp;#039;module load python/3.6.4&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
* run &amp;#039;&amp;#039;&amp;#039;sh run_webserver.sh&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
And this will run gunicorn using&lt;br /&gt;
 gunicorn StABDMIN.wsgi:application  --pid StABDMIN.pid -b marvin.st-andrews.ac.uk:80 -n StABDMIN -D&lt;/div&gt;</summary>
		<author><name>Jw297</name></author>	</entry>

	<entry>
		<id>http://stab.st-andrews.ac.uk/wiki/index.php?title=StABDMIN&amp;diff=3376</id>
		<title>StABDMIN</title>
		<link rel="alternate" type="text/html" href="http://stab.st-andrews.ac.uk/wiki/index.php?title=StABDMIN&amp;diff=3376"/>
				<updated>2019-05-01T09:49:40Z</updated>
		
		<summary type="html">&lt;p&gt;Jw297: /* Restarting the server */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=StABDMIN Site=&lt;br /&gt;
&lt;br /&gt;
Currently running at: marvin.st-andrews.ac.uk on port 80.&lt;br /&gt;
&lt;br /&gt;
The StABDMIN site was born of the need for a way to track the users on marvin, the PIs they&amp;#039;re associated with, the funded and unfunded grants they hold and the data they use. &lt;br /&gt;
&lt;br /&gt;
It was written by Joe in about 2 weeks in 2019 and he apologises sincerely for the lack of comments in the code, and the absence of tests. It&amp;#039;s still a damn sight better than what was there before (i.e. nothing). &lt;br /&gt;
&lt;br /&gt;
=Keeping it running=&lt;br /&gt;
&lt;br /&gt;
The database is called StABDMIN, the database user is StABDMIN and the password is stored in /etc/mysql/stabdmin.cnf. &lt;br /&gt;
&lt;br /&gt;
The code for the project is in /storage/home/users/StABDMIN. The site is set to store static files (css, js etc) in /var/www/StABDMIN/. &lt;br /&gt;
&lt;br /&gt;
==Restarting the server==&lt;br /&gt;
As root: &lt;br /&gt;
&lt;br /&gt;
* To start the webserver, navigate to /storage/home/users/StABDMIN/StABDMIN&lt;br /&gt;
* run ```module load python/3.6.4```&lt;br /&gt;
* run ```sh run_webserver.sh```&lt;br /&gt;
&lt;br /&gt;
And this will run gunicorn using&lt;br /&gt;
 gunicorn StABDMIN.wsgi:application  --pid StABDMIN.pid -b marvin.st-andrews.ac.uk:80 -n StABDMIN -D&lt;/div&gt;</summary>
		<author><name>Jw297</name></author>	</entry>

	<entry>
		<id>http://stab.st-andrews.ac.uk/wiki/index.php?title=StABDMIN&amp;diff=3375</id>
		<title>StABDMIN</title>
		<link rel="alternate" type="text/html" href="http://stab.st-andrews.ac.uk/wiki/index.php?title=StABDMIN&amp;diff=3375"/>
				<updated>2019-05-01T09:49:26Z</updated>
		
		<summary type="html">&lt;p&gt;Jw297: /* Restarting the server */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=StABDMIN Site=&lt;br /&gt;
&lt;br /&gt;
Currently running at: marvin.st-andrews.ac.uk on port 80.&lt;br /&gt;
&lt;br /&gt;
The StABDMIN site was born of the need for a way to track the users on marvin, the PIs they&amp;#039;re associated with, the funded and unfunded grants they hold and the data they use. &lt;br /&gt;
&lt;br /&gt;
It was written by Joe in about 2 weeks in 2019 and he apologises sincerely for the lack of comments in the code, and the absence of tests. It&amp;#039;s still a damn sight better than what was there before (i.e. nothing). &lt;br /&gt;
&lt;br /&gt;
=Keeping it running=&lt;br /&gt;
&lt;br /&gt;
The database is called StABDMIN, the database user is StABDMIN and the password is stored in /etc/mysql/stabdmin.cnf. &lt;br /&gt;
&lt;br /&gt;
The code for the project is in /storage/home/users/StABDMIN. The site is set to store static files (css, js etc) in /var/www/StABDMIN/. &lt;br /&gt;
&lt;br /&gt;
==Restarting the server==&lt;br /&gt;
As root: &lt;br /&gt;
&lt;br /&gt;
* To start the webserver, navigate to /storage/home/users/StABDMIN/StABDMIN&lt;br /&gt;
* run `module load python/3.6.4`&lt;br /&gt;
* run `sh run_webserver.sh`&lt;br /&gt;
&lt;br /&gt;
And this will run gunicorn using&lt;br /&gt;
 gunicorn StABDMIN.wsgi:application  --pid StABDMIN.pid -b marvin.st-andrews.ac.uk:80 -n StABDMIN -D&lt;/div&gt;</summary>
		<author><name>Jw297</name></author>	</entry>

	<entry>
		<id>http://stab.st-andrews.ac.uk/wiki/index.php?title=StABDMIN&amp;diff=3374</id>
		<title>StABDMIN</title>
		<link rel="alternate" type="text/html" href="http://stab.st-andrews.ac.uk/wiki/index.php?title=StABDMIN&amp;diff=3374"/>
				<updated>2019-05-01T09:49:05Z</updated>
		
		<summary type="html">&lt;p&gt;Jw297: Created page with &amp;quot;=StABDMIN Site=  Currently running at: marvin.st-andrews.ac.uk on port 80.  The StABDMIN site was born of the need of a way to track the users on marvin, the PIs they&amp;#039;re assoc...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=StABDMIN Site=&lt;br /&gt;
&lt;br /&gt;
Currently running at: marvin.st-andrews.ac.uk on port 80.&lt;br /&gt;
&lt;br /&gt;
The StABDMIN site was born of the need for a way to track the users on marvin, the PIs they&amp;#039;re associated with, the funded and unfunded grants they hold and the data they use. &lt;br /&gt;
&lt;br /&gt;
It was written by Joe in about 2 weeks in 2019 and he apologises sincerely for the lack of comments in the code, and the absence of tests. It&amp;#039;s still a damn sight better than what was there before (i.e. nothing). &lt;br /&gt;
&lt;br /&gt;
=Keeping it running=&lt;br /&gt;
&lt;br /&gt;
The database is called StABDMIN, the database user is StABDMIN and the password is stored in /etc/mysql/stabdmin.cnf. &lt;br /&gt;
&lt;br /&gt;
The code for the project is in /storage/home/users/StABDMIN. The site is set to store static files (css, js etc) in /var/www/StABDMIN/. &lt;br /&gt;
&lt;br /&gt;
==Restarting the server==&lt;br /&gt;
As root: &lt;br /&gt;
&lt;br /&gt;
1. To start the webserver, navigate to /storage/home/users/StABDMIN/StABDMIN&lt;br /&gt;
2. run `module load python/3.6.4`&lt;br /&gt;
3. run `sh run_webserver.sh`&lt;br /&gt;
&lt;br /&gt;
And this will run gunicorn using&lt;br /&gt;
 gunicorn StABDMIN.wsgi:application  --pid StABDMIN.pid -b marvin.st-andrews.ac.uk:80 -n StABDMIN -D&lt;/div&gt;</summary>
		<author><name>Jw297</name></author>	</entry>

	<entry>
		<id>http://stab.st-andrews.ac.uk/wiki/index.php?title=Mysql&amp;diff=3373</id>
		<title>Mysql</title>
		<link rel="alternate" type="text/html" href="http://stab.st-andrews.ac.uk/wiki/index.php?title=Mysql&amp;diff=3373"/>
				<updated>2019-04-11T09:38:16Z</updated>
		
		<summary type="html">&lt;p&gt;Jw297: Created page with &amp;quot;If the mysql command doesn&amp;#039;t work use   service mysqld start   to start it.     Root password is the bioinformatics root password.&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;If the mysql command doesn&amp;#039;t work use&lt;br /&gt;
&lt;br /&gt;
 service mysqld start &lt;br /&gt;
&lt;br /&gt;
to start it. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Root password is the bioinformatics root password.&lt;/div&gt;</summary>
		<author><name>Jw297</name></author>	</entry>

	<entry>
		<id>http://stab.st-andrews.ac.uk/wiki/index.php?title=Bioinformatics_Wordpress_Site&amp;diff=3372</id>
		<title>Bioinformatics Wordpress Site</title>
		<link rel="alternate" type="text/html" href="http://stab.st-andrews.ac.uk/wiki/index.php?title=Bioinformatics_Wordpress_Site&amp;diff=3372"/>
				<updated>2019-03-27T11:13:05Z</updated>
		
		<summary type="html">&lt;p&gt;Jw297: /* Creating a blog post */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Creating a blog post = &lt;br /&gt;
Navigate to the bioinformatics wordpress admin (wp-admin) page: [http://med.st-andrews.ac.uk/bioinformatics/wp-admin]&lt;br /&gt;
&lt;br /&gt;
Log in. Contact Steve Smart if you don&amp;#039;t have an account. &lt;br /&gt;
&lt;br /&gt;
Click on posts on the left side and select new post at the top&lt;br /&gt;
&lt;br /&gt;
[[File:Newpost.png]]&lt;br /&gt;
&lt;br /&gt;
Write the post, and click update (or publish). It gives you the link to use at the top of the editing page&lt;br /&gt;
&lt;br /&gt;
[[File:Updatepost.png]]&lt;br /&gt;
&lt;br /&gt;
You&amp;#039;ll need a visual form builder form to add registration details to the page. The best way to do this is probably to use the &amp;quot;duplicate&amp;quot; function on a previous form. You&amp;#039;ll need to edit minor details in here but it&amp;#039;s fairly obvious. Make sure you check all of the drop downs. &lt;br /&gt;
&lt;br /&gt;
[[File:Visualformbuilder.png]]&lt;br /&gt;
&lt;br /&gt;
To add the created form to the post, add the line &lt;br /&gt;
 [vfb id=&amp;#039;1&amp;#039;]&lt;br /&gt;
where 1 is the ID of the form. This information is provided in the bottom left of the visual form builder edit page.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
You can export the data from the form by using the &amp;quot;export&amp;quot; section under the visual form builder. &lt;br /&gt;
&lt;br /&gt;
[[File:Export.png]]&lt;br /&gt;
&lt;br /&gt;
= Tips =&lt;br /&gt;
&lt;br /&gt;
* Documents such as PDFs must be uploaded as &amp;quot;Media&amp;quot;, then &amp;quot;attached&amp;quot; to a particular page. Even after this, when including the document as a link in a page, you must use the &amp;quot;add media&amp;quot; option, and the document may not appear on the first tab. There are other tabs, however, so it&amp;#039;s a question of viewing them all to find it.&lt;/div&gt;</summary>
		<author><name>Jw297</name></author>	</entry>

	<entry>
		<id>http://stab.st-andrews.ac.uk/wiki/index.php?title=File:Export.png&amp;diff=3371</id>
		<title>File:Export.png</title>
		<link rel="alternate" type="text/html" href="http://stab.st-andrews.ac.uk/wiki/index.php?title=File:Export.png&amp;diff=3371"/>
				<updated>2019-03-27T11:12:34Z</updated>
		
		<summary type="html">&lt;p&gt;Jw297: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Jw297</name></author>	</entry>

	<entry>
		<id>http://stab.st-andrews.ac.uk/wiki/index.php?title=Bioinformatics_Wordpress_Site&amp;diff=3370</id>
		<title>Bioinformatics Wordpress Site</title>
		<link rel="alternate" type="text/html" href="http://stab.st-andrews.ac.uk/wiki/index.php?title=Bioinformatics_Wordpress_Site&amp;diff=3370"/>
				<updated>2019-03-27T11:08:46Z</updated>
		
		<summary type="html">&lt;p&gt;Jw297: /* Creating a blog post */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Creating a blog post = &lt;br /&gt;
Navigate to the bioinformatics wordpress admin (wp-admin) page: [http://med.st-andrews.ac.uk/bioinformatics/wp-admin]&lt;br /&gt;
&lt;br /&gt;
Log in. Contact Steve Smart if you don&amp;#039;t have an account. &lt;br /&gt;
&lt;br /&gt;
Click on posts on the left side and select new post at the top&lt;br /&gt;
&lt;br /&gt;
[[File:Newpost.png]]&lt;br /&gt;
&lt;br /&gt;
Write the post, and click update (or publish). It gives you the link to use at the top of the editing page&lt;br /&gt;
&lt;br /&gt;
[[File:Updatepost.png]]&lt;br /&gt;
&lt;br /&gt;
You&amp;#039;ll need a visual form builder form to add registration details to the page. The best way to do this is probably to use the &amp;quot;duplicate&amp;quot; function on a previous form. You&amp;#039;ll need to edit minor details in here but it&amp;#039;s fairly obvious. Make sure you check all of the drop downs. &lt;br /&gt;
&lt;br /&gt;
[[File:Visualformbuilder.png]]&lt;br /&gt;
&lt;br /&gt;
To add the created form to the post, add the line &lt;br /&gt;
 [vfb id=&amp;#039;1&amp;#039;]&lt;br /&gt;
where 1 is the ID of the form. This information is provided in the bottom left of the visual form builder edit page.&lt;br /&gt;
&lt;br /&gt;
= Tips =&lt;br /&gt;
&lt;br /&gt;
* Documents such as PDFs must be uploaded as &amp;quot;Media&amp;quot;, then &amp;quot;attached&amp;quot; to a particular page. Even after this, when including the document as a link in a page, you must use the &amp;quot;add media&amp;quot; option, and the document may not appear on the first tab. There are other tabs, however, so it&amp;#039;s a question of viewing them all to find it.&lt;/div&gt;</summary>
		<author><name>Jw297</name></author>	</entry>

	<entry>
		<id>http://stab.st-andrews.ac.uk/wiki/index.php?title=Bioinformatics_Wordpress_Site&amp;diff=3369</id>
		<title>Bioinformatics Wordpress Site</title>
		<link rel="alternate" type="text/html" href="http://stab.st-andrews.ac.uk/wiki/index.php?title=Bioinformatics_Wordpress_Site&amp;diff=3369"/>
				<updated>2019-03-27T11:03:45Z</updated>
		
		<summary type="html">&lt;p&gt;Jw297: /* Creating a blog post */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Creating a blog post = &lt;br /&gt;
Navigate to the bioinformatics wordpress admin (wp-admin) page: [http://med.st-andrews.ac.uk/bioinformatics/wp-admin]&lt;br /&gt;
&lt;br /&gt;
Log in. Contact Steve Smart if you don&amp;#039;t have an account. &lt;br /&gt;
&lt;br /&gt;
Click on posts on the left side and select new post at the top&lt;br /&gt;
&lt;br /&gt;
[[File:Newpost.png]]&lt;br /&gt;
&lt;br /&gt;
Write the post, and click update (or publish). It gives you the link to use at the top of the editing page&lt;br /&gt;
&lt;br /&gt;
[[File:Updatepost.png]]&lt;br /&gt;
&lt;br /&gt;
You&amp;#039;ll need a visual form builder form to add registration details to the page. The best way to do this is probably to use the &amp;quot;duplicate&amp;quot; function on a previous form. You&amp;#039;ll need to edit minor details in here but it&amp;#039;s fairly obvious. Make sure you check all of the drop downs. &lt;br /&gt;
&lt;br /&gt;
[[File:Visualformbuilder.png]]&lt;br /&gt;
&lt;br /&gt;
To add the created form to the post, add the line &lt;br /&gt;
 [vfb id=&amp;#039;1&amp;#039;]&lt;br /&gt;
where 1 is the ID of the form. This information is provided in the bottom left of the visual form builder edit page.&lt;br /&gt;
&lt;br /&gt;
= Tips =&lt;br /&gt;
&lt;br /&gt;
* Documents such as PDFs must be uploaded as &amp;quot;Media&amp;quot;, then &amp;quot;attached&amp;quot; to a particular page. Even after this, when including the document as a link in a page, you must use the &amp;quot;add media&amp;quot; option, and the document may not appear on the first tab. There are other tabs, however, so it&amp;#039;s a question of viewing them all to find it.&lt;/div&gt;</summary>
		<author><name>Jw297</name></author>	</entry>

	<entry>
		<id>http://stab.st-andrews.ac.uk/wiki/index.php?title=Bioinformatics_Wordpress_Site&amp;diff=3368</id>
		<title>Bioinformatics Wordpress Site</title>
		<link rel="alternate" type="text/html" href="http://stab.st-andrews.ac.uk/wiki/index.php?title=Bioinformatics_Wordpress_Site&amp;diff=3368"/>
				<updated>2019-03-27T11:02:35Z</updated>
		
		<summary type="html">&lt;p&gt;Jw297: /* Tips */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Creating a blog post = &lt;br /&gt;
[http://med.st-andrews.ac.uk/bioinformatics/wp-admin]&lt;br /&gt;
&lt;br /&gt;
Log in. Contact Steve Smart if you don&amp;#039;t have an account. &lt;br /&gt;
&lt;br /&gt;
Click on posts on the left side and select new post at the top&lt;br /&gt;
&lt;br /&gt;
[[File:Newpost.png]]&lt;br /&gt;
&lt;br /&gt;
Write the post, and click update (or publish). It gives you the link to use at the top of the editing page&lt;br /&gt;
&lt;br /&gt;
[[File:Updatepost.jpg]]&lt;br /&gt;
&lt;br /&gt;
You&amp;#039;ll need a visual form builder form to add registration details to the page. The best way to do this is probably to use the &amp;quot;duplicate&amp;quot; function on a previous form. You&amp;#039;ll need to edit minor details in here but it&amp;#039;s fairly obvious. Make sure you check all of the drop downs. &lt;br /&gt;
&lt;br /&gt;
[[File:Visualformbuilder.png]]&lt;br /&gt;
&lt;br /&gt;
To add the created form to the post, add the line &lt;br /&gt;
 [vfb id=&amp;#039;1&amp;#039;]&lt;br /&gt;
where 1 is the ID of the form. This information is provided in the bottom left of the visual form builder edit page. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Tips =&lt;br /&gt;
&lt;br /&gt;
* Documents such as PDFs must be uploaded as &amp;quot;Media&amp;quot;, then &amp;quot;attached&amp;quot; to a particular page. Even after this, when including the document as a link in a page, you must use the &amp;quot;add media&amp;quot; option, and the document may not appear on the first tab. There are other tabs, however, so it&amp;#039;s a question of viewing them all to find it.&lt;/div&gt;</summary>
		<author><name>Jw297</name></author>	</entry>

	<entry>
		<id>http://stab.st-andrews.ac.uk/wiki/index.php?title=File:Visualformbuilder.png&amp;diff=3367</id>
		<title>File:Visualformbuilder.png</title>
		<link rel="alternate" type="text/html" href="http://stab.st-andrews.ac.uk/wiki/index.php?title=File:Visualformbuilder.png&amp;diff=3367"/>
				<updated>2019-03-27T10:51:06Z</updated>
		
		<summary type="html">&lt;p&gt;Jw297: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Jw297</name></author>	</entry>

	<entry>
		<id>http://stab.st-andrews.ac.uk/wiki/index.php?title=File:Updatepost.png&amp;diff=3366</id>
		<title>File:Updatepost.png</title>
		<link rel="alternate" type="text/html" href="http://stab.st-andrews.ac.uk/wiki/index.php?title=File:Updatepost.png&amp;diff=3366"/>
				<updated>2019-03-27T10:50:54Z</updated>
		
		<summary type="html">&lt;p&gt;Jw297: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Jw297</name></author>	</entry>

	<entry>
		<id>http://stab.st-andrews.ac.uk/wiki/index.php?title=File:Newpost.png&amp;diff=3365</id>
		<title>File:Newpost.png</title>
		<link rel="alternate" type="text/html" href="http://stab.st-andrews.ac.uk/wiki/index.php?title=File:Newpost.png&amp;diff=3365"/>
				<updated>2019-03-27T10:50:46Z</updated>
		
		<summary type="html">&lt;p&gt;Jw297: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Jw297</name></author>	</entry>

	<entry>
		<id>http://stab.st-andrews.ac.uk/wiki/index.php?title=Son_of_Gridengine&amp;diff=3364</id>
		<title>Son of Gridengine</title>
		<link rel="alternate" type="text/html" href="http://stab.st-andrews.ac.uk/wiki/index.php?title=Son_of_Gridengine&amp;diff=3364"/>
				<updated>2019-03-15T10:12:45Z</updated>
		
		<summary type="html">&lt;p&gt;Jw297: /* edit /etc/bashrc */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction =&lt;br /&gt;
&lt;br /&gt;
After Oracle bought Sun and then closed-sourced the Sun Grid Engine, Open Grid Engine made a release based on SGEv5 in 2012, which they called GE2011. However, there were no further releases. Then ARC at the University of Liverpool started releasing its &amp;quot;Son of Gridengine&amp;quot; and has been maintaining updates to it at least as far as March 2016.&lt;br /&gt;
&lt;br /&gt;
Until September 2016, the queue manager in the marvin cluster was GE2011, which was getting a bit old, so when the queue manager failed due to a corrupted database it was replaced with Son of Gridengine.&lt;br /&gt;
&lt;br /&gt;
= Steps =&lt;br /&gt;
&lt;br /&gt;
==Administrative host setup==&lt;br /&gt;
&lt;br /&gt;
All nodes must be set up as administrative hosts, despite the fact that only the master seems to be &amp;quot;administrative&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Getting the XML::Simple perl module ==&lt;br /&gt;
&lt;br /&gt;
The RPMForge Extra repository is needed for this. It can be installed via an RPM, and afterwards the Extra branch of the repo must be enabled, as it is not enabled by default.&lt;br /&gt;
&lt;br /&gt;
Note that it is best to disable this repo again after all the RPMs have been installed.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Install the Son of Gridware RPMs ==&lt;br /&gt;
&lt;br /&gt;
CentOS 7 requires the epel repo to be installed:&lt;br /&gt;
 yum install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm&lt;br /&gt;
&lt;br /&gt;
And the following packages:&lt;br /&gt;
 yum install jemalloc-3.6.0 lesstif-0.95.2 munge-libs-0.5.11 libdb4-utils-4.8.30&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 yum install -y gridengine-8.1.9-1.el6.x86_64.rpm gridengine-devel-8.1.9-1.el6.noarch.rpm gridengine-execd-8.1.9-1.el6.x86_64.rpm gridengine-qmaster-8.1.9-1.el6.x86_64.rpm gridengine-qmon-8.1.9-1.el6.x86_64.rpm&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
From https://arc.liv.ac.uk/downloads/SGE/releases/8.1.9/&lt;br /&gt;
&lt;br /&gt;
For a worker node this is rather excessive, but such is the nature of the binary-check stage of the &amp;quot;install_exec&amp;quot; script that all of these are necessary.&lt;br /&gt;
&lt;br /&gt;
== copying the default/common directory over to the node ==&lt;br /&gt;
&lt;br /&gt;
**NOTE: Below is from Ramon, but when JW set up Phylo, /opt/sge/default already existed, with the same date and file sizes as other nodes, so it may not be needed. DO CHECK PERMISSIONS: the user ought to be sgeadmin and the group gridware.**&lt;br /&gt;
&lt;br /&gt;
First the default directory must be created:&lt;br /&gt;
&lt;br /&gt;
 ssh nodeX &amp;#039;mkdir /opt/sge/default&amp;#039;&lt;br /&gt;
&lt;br /&gt;
And then followed by:&lt;br /&gt;
&lt;br /&gt;
 scp -r common nodeX:/opt/sge/default&lt;br /&gt;
&lt;br /&gt;
then run&lt;br /&gt;
 chown -R sgeadmin.gridware /opt/sge&lt;br /&gt;
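To confirm the ownership took, a quick check can help; this assumes GNU coreutils stat, and check_owner is just an illustrative wrapper, not an existing script:

```shell
# Hedged check (GNU stat assumed): print owner:group of a path so the
# chown above can be verified; expect sgeadmin:gridware for /opt/sge.
check_owner() {
    stat -c '%U:%G' "$1"
}
# e.g. check_owner /opt/sge
```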
&lt;br /&gt;
==edit /etc/bashrc==&lt;br /&gt;
&lt;br /&gt;
Add the following two lines to /etc/bashrc&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 SGE_ROOT=/opt/sge; export SGE_ROOT;&lt;br /&gt;
 PATH=/opt/sge/bin/lx-amd64:$PATH&lt;br /&gt;
&lt;br /&gt;
Note: on the older centos6 nodes the path is /opt/sge/bin/linux-x64, but the newer centos7 node had it as /opt/sge/bin/lx-amd64.&lt;br /&gt;
&lt;br /&gt;
==Add the new server to the admin host==&lt;br /&gt;
(following this: https://docs.oracle.com/cd/E19957-01/820-0697/i999062/index.html)&lt;br /&gt;
So for phylo, run this on marvin:&lt;br /&gt;
 qconf -ah phylo&lt;br /&gt;
&lt;br /&gt;
To check it&amp;#039;s been added,&lt;br /&gt;
 qconf -sh &lt;br /&gt;
should list the new host.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Move into /opt/sge/ and run install_execd. Follow the instructions above. For phylo everything was default.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Don&amp;#039;t forget to add it to the queue&amp;#039;s host list:&lt;br /&gt;
&lt;br /&gt;
 qconf -mq interactive.q&lt;br /&gt;
&lt;br /&gt;
and add the hostname to the end of the list of hosts&lt;br /&gt;
&lt;br /&gt;
= Administration =&lt;br /&gt;
&lt;br /&gt;
==Creating a new parallel environment==&lt;br /&gt;
&lt;br /&gt;
* Copy a current parallel environment out to a file&lt;br /&gt;
* edit this file as you wish&lt;br /&gt;
* execute&lt;br /&gt;
&lt;br /&gt;
 qconf -Ap &amp;lt;my_pe_file&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A crucial oversight is forgetting that this new parallel environment needs to be inserted into the queue&amp;#039;s configuration.&lt;br /&gt;
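For reference, the file handed to qconf -Ap is a plain key/value description. A hedged sketch follows with purely illustrative values; dump a real one with qconf -sp on an existing PE rather than trusting these numbers:

```shell
# Illustrative SGE parallel-environment file for `qconf -Ap my_pe_file`.
# Every value here is an example, not the cluster's actual configuration.
pe_name            example_pe
slots              64
user_lists         NONE
xuser_lists        NONE
start_proc_args    /bin/true
stop_proc_args     /bin/true
allocation_rule    $fill_up
control_slaves     FALSE
job_is_first_task  TRUE
```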
&lt;br /&gt;
==Queues and hostgroups ==&lt;br /&gt;
&lt;br /&gt;
The dohfq.sh script accepts a rootname and a list of numbers. The rootname becomes the @rootname hostgroup and rootname.q for the queue.&lt;br /&gt;
Node0 is in fact marvin; 1 is node1, etc. These are the nodes to be included in the new queue.&lt;/div&gt;</summary>
		<author><name>Jw297</name></author>	</entry>

	<entry>
		<id>http://stab.st-andrews.ac.uk/wiki/index.php?title=Son_of_Gridengine&amp;diff=3361</id>
		<title>Son of Gridengine</title>
		<link rel="alternate" type="text/html" href="http://stab.st-andrews.ac.uk/wiki/index.php?title=Son_of_Gridengine&amp;diff=3361"/>
				<updated>2019-03-11T14:40:12Z</updated>
		
		<summary type="html">&lt;p&gt;Jw297: /* Add the new server to the admin host */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction =&lt;br /&gt;
&lt;br /&gt;
After Oracle bought Sun and then closed-sourced the Sun Grid Engine, Open Grid Engine made a release based on SGEv5 in 2012, which they called GE2011. However, there were no further releases. Then ARC at the University of Liverpool started releasing its &amp;quot;Son of Gridengine&amp;quot; and has been maintaining updates to it at least as far as March 2016.&lt;br /&gt;
&lt;br /&gt;
Until September 2016, the queue manager in the marvin cluster was GE2011, which was getting a bit old, so when the queue manager failed due to a corrupted database it was replaced with Son of Gridengine.&lt;br /&gt;
&lt;br /&gt;
= Steps =&lt;br /&gt;
&lt;br /&gt;
==Administrative host setup==&lt;br /&gt;
&lt;br /&gt;
All nodes must be set up as administrative hosts, despite the fact that only the master seems to be &amp;quot;administrative&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Getting the XML::Simple perl module ==&lt;br /&gt;
&lt;br /&gt;
The RPMForge Extra repository is needed for this. It can be installed via an RPM, and afterwards the Extra branch of the repo must be enabled, as it is not enabled by default.&lt;br /&gt;
&lt;br /&gt;
Note that it is best to disable this repo again after all the RPMs have been installed.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Install the Son of Gridware RPMs ==&lt;br /&gt;
&lt;br /&gt;
Centos 7 requires the epel repo installed&lt;br /&gt;
 yum install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm&lt;br /&gt;
&lt;br /&gt;
And the following packages:&lt;br /&gt;
 yum install jemalloc-3.6.0 lesstif-0.95.2 munge-libs-0.5.11 libdb4-utils-4.8.30&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 yum install -y gridengine-8.1.9-1.el6.x86_64.rpm gridengine-devel-8.1.9-1.el6.noarch.rpm gridengine-execd-8.1.9-1.el6.x86_64.rpm gridengine-qmaster-8.1.9-1.el6.x86_64.rpm gridengine-qmon-8.1.9-1.el6.x86_64.rpm&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
From https://arc.liv.ac.uk/downloads/SGE/releases/8.1.9/&lt;br /&gt;
&lt;br /&gt;
For a worker node this is rather excessive, but such is the nature of the binary-check stage of the &quot;install_execd&quot; script that all of these are necessary.&lt;br /&gt;
&lt;br /&gt;
== copying the default/common directory over to the node ==&lt;br /&gt;
&lt;br /&gt;
**NOTE: Below is from Ramon, but when JW set up Phylo, /opt/sge/default already existed with the same date and file sizes as on other nodes, so it may not be needed. DO CHECK PERMISSIONS. The user ought to be sgeadmin and the group gridware.**&lt;br /&gt;
&lt;br /&gt;
First the default directory must be created:&lt;br /&gt;
&lt;br /&gt;
 ssh nodeX &amp;#039;mkdir /opt/sge/default&amp;#039;&lt;br /&gt;
&lt;br /&gt;
And then followed by:&lt;br /&gt;
&lt;br /&gt;
 scp -r common node8:/opt/sge/default&lt;br /&gt;
&lt;br /&gt;
then run&lt;br /&gt;
 chown -R sgeadmin.gridware /opt/sge&lt;br /&gt;
&lt;br /&gt;
==edit /etc/bashrc==&lt;br /&gt;
&lt;br /&gt;
Add the following two lines to /etc/bashrc&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 SGE_ROOT=/opt/sge; export SGE_ROOT;&lt;br /&gt;
 PATH=/opt/sge/bin/lx-x64:$PATH&lt;br /&gt;
&lt;br /&gt;
Note: on the older centos6 nodes the path is /opt/sge/bin/linux-x64, but the newer centos7 node had it as /opt/sge/bin/lx-x64.&lt;br /&gt;
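If in doubt, SGE ships a helper script that prints the architecture string for the host it runs on, so the binary directory need not be hard-coded per node. A minimal sketch for /etc/bashrc, assuming this build installs the standard $SGE_ROOT/util/arch helper:&lt;br /&gt;

```shell
# Derive the binary directory from SGE's own arch helper
# rather than hard-coding lx-x64 vs linux-x64 per node.
SGE_ROOT=/opt/sge; export SGE_ROOT
SGE_ARCH=$("$SGE_ROOT/util/arch")
PATH="$SGE_ROOT/bin/$SGE_ARCH:$PATH"; export PATH
```

&lt;br /&gt;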
&lt;br /&gt;
==Add the new server to the admin host==&lt;br /&gt;
(following this: https://docs.oracle.com/cd/E19957-01/820-0697/i999062/index.html)&lt;br /&gt;
So for phylo, run this on marvin:&lt;br /&gt;
 qconf -ah phylo&lt;br /&gt;
&lt;br /&gt;
check it&amp;#039;s been added&lt;br /&gt;
 qconf -sh &lt;br /&gt;
should show it&amp;#039;s been added.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Move into /opt/sge/ and run install_execd. Follow the instructions above. For phylo everything was left at the default.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Don&amp;#039;t forget to add it to the queue&amp;#039;s host&lt;br /&gt;
&lt;br /&gt;
 qconf -mq interactive.q&lt;br /&gt;
&lt;br /&gt;
and add the hostname to the end of the list of hosts&lt;br /&gt;
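To confirm the host actually made it in without reopening the editor, the queue configuration can be dumped read-only. A quick check (queue name as used on this cluster; the grep is just illustrative):&lt;br /&gt;

```shell
# Dump the queue configuration non-interactively and confirm
# the new node now appears on the hostlist line(s).
qconf -sq interactive.q | grep -A2 hostlist
```

&lt;br /&gt;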
&lt;br /&gt;
= Administration =&lt;br /&gt;
&lt;br /&gt;
==Creating a new parallel environment==&lt;br /&gt;
&lt;br /&gt;
* Copy an existing parallel environment out to a file&lt;br /&gt;
* edit this file as you wish&lt;br /&gt;
* execute&lt;br /&gt;
&lt;br /&gt;
 qconf -Ap &amp;lt;my_pe_file&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A common oversight is forgetting that this new parallel environment needs to be inserted into the queue&amp;#039;s configuration.&lt;br /&gt;
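The bullet points above, plus the queue step, can be sketched end to end. The PE name &quot;mpi&quot;, the new name, and the slot count here are hypothetical examples, not names taken from this cluster:&lt;br /&gt;

```shell
# Dump an existing parallel environment to a file...
qconf -sp mpi > my_pe_file
# ...edit it as you wish (here: rename it and change slots)...
sed -i -e 's/^pe_name .*/pe_name  mpi_new/' \
       -e 's/^slots .*/slots    64/' my_pe_file
# ...then register the new PE:
qconf -Ap my_pe_file
# Finally add it to the queue's pe_list (opens an editor):
qconf -mq interactive.q
```

&lt;br /&gt;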
&lt;br /&gt;
==Queues and hostgroups ==&lt;br /&gt;
&lt;br /&gt;
The dohfq.sh script accepts a rootname and a list of numbers. The rootname becomes the @rootname hostgroup and rootname.q for the queue.&lt;br /&gt;
Node 0 is in fact marvin, 1 is node1, etc. These are the nodes to be included in the new queue.&lt;/div&gt;</summary>
		<author><name>Jw297</name></author>	</entry>

	<entry>
		<id>http://stab.st-andrews.ac.uk/wiki/index.php?title=Son_of_Gridengine&amp;diff=3360</id>
		<title>Son of Gridengine</title>
		<link rel="alternate" type="text/html" href="http://stab.st-andrews.ac.uk/wiki/index.php?title=Son_of_Gridengine&amp;diff=3360"/>
				<updated>2019-03-11T14:10:51Z</updated>
		
		<summary type="html">&lt;p&gt;Jw297: /* copying the default/common directory over to the node */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction =&lt;br /&gt;
&lt;br /&gt;
After Oracle bought Sun and then closed-sourced the Sun Grid Engine, Open Grid Engine made a release based on SGEv5 in 2012, which they called GE2011. However, there were no further releases. Then ARC at the University of Liverpool started releasing its &quot;Son of Gridengine&quot; and has been maintaining updates to it at least as far as March 2016.&lt;br /&gt;
&lt;br /&gt;
Until September 2016, the queue manager in the marvin cluster was GE2011, which was getting a bit old, so when the queue manager failed due to a corrupted database it was replaced with Son of Gridengine.&lt;br /&gt;
&lt;br /&gt;
= Steps =&lt;br /&gt;
&lt;br /&gt;
==Administrative host setup==&lt;br /&gt;
&lt;br /&gt;
All nodes must be set up as administrative hosts, despite the fact that only the master seems to be &quot;administrative&quot;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Getting the XML::Simple perl module ==&lt;br /&gt;
&lt;br /&gt;
The RPMForge Extra repository is needed for this. It can be installed via an RPM, and afterwards the Extra branch of the repo must be enabled, as it is not enabled by default.&lt;br /&gt;
&lt;br /&gt;
Note that it is best to disable this repo again after all the RPMs have been installed.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Install the Son of Gridware RPMs ==&lt;br /&gt;
&lt;br /&gt;
CentOS 7 requires the EPEL repo to be installed:&lt;br /&gt;
 yum install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm&lt;br /&gt;
&lt;br /&gt;
And the following packages:&lt;br /&gt;
 yum install jemalloc-3.6.0 lesstif-0.95.2 munge-libs-0.5.11 libdb4-utils-4.8.30&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 yum install -y gridengine-8.1.9-1.el6.x86_64.rpm gridengine-devel-8.1.9-1.el6.noarch.rpm gridengine-execd-8.1.9-1.el6.x86_64.rpm gridengine-qmaster-8.1.9-1.el6.x86_64.rpm gridengine-qmon-8.1.9-1.el6.x86_64.rpm&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
From https://arc.liv.ac.uk/downloads/SGE/releases/8.1.9/&lt;br /&gt;
&lt;br /&gt;
For a worker node this is rather excessive, but such is the nature of the binary-check stage of the &quot;install_execd&quot; script that all of these are necessary.&lt;br /&gt;
&lt;br /&gt;
== copying the default/common directory over to the node ==&lt;br /&gt;
&lt;br /&gt;
**NOTE: Below is from Ramon, but when JW set up Phylo, /opt/sge/default already existed with the same date and file sizes as on other nodes, so it may not be needed. DO CHECK PERMISSIONS. The user ought to be sgeadmin and the group gridware.**&lt;br /&gt;
&lt;br /&gt;
First the default directory must be created:&lt;br /&gt;
&lt;br /&gt;
 ssh nodeX &amp;#039;mkdir /opt/sge/default&amp;#039;&lt;br /&gt;
&lt;br /&gt;
And then followed by:&lt;br /&gt;
&lt;br /&gt;
 scp -r common node8:/opt/sge/default&lt;br /&gt;
&lt;br /&gt;
then run&lt;br /&gt;
 chown -R sgeadmin.gridware /opt/sge&lt;br /&gt;
&lt;br /&gt;
==edit /etc/bashrc==&lt;br /&gt;
&lt;br /&gt;
Add the following two lines to /etc/bashrc&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 SGE_ROOT=/opt/sge; export SGE_ROOT;&lt;br /&gt;
 PATH=/opt/sge/bin/lx-x64:$PATH&lt;br /&gt;
&lt;br /&gt;
Note: on the older centos6 nodes the path is /opt/sge/bin/linux-x64, but the newer centos7 node had it as /opt/sge/bin/lx-x64.&lt;br /&gt;
&lt;br /&gt;
==Add the new server to the admin host==&lt;br /&gt;
(following this: https://docs.oracle.com/cd/E19957-01/820-0697/i999062/index.html)&lt;br /&gt;
So for phylo, run this on marvin:&lt;br /&gt;
 qconf -ah phylo&lt;br /&gt;
&lt;br /&gt;
check it&amp;#039;s been added&lt;br /&gt;
 qconf -sh &lt;br /&gt;
should show it&amp;#039;s been added.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Move into /opt/sge/ and run install_execd. Follow the instructions above. For phylo everything was left at the default.&lt;br /&gt;
&lt;br /&gt;
= Administration =&lt;br /&gt;
&lt;br /&gt;
==Creating a new parallel environment==&lt;br /&gt;
&lt;br /&gt;
* Copy an existing parallel environment out to a file&lt;br /&gt;
* edit this file as you wish&lt;br /&gt;
* execute&lt;br /&gt;
&lt;br /&gt;
 qconf -Ap &amp;lt;my_pe_file&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A common oversight is forgetting that this new parallel environment needs to be inserted into the queue&amp;#039;s configuration.&lt;br /&gt;
&lt;br /&gt;
==Queues and hostgroups ==&lt;br /&gt;
&lt;br /&gt;
The dohfq.sh script accepts a rootname and a list of numbers. The rootname becomes the @rootname hostgroup and rootname.q for the queue.&lt;br /&gt;
Node 0 is in fact marvin, 1 is node1, etc. These are the nodes to be included in the new queue.&lt;/div&gt;</summary>
		<author><name>Jw297</name></author>	</entry>

	<entry>
		<id>http://stab.st-andrews.ac.uk/wiki/index.php?title=Son_of_Gridengine&amp;diff=3359</id>
		<title>Son of Gridengine</title>
		<link rel="alternate" type="text/html" href="http://stab.st-andrews.ac.uk/wiki/index.php?title=Son_of_Gridengine&amp;diff=3359"/>
				<updated>2019-03-11T14:10:33Z</updated>
		
		<summary type="html">&lt;p&gt;Jw297: /* chown -R sgeadmin.gridware sge */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction =&lt;br /&gt;
&lt;br /&gt;
After Oracle bought Sun and then closed-sourced the Sun Grid Engine, Open Grid Engine made a release based on SGEv5 in 2012, which they called GE2011. However, there were no further releases. Then ARC at the University of Liverpool started releasing its &quot;Son of Gridengine&quot; and has been maintaining updates to it at least as far as March 2016.&lt;br /&gt;
&lt;br /&gt;
Until September 2016, the queue manager in the marvin cluster was GE2011, which was getting a bit old, so when the queue manager failed due to a corrupted database it was replaced with Son of Gridengine.&lt;br /&gt;
&lt;br /&gt;
= Steps =&lt;br /&gt;
&lt;br /&gt;
==Administrative host setup==&lt;br /&gt;
&lt;br /&gt;
All nodes must be set up as administrative hosts, despite the fact that only the master seems to be &quot;administrative&quot;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Getting the XML::Simple perl module ==&lt;br /&gt;
&lt;br /&gt;
The RPMForge Extra repository is needed for this. It can be installed via an RPM, and afterwards the Extra branch of the repo must be enabled, as it is not enabled by default.&lt;br /&gt;
&lt;br /&gt;
Note that it is best to disable this repo again after all the RPMs have been installed.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Install the Son of Gridware RPMs ==&lt;br /&gt;
&lt;br /&gt;
CentOS 7 requires the EPEL repo to be installed:&lt;br /&gt;
 yum install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm&lt;br /&gt;
&lt;br /&gt;
And the following packages:&lt;br /&gt;
 yum install jemalloc-3.6.0 lesstif-0.95.2 munge-libs-0.5.11 libdb4-utils-4.8.30&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 yum install -y gridengine-8.1.9-1.el6.x86_64.rpm gridengine-devel-8.1.9-1.el6.noarch.rpm gridengine-execd-8.1.9-1.el6.x86_64.rpm gridengine-qmaster-8.1.9-1.el6.x86_64.rpm gridengine-qmon-8.1.9-1.el6.x86_64.rpm&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
From https://arc.liv.ac.uk/downloads/SGE/releases/8.1.9/&lt;br /&gt;
&lt;br /&gt;
For a worker node this is rather excessive, but such is the nature of the binary-check stage of the &quot;install_execd&quot; script that all of these are necessary.&lt;br /&gt;
&lt;br /&gt;
== copying the default/common directory over to the node ==&lt;br /&gt;
&lt;br /&gt;
NOTE: Below is from Ramon, but when JW set up Phylo, /opt/sge/default already existed with the same date and file sizes as on other nodes, so it may not be needed. DO CHECK PERMISSIONS. The user ought to be sgeadmin and the group gridware.&lt;br /&gt;
&lt;br /&gt;
First the default directory must be created:&lt;br /&gt;
&lt;br /&gt;
 ssh nodeX &amp;#039;mkdir /opt/sge/default&amp;#039;&lt;br /&gt;
&lt;br /&gt;
And then followed by:&lt;br /&gt;
&lt;br /&gt;
 scp -r common node8:/opt/sge/default&lt;br /&gt;
&lt;br /&gt;
then run&lt;br /&gt;
 chown -R sgeadmin.gridware /opt/sge&lt;br /&gt;
&lt;br /&gt;
==edit /etc/bashrc==&lt;br /&gt;
&lt;br /&gt;
Add the following two lines to /etc/bashrc&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 SGE_ROOT=/opt/sge; export SGE_ROOT;&lt;br /&gt;
 PATH=/opt/sge/bin/lx-x64:$PATH&lt;br /&gt;
&lt;br /&gt;
Note: on the older centos6 nodes the path is /opt/sge/bin/linux-x64, but the newer centos7 node had it as /opt/sge/bin/lx-x64.&lt;br /&gt;
&lt;br /&gt;
==Add the new server to the admin host==&lt;br /&gt;
(following this: https://docs.oracle.com/cd/E19957-01/820-0697/i999062/index.html)&lt;br /&gt;
So for phylo, run this on marvin:&lt;br /&gt;
 qconf -ah phylo&lt;br /&gt;
&lt;br /&gt;
check it&amp;#039;s been added&lt;br /&gt;
 qconf -sh &lt;br /&gt;
should show it&amp;#039;s been added.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Move into /opt/sge/ and run install_execd. Follow the instructions above. For phylo everything was left at the default.&lt;br /&gt;
&lt;br /&gt;
= Administration =&lt;br /&gt;
&lt;br /&gt;
==Creating a new parallel environment==&lt;br /&gt;
&lt;br /&gt;
* Copy an existing parallel environment out to a file&lt;br /&gt;
* edit this file as you wish&lt;br /&gt;
* execute&lt;br /&gt;
&lt;br /&gt;
 qconf -Ap &amp;lt;my_pe_file&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A common oversight is forgetting that this new parallel environment needs to be inserted into the queue&amp;#039;s configuration.&lt;br /&gt;
&lt;br /&gt;
==Queues and hostgroups ==&lt;br /&gt;
&lt;br /&gt;
The dohfq.sh script accepts a rootname and a list of numbers. The rootname becomes the @rootname hostgroup and rootname.q for the queue.&lt;br /&gt;
Node 0 is in fact marvin, 1 is node1, etc. These are the nodes to be included in the new queue.&lt;/div&gt;</summary>
		<author><name>Jw297</name></author>	</entry>

	<entry>
		<id>http://stab.st-andrews.ac.uk/wiki/index.php?title=Son_of_Gridengine&amp;diff=3358</id>
		<title>Son of Gridengine</title>
		<link rel="alternate" type="text/html" href="http://stab.st-andrews.ac.uk/wiki/index.php?title=Son_of_Gridengine&amp;diff=3358"/>
				<updated>2019-03-11T13:45:13Z</updated>
		
		<summary type="html">&lt;p&gt;Jw297: /* copying the default/common directory over to the node */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction =&lt;br /&gt;
&lt;br /&gt;
After Oracle bought Sun and then closed-sourced the Sun Grid Engine, Open Grid Engine made a release based on SGEv5 in 2012, which they called GE2011. However, there were no further releases. Then ARC at the University of Liverpool started releasing its &quot;Son of Gridengine&quot; and has been maintaining updates to it at least as far as March 2016.&lt;br /&gt;
&lt;br /&gt;
Until September 2016, the queue manager in the marvin cluster was GE2011, which was getting a bit old, so when the queue manager failed due to a corrupted database it was replaced with Son of Gridengine.&lt;br /&gt;
&lt;br /&gt;
= Steps =&lt;br /&gt;
&lt;br /&gt;
==Administrative host setup==&lt;br /&gt;
&lt;br /&gt;
All nodes must be set up as administrative hosts, despite the fact that only the master seems to be &quot;administrative&quot;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Getting the XML::Simple perl module ==&lt;br /&gt;
&lt;br /&gt;
The RPMForge Extra repository is needed for this. It can be installed via an RPM, and afterwards the Extra branch of the repo must be enabled, as it is not enabled by default.&lt;br /&gt;
&lt;br /&gt;
Note that it is best to disable this repo again after all the RPMs have been installed.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Install the Son of Gridware RPMs ==&lt;br /&gt;
&lt;br /&gt;
CentOS 7 requires the EPEL repo to be installed:&lt;br /&gt;
 yum install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm&lt;br /&gt;
&lt;br /&gt;
And the following packages:&lt;br /&gt;
 yum install jemalloc-3.6.0 lesstif-0.95.2 munge-libs-0.5.11 libdb4-utils-4.8.30&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 yum install -y gridengine-8.1.9-1.el6.x86_64.rpm gridengine-devel-8.1.9-1.el6.noarch.rpm gridengine-execd-8.1.9-1.el6.x86_64.rpm gridengine-qmaster-8.1.9-1.el6.x86_64.rpm gridengine-qmon-8.1.9-1.el6.x86_64.rpm&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
From https://arc.liv.ac.uk/downloads/SGE/releases/8.1.9/&lt;br /&gt;
&lt;br /&gt;
For a worker node this is rather excessive, but such is the nature of the binary-check stage of the &quot;install_execd&quot; script that all of these are necessary.&lt;br /&gt;
&lt;br /&gt;
== copying the default/common directory over to the node ==&lt;br /&gt;
&lt;br /&gt;
NOTE: Below is from Ramon, but when JW set up Phylo, /opt/sge/default already existed with the same date and file sizes as on other nodes, so it may not be needed. DO CHECK PERMISSIONS. The user ought to be sgeadmin and the group gridware.&lt;br /&gt;
&lt;br /&gt;
First the default directory must be created:&lt;br /&gt;
&lt;br /&gt;
 ssh nodeX &amp;#039;mkdir /opt/sge/default&amp;#039;&lt;br /&gt;
&lt;br /&gt;
And then followed by:&lt;br /&gt;
&lt;br /&gt;
 scp -r common node8:/opt/sge/default&lt;br /&gt;
&lt;br /&gt;
== chown -R sgeadmin.gridware sge ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==edit /etc/bashrc==&lt;br /&gt;
&lt;br /&gt;
Add the following two lines to /etc/bashrc&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 SGE_ROOT=/opt/sge; export SGE_ROOT;&lt;br /&gt;
 PATH=/opt/sge/bin/lx-x64:$PATH&lt;br /&gt;
&lt;br /&gt;
Note: on the older centos6 nodes the path is /opt/sge/bin/linux-x64, but the newer centos7 node had it as /opt/sge/bin/lx-x64.&lt;br /&gt;
&lt;br /&gt;
==Add the new server to the admin host==&lt;br /&gt;
(following this: https://docs.oracle.com/cd/E19957-01/820-0697/i999062/index.html)&lt;br /&gt;
So for phylo, run this on marvin:&lt;br /&gt;
 qconf -ah phylo&lt;br /&gt;
&lt;br /&gt;
check it&amp;#039;s been added&lt;br /&gt;
 qconf -sh &lt;br /&gt;
should show it&amp;#039;s been added.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Move into /opt/sge/ and run install_execd. Follow the instructions above. For phylo everything was left at the default.&lt;br /&gt;
&lt;br /&gt;
= Administration =&lt;br /&gt;
&lt;br /&gt;
==Creating a new parallel environment==&lt;br /&gt;
&lt;br /&gt;
* Copy an existing parallel environment out to a file&lt;br /&gt;
* edit this file as you wish&lt;br /&gt;
* execute&lt;br /&gt;
&lt;br /&gt;
 qconf -Ap &amp;lt;my_pe_file&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A common oversight is forgetting that this new parallel environment needs to be inserted into the queue&amp;#039;s configuration.&lt;br /&gt;
&lt;br /&gt;
==Queues and hostgroups ==&lt;br /&gt;
&lt;br /&gt;
The dohfq.sh script accepts a rootname and a list of numbers. The rootname becomes the @rootname hostgroup and rootname.q for the queue.&lt;br /&gt;
Node 0 is in fact marvin, 1 is node1, etc. These are the nodes to be included in the new queue.&lt;/div&gt;</summary>
		<author><name>Jw297</name></author>	</entry>

	<entry>
		<id>http://stab.st-andrews.ac.uk/wiki/index.php?title=Son_of_Gridengine&amp;diff=3357</id>
		<title>Son of Gridengine</title>
		<link rel="alternate" type="text/html" href="http://stab.st-andrews.ac.uk/wiki/index.php?title=Son_of_Gridengine&amp;diff=3357"/>
				<updated>2019-03-11T11:05:42Z</updated>
		
		<summary type="html">&lt;p&gt;Jw297: /* Add the new server to the admin host */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction =&lt;br /&gt;
&lt;br /&gt;
After Oracle bought Sun and then closed-sourced the Sun Grid Engine, Open Grid Engine made a release based on SGEv5 in 2012, which they called GE2011. However, there were no further releases. Then ARC at the University of Liverpool started releasing its &quot;Son of Gridengine&quot; and has been maintaining updates to it at least as far as March 2016.&lt;br /&gt;
&lt;br /&gt;
Until September 2016, the queue manager in the marvin cluster was GE2011, which was getting a bit old, so when the queue manager failed due to a corrupted database it was replaced with Son of Gridengine.&lt;br /&gt;
&lt;br /&gt;
= Steps =&lt;br /&gt;
&lt;br /&gt;
==Administrative host setup==&lt;br /&gt;
&lt;br /&gt;
All nodes must be set up as administrative hosts, despite the fact that only the master seems to be &quot;administrative&quot;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Getting the XML::Simple perl module ==&lt;br /&gt;
&lt;br /&gt;
The RPMForge Extra repository is needed for this. It can be installed via an RPM, and afterwards the Extra branch of the repo must be enabled, as it is not enabled by default.&lt;br /&gt;
&lt;br /&gt;
Note that it is best to disable this repo again after all the RPMs have been installed.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Install the Son of Gridware RPMs ==&lt;br /&gt;
&lt;br /&gt;
CentOS 7 requires the EPEL repo to be installed:&lt;br /&gt;
 yum install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm&lt;br /&gt;
&lt;br /&gt;
And the following packages:&lt;br /&gt;
 yum install jemalloc-3.6.0 lesstif-0.95.2 munge-libs-0.5.11 libdb4-utils-4.8.30&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 yum install -y gridengine-8.1.9-1.el6.x86_64.rpm gridengine-devel-8.1.9-1.el6.noarch.rpm gridengine-execd-8.1.9-1.el6.x86_64.rpm gridengine-qmaster-8.1.9-1.el6.x86_64.rpm gridengine-qmon-8.1.9-1.el6.x86_64.rpm&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
From https://arc.liv.ac.uk/downloads/SGE/releases/8.1.9/&lt;br /&gt;
&lt;br /&gt;
For a worker node this is rather excessive, but such is the nature of the binary-check stage of the &quot;install_execd&quot; script that all of these are necessary.&lt;br /&gt;
&lt;br /&gt;
== copying the default/common directory over to the node ==&lt;br /&gt;
&lt;br /&gt;
NOTE: Below is from Ramon, but when JW set up Phylo, /opt/sge/default already existed with the same date and file sizes as on other nodes, so it may not be needed. &lt;br /&gt;
First the default directory must be created:&lt;br /&gt;
&lt;br /&gt;
 ssh nodeX &amp;#039;mkdir /opt/sge/default&amp;#039;&lt;br /&gt;
&lt;br /&gt;
And then followed by:&lt;br /&gt;
&lt;br /&gt;
 scp -r common node8:/opt/sge/default&lt;br /&gt;
&lt;br /&gt;
== chown -R sgeadmin.gridware sge ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==edit /etc/bashrc==&lt;br /&gt;
&lt;br /&gt;
Add the following two lines to /etc/bashrc&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 SGE_ROOT=/opt/sge; export SGE_ROOT;&lt;br /&gt;
 PATH=/opt/sge/bin/lx-x64:$PATH&lt;br /&gt;
&lt;br /&gt;
Note: on the older centos6 nodes the path is /opt/sge/bin/linux-x64, but the newer centos7 node had it as /opt/sge/bin/lx-x64.&lt;br /&gt;
&lt;br /&gt;
==Add the new server to the admin host==&lt;br /&gt;
(following this: https://docs.oracle.com/cd/E19957-01/820-0697/i999062/index.html)&lt;br /&gt;
So for phylo, run this on marvin:&lt;br /&gt;
 qconf -ah phylo&lt;br /&gt;
&lt;br /&gt;
check it&amp;#039;s been added&lt;br /&gt;
 qconf -sh &lt;br /&gt;
should show it&amp;#039;s been added.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Move into /opt/sge/ and run install_execd. Follow the instructions above. For phylo everything was left at the default.&lt;br /&gt;
&lt;br /&gt;
= Administration =&lt;br /&gt;
&lt;br /&gt;
==Creating a new parallel environment==&lt;br /&gt;
&lt;br /&gt;
* Copy an existing parallel environment out to a file&lt;br /&gt;
* edit this file as you wish&lt;br /&gt;
* execute&lt;br /&gt;
&lt;br /&gt;
 qconf -Ap &amp;lt;my_pe_file&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A common oversight is forgetting that this new parallel environment needs to be inserted into the queue&amp;#039;s configuration.&lt;br /&gt;
&lt;br /&gt;
==Queues and hostgroups ==&lt;br /&gt;
&lt;br /&gt;
The dohfq.sh script accepts a rootname and a list of numbers. The rootname becomes the @rootname hostgroup and rootname.q for the queue.&lt;br /&gt;
Node 0 is in fact marvin, 1 is node1, etc. These are the nodes to be included in the new queue.&lt;/div&gt;</summary>
		<author><name>Jw297</name></author>	</entry>

	<entry>
		<id>http://stab.st-andrews.ac.uk/wiki/index.php?title=Son_of_Gridengine&amp;diff=3356</id>
		<title>Son of Gridengine</title>
		<link rel="alternate" type="text/html" href="http://stab.st-andrews.ac.uk/wiki/index.php?title=Son_of_Gridengine&amp;diff=3356"/>
				<updated>2019-03-11T10:58:14Z</updated>
		
		<summary type="html">&lt;p&gt;Jw297: /* Add the new server to the admin host */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction =&lt;br /&gt;
&lt;br /&gt;
After Oracle bought Sun and then closed-sourced the Sun Grid Engine, Open Grid Engine made a release based on SGEv5 in 2012, which they called GE2011. However, there were no further releases. Then ARC at the University of Liverpool started releasing its &quot;Son of Gridengine&quot; and has been maintaining updates to it at least as far as March 2016.&lt;br /&gt;
&lt;br /&gt;
Until September 2016, the queue manager in the marvin cluster was GE2011, which was getting a bit old, so when the queue manager failed due to a corrupted database it was replaced with Son of Gridengine.&lt;br /&gt;
&lt;br /&gt;
= Steps =&lt;br /&gt;
&lt;br /&gt;
==Administrative host setup==&lt;br /&gt;
&lt;br /&gt;
All nodes must be set up as administrative hosts, despite the fact that only the master seems to be &quot;administrative&quot;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Getting the XML::Simple perl module ==&lt;br /&gt;
&lt;br /&gt;
The RPMForge Extra repository is needed for this. It can be installed via an RPM, and afterwards the Extra branch of the repo must be enabled, as it is not enabled by default.&lt;br /&gt;
&lt;br /&gt;
Note that it is best to disable this repo again after all the RPMs have been installed.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Install the Son of Gridware RPMs ==&lt;br /&gt;
&lt;br /&gt;
CentOS 7 requires the EPEL repo to be installed:&lt;br /&gt;
 yum install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm&lt;br /&gt;
&lt;br /&gt;
And the following packages:&lt;br /&gt;
 yum install jemalloc-3.6.0 lesstif-0.95.2 munge-libs-0.5.11 libdb4-utils-4.8.30&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 yum install -y gridengine-8.1.9-1.el6.x86_64.rpm gridengine-devel-8.1.9-1.el6.noarch.rpm gridengine-execd-8.1.9-1.el6.x86_64.rpm gridengine-qmaster-8.1.9-1.el6.x86_64.rpm gridengine-qmon-8.1.9-1.el6.x86_64.rpm&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
From https://arc.liv.ac.uk/downloads/SGE/releases/8.1.9/&lt;br /&gt;
&lt;br /&gt;
For a worker node this is rather excessive, but such is the nature of the binary-check stage of the &quot;install_execd&quot; script that all of these are necessary.&lt;br /&gt;
&lt;br /&gt;
== copying the default/common directory over to the node ==&lt;br /&gt;
&lt;br /&gt;
NOTE: Below is from Ramon, but when JW set up Phylo, /opt/sge/default already existed with the same date and file sizes as on other nodes, so it may not be needed. &lt;br /&gt;
First the default directory must be created:&lt;br /&gt;
&lt;br /&gt;
 ssh nodeX &amp;#039;mkdir /opt/sge/default&amp;#039;&lt;br /&gt;
&lt;br /&gt;
And then followed by:&lt;br /&gt;
&lt;br /&gt;
 scp -r common node8:/opt/sge/default&lt;br /&gt;
&lt;br /&gt;
== chown -R sgeadmin.gridware sge ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==edit /etc/bashrc==&lt;br /&gt;
&lt;br /&gt;
Add the following two lines to /etc/bashrc&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 SGE_ROOT=/opt/sge; export SGE_ROOT;&lt;br /&gt;
 PATH=/opt/sge/bin/lx-x64:$PATH&lt;br /&gt;
&lt;br /&gt;
Note: on the older centos6 nodes the path is /opt/sge/bin/linux-x64, but the newer centos7 node had it as /opt/sge/bin/lx-x64.&lt;br /&gt;
&lt;br /&gt;
==Add the new server to the admin host==&lt;br /&gt;
(following this: https://docs.oracle.com/cd/E19957-01/820-0697/i999062/index.html)&lt;br /&gt;
So for phylo, run this on marvin:&lt;br /&gt;
 qconf -ah phylo&lt;br /&gt;
&lt;br /&gt;
check it&amp;#039;s been added&lt;br /&gt;
 qconf -sh &lt;br /&gt;
should show it&amp;#039;s been added.&lt;br /&gt;
&lt;br /&gt;
= Administration =&lt;br /&gt;
&lt;br /&gt;
==Creating a new parallel environment==&lt;br /&gt;
&lt;br /&gt;
* Copy an existing parallel environment out to a file&lt;br /&gt;
* edit this file as you wish&lt;br /&gt;
* execute&lt;br /&gt;
&lt;br /&gt;
 qconf -Ap &amp;lt;my_pe_file&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A common oversight is forgetting that this new parallel environment needs to be inserted into the queue&amp;#039;s configuration.&lt;br /&gt;
&lt;br /&gt;
==Queues and hostgroups ==&lt;br /&gt;
&lt;br /&gt;
The dohfq.sh script accepts a rootname and a list of numbers. The rootname becomes the @rootname hostgroup and rootname.q for the queue.&lt;br /&gt;
Node 0 is in fact marvin, 1 is node1, etc. These are the nodes to be included in the new queue.&lt;/div&gt;</summary>
		<author><name>Jw297</name></author>	</entry>

	<entry>
		<id>http://stab.st-andrews.ac.uk/wiki/index.php?title=Son_of_Gridengine&amp;diff=3355</id>
		<title>Son of Gridengine</title>
		<link rel="alternate" type="text/html" href="http://stab.st-andrews.ac.uk/wiki/index.php?title=Son_of_Gridengine&amp;diff=3355"/>
				<updated>2019-03-11T10:55:21Z</updated>
		
		<summary type="html">&lt;p&gt;Jw297: /* edit /etc/bashrc */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction =&lt;br /&gt;
&lt;br /&gt;
After Oracle bought Sun and then closed-sourced the Sun Grid Engine, Open Grid Engine made a release based on SGEv5 in 2012, which they called GE2011. However, there were no further releases. Then ARC at the University of Liverpool started releasing its &quot;Son of Gridengine&quot; and has been maintaining updates to it at least as far as March 2016.&lt;br /&gt;
&lt;br /&gt;
Until September 2016, the queue manager in the marvin cluster was GE2011, which was getting a bit old, so when the queue manager failed due to a corrupted database it was replaced with Son of Gridengine.&lt;br /&gt;
&lt;br /&gt;
= Steps =&lt;br /&gt;
&lt;br /&gt;
==Administrative host setup==&lt;br /&gt;
&lt;br /&gt;
All nodes must be set up as administrative hosts, despite the fact that only the master seems to be &quot;administrative&quot;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Getting the XML::Simple perl module ==&lt;br /&gt;
&lt;br /&gt;
The RPMForge Extra repository is needed for this. It can be installed via an RPM, and afterwards the Extra branch of the repo must be enabled, as it is not enabled by default.&lt;br /&gt;
&lt;br /&gt;
Note that it is best to disable this repo again after all the RPMs have been installed.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Install the Son of Gridware RPMs ==&lt;br /&gt;
&lt;br /&gt;
CentOS 7 requires the EPEL repo to be installed:&lt;br /&gt;
 yum install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm&lt;br /&gt;
&lt;br /&gt;
And the following packages:&lt;br /&gt;
 yum install jemalloc-3.6.0 lesstif-0.95.2 munge-libs-0.5.11 libdb4-utils-4.8.30&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 yum install -y gridengine-8.1.9-1.el6.x86_64.rpm gridengine-devel-8.1.9-1.el6.noarch.rpm gridengine-execd-8.1.9-1.el6.x86_64.rpm gridengine-qmaster-8.1.9-1.el6.x86_64.rpm gridengine-qmon-8.1.9-1.el6.x86_64.rpm&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
From https://arc.liv.ac.uk/downloads/SGE/releases/8.1.9/&lt;br /&gt;
&lt;br /&gt;
For a worker node this is rather excessive, but such is the nature of the binary-check stage of the &quot;install_execd&quot; script that all of these are necessary.&lt;br /&gt;
&lt;br /&gt;
== copying the default/common directory over to the node ==&lt;br /&gt;
&lt;br /&gt;
NOTE: Below is from Ramon, but when JW set up Phylo, /opt/sge/default already existed with the same date and file sizes as on other nodes, so it may not be needed. &lt;br /&gt;
First the default directory must be created:&lt;br /&gt;
&lt;br /&gt;
 ssh nodeX &amp;#039;mkdir /opt/sge/default&amp;#039;&lt;br /&gt;
&lt;br /&gt;
And then followed by:&lt;br /&gt;
&lt;br /&gt;
 scp -r common node8:/opt/sge/default&lt;br /&gt;
&lt;br /&gt;
== chown -R sgeadmin.gridware sge ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==edit /etc/bashrc==&lt;br /&gt;
&lt;br /&gt;
Add the following two lines to /etc/bashrc&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 SGE_ROOT=/opt/sge; export SGE_ROOT;&lt;br /&gt;
 PATH=/opt/sge/bin/lx-x64:$PATH&lt;br /&gt;
&lt;br /&gt;
Note: on the older centos6 nodes the path is /opt/sge/bin/linux-x64, but the newer centos7 node had it as /opt/sge/bin/lx-x64.&lt;br /&gt;
&lt;br /&gt;
==Add the new server to the admin host==&lt;br /&gt;
So for phylo, run this on marvin:&lt;br /&gt;
 qconf -ah phylo&lt;br /&gt;
&lt;br /&gt;
check it&amp;#039;s been added&lt;br /&gt;
 qconf -sh &lt;br /&gt;
should show it&amp;#039;s been added.&lt;br /&gt;
&lt;br /&gt;
= Administration =&lt;br /&gt;
&lt;br /&gt;
==Creating a new parallel environment==&lt;br /&gt;
&lt;br /&gt;
* Copy an existing parallel environment out to a file&lt;br /&gt;
* edit this file as you wish&lt;br /&gt;
* execute&lt;br /&gt;
&lt;br /&gt;
 qconf -Ap &amp;lt;my_pe_file&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A common oversight is forgetting that this new parallel environment needs to be inserted into the queue&amp;#039;s configuration.&lt;br /&gt;
&lt;br /&gt;
==Queues and hostgroups ==&lt;br /&gt;
&lt;br /&gt;
The dohfq.sh script accepts a rootname and a list of numbers. The rootname becomes the @rootname hostgroup and rootname.q for the queue.&lt;br /&gt;
Node 0 is in fact marvin; 1 is node1, etc. These are the nodes to be included in the new queue.&lt;/div&gt;</summary>
		<author><name>Jw297</name></author>	</entry>

	<entry>
		<id>http://stab.st-andrews.ac.uk/wiki/index.php?title=Son_of_Gridengine&amp;diff=3354</id>
		<title>Son of Gridengine</title>
		<link rel="alternate" type="text/html" href="http://stab.st-andrews.ac.uk/wiki/index.php?title=Son_of_Gridengine&amp;diff=3354"/>
				<updated>2019-03-11T10:44:45Z</updated>
		
		<summary type="html">&lt;p&gt;Jw297: /* copying the default/common directory over to the node */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction =&lt;br /&gt;
&lt;br /&gt;
After Oracle bought Sun and then closed-sourced the Sun Grid Engine, Open Grid Engine made a release based on SGEv5 in 2012, which they called GE2011. However, there were no further releases. Then ARC at the University of Liverpool started releasing its &amp;quot;Son of Gridengine&amp;quot; and has been maintaining updates to it at least as far as March 2016.&lt;br /&gt;
&lt;br /&gt;
Until September 2016, the queue manager in the marvin cluster was GE2011, which was getting a bit old, so when the queue manager failed due to a corrupted database it was replaced with Son of Gridengine.&lt;br /&gt;
&lt;br /&gt;
= Steps =&lt;br /&gt;
&lt;br /&gt;
==Administrative host setup==&lt;br /&gt;
&lt;br /&gt;
All nodes must be set up as administrative hosts, despite the fact that only the master seems to be &amp;quot;administrative&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Getting the XML::Simple perl module ==&lt;br /&gt;
&lt;br /&gt;
The RPMForge Extra repository is needed for this. It can be installed via an RPM, and afterwards the Extra branch of the repo must be enabled, as it is not enabled by default.&lt;br /&gt;
&lt;br /&gt;
Note that it is best to disable this repo again after all the RPMs have been installed.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Install the Son of Gridware RPMs ==&lt;br /&gt;
&lt;br /&gt;
CentOS 7 requires the EPEL repo to be installed:&lt;br /&gt;
 yum install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm&lt;br /&gt;
&lt;br /&gt;
And the following packages:&lt;br /&gt;
 yum install jemalloc-3.6.0 lesstif-0.95.2 munge-libs-0.5.11 libdb4-utils-4.8.30&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 yum install -y gridengine-8.1.9-1.el6.x86_64.rpm gridengine-devel-8.1.9-1.el6.noarch.rpm gridengine-execd-8.1.9-1.el6.x86_64.rpm gridengine-qmaster-8.1.9-1.el6.x86_64.rpm gridengine-qmon-8.1.9-1.el6.x86_64.rpm&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
From https://arc.liv.ac.uk/downloads/SGE/releases/8.1.9/&lt;br /&gt;
&lt;br /&gt;
For a worker node this is rather excessive, but the binary-check stage of the &amp;quot;install_exec&amp;quot; script requires all of these packages.&lt;br /&gt;
&lt;br /&gt;
== copying the default/common directory over to the node ==&lt;br /&gt;
&lt;br /&gt;
NOTE: Below is from Ramon, but when JW set up Phylo, /opt/sge/default already existed, with the same date and file sizes as on the other nodes, so this step may not be needed. &lt;br /&gt;
First the default directory must be created:&lt;br /&gt;
&lt;br /&gt;
 ssh nodeX &amp;#039;mkdir /opt/sge/default&amp;#039;&lt;br /&gt;
&lt;br /&gt;
And then followed by:&lt;br /&gt;
&lt;br /&gt;
 scp -r common node8:/opt/sge/default&lt;br /&gt;
&lt;br /&gt;
== chown -R sgeadmin.gridware sge ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==edit /etc/bashrc==&lt;br /&gt;
&lt;br /&gt;
Add the following two lines to /etc/bashrc&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 SGE_ROOT=/opt/sge; export SGE_ROOT;&lt;br /&gt;
 PATH=/opt/sge/bin/lx-x64:$PATH&lt;br /&gt;
&lt;br /&gt;
Note: on the older CentOS 6 nodes the path is /opt/sge/bin/linux-x64, but the newer CentOS 7 node had it as /opt/sge/bin/lx-x64.&lt;br /&gt;
&lt;br /&gt;
= Administration =&lt;br /&gt;
&lt;br /&gt;
==Creating a new parallel environment==&lt;br /&gt;
&lt;br /&gt;
* Copy an existing parallel environment out to a file&lt;br /&gt;
* Edit this file as you wish&lt;br /&gt;
* Execute:&lt;br /&gt;
&lt;br /&gt;
 qconf -Ap &amp;lt;my_pe_file&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A common oversight is forgetting that this new parallel environment must also be added to the queue&amp;#039;s configuration.&lt;br /&gt;
&lt;br /&gt;
==Queues and hostgroups ==&lt;br /&gt;
&lt;br /&gt;
The dohfq.sh script accepts a rootname and a list of numbers. The rootname becomes the @rootname hostgroup and rootname.q for the queue.&lt;br /&gt;
Node 0 is in fact marvin; 1 is node1, etc. These are the nodes to be included in the new queue.&lt;/div&gt;</summary>
		<author><name>Jw297</name></author>	</entry>

	<entry>
		<id>http://stab.st-andrews.ac.uk/wiki/index.php?title=Son_of_Gridengine&amp;diff=3353</id>
		<title>Son of Gridengine</title>
		<link rel="alternate" type="text/html" href="http://stab.st-andrews.ac.uk/wiki/index.php?title=Son_of_Gridengine&amp;diff=3353"/>
				<updated>2019-03-11T10:43:49Z</updated>
		
		<summary type="html">&lt;p&gt;Jw297: /* copying the default/common directory over to the node */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction =&lt;br /&gt;
&lt;br /&gt;
After Oracle bought Sun and then closed-sourced the Sun Grid Engine, Open Grid Engine made a release based on SGEv5 in 2012, which they called GE2011. However, there were no further releases. Then ARC at the University of Liverpool started releasing its &amp;quot;Son of Gridengine&amp;quot; and has been maintaining updates to it at least as far as March 2016.&lt;br /&gt;
&lt;br /&gt;
Until September 2016, the queue manager in the marvin cluster was GE2011, which was getting a bit old, so when the queue manager failed due to a corrupted database it was replaced with Son of Gridengine.&lt;br /&gt;
&lt;br /&gt;
= Steps =&lt;br /&gt;
&lt;br /&gt;
==Administrative host setup==&lt;br /&gt;
&lt;br /&gt;
All nodes must be set up as administrative hosts, despite the fact that only the master seems to be &amp;quot;administrative&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Getting the XML::Simple perl module ==&lt;br /&gt;
&lt;br /&gt;
The RPMForge Extra repository is needed for this. It can be installed via an RPM, and afterwards the Extra branch of the repo must be enabled, as it is not enabled by default.&lt;br /&gt;
&lt;br /&gt;
Note that it is best to disable this repo again after all the RPMs have been installed.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Install the Son of Gridware RPMs ==&lt;br /&gt;
&lt;br /&gt;
CentOS 7 requires the EPEL repo to be installed:&lt;br /&gt;
 yum install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm&lt;br /&gt;
&lt;br /&gt;
And the following packages:&lt;br /&gt;
 yum install jemalloc-3.6.0 lesstif-0.95.2 munge-libs-0.5.11 libdb4-utils-4.8.30&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 yum install -y gridengine-8.1.9-1.el6.x86_64.rpm gridengine-devel-8.1.9-1.el6.noarch.rpm gridengine-execd-8.1.9-1.el6.x86_64.rpm gridengine-qmaster-8.1.9-1.el6.x86_64.rpm gridengine-qmon-8.1.9-1.el6.x86_64.rpm&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
From https://arc.liv.ac.uk/downloads/SGE/releases/8.1.9/&lt;br /&gt;
&lt;br /&gt;
For a worker node this is rather excessive, but the binary-check stage of the &amp;quot;install_exec&amp;quot; script requires all of these packages.&lt;br /&gt;
&lt;br /&gt;
== copying the default/common directory over to the node ==&lt;br /&gt;
&lt;br /&gt;
NOTE: Below is from Ramon, but when JW set up Phylo, /opt/sge/default already existed. &lt;br /&gt;
First the default directory must be created:&lt;br /&gt;
&lt;br /&gt;
 ssh nodeX &amp;#039;mkdir /opt/sge/default&amp;#039;&lt;br /&gt;
&lt;br /&gt;
And then followed by:&lt;br /&gt;
&lt;br /&gt;
 scp -r common node8:/opt/sge/default&lt;br /&gt;
&lt;br /&gt;
== chown -R sgeadmin.gridware sge ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==edit /etc/bashrc==&lt;br /&gt;
&lt;br /&gt;
Add the following two lines to /etc/bashrc&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 SGE_ROOT=/opt/sge; export SGE_ROOT;&lt;br /&gt;
 PATH=/opt/sge/bin/lx-x64:$PATH&lt;br /&gt;
&lt;br /&gt;
Note: on the older CentOS 6 nodes the path is /opt/sge/bin/linux-x64, but the newer CentOS 7 node had it as /opt/sge/bin/lx-x64.&lt;br /&gt;
&lt;br /&gt;
= Administration =&lt;br /&gt;
&lt;br /&gt;
==Creating a new parallel environment==&lt;br /&gt;
&lt;br /&gt;
* Copy an existing parallel environment out to a file&lt;br /&gt;
* Edit this file as you wish&lt;br /&gt;
* Execute:&lt;br /&gt;
&lt;br /&gt;
 qconf -Ap &amp;lt;my_pe_file&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A common oversight is forgetting that this new parallel environment must also be added to the queue&amp;#039;s configuration.&lt;br /&gt;
&lt;br /&gt;
==Queues and hostgroups ==&lt;br /&gt;
&lt;br /&gt;
The dohfq.sh script accepts a rootname and a list of numbers. The rootname becomes the @rootname hostgroup and rootname.q for the queue.&lt;br /&gt;
Node 0 is in fact marvin; 1 is node1, etc. These are the nodes to be included in the new queue.&lt;/div&gt;</summary>
		<author><name>Jw297</name></author>	</entry>

	<entry>
		<id>http://stab.st-andrews.ac.uk/wiki/index.php?title=Son_of_Gridengine&amp;diff=3352</id>
		<title>Son of Gridengine</title>
		<link rel="alternate" type="text/html" href="http://stab.st-andrews.ac.uk/wiki/index.php?title=Son_of_Gridengine&amp;diff=3352"/>
				<updated>2019-03-11T10:39:26Z</updated>
		
		<summary type="html">&lt;p&gt;Jw297: /* edit /etc/bashrc */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction =&lt;br /&gt;
&lt;br /&gt;
After Oracle bought Sun and then closed-sourced the Sun Grid Engine, Open Grid Engine made a release based on SGEv5 in 2012, which they called GE2011. However, there were no further releases. Then ARC at the University of Liverpool started releasing its &amp;quot;Son of Gridengine&amp;quot; and has been maintaining updates to it at least as far as March 2016.&lt;br /&gt;
&lt;br /&gt;
Until September 2016, the queue manager in the marvin cluster was GE2011, which was getting a bit old, so when the queue manager failed due to a corrupted database it was replaced with Son of Gridengine.&lt;br /&gt;
&lt;br /&gt;
= Steps =&lt;br /&gt;
&lt;br /&gt;
==Administrative host setup==&lt;br /&gt;
&lt;br /&gt;
All nodes must be set up as administrative hosts, despite the fact that only the master seems to be &amp;quot;administrative&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Getting the XML::Simple perl module ==&lt;br /&gt;
&lt;br /&gt;
The RPMForge Extra repository is needed for this. It can be installed via an RPM, and afterwards the Extra branch of the repo must be enabled, as it is not enabled by default.&lt;br /&gt;
&lt;br /&gt;
Note that it is best to disable this repo again after all the RPMs have been installed.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Install the Son of Gridware RPMs ==&lt;br /&gt;
&lt;br /&gt;
CentOS 7 requires the EPEL repo to be installed:&lt;br /&gt;
 yum install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm&lt;br /&gt;
&lt;br /&gt;
And the following packages:&lt;br /&gt;
 yum install jemalloc-3.6.0 lesstif-0.95.2 munge-libs-0.5.11 libdb4-utils-4.8.30&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 yum install -y gridengine-8.1.9-1.el6.x86_64.rpm gridengine-devel-8.1.9-1.el6.noarch.rpm gridengine-execd-8.1.9-1.el6.x86_64.rpm gridengine-qmaster-8.1.9-1.el6.x86_64.rpm gridengine-qmon-8.1.9-1.el6.x86_64.rpm&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
From https://arc.liv.ac.uk/downloads/SGE/releases/8.1.9/&lt;br /&gt;
&lt;br /&gt;
For a worker node this is rather excessive, but the binary-check stage of the &amp;quot;install_exec&amp;quot; script requires all of these packages.&lt;br /&gt;
&lt;br /&gt;
== copying the default/common directory over to the node ==&lt;br /&gt;
&lt;br /&gt;
First the default directory must be created:&lt;br /&gt;
&lt;br /&gt;
 ssh nodeX &amp;#039;mkdir /opt/sge/default&amp;#039;&lt;br /&gt;
&lt;br /&gt;
And then followed by:&lt;br /&gt;
&lt;br /&gt;
 scp -r common node8:/opt/sge/default&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== chown -R sgeadmin.gridware sge ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==edit /etc/bashrc==&lt;br /&gt;
&lt;br /&gt;
Add the following two lines to /etc/bashrc&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 SGE_ROOT=/opt/sge; export SGE_ROOT;&lt;br /&gt;
 PATH=/opt/sge/bin/lx-x64:$PATH&lt;br /&gt;
&lt;br /&gt;
Note: on the older CentOS 6 nodes the path is /opt/sge/bin/linux-x64, but the newer CentOS 7 node had it as /opt/sge/bin/lx-x64.&lt;br /&gt;
&lt;br /&gt;
= Administration =&lt;br /&gt;
&lt;br /&gt;
==Creating a new parallel environment==&lt;br /&gt;
&lt;br /&gt;
* Copy an existing parallel environment out to a file&lt;br /&gt;
* Edit this file as you wish&lt;br /&gt;
* Execute:&lt;br /&gt;
&lt;br /&gt;
 qconf -Ap &amp;lt;my_pe_file&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A common oversight is forgetting that this new parallel environment must also be added to the queue&amp;#039;s configuration.&lt;br /&gt;
&lt;br /&gt;
==Queues and hostgroups ==&lt;br /&gt;
&lt;br /&gt;
The dohfq.sh script accepts a rootname and a list of numbers. The rootname becomes the @rootname hostgroup and rootname.q for the queue.&lt;br /&gt;
Node 0 is in fact marvin; 1 is node1, etc. These are the nodes to be included in the new queue.&lt;/div&gt;</summary>
		<author><name>Jw297</name></author>	</entry>

	<entry>
		<id>http://stab.st-andrews.ac.uk/wiki/index.php?title=Son_of_Gridengine&amp;diff=3351</id>
		<title>Son of Gridengine</title>
		<link rel="alternate" type="text/html" href="http://stab.st-andrews.ac.uk/wiki/index.php?title=Son_of_Gridengine&amp;diff=3351"/>
				<updated>2019-03-11T10:36:04Z</updated>
		
		<summary type="html">&lt;p&gt;Jw297: /* chown -R sgeadmin.gridware sge */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction =&lt;br /&gt;
&lt;br /&gt;
After Oracle bought Sun and then closed-sourced the Sun Grid Engine, Open Grid Engine made a release based on SGEv5 in 2012, which they called GE2011. However, there were no further releases. Then ARC at the University of Liverpool started releasing its &amp;quot;Son of Gridengine&amp;quot; and has been maintaining updates to it at least as far as March 2016.&lt;br /&gt;
&lt;br /&gt;
Until September 2016, the queue manager in the marvin cluster was GE2011, which was getting a bit old, so when the queue manager failed due to a corrupted database it was replaced with Son of Gridengine.&lt;br /&gt;
&lt;br /&gt;
= Steps =&lt;br /&gt;
&lt;br /&gt;
==Administrative host setup==&lt;br /&gt;
&lt;br /&gt;
All nodes must be set up as administrative hosts, despite the fact that only the master seems to be &amp;quot;administrative&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Getting the XML::Simple perl module ==&lt;br /&gt;
&lt;br /&gt;
The RPMForge Extra repository is needed for this. It can be installed via an RPM, and afterwards the Extra branch of the repo must be enabled, as it is not enabled by default.&lt;br /&gt;
&lt;br /&gt;
Note that it is best to disable this repo again after all the RPMs have been installed.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Install the Son of Gridware RPMs ==&lt;br /&gt;
&lt;br /&gt;
CentOS 7 requires the EPEL repo to be installed:&lt;br /&gt;
 yum install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm&lt;br /&gt;
&lt;br /&gt;
And the following packages:&lt;br /&gt;
 yum install jemalloc-3.6.0 lesstif-0.95.2 munge-libs-0.5.11 libdb4-utils-4.8.30&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 yum install -y gridengine-8.1.9-1.el6.x86_64.rpm gridengine-devel-8.1.9-1.el6.noarch.rpm gridengine-execd-8.1.9-1.el6.x86_64.rpm gridengine-qmaster-8.1.9-1.el6.x86_64.rpm gridengine-qmon-8.1.9-1.el6.x86_64.rpm&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
From https://arc.liv.ac.uk/downloads/SGE/releases/8.1.9/&lt;br /&gt;
&lt;br /&gt;
For a worker node this is rather excessive, but the binary-check stage of the &amp;quot;install_exec&amp;quot; script requires all of these packages.&lt;br /&gt;
&lt;br /&gt;
== copying the default/common directory over to the node ==&lt;br /&gt;
&lt;br /&gt;
First the default directory must be created:&lt;br /&gt;
&lt;br /&gt;
 ssh nodeX &amp;#039;mkdir /opt/sge/default&amp;#039;&lt;br /&gt;
&lt;br /&gt;
And then followed by:&lt;br /&gt;
&lt;br /&gt;
 scp -r common node8:/opt/sge/default&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== chown -R sgeadmin.gridware sge ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==edit /etc/bashrc==&lt;br /&gt;
&lt;br /&gt;
Add the following two lines to /etc/bashrc&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 SGE_ROOT=/opt/sge; export SGE_ROOT;&lt;br /&gt;
 PATH=/opt/sge/bin/linux-x64:$PATH&lt;br /&gt;
&lt;br /&gt;
= Administration =&lt;br /&gt;
&lt;br /&gt;
==Creating a new parallel environment==&lt;br /&gt;
&lt;br /&gt;
* Copy an existing parallel environment out to a file&lt;br /&gt;
* Edit this file as you wish&lt;br /&gt;
* Execute:&lt;br /&gt;
&lt;br /&gt;
 qconf -Ap &amp;lt;my_pe_file&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A common oversight is forgetting that this new parallel environment must also be added to the queue&amp;#039;s configuration.&lt;br /&gt;
&lt;br /&gt;
==Queues and hostgroups ==&lt;br /&gt;
&lt;br /&gt;
The dohfq.sh script accepts a rootname and a list of numbers. The rootname becomes the @rootname hostgroup and rootname.q for the queue.&lt;br /&gt;
Node 0 is in fact marvin; 1 is node1, etc. These are the nodes to be included in the new queue.&lt;/div&gt;</summary>
		<author><name>Jw297</name></author>	</entry>

	<entry>
		<id>http://stab.st-andrews.ac.uk/wiki/index.php?title=Son_of_Gridengine&amp;diff=3350</id>
		<title>Son of Gridengine</title>
		<link rel="alternate" type="text/html" href="http://stab.st-andrews.ac.uk/wiki/index.php?title=Son_of_Gridengine&amp;diff=3350"/>
				<updated>2019-03-11T10:24:24Z</updated>
		
		<summary type="html">&lt;p&gt;Jw297: /* Install the Son of Gridware RPMs */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction =&lt;br /&gt;
&lt;br /&gt;
After Oracle bought Sun and then closed-sourced the Sun Grid Engine, Open Grid Engine made a release based on SGEv5 in 2012, which they called GE2011. However, there were no further releases. Then ARC at the University of Liverpool started releasing its &amp;quot;Son of Gridengine&amp;quot; and has been maintaining updates to it at least as far as March 2016.&lt;br /&gt;
&lt;br /&gt;
Until September 2016, the queue manager in the marvin cluster was GE2011, which was getting a bit old, so when the queue manager failed due to a corrupted database it was replaced with Son of Gridengine.&lt;br /&gt;
&lt;br /&gt;
= Steps =&lt;br /&gt;
&lt;br /&gt;
==Administrative host setup==&lt;br /&gt;
&lt;br /&gt;
All nodes must be set up as administrative hosts, despite the fact that only the master seems to be &amp;quot;administrative&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Getting the XML::Simple perl module ==&lt;br /&gt;
&lt;br /&gt;
The RPMForge Extra repository is needed for this. It can be installed via an RPM, and afterwards the Extra branch of the repo must be enabled, as it is not enabled by default.&lt;br /&gt;
&lt;br /&gt;
Note that it is best to disable this repo again after all the RPMs have been installed.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Install the Son of Gridware RPMs ==&lt;br /&gt;
&lt;br /&gt;
CentOS 7 requires the EPEL repo to be installed:&lt;br /&gt;
 yum install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm&lt;br /&gt;
&lt;br /&gt;
And the following packages:&lt;br /&gt;
 yum install jemalloc-3.6.0 lesstif-0.95.2 munge-libs-0.5.11 libdb4-utils-4.8.30&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 yum install -y gridengine-8.1.9-1.el6.x86_64.rpm gridengine-devel-8.1.9-1.el6.noarch.rpm gridengine-execd-8.1.9-1.el6.x86_64.rpm gridengine-qmaster-8.1.9-1.el6.x86_64.rpm gridengine-qmon-8.1.9-1.el6.x86_64.rpm&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
From https://arc.liv.ac.uk/downloads/SGE/releases/8.1.9/&lt;br /&gt;
&lt;br /&gt;
For a worker node this is rather excessive, but the binary-check stage of the &amp;quot;install_exec&amp;quot; script requires all of these packages.&lt;br /&gt;
&lt;br /&gt;
== copying the default/common directory over to the node ==&lt;br /&gt;
&lt;br /&gt;
First the default directory must be created:&lt;br /&gt;
&lt;br /&gt;
 ssh nodeX &amp;#039;mkdir /opt/sge/default&amp;#039;&lt;br /&gt;
&lt;br /&gt;
And then followed by:&lt;br /&gt;
&lt;br /&gt;
 scp -r common node8:/opt/sge/default&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== chown -R sgeadmin.gridware sge ==&lt;br /&gt;
&lt;br /&gt;
= Administration =&lt;br /&gt;
&lt;br /&gt;
==Creating a new parallel environment==&lt;br /&gt;
&lt;br /&gt;
* Copy an existing parallel environment out to a file&lt;br /&gt;
* Edit this file as you wish&lt;br /&gt;
* Execute:&lt;br /&gt;
&lt;br /&gt;
 qconf -Ap &amp;lt;my_pe_file&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A common oversight is forgetting that this new parallel environment must also be added to the queue&amp;#039;s configuration.&lt;br /&gt;
&lt;br /&gt;
==Queues and hostgroups ==&lt;br /&gt;
&lt;br /&gt;
The dohfq.sh script accepts a rootname and a list of numbers. The rootname becomes the @rootname hostgroup and rootname.q for the queue.&lt;br /&gt;
Node 0 is in fact marvin; 1 is node1, etc. These are the nodes to be included in the new queue.&lt;/div&gt;</summary>
		<author><name>Jw297</name></author>	</entry>

	<entry>
		<id>http://stab.st-andrews.ac.uk/wiki/index.php?title=Son_of_Gridengine&amp;diff=3349</id>
		<title>Son of Gridengine</title>
		<link rel="alternate" type="text/html" href="http://stab.st-andrews.ac.uk/wiki/index.php?title=Son_of_Gridengine&amp;diff=3349"/>
				<updated>2019-03-11T09:36:40Z</updated>
		
		<summary type="html">&lt;p&gt;Jw297: /* Install the Son of Gridware RPMs */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction =&lt;br /&gt;
&lt;br /&gt;
After Oracle bought Sun and then closed-sourced the Sun Grid Engine, Open Grid Engine made a release based on SGEv5 in 2012, which they called GE2011. However, there were no further releases. Then ARC at the University of Liverpool started releasing its &amp;quot;Son of Gridengine&amp;quot; and has been maintaining updates to it at least as far as March 2016.&lt;br /&gt;
&lt;br /&gt;
Until September 2016, the queue manager in the marvin cluster was GE2011, which was getting a bit old, so when the queue manager failed due to a corrupted database it was replaced with Son of Gridengine.&lt;br /&gt;
&lt;br /&gt;
= Steps =&lt;br /&gt;
&lt;br /&gt;
==Administrative host setup==&lt;br /&gt;
&lt;br /&gt;
All nodes must be set up as administrative hosts, despite the fact that only the master seems to be &amp;quot;administrative&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Getting the XML::Simple perl module ==&lt;br /&gt;
&lt;br /&gt;
The RPMForge Extra repository is needed for this. It can be installed via an RPM, and afterwards the Extra branch of the repo must be enabled, as it is not enabled by default.&lt;br /&gt;
&lt;br /&gt;
Note that it is best to disable this repo again after all the RPMs have been installed.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Install the Son of Gridware RPMs ==&lt;br /&gt;
&lt;br /&gt;
 yum install -y gridengine-8.1.9-1.el6.x86_64.rpm gridengine-devel-8.1.9-1.el6.noarch.rpm gridengine-execd-8.1.9-1.el6.x86_64.rpm gridengine-qmaster-8.1.9-1.el6.x86_64.rpm gridengine-qmon-8.1.9-1.el6.x86_64.rpm&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
From https://arc.liv.ac.uk/downloads/SGE/releases/8.1.9/&lt;br /&gt;
&lt;br /&gt;
For a worker node this is rather excessive, but the binary-check stage of the &amp;quot;install_exec&amp;quot; script requires all of these packages.&lt;br /&gt;
&lt;br /&gt;
== copying the default/common directory over to the node ==&lt;br /&gt;
&lt;br /&gt;
First the default directory must be created:&lt;br /&gt;
&lt;br /&gt;
 ssh nodeX &amp;#039;mkdir /opt/sge/default&amp;#039;&lt;br /&gt;
&lt;br /&gt;
And then followed by:&lt;br /&gt;
&lt;br /&gt;
 scp -r common node8:/opt/sge/default&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== chown -R sgeadmin.gridware sge ==&lt;br /&gt;
&lt;br /&gt;
= Administration =&lt;br /&gt;
&lt;br /&gt;
==Creating a new parallel environment==&lt;br /&gt;
&lt;br /&gt;
* Copy an existing parallel environment out to a file&lt;br /&gt;
* Edit this file as you wish&lt;br /&gt;
* Execute:&lt;br /&gt;
&lt;br /&gt;
 qconf -Ap &amp;lt;my_pe_file&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A common oversight is forgetting that this new parallel environment must also be added to the queue&amp;#039;s configuration.&lt;br /&gt;
&lt;br /&gt;
==Queues and hostgroups ==&lt;br /&gt;
&lt;br /&gt;
The dohfq.sh script accepts a rootname and a list of numbers. The rootname becomes the @rootname hostgroup and rootname.q for the queue.&lt;br /&gt;
Node 0 is in fact marvin; 1 is node1, etc. These are the nodes to be included in the new queue.&lt;/div&gt;</summary>
		<author><name>Jw297</name></author>	</entry>

	<entry>
		<id>http://stab.st-andrews.ac.uk/wiki/index.php?title=Marvin_and_IPMI_(remote_hardware_control)&amp;diff=3333</id>
		<title>Marvin and IPMI (remote hardware control)</title>
		<link rel="alternate" type="text/html" href="http://stab.st-andrews.ac.uk/wiki/index.php?title=Marvin_and_IPMI_(remote_hardware_control)&amp;diff=3333"/>
				<updated>2019-01-25T08:27:47Z</updated>
		
		<summary type="html">&lt;p&gt;Jw297: /* Blinking LEDs */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction =&lt;br /&gt;
&lt;br /&gt;
For example, when networking goes down on marvin while the machine itself is still up, ssh can no longer be used, so the administrator is stuck.&lt;br /&gt;
&lt;br /&gt;
However, all modern, industrial-grade (i.e. non-consumer) servers now include a separate subsystem on the machine which is independent of the main machine. This is known by various names; there is a standard called IPMI, which is what Supermicro calls it, while Dell calls it DRAC and HP calls it iLO (Integrated Lights-Out).&lt;br /&gt;
&lt;br /&gt;
It can be seen as a hardware remote-control system: a device which includes its own network connection, so one can log in to the IPMI module and send hardware commands (principally power-on and power-cycle) to the main machine.&lt;br /&gt;
&lt;br /&gt;
Despite IPMI&amp;#039;s isolation from the main system, it is not immune to faults of its own, in which case the only cure is to ring Ian at IT Services and physically go to the datacentre. Unfortunately, IPMI tends not to cooperate exactly when the main machine has problems of its own, which is disappointing because that&amp;#039;s exactly when it&amp;#039;s needed. Nevertheless, IPMI is better than nothing and has proved useful on many occasions.&lt;br /&gt;
&lt;br /&gt;
= Details =&lt;br /&gt;
&lt;br /&gt;
Marvin&amp;#039;s nodes can all be remotely controlled, but only from marvin itself. So the usual exercise is to run firefox on marvin and connect to the nodes&amp;#039; IPMI IPs from there.&lt;br /&gt;
&lt;br /&gt;
When marvin&amp;#039;s own IPMI needs to be used, this can be done from another computer within the University campus.&lt;br /&gt;
&lt;br /&gt;
There is a standalone GUI application called IPMIconfig which does the same things as the IPMI web interface; because it doesn&amp;#039;t need a browser, it can be faster.&lt;br /&gt;
&lt;br /&gt;
The virtual console on IPMI&amp;#039;s web interface uses a JNLP (javaws) program and is the best implementation, but it can be patchy. It also allows the loading of a local Live Linux ISO file so that the machine may be booted from it, though this can be a bit tortuous. Certainly, it is very clear that Supermicro&amp;#039;s IPMI interface is considerably inferior to Dell&amp;#039;s DRAC interface used on the biotime machine. Nevertheless, it is possible to boot the recommended Linux Live ISO, [http://www.system-rescue-cd.org sysrescuecd], on Supermicro&amp;#039;s IPMI.&lt;br /&gt;
&lt;br /&gt;
Again, it must be repeated that the virtual console&amp;#039;s functioning is patchy. An even less dependable alternative to the virtual console is SOL (Serial over LAN). It may seem silly to mention SOL when it is even worse than the virtual console, however it has one or two crucial advantages which make it the holy grail of remote hardware control:&lt;br /&gt;
* SOL is a raw terminal connection to the login screen of the main machine.&lt;br /&gt;
* it does not operate via buggy GUIs and web interfaces.&lt;br /&gt;
* one can connect via the command line and record all input and output via one&amp;#039;s local Linux computer&amp;#039;s &amp;quot;script&amp;quot; program (see &amp;quot;man script&amp;quot;).&lt;br /&gt;
* When it works, it is much faster than the alternatives.&lt;br /&gt;
* It behaves as if one really was sitting down locally at the machine, looking at the login screen.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=IPMI Is Down=&lt;br /&gt;
&lt;br /&gt;
An issue arose in Jan 2019 where the IPMI for node1 and node3 wasn&amp;#039;t accessible via the command line or the web server. This meant we couldn&amp;#039;t reboot the nodes remotely. Node3 was rebooted manually by Ally pushing a button on the box. &lt;br /&gt;
&lt;br /&gt;
When trying to interact with the MC (management controller, sometimes called the baseboard management controller):&lt;br /&gt;
&lt;br /&gt;
 ipmitool mc info&lt;br /&gt;
 Could not open device at /dev/ipmi0 or /dev/ipmi/0 or /dev/ipmidev/0: No such file or directory&lt;br /&gt;
&lt;br /&gt;
The error tells us that two kernel modules need to be loaded:&lt;br /&gt;
&lt;br /&gt;
 modprobe ipmi_devintf&lt;br /&gt;
 modprobe ipmi_si&lt;br /&gt;
&lt;br /&gt;
Which then gives us:&lt;br /&gt;
 ipmitool mc info&lt;br /&gt;
 Device ID                 : 32&lt;br /&gt;
 Device Revision           : 1&lt;br /&gt;
 Firmware Revision         : 2.06&lt;br /&gt;
 IPMI Version              : 2.0&lt;br /&gt;
 Manufacturer ID           : 47488&lt;br /&gt;
 Manufacturer Name         : Unknown (0xB980)&lt;br /&gt;
 Product ID                : 43707 (0xaabb)&lt;br /&gt;
 Product Name              : Unknown (0xAABB)&lt;br /&gt;
 Device Available          : yes&lt;br /&gt;
 Provides Device SDRs      : no&lt;br /&gt;
 Additional Device Support :&lt;br /&gt;
     Sensor Device&lt;br /&gt;
     SDR Repository Device&lt;br /&gt;
     SEL Device&lt;br /&gt;
     FRU Inventory Device&lt;br /&gt;
     IPMB Event Receiver&lt;br /&gt;
     IPMB Event Generator&lt;br /&gt;
     Chassis Device&lt;br /&gt;
 Aux Firmware Rev Info     : &lt;br /&gt;
     0x01&lt;br /&gt;
     0x00&lt;br /&gt;
     0x00&lt;br /&gt;
     0x00&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
So to warm-reset the IPMI controller, we do:&lt;br /&gt;
&lt;br /&gt;
 ipmitool mc reset warm&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This will either tell you it sent the warm reset command (&amp;quot;Sent warm reset command to MC&amp;quot;) or return&lt;br /&gt;
&lt;br /&gt;
 MC reset command failed: Invalid command&lt;br /&gt;
&lt;br /&gt;
If this happens, send the cold reset command&lt;br /&gt;
&lt;br /&gt;
 ipmitool mc reset cold&lt;br /&gt;
 Sent cold reset command to MC&lt;br /&gt;
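The warm-then-cold fallback described above can be condensed into one line (a sketch, not from the original wiki):

```shell
# Try a warm BMC reset first; if it is rejected, fall back to a cold reset.
ipmitool mc reset warm || ipmitool mc reset cold
```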
&lt;br /&gt;
&lt;br /&gt;
This didn&amp;#039;t work however, as &lt;br /&gt;
 ipmitool -H node3IP -U ADMIN -P ***** mc info&lt;br /&gt;
 Error: Unable to establish LAN session&lt;br /&gt;
&lt;br /&gt;
but &lt;br /&gt;
&lt;br /&gt;
 ipmitool -H node2IP -U ADMIN -P **** mc info&lt;br /&gt;
 Device ID                 : 32&lt;br /&gt;
 Device Revision           : 1&lt;br /&gt;
 Firmware Revision         : 2.59&lt;br /&gt;
 IPMI Version              : 2.0&lt;br /&gt;
 Manufacturer ID           : 47488&lt;br /&gt;
 Manufacturer Name         : Unknown (0xB980)&lt;br /&gt;
 Product ID                : 43537 (0xaa11)&lt;br /&gt;
 Product Name              : Unknown (0xAA11)&lt;br /&gt;
 Device Available          : yes&lt;br /&gt;
 Provides Device SDRs      : no&lt;br /&gt;
 Additional Device Support :&lt;br /&gt;
     Sensor Device&lt;br /&gt;
     SDR Repository Device&lt;br /&gt;
     SEL Device&lt;br /&gt;
     FRU Inventory Device&lt;br /&gt;
     IPMB Event Receiver&lt;br /&gt;
     IPMB Event Generator&lt;br /&gt;
     Chassis Device&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
No resolution had been found as of 10:54 on Jan 22nd 2019.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Blinking LEDs===&lt;br /&gt;
&lt;br /&gt;
 ipmitool -U ADMIN -P marvinIPMI chassis identify force&lt;br /&gt;
&lt;br /&gt;
This turns on the server&amp;#039;s flashing identification LED indefinitely.&lt;br /&gt;
&lt;br /&gt;
To turn it off again, use&lt;br /&gt;
&lt;br /&gt;
 ipmitool -U ADMIN -P PASSWORD chassis identify 0&lt;br /&gt;
&lt;br /&gt;
or, to set how long it flashes for,&lt;br /&gt;
&lt;br /&gt;
 ipmitool -U ADMIN -P PASSWORD chassis identify 300 #five minutes&lt;/div&gt;</summary>
		<author><name>Jw297</name></author>	</entry>

	<entry>
		<id>http://stab.st-andrews.ac.uk/wiki/index.php?title=Marvin_and_IPMI_(remote_hardware_control)&amp;diff=3332</id>
		<title>Marvin and IPMI (remote hardware control)</title>
		<link rel="alternate" type="text/html" href="http://stab.st-andrews.ac.uk/wiki/index.php?title=Marvin_and_IPMI_(remote_hardware_control)&amp;diff=3332"/>
				<updated>2019-01-25T08:27:22Z</updated>
		
		<summary type="html">&lt;p&gt;Jw297: /* IPMI Is Down */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction =&lt;br /&gt;
&lt;br /&gt;
When networking goes down on marvin while the machine itself is still up, for example, ssh can no longer be used, so the administrator is stuck.&lt;br /&gt;
&lt;br /&gt;
However, all modern, industrial-grade (i.e. non-consumer) servers include a separate management subsystem which is independent of the main machine. It goes by various names: the underlying standard is IPMI, which is the name Supermicro uses, while Dell calls its implementation DRAC and HP calls its implementation iLO (Integrated Lights-Out).&lt;br /&gt;
&lt;br /&gt;
It can be seen as a hardware remote-control system: a device with its own network connection, so one can log in to the IPMI module and send hardware commands (principally power-on and power-cycle) to the main machine.&lt;br /&gt;
&lt;br /&gt;
Despite IPMI&amp;#039;s isolation from the main system, it is not immune to faults of its own, in which case the only cure is to ring Ian at IT Services and physically go to the datacentre. Unfortunately, IPMI tends not to cooperate exactly when the main machine has problems of its own, which is disappointing because that&amp;#039;s exactly when it&amp;#039;s needed. Nevertheless, IPMI is better than nothing and has proved useful on many occasions.&lt;br /&gt;
&lt;br /&gt;
= Details =&lt;br /&gt;
&lt;br /&gt;
Marvin&amp;#039;s nodes can all be remotely controlled, but only from marvin itself. So the usual approach is to run firefox on marvin and connect to the nodes&amp;#039; IPMI IPs from there.&lt;br /&gt;
&lt;br /&gt;
When marvin&amp;#039;s own IPMI needs to be used, this can be done from another computer within the University campus.&lt;br /&gt;
&lt;br /&gt;
There is a standalone GUI application called IPMIconfig which does the same things as the IPMI web interface; because it doesn&amp;#039;t need a browser, it can be faster.&lt;br /&gt;
&lt;br /&gt;
The virtual console on IPMI&amp;#039;s web interface uses a JNLP (javaws) program and is the best implementation, but it can be patchy. It also allows a local Live Linux ISO file to be loaded so that the machine can be booted from it, though this can be a bit tortuous. Certainly, Supermicro&amp;#039;s IPMI interface is considerably inferior to Dell&amp;#039;s DRAC interface used on the biotime machine. Nevertheless, it is possible to boot the recommended Linux Live ISO, [http://www.system-rescue-cd.org sysrescuecd], on Supermicro&amp;#039;s IPMI.&lt;br /&gt;
&lt;br /&gt;
Again, it must be repeated that the virtual console&amp;#039;s functioning is patchy. An even less dependable alternative to the virtual console is SOL (Serial over LAN). It may seem silly to mention SOL when it is even worse than the virtual console, but it has one or two crucial advantages which make it the holy grail of remote hardware control:&lt;br /&gt;
* SOL is a raw terminal connection to the login screen of the main machine.&lt;br /&gt;
* It does not operate via buggy GUIs and web interfaces.&lt;br /&gt;
* One can connect via the command line and record all input and output with your local Linux computer&amp;#039;s &amp;quot;script&amp;quot; program (see &amp;quot;man script&amp;quot;).&lt;br /&gt;
* When it works, it is much faster than the alternatives.&lt;br /&gt;
* It behaves as if one were really sitting at the machine locally, looking at the login screen.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=IPMI Is Down=&lt;br /&gt;
&lt;br /&gt;
In Jan 2019 the IPMI for node1 and node3 wasn&amp;#039;t accessible via the command line or the web server, which meant we couldn&amp;#039;t reboot the nodes even though they were up. Node3 was rebooted manually by Ally pushing the button on the box.&lt;br /&gt;
&lt;br /&gt;
When trying to interact with the MC (the management controller, sometimes called the baseboard management controller):&lt;br /&gt;
&lt;br /&gt;
 ipmitool mc info&lt;br /&gt;
 Could not open device at /dev/ipmi0 or /dev/ipmi/0 or /dev/ipmidev/0: No such file or directory&lt;br /&gt;
&lt;br /&gt;
The error tells us we need to load two kernel modules:&lt;br /&gt;
&lt;br /&gt;
 modprobe ipmi_devintf&lt;br /&gt;
 modprobe ipmi_si&lt;br /&gt;
&lt;br /&gt;
Loading them then gives us:&lt;br /&gt;
 ipmitool mc info&lt;br /&gt;
 Device ID                 : 32&lt;br /&gt;
 Device Revision           : 1&lt;br /&gt;
 Firmware Revision         : 2.06&lt;br /&gt;
 IPMI Version              : 2.0&lt;br /&gt;
 Manufacturer ID           : 47488&lt;br /&gt;
 Manufacturer Name         : Unknown (0xB980)&lt;br /&gt;
 Product ID                : 43707 (0xaabb)&lt;br /&gt;
 Product Name              : Unknown (0xAABB)&lt;br /&gt;
 Device Available          : yes&lt;br /&gt;
 Provides Device SDRs      : no&lt;br /&gt;
 Additional Device Support :&lt;br /&gt;
     Sensor Device&lt;br /&gt;
     SDR Repository Device&lt;br /&gt;
     SEL Device&lt;br /&gt;
     FRU Inventory Device&lt;br /&gt;
     IPMB Event Receiver&lt;br /&gt;
     IPMB Event Generator&lt;br /&gt;
     Chassis Device&lt;br /&gt;
 Aux Firmware Rev Info     : &lt;br /&gt;
     0x01&lt;br /&gt;
     0x00&lt;br /&gt;
     0x00&lt;br /&gt;
     0x00&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
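Note that modprobe only loads the drivers for the current boot. If the modules should come back automatically after a reboot, a sketch of a persistent config (assuming a systemd host that reads /etc/modules-load.d/; the file name is hypothetical):

```
# /etc/modules-load.d/ipmi.conf  (hypothetical file name)
# Modules listed here are loaded at boot by systemd-modules-load
ipmi_devintf
ipmi_si
```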
So to warm-reset the IPMI we run:&lt;br /&gt;
&lt;br /&gt;
 ipmitool mc reset warm&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This will either tell you it sent the warm reset command (&amp;quot;Sent warm reset command to MC&amp;quot;) or return&lt;br /&gt;
&lt;br /&gt;
 MC reset command failed: Invalid command&lt;br /&gt;
&lt;br /&gt;
If this happens, send the cold reset command&lt;br /&gt;
&lt;br /&gt;
 ipmitool mc reset cold&lt;br /&gt;
 Sent cold reset command to MC&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This didn&amp;#039;t work, however, as&lt;br /&gt;
 ipmitool -H node3IP -U ADMIN -P ***** mc info&lt;br /&gt;
 Error: Unable to establish LAN session&lt;br /&gt;
&lt;br /&gt;
but &lt;br /&gt;
&lt;br /&gt;
 ipmitool -H node2IP -U ADMIN -P **** mc info&lt;br /&gt;
 Device ID                 : 32&lt;br /&gt;
 Device Revision           : 1&lt;br /&gt;
 Firmware Revision         : 2.59&lt;br /&gt;
 IPMI Version              : 2.0&lt;br /&gt;
 Manufacturer ID           : 47488&lt;br /&gt;
 Manufacturer Name         : Unknown (0xB980)&lt;br /&gt;
 Product ID                : 43537 (0xaa11)&lt;br /&gt;
 Product Name              : Unknown (0xAA11)&lt;br /&gt;
 Device Available          : yes&lt;br /&gt;
 Provides Device SDRs      : no&lt;br /&gt;
 Additional Device Support :&lt;br /&gt;
     Sensor Device&lt;br /&gt;
     SDR Repository Device&lt;br /&gt;
     SEL Device&lt;br /&gt;
     FRU Inventory Device&lt;br /&gt;
     IPMB Event Receiver&lt;br /&gt;
     IPMB Event Generator&lt;br /&gt;
     Chassis Device&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
No resolution had been found as of 10:54 on Jan 22nd 2019.&lt;br /&gt;
&lt;br /&gt;
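The warm-then-cold fallback above can be wrapped in a small shell function. This is only a sketch: reset_mc is a hypothetical name, ipmitool is assumed to be on the PATH, and any -H/-U/-P arguments are passed straight through.

```shell
# Try a warm MC reset first; if the firmware rejects it
# (e.g. "MC reset command failed: Invalid command"), fall
# back to a cold reset, as described above.
reset_mc() {
    if ipmitool "$@" mc reset warm; then
        echo "warm reset sent"
    else
        echo "warm reset rejected, trying cold"
        ipmitool "$@" mc reset cold
    fi
}
```

For a remote node this would be called as, e.g., reset_mc -H node2IP -U ADMIN -P '****'.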
&lt;br /&gt;
===Blinking LEDs===&lt;br /&gt;
&lt;br /&gt;
 ipmitool -U ADMIN -P marvinIPMI chassis identify force&lt;br /&gt;
&lt;br /&gt;
This turns on the server&amp;#039;s flashing identification LED indefinitely.&lt;br /&gt;
&lt;br /&gt;
To turn it off again, use&lt;br /&gt;
&lt;br /&gt;
 ipmitool -U ADMIN -P marvinIPMI chassis identify 0&lt;br /&gt;
&lt;br /&gt;
or, to set how long it flashes for,&lt;br /&gt;
&lt;br /&gt;
 ipmitool -U ADMIN -P marvinIPMI chassis identify 300 #five minutes&lt;/div&gt;</summary>
		<author><name>Jw297</name></author>	</entry>

	<entry>
		<id>http://stab.st-andrews.ac.uk/wiki/index.php?title=Marvin_and_IPMI_(remote_hardware_control)&amp;diff=3329</id>
		<title>Marvin and IPMI (remote hardware control)</title>
		<link rel="alternate" type="text/html" href="http://stab.st-andrews.ac.uk/wiki/index.php?title=Marvin_and_IPMI_(remote_hardware_control)&amp;diff=3329"/>
				<updated>2019-01-22T10:54:56Z</updated>
		
		<summary type="html">&lt;p&gt;Jw297: /* IPMI Is Down */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction =&lt;br /&gt;
&lt;br /&gt;
When networking goes down on marvin while the machine itself is still up, for example, ssh can no longer be used, so the administrator is stuck.&lt;br /&gt;
&lt;br /&gt;
However, all modern, industrial-grade (i.e. non-consumer) servers include a separate management subsystem which is independent of the main machine. It goes by various names: the underlying standard is IPMI, which is the name Supermicro uses, while Dell calls its implementation DRAC and HP calls its implementation iLO (Integrated Lights-Out).&lt;br /&gt;
&lt;br /&gt;
It can be seen as a hardware remote-control system: a device with its own network connection, so one can log in to the IPMI module and send hardware commands (principally power-on and power-cycle) to the main machine.&lt;br /&gt;
&lt;br /&gt;
Despite IPMI&amp;#039;s isolation from the main system, it is not immune to faults of its own, in which case the only cure is to ring Ian at IT Services and physically go to the datacentre. Unfortunately, IPMI tends not to cooperate exactly when the main machine has problems of its own, which is disappointing because that&amp;#039;s exactly when it&amp;#039;s needed. Nevertheless, IPMI is better than nothing and has proved useful on many occasions.&lt;br /&gt;
&lt;br /&gt;
= Details =&lt;br /&gt;
&lt;br /&gt;
Marvin&amp;#039;s nodes can all be remotely controlled, but only from marvin itself. So the usual approach is to run firefox on marvin and connect to the nodes&amp;#039; IPMI IPs from there.&lt;br /&gt;
&lt;br /&gt;
When marvin&amp;#039;s own IPMI needs to be used, this can be done from another computer within the University campus.&lt;br /&gt;
&lt;br /&gt;
There is a standalone GUI application called IPMIconfig which does the same things as the IPMI web interface; because it doesn&amp;#039;t need a browser, it can be faster.&lt;br /&gt;
&lt;br /&gt;
The virtual console on IPMI&amp;#039;s web interface uses a JNLP (javaws) program and is the best implementation, but it can be patchy. It also allows a local Live Linux ISO file to be loaded so that the machine can be booted from it, though this can be a bit tortuous. Certainly, Supermicro&amp;#039;s IPMI interface is considerably inferior to Dell&amp;#039;s DRAC interface used on the biotime machine. Nevertheless, it is possible to boot the recommended Linux Live ISO, [http://www.system-rescue-cd.org sysrescuecd], on Supermicro&amp;#039;s IPMI.&lt;br /&gt;
&lt;br /&gt;
Again, it must be repeated that the virtual console&amp;#039;s functioning is patchy. An even less dependable alternative to the virtual console is SOL (Serial over LAN). It may seem silly to mention SOL when it is even worse than the virtual console, but it has one or two crucial advantages which make it the holy grail of remote hardware control:&lt;br /&gt;
* SOL is a raw terminal connection to the login screen of the main machine.&lt;br /&gt;
* It does not operate via buggy GUIs and web interfaces.&lt;br /&gt;
* One can connect via the command line and record all input and output with your local Linux computer&amp;#039;s &amp;quot;script&amp;quot; program (see &amp;quot;man script&amp;quot;).&lt;br /&gt;
* When it works, it is much faster than the alternatives.&lt;br /&gt;
* It behaves as if one were really sitting at the machine locally, looking at the login screen.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=IPMI Is Down=&lt;br /&gt;
&lt;br /&gt;
In Jan 2019 the IPMI for node1 and node3 wasn&amp;#039;t accessible via the command line or the web server, which meant we couldn&amp;#039;t reboot the nodes even though they were up. Node3 was rebooted manually by Ally pushing the button on the box.&lt;br /&gt;
&lt;br /&gt;
When trying to interact with the MC (the management controller, sometimes called the baseboard management controller):&lt;br /&gt;
&lt;br /&gt;
 ipmitool mc info&lt;br /&gt;
 Could not open device at /dev/ipmi0 or /dev/ipmi/0 or /dev/ipmidev/0: No such file or directory&lt;br /&gt;
&lt;br /&gt;
The error tells us we need to load two kernel modules:&lt;br /&gt;
&lt;br /&gt;
 modprobe ipmi_devintf&lt;br /&gt;
 modprobe ipmi_si&lt;br /&gt;
&lt;br /&gt;
Loading them then gives us:&lt;br /&gt;
 ipmitool mc info&lt;br /&gt;
 Device ID                 : 32&lt;br /&gt;
 Device Revision           : 1&lt;br /&gt;
 Firmware Revision         : 2.06&lt;br /&gt;
 IPMI Version              : 2.0&lt;br /&gt;
 Manufacturer ID           : 47488&lt;br /&gt;
 Manufacturer Name         : Unknown (0xB980)&lt;br /&gt;
 Product ID                : 43707 (0xaabb)&lt;br /&gt;
 Product Name              : Unknown (0xAABB)&lt;br /&gt;
 Device Available          : yes&lt;br /&gt;
 Provides Device SDRs      : no&lt;br /&gt;
 Additional Device Support :&lt;br /&gt;
     Sensor Device&lt;br /&gt;
     SDR Repository Device&lt;br /&gt;
     SEL Device&lt;br /&gt;
     FRU Inventory Device&lt;br /&gt;
     IPMB Event Receiver&lt;br /&gt;
     IPMB Event Generator&lt;br /&gt;
     Chassis Device&lt;br /&gt;
 Aux Firmware Rev Info     : &lt;br /&gt;
     0x01&lt;br /&gt;
     0x00&lt;br /&gt;
     0x00&lt;br /&gt;
     0x00&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
So to warm-reset the IPMI we run:&lt;br /&gt;
&lt;br /&gt;
 ipmitool mc reset warm&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This will either tell you it sent the warm reset command (&amp;quot;Sent warm reset command to MC&amp;quot;) or return&lt;br /&gt;
&lt;br /&gt;
 MC reset command failed: Invalid command&lt;br /&gt;
&lt;br /&gt;
If this happens, send the cold reset command&lt;br /&gt;
&lt;br /&gt;
 ipmitool mc reset cold&lt;br /&gt;
 Sent cold reset command to MC&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This didn&amp;#039;t work, however, as&lt;br /&gt;
 ipmitool -H node3IP -U ADMIN -P ***** mc info&lt;br /&gt;
 Error: Unable to establish LAN session&lt;br /&gt;
&lt;br /&gt;
but &lt;br /&gt;
&lt;br /&gt;
 ipmitool -H node2IP -U ADMIN -P **** mc info&lt;br /&gt;
 Device ID                 : 32&lt;br /&gt;
 Device Revision           : 1&lt;br /&gt;
 Firmware Revision         : 2.59&lt;br /&gt;
 IPMI Version              : 2.0&lt;br /&gt;
 Manufacturer ID           : 47488&lt;br /&gt;
 Manufacturer Name         : Unknown (0xB980)&lt;br /&gt;
 Product ID                : 43537 (0xaa11)&lt;br /&gt;
 Product Name              : Unknown (0xAA11)&lt;br /&gt;
 Device Available          : yes&lt;br /&gt;
 Provides Device SDRs      : no&lt;br /&gt;
 Additional Device Support :&lt;br /&gt;
     Sensor Device&lt;br /&gt;
     SDR Repository Device&lt;br /&gt;
     SEL Device&lt;br /&gt;
     FRU Inventory Device&lt;br /&gt;
     IPMB Event Receiver&lt;br /&gt;
     IPMB Event Generator&lt;br /&gt;
     Chassis Device&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
No resolution had been found as of 10:54 on Jan 22nd 2019.&lt;/div&gt;</summary>
		<author><name>Jw297</name></author>	</entry>

	<entry>
		<id>http://stab.st-andrews.ac.uk/wiki/index.php?title=Marvin_and_IPMI_(remote_hardware_control)&amp;diff=3326</id>
		<title>Marvin and IPMI (remote hardware control)</title>
		<link rel="alternate" type="text/html" href="http://stab.st-andrews.ac.uk/wiki/index.php?title=Marvin_and_IPMI_(remote_hardware_control)&amp;diff=3326"/>
				<updated>2019-01-22T10:50:12Z</updated>
		
		<summary type="html">&lt;p&gt;Jw297: /* IPMI Is Down */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction =&lt;br /&gt;
&lt;br /&gt;
When networking goes down on marvin while the machine itself is still up, for example, ssh can no longer be used, so the administrator is stuck.&lt;br /&gt;
&lt;br /&gt;
However, all modern, industrial-grade (i.e. non-consumer) servers include a separate management subsystem which is independent of the main machine. It goes by various names: the underlying standard is IPMI, which is the name Supermicro uses, while Dell calls its implementation DRAC and HP calls its implementation iLO (Integrated Lights-Out).&lt;br /&gt;
&lt;br /&gt;
It can be seen as a hardware remote-control system: a device with its own network connection, so one can log in to the IPMI module and send hardware commands (principally power-on and power-cycle) to the main machine.&lt;br /&gt;
&lt;br /&gt;
Despite IPMI&amp;#039;s isolation from the main system, it is not immune to faults of its own, in which case the only cure is to ring Ian at IT Services and physically go to the datacentre. Unfortunately, IPMI tends not to cooperate exactly when the main machine has problems of its own, which is disappointing because that&amp;#039;s exactly when it&amp;#039;s needed. Nevertheless, IPMI is better than nothing and has proved useful on many occasions.&lt;br /&gt;
&lt;br /&gt;
= Details =&lt;br /&gt;
&lt;br /&gt;
Marvin&amp;#039;s nodes can all be remotely controlled, but only from marvin itself. So the usual approach is to run firefox on marvin and connect to the nodes&amp;#039; IPMI IPs from there.&lt;br /&gt;
&lt;br /&gt;
When marvin&amp;#039;s own IPMI needs to be used, this can be done from another computer within the University campus.&lt;br /&gt;
&lt;br /&gt;
There is a standalone GUI application called IPMIconfig which does the same things as the IPMI web interface; because it doesn&amp;#039;t need a browser, it can be faster.&lt;br /&gt;
&lt;br /&gt;
The virtual console on IPMI&amp;#039;s web interface uses a JNLP (javaws) program and is the best implementation, but it can be patchy. It also allows a local Live Linux ISO file to be loaded so that the machine can be booted from it, though this can be a bit tortuous. Certainly, Supermicro&amp;#039;s IPMI interface is considerably inferior to Dell&amp;#039;s DRAC interface used on the biotime machine. Nevertheless, it is possible to boot the recommended Linux Live ISO, [http://www.system-rescue-cd.org sysrescuecd], on Supermicro&amp;#039;s IPMI.&lt;br /&gt;
&lt;br /&gt;
Again, it must be repeated that the virtual console&amp;#039;s functioning is patchy. An even less dependable alternative to the virtual console is SOL (Serial over LAN). It may seem silly to mention SOL when it is even worse than the virtual console, but it has one or two crucial advantages which make it the holy grail of remote hardware control:&lt;br /&gt;
* SOL is a raw terminal connection to the login screen of the main machine.&lt;br /&gt;
* It does not operate via buggy GUIs and web interfaces.&lt;br /&gt;
* One can connect via the command line and record all input and output with your local Linux computer&amp;#039;s &amp;quot;script&amp;quot; program (see &amp;quot;man script&amp;quot;).&lt;br /&gt;
* When it works, it is much faster than the alternatives.&lt;br /&gt;
* It behaves as if one were really sitting at the machine locally, looking at the login screen.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=IPMI Is Down=&lt;br /&gt;
&lt;br /&gt;
In Jan 2019 the IPMI for node1 and node3 wasn&amp;#039;t accessible via the command line or the web server, which meant we couldn&amp;#039;t reboot the nodes even though they were up. Node3 was rebooted manually by Ally pushing the button on the box.&lt;br /&gt;
&lt;br /&gt;
When trying to interact with the MC (the management controller, sometimes called the baseboard management controller):&lt;br /&gt;
&lt;br /&gt;
 ipmitool mc info&lt;br /&gt;
 Could not open device at /dev/ipmi0 or /dev/ipmi/0 or /dev/ipmidev/0: No such file or directory&lt;br /&gt;
&lt;br /&gt;
The error tells us we need to load two kernel modules:&lt;br /&gt;
&lt;br /&gt;
 modprobe ipmi_devintf&lt;br /&gt;
 modprobe ipmi_si&lt;br /&gt;
&lt;br /&gt;
Loading them then gives us:&lt;br /&gt;
 ipmitool mc info&lt;br /&gt;
 Device ID                 : 32&lt;br /&gt;
 Device Revision           : 1&lt;br /&gt;
 Firmware Revision         : 2.06&lt;br /&gt;
 IPMI Version              : 2.0&lt;br /&gt;
 Manufacturer ID           : 47488&lt;br /&gt;
 Manufacturer Name         : Unknown (0xB980)&lt;br /&gt;
 Product ID                : 43707 (0xaabb)&lt;br /&gt;
 Product Name              : Unknown (0xAABB)&lt;br /&gt;
 Device Available          : yes&lt;br /&gt;
 Provides Device SDRs      : no&lt;br /&gt;
 Additional Device Support :&lt;br /&gt;
     Sensor Device&lt;br /&gt;
     SDR Repository Device&lt;br /&gt;
     SEL Device&lt;br /&gt;
     FRU Inventory Device&lt;br /&gt;
     IPMB Event Receiver&lt;br /&gt;
     IPMB Event Generator&lt;br /&gt;
     Chassis Device&lt;br /&gt;
 Aux Firmware Rev Info     : &lt;br /&gt;
     0x01&lt;br /&gt;
     0x00&lt;br /&gt;
     0x00&lt;br /&gt;
     0x00&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
So to warm-reset the IPMI we run:&lt;br /&gt;
&lt;br /&gt;
 ipmitool mc reset warm&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This will either tell you it sent the warm reset command (&amp;quot;Sent warm reset command to MC&amp;quot;) or return&lt;br /&gt;
&lt;br /&gt;
 MC reset command failed: Invalid command&lt;br /&gt;
&lt;br /&gt;
If this happens, send the cold reset command&lt;br /&gt;
&lt;br /&gt;
 ipmitool mc reset cold&lt;br /&gt;
 Sent cold reset command to MC&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This didn&amp;#039;t work, however, as&lt;br /&gt;
 ipmitool -H node3IP -U ADMIN -P ***** mc info&lt;br /&gt;
 Error: Unable to establish LAN session&lt;br /&gt;
&lt;br /&gt;
but &lt;br /&gt;
&lt;br /&gt;
 ipmitool -H node2IP -U ADMIN -P **** mc info&lt;br /&gt;
 Device ID                 : 32&lt;br /&gt;
 Device Revision           : 1&lt;br /&gt;
 Firmware Revision         : 2.59&lt;br /&gt;
 IPMI Version              : 2.0&lt;br /&gt;
 Manufacturer ID           : 47488&lt;br /&gt;
 Manufacturer Name         : Unknown (0xB980)&lt;br /&gt;
 Product ID                : 43537 (0xaa11)&lt;br /&gt;
 Product Name              : Unknown (0xAA11)&lt;br /&gt;
 Device Available          : yes&lt;br /&gt;
 Provides Device SDRs      : no&lt;br /&gt;
 Additional Device Support :&lt;br /&gt;
     Sensor Device&lt;br /&gt;
     SDR Repository Device&lt;br /&gt;
     SEL Device&lt;br /&gt;
     FRU Inventory Device&lt;br /&gt;
     IPMB Event Receiver&lt;br /&gt;
     IPMB Event Generator&lt;br /&gt;
     Chassis Device&lt;/div&gt;</summary>
		<author><name>Jw297</name></author>	</entry>

	<entry>
		<id>http://stab.st-andrews.ac.uk/wiki/index.php?title=Marvin_and_IPMI_(remote_hardware_control)&amp;diff=3325</id>
		<title>Marvin and IPMI (remote hardware control)</title>
		<link rel="alternate" type="text/html" href="http://stab.st-andrews.ac.uk/wiki/index.php?title=Marvin_and_IPMI_(remote_hardware_control)&amp;diff=3325"/>
				<updated>2019-01-22T10:45:40Z</updated>
		
		<summary type="html">&lt;p&gt;Jw297: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction =&lt;br /&gt;
&lt;br /&gt;
When networking goes down on marvin while the machine itself is still up, for example, ssh can no longer be used, so the administrator is stuck.&lt;br /&gt;
&lt;br /&gt;
However, all modern, industrial-grade (i.e. non-consumer) servers include a separate management subsystem which is independent of the main machine. It goes by various names: the underlying standard is IPMI, which is the name Supermicro uses, while Dell calls its implementation DRAC and HP calls its implementation iLO (Integrated Lights-Out).&lt;br /&gt;
&lt;br /&gt;
It can be seen as a hardware remote-control system: a device with its own network connection, so one can log in to the IPMI module and send hardware commands (principally power-on and power-cycle) to the main machine.&lt;br /&gt;
&lt;br /&gt;
Despite IPMI&amp;#039;s isolation from the main system, it is not immune to faults of its own, in which case the only cure is to ring Ian at IT Services and physically go to the datacentre. Unfortunately, IPMI tends not to cooperate exactly when the main machine has problems of its own, which is disappointing because that&amp;#039;s exactly when it&amp;#039;s needed. Nevertheless, IPMI is better than nothing and has proved useful on many occasions.&lt;br /&gt;
&lt;br /&gt;
= Details =&lt;br /&gt;
&lt;br /&gt;
Marvin&amp;#039;s nodes can all be remotely controlled, but only from marvin itself. So the usual approach is to run firefox on marvin and connect to the nodes&amp;#039; IPMI IPs from there.&lt;br /&gt;
&lt;br /&gt;
When marvin&amp;#039;s own IPMI needs to be used, this can be done from another computer within the University campus.&lt;br /&gt;
&lt;br /&gt;
There is a standalone GUI application called IPMIconfig which does the same things as the IPMI web interface; because it doesn&amp;#039;t need a browser, it can be faster.&lt;br /&gt;
&lt;br /&gt;
The virtual console on IPMI&amp;#039;s web interface uses a JNLP (javaws) program and is the best implementation, but it can be patchy. It also allows a local Live Linux ISO file to be loaded so that the machine can be booted from it, though this can be a bit tortuous. Certainly, Supermicro&amp;#039;s IPMI interface is considerably inferior to Dell&amp;#039;s DRAC interface used on the biotime machine. Nevertheless, it is possible to boot the recommended Linux Live ISO, [http://www.system-rescue-cd.org sysrescuecd], on Supermicro&amp;#039;s IPMI.&lt;br /&gt;
&lt;br /&gt;
Again, it must be repeated that the virtual console&amp;#039;s functioning is patchy. An even less dependable alternative to the virtual console is SOL (Serial over LAN). It may seem silly to mention SOL when it is even worse than the virtual console, but it has one or two crucial advantages which make it the holy grail of remote hardware control:&lt;br /&gt;
* SOL is a raw terminal connection to the login screen of the main machine.&lt;br /&gt;
* It does not operate via buggy GUIs and web interfaces.&lt;br /&gt;
* One can connect via the command line and record all input and output with your local Linux computer&amp;#039;s &amp;quot;script&amp;quot; program (see &amp;quot;man script&amp;quot;).&lt;br /&gt;
* When it works, it is much faster than the alternatives.&lt;br /&gt;
* It behaves as if one were really sitting at the machine locally, looking at the login screen.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=IPMI Is Down=&lt;br /&gt;
&lt;br /&gt;
In Jan 2019 the IPMI for node1 and node3 wasn&amp;#039;t accessible via the command line or the web server, which meant we couldn&amp;#039;t reboot the nodes even though they were up. Node3 was rebooted manually by Ally pushing the button on the box.&lt;br /&gt;
&lt;br /&gt;
When trying to interact with the MC (the management controller, sometimes called the baseboard management controller):&lt;br /&gt;
&lt;br /&gt;
 ipmitool mc info&lt;br /&gt;
 Could not open device at /dev/ipmi0 or /dev/ipmi/0 or /dev/ipmidev/0: No such file or directory&lt;br /&gt;
&lt;br /&gt;
The error tells us we need to load two kernel modules:&lt;br /&gt;
&lt;br /&gt;
 modprobe ipmi_devintf&lt;br /&gt;
 modprobe ipmi_si&lt;br /&gt;
&lt;br /&gt;
Loading them then gives us:&lt;br /&gt;
 ipmitool mc info&lt;br /&gt;
 Device ID                 : 32&lt;br /&gt;
 Device Revision           : 1&lt;br /&gt;
 Firmware Revision         : 2.06&lt;br /&gt;
 IPMI Version              : 2.0&lt;br /&gt;
 Manufacturer ID           : 47488&lt;br /&gt;
 Manufacturer Name         : Unknown (0xB980)&lt;br /&gt;
 Product ID                : 43707 (0xaabb)&lt;br /&gt;
 Product Name              : Unknown (0xAABB)&lt;br /&gt;
 Device Available          : yes&lt;br /&gt;
 Provides Device SDRs      : no&lt;br /&gt;
 Additional Device Support :&lt;br /&gt;
     Sensor Device&lt;br /&gt;
     SDR Repository Device&lt;br /&gt;
     SEL Device&lt;br /&gt;
     FRU Inventory Device&lt;br /&gt;
     IPMB Event Receiver&lt;br /&gt;
     IPMB Event Generator&lt;br /&gt;
     Chassis Device&lt;br /&gt;
 Aux Firmware Rev Info     : &lt;br /&gt;
     0x01&lt;br /&gt;
     0x00&lt;br /&gt;
     0x00&lt;br /&gt;
     0x00&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
So to warm-reset the IPMI we run:&lt;br /&gt;
&lt;br /&gt;
 ipmitool mc reset warm&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This will either tell you it sent the warm reset command (&amp;quot;Sent warm reset command to MC&amp;quot;) or return&lt;br /&gt;
&lt;br /&gt;
 MC reset command failed: Invalid command&lt;br /&gt;
&lt;br /&gt;
If this happens, send the cold reset command&lt;br /&gt;
&lt;br /&gt;
 ipmitool mc reset cold&lt;br /&gt;
 Sent cold reset command to MC&lt;/div&gt;</summary>
		<author><name>Jw297</name></author>	</entry>

	<entry>
		<id>http://stab.st-andrews.ac.uk/wiki/index.php?title=Clearing_queue_errors&amp;diff=3317</id>
		<title>Clearing queue errors</title>
		<link rel="alternate" type="text/html" href="http://stab.st-andrews.ac.uk/wiki/index.php?title=Clearing_queue_errors&amp;diff=3317"/>
				<updated>2019-01-10T12:54:29Z</updated>
		
		<summary type="html">&lt;p&gt;Jw297: Created page with &amp;quot;If a job is stuck in E in ```qstat -f``` (or ```qstat -F``` for more details) do  qmod -c &amp;lt;queue list&amp;gt;  so all.q would be  qmod -c all.q  to clear it. It may have a repatative...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;If a queue is stuck in the E (error) state in ```qstat -f``` (or ```qstat -F``` for more details) do&lt;br /&gt;
 qmod -c &amp;lt;queue list&amp;gt;&lt;br /&gt;
&lt;br /&gt;
so for all.q that would be&lt;br /&gt;
 qmod -c all.q&lt;br /&gt;
&lt;br /&gt;
to clear it. It may have a recurring cause, but sometimes it Just Happens.&lt;br /&gt;
&lt;br /&gt;
This was why node1 wasn&amp;#039;t accepting jobs in Dec &amp;#039;18.&lt;/div&gt;</summary>
		<author><name>Jw297</name></author>	</entry>

	</feed>