A Python interface that uses the DRMAA C-binding API to automate parts of running queue jobs, making the process a bit more manageable.
It is not terribly sophisticated, so it is not capable of miracles, though one or two of its abilities may come in handy.
Necessary environment variable
The user must have the following line in their .bashrc
This should be checked with
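The actual lines are not shown above; for the common Python drmaa binding the variable in question is DRMAA_LIBRARY_PATH, which must point at the DRMAA C library. A plausible sketch (the library path below is a site-specific assumption, adjust it to your install):

```shell
# In ~/.bashrc -- point the drmaa Python binding at the C library.
# The path is an assumption for illustration; use your site's location.
export DRMAA_LIBRARY_PATH=/opt/sge/lib/lx-amd64/libdrmaa.so

# Check that it is set in the current shell:
echo $DRMAA_LIBRARY_PATH
```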
When a job template is created, the queue options under which the job will run are put into a single opaque string, much as you would pass them to qsub on the command line. You set it like this:
import drmaa

s = drmaa.Session()
s.initialize()
jt = s.createJobTemplate()
jt.nativeSpecification = '-q single.q -pe multi 2'
This will run the script on the single.q queue with two slots per task (via the multi parallel environment).
The script itself can therefore be a plain bash script without any pragma (i.e. #$ ) lines in it.
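Putting the pieces together, a minimal submission might look like the sketch below. The helper and script name are our own illustrations, not part of the drmaa API; only Session, createJobTemplate, remoteCommand, nativeSpecification, runJob, deleteJobTemplate and exit come from the drmaa binding.

```python
def build_native_spec(queue, pe=None, slots=1):
    """Build a qsub-style option string for nativeSpecification.
    (This helper is ours, not part of the drmaa API.)"""
    spec = f"-q {queue}"
    if pe is not None:
        spec += f" -pe {pe} {slots}"
    return spec

def submit_job(script="./myscript.sh"):
    """Submit `script` under the options above. Requires a working
    DRMAA setup; the script name here is hypothetical."""
    import drmaa  # deferred so the helper above works without DRMAA

    s = drmaa.Session()
    s.initialize()
    jt = s.createJobTemplate()
    jt.remoteCommand = script
    jt.nativeSpecification = build_native_spec("single.q", pe="multi", slots=2)
    job_id = s.runJob(jt)
    s.deleteJobTemplate(jt)
    s.exit()
    return job_id
```

The option string produced here is exactly the '-q single.q -pe multi 2' shown above.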
Individual control of running scripts
The SGE_TASK_ID environment variable is typically the only way to control what each individual script in the job array will work on. However, it is also possible to use the argument list passed to each script to obtain further control. This is not entirely robust, as the arguments become part of the shell command line and are subject to the shell's limitations (primarily the total size, which is something in the 10000's of characters); in parallel jobs it is easy to exceed this. For now, though, it is the only solution that avoids writing to disk.
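Inside each array-job task, the two mechanisms can be combined: SGE_TASK_ID selects which of the command-line arguments this task should handle. A small sketch (the work items and the indexing helper are illustrative, not part of any API):

```python
import os
import sys

def pick_work_item(items, task_id):
    """SGE_TASK_ID is 1-based, so subtract one to index into items."""
    return items[task_id - 1]

def main():
    # Inside an array-job task: SGE_TASK_ID chooses one of the
    # arguments passed on the script's command line.
    task_id = int(os.environ.get("SGE_TASK_ID", "1"))
    item = pick_work_item(sys.argv[1:], task_id)
    print(f"task {task_id} working on {item}")
```

Because every argument must fit on the command line, this is where the shell size limit mentioned above bites: a long list of work items is better encoded as short keys, or mapped from SGE_TASK_ID alone.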