H: drive on cluster

Introduction

The H: drive is principally the following St. Andrews network drive:

cfs.st-andrews.ac.uk/shared/med_research/res

One can of course simply copy files over to the cluster from the H: drive, but for large datasets this is costly in terms of disk space. An alternative is to "mount" this network drive on marvin, which avoids the duplication. While not as simple as copying, the efficiency gains make it worthwhile. The procedure is documented here.
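
For comparison, a one-off copy could be done without mounting anything, using smbclient from a login node. This is a sketch: smbclient's availability on the cluster and the file name bigdataset.tar are assumptions.

# Fetch a single file from the share without mounting it;
# prompts for the St. Andrews password.
smbclient //cfs.st-andrews.ac.uk/shared -U username -c 'cd med_research/res; get bigdataset.tar'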

Mounting H: depends on individual authentication, so the drive cannot be mounted system-wide. Every user who wants it must mount it manually. This also means that it cannot be tested without the cooperation of the user, who must enter their ID and password.


The key to this is the GNOME Virtual File System, GVFS.

It is possible to get the H: drive mounted on the marvin frontend, mainly because it is running GNOME.

However, the compute nodes are not, so they currently cannot mount the H: drive.

This means that when working with the raw data, only the marvin.q queue can be used.
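
A job that works on the raw data therefore has to be submitted to that queue. Assuming the scheduler is Grid Engine (suggested by the queue name marvin.q), a submission would look like the sketch below; myjob.sh is a hypothetical job script.

# Run only on the frontend queue, since only marvin mounts H:
qsub -q marvin.q myjob.sh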

Procedure

Admin Aspects

Environment

GVFS is part of the wider GNOME project.

To restart gdm, the following rather rough method is actually the recommended one, as documented here: https://access.redhat.com/solutions/36382

(This only applies to RHEL 6 ... RHEL 7 uses systemd and the new GNOME 3, which are coordinated, and provides a systemctl method for restarting gdm.)

The command is as follows:

pkill -f gdm-binary
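
On RHEL 7, the equivalent is the systemctl method mentioned above; a minimal sketch, assuming gdm runs as the systemd service gdm:

# RHEL 7 only: restart the display manager via systemd
systemctl restart gdm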

Methods

GVFS will allow the user to mount the filesystem, though it also requires a "running user d-bus session, typically started with desktop session on login".


Two tools are used for this: GVFS and FUSE.

  • a user must be a member of the group "fuse"
  • a gvfs daemon must be running under user gdm; the system administrator should ensure this
  • the script to use is as follows:
#!/bin/bash
# Start a user D-Bus session and export its address, since GVFS needs one.
export $(dbus-launch)
# Mount the H: drive over SMB; this prompts for ID and password.
gvfs-mount smb://cfs.st-andrews.ac.uk/shared/med_research/res
# Expose the GVFS mounts as a FUSE filesystem under ~/.gvfs.
/usr/libexec/gvfs-fuse-daemon ~/.gvfs

which can be launched as a normal user.
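
Once the script has run, the share appears under ~/.gvfs and behaves like a local directory. A quick sanity check might look like the sketch below; the name of the mount directory under ~/.gvfs varies with the GVFS version, so inspect it with ls first.

# Confirm membership of the fuse group first.
id -nG | grep -qw fuse && echo "in fuse group"
# The SMB share should now be visible under ~/.gvfs.
ls ~/.gvfs/
# Unmount when finished, using the same URI as before.
gvfs-mount -u smb://cfs.st-andrews.ac.uk/shared/med_research/res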

Notes

  • gvfs-mount -l seems useless; it reports nothing (see the alternative check below).
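
An alternative check (a generic sketch, not from the original notes) is to look for the FUSE mount in the kernel mount table:

# The gvfs-fuse-daemon mount should show up in /proc/mounts.
grep -i gvfs /proc/mounts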

Relevant help pages

/usr/libexec/gvfs-fuse-daemon

usage: /usr/libexec/gvfs-fuse-daemon mountpoint [options]

general options:
    -o opt,[opt...]        mount options
    -h   --help            print help
    -V   --version         print version

FUSE options:
    -d   -o debug          enable debug output (implies -f)
    -f                     foreground operation
    -s                     disable multi-threaded operation

    -o allow_other         allow access to other users
    -o allow_root          allow access to root
    -o nonempty            allow mounts over non-empty file/dir
    -o default_permissions enable permission checking by kernel
    -o fsname=NAME         set filesystem name
    -o subtype=NAME        set filesystem type
    -o large_read          issue large read requests (2.4 only)
    -o max_read=N          set maximum size of read requests

    -o hard_remove         immediate removal (don't hide files)
    -o use_ino             let filesystem set inode numbers
    -o readdir_ino         try to fill in d_ino in readdir
    -o direct_io           use direct I/O
    -o kernel_cache        cache files in kernel
    -o [no]auto_cache      enable caching based on modification times (off)
    -o umask=M             set file permissions (octal)
    -o uid=N               set file owner
    -o gid=N               set file group
    -o entry_timeout=T     cache timeout for names (1.0s)
    -o negative_timeout=T  cache timeout for deleted names (0.0s)
    -o attr_timeout=T      cache timeout for attributes (1.0s)
    -o ac_attr_timeout=T   auto cache timeout for attributes (attr_timeout)
    -o intr                allow requests to be interrupted
    -o intr_signal=NUM     signal to send on interrupt (10)
    -o modules=M1[:M2...]  names of modules to push onto filesystem stack

    -o max_write=N         set maximum size of write requests
    -o max_readahead=N     set maximum readahead
    -o async_read          perform reads asynchronously (default)
    -o sync_read           perform reads synchronously
    -o atomic_o_trunc      enable atomic open+truncate support
    -o big_writes          enable larger than 4kB writes
    -o no_remote_lock      disable remote file locking

Module options:

[subdir]
    -o subdir=DIR           prepend this directory to all paths (mandatory)
    -o [no]rellinks         transform absolute symlinks to relative

[iconv]
    -o from_code=CHARSET   original encoding of file names (default: UTF-8)
    -o to_code=CHARSET      new encoding of the file names (default: UTF-8)