  * pkg-config: This is available by default on Raven (as /...).
  * gsl: This is available as a module (a module-loading sketch follows this list).
  * git: This is required to check out the LALSuite software stack.
  * fftw: This **should** be available as a module, but isn't working yet! I have installed it at /...
  * libframe/...
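To sanity-check these dependencies from a fresh shell, something like the following should work. The module names below are assumptions -- check ''module avail'' for the exact names and versions on Raven.

<code bash>
# see what the modules system actually provides (module prints to stderr,
# hence the redirect); the names below are guesses
module avail 2>&1 | grep -i -E 'gsl|fftw'

# load the dependencies that are packaged as modules (assumed module name)
module load gsl
# module load fftw    # not working yet -- use the local install mentioned above

# confirm the tools installed system-wide are on the PATH
which pkg-config git

# if the gsl module sets PKG_CONFIG_PATH, this prints the GSL version
pkg-config --modversion gsl
</code>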
I have successfully run a piece of LAL code (lalapps_tmpltbank to be precise), and it seems to have worked!

===== Grid tools =====
Instructions to install the LIGO Data Grid Client from source are taken from [[https://...]].

<code bash>
wget http://...
tar xf gt5.2.0-all-source-installer.tar.gz
mkdir gt5.2.0-all
export GLOBUS_LOCATION=~/...
export PATH=/...
export FLAVOUR=gcc64dbg

cd gt5.2.0-all-source-installer
./configure --prefix=$GLOBUS_LOCATION --with-flavor=$FLAVOUR
make gsi-openssh
make postinstall
. $GLOBUS_LOCATION/...
</code>
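A quick check that the build produced a working ''gsissh'' (standard commands only; nothing beyond the install above is assumed):

<code bash>
# the environment script sourced at the end of the install should put
# $GLOBUS_LOCATION/bin on the PATH; confirm gsissh is picked up from there
which gsissh

# gsissh is patched OpenSSH, so -V just prints the version and exits
gsissh -V
</code>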
The VDT Certificate Bundle can be installed using the instructions from [[https://...]].

<code bash>
wget http://...
tar xf osg-certificates-1.32.tar.gz -C $GLOBUS_LOCATION/...
globus-update-certificate-dir
</code>
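To check where the CA bundle ended up and make sure the Globus tools will find it, something like this can be used. The directory below is an assumption -- the real destination depends on the (truncated) ''tar -C'' path above.

<code bash>
# point the Globus tools at the trusted-CA directory explicitly;
# $GLOBUS_LOCATION/share/certificates is an assumed location -- adjust it to
# wherever the osg-certificates tarball was actually unpacked
export X509_CERT_DIR=$GLOBUS_LOCATION/share/certificates

# the directory should contain the usual <hash>.0 CA files and signing policies
ls $X509_CERT_DIR | head
</code>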
Now copy your Grid certificates into the ''...'' directory and set their permissions:

<code bash>
chmod 600 ~/...
chmod 400 ~/...
</code>
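With the certificate and key in place, a standard check is to generate and inspect a proxy (plain ''grid-proxy-init''/''grid-proxy-info'' usage, nothing Raven-specific):

<code bash>
# create a short-lived proxy from the certificate and key copied in above;
# this prompts for the Grid pass phrase, and -debug shows which files it uses
grid-proxy-init -debug

# show the subject, type and remaining lifetime of the proxy
grid-proxy-info
</code>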
To source the install, you need:
<code bash>
. ~spxph/...
</code>
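If you want this done automatically in every login shell, the same line (with its full path) can be appended to ''~/.bashrc'' -- a minimal sketch, assuming bash is the login shell:

<code bash>
# append the source line to ~/.bashrc so new shells pick up the Globus
# environment; replace the truncated path with the full one from the block above
echo '. ~spxph/...' >> ~/.bashrc
</code>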
===== Data =====
  * We have data at <code bash>/...</code>
  * I **believe** that /scratch on Raven is a different file-server from /scratch on Merlin/... (a quick way to check is sketched after this list).
  * We don't have a clear statement of the data that is available on Raven.
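One way to settle the /scratch question is just to look at what is mounted; these are standard commands and nothing here is specific to Raven.

<code bash>
# show which filesystem /scratch lives on and how much space it has
df -h /scratch

# show the mount source (NFS server/export or Lustre MGS) for /scratch,
# which can be compared against the same command run on Merlin
mount | grep scratch
</code>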
===== Workflows =====
  * Raven is set up to run under PBS Pro.
  * We had been running by submitting requests to PBS to reserve nodes for Condor, which then reported back to GEO. This setup should work on the new cluster, provided we have a machine running Condor that can talk to the nodes (a sketch of such a reservation job follows this list).
  * In the medium term, we might set things up differently so that Condor talks to the PBS submission machine and gets the right jobs submitted in the PBS queue.
  * It would be nice to set up something to run a few jobs as a proof of principle.
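A rough sketch of the 'reserve nodes for Condor' job described above is given below. It is only an illustration: the resource request, the Condor install path, the config file location and the central-manager hostname are all assumptions, and the ''_CONDOR_*'' environment overrides are just one way of pointing a ''condor_master'' at an existing pool.

<code bash>
#!/bin/bash
#PBS -q workq
#PBS -l select=1:ncpus=16
#PBS -l walltime=24:00:00
#PBS -N condor_reserve

# assumed install/config locations and pool name -- adjust to the real setup
CONDOR_DIR=/home/spxph/condor
export CONDOR_CONFIG=$CONDOR_DIR/etc/condor_config
export _CONDOR_CONDOR_HOST=cm.example.org       # assumed central manager
export _CONDOR_DAEMON_LIST="MASTER, STARTD"     # run only a startd on this node
export _CONDOR_START=True                       # accept jobs unconditionally

# run condor_master in the foreground so the PBS job holds the node until the
# walltime expires (or the job is deleted), during which the node serves Condor
$CONDOR_DIR/sbin/condor_master -f
</code>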
====== BAM on Raven ======
  * Load these modules:
<code bash>
module load intel/...
module load bullxmpi/...
</code>
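  * A quick check that the modules put the MPI compiler wrapper on the PATH (plain commands, nothing BAM-specific):
<code bash>
# mpicc should be provided by the bullxmpi module just loaded
which mpicc

# unrecognised options are passed through, so this prints the underlying
# compiler's version
mpicc --version
</code>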
  * Use a standard MyConfig for bam -- you don't need to point it to specific MPI libraries; just use mpicc as the compiler.
  * (Re)compile bam:
<code bash>
...
</code>
  * Sample PBS script (adapted from the Merlin version with some changes):
<code bash>
#!/bin/bash

#PBS -q workq
#PBS -l select=8:...
#PBS -l place=scatter:...
#PBS -l walltime=1:...

#PBS -N R6_PN_64_128
#PBS -o R6_PN_64.out
#PBS -e R6_PN_64.err

#...

pardir=/...
parfile=R6_PN_64.par
bamexe=/...

cd /...
cp $pardir/...

mpirun -np 128 $bamexe ./...
</code>
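  * Submitting and monitoring the job then uses the usual PBS commands; the script filename below is made up:
<code bash>
# submit the script above (saved here, hypothetically, as R6_PN_64.pbs)
qsub R6_PN_64.pbs

# check the state of your jobs in the queue
qstat -u $USER
</code>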
  * This PBS script copies the parameter file over to the Lustre filesystem in /...
  * Don't forget to mirror the data with rsync to minion, since files will get deleted after some time (a sketch follows this list).
  * At the moment we are using queue 'workq'.
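A sketch of the rsync mirroring mentioned above; both the run directory and the destination path on minion are placeholders.

<code bash>
# copy the run directory to minion, preserving permissions and timestamps;
# substitute the real run and archive directories for the placeholder paths
rsync -avz /scratch/.../R6_PN_64/ minion:/data/.../R6_PN_64/
</code>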