How to install an application

From MetaCentrum

Own way or ask for help

Applications in MetaCentrum are organized into so-called modules (physically located in the /software/... directory tree). To add a new application (one not in the list), you can either ask us for it or install it on your own. The specific guides below are for those who want to try it on their own.

Basically, you can install anything into your storage space under your user account; mostly you just need to specify a non-system location during the installation (e.g., --prefix in the configure step). Please check the following best-practice points and the specific guides below:

  • install the application in the /storage/some-location/home/$USERNAME/ directory, then use the full path (starting with /storage/...) to run it
  • you can use various compilers (e.g., GCC, Intel, PGI) or libraries (e.g., MPI, LAPACK, beagle) to make the application more efficient,
  • for a compilation that is not resource-consuming (i.e., not lasting several hours), the frontends can be used; if a resource-consuming compilation is needed, an interactive job asking for a computing node is preferred,
  • do not be afraid to try it (working under a user account should be safe) and to ask us for help.
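As a toy sketch of the pattern above (the application name "myapp" and the paths are placeholders, not a real MetaCentrum application), an installation into your own directory, run via the full path, looks like this:

```shell
# Hypothetical sketch: install a program under your own storage directory
# and run it with the full path. "myapp" and the paths are placeholders.
PREFIX="$HOME/apps/myapp"     # in practice /storage/.../home/$USERNAME/myapp
mkdir -p "$PREFIX/bin"

# stand-in for the "make install" step of a real application
printf '#!/bin/sh\necho "myapp 1.0"\n' > "$PREFIX/bin/myapp"
chmod +x "$PREFIX/bin/myapp"

# run it via the full path
"$PREFIX/bin/myapp"
```

A real build would replace the printf line with something like ./configure --prefix="$PREFIX" && make && make install.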

Preparation of a new module

WARNING: Since March 2014, we use so-called AFS read-only replicas for the software volumes in order to improve the reliability and accessibility of the software modules -- the basic directory /software/NAME/1.1 becomes available read-only, while the new working directory for installing/updating the SW becomes /afs/! (The working volume has to be released in order to make it available via the read-only replicas -- see the manual below.)

If you wish to install your own application, or one that is not in our list of applications, and to manage it later, then:

  1. ask us for the preparation of an AFS volume; specify your login (or the logins of the colleagues who will manage the application with you), the name, version and expected size of the application
  2. once the AFS volume is created, you'll be able to install the application into a folder, e.g. /afs/
  3. the ideal structure of the installation folder is as follows (mostly achieved by passing --prefix=/software/NAME/1.1 to the configure command):
    • bin - binaries
    • lib - libraries
    • include - headers
    • man - man pages
    • doc - documentation
    • examples - examples
    • src - source codes
  4. for the compilation, the compilers available via modules (e.g., gcc-4.9.2, pgicdk-16.10, intelcdk-17, ...) and the relevant MPI runtime/libraries (e.g., openmpi, openmpi-2.0.1-pgi, openmpi-2.0.1-intel, mpich3-xxx, ...) are commonly used
  5. note down the installation steps in some file, e.g. /afs/ (it may become useful in case of an upgrade)
  6. once you have successfully tested the application, perform a so-called volume release -- a transformation of the read-write AFS volume into read-only replicas, which increases the availability and reliability of the SW volumes in cases of specific failures.
    • at the same time, the main software directory becomes protected from accidental changes (in the common case, one should start the application from the read-only replicas, available through /software/NAME/1.1/)
    • perform the command remctl afs release soft.NAME
      • write us in case of permission failures
    • the availability of the replicas can be checked by the command fs where /software/NAME
  7. once the release process successfully finishes, prepare a modulefile for your application and send it back to us (or make it available somewhere)
    • take inspiration from the modules available in /afs/
    • if you want to add some help text to the module, available via the 'module help' command, see the similar files in the /afs/ directory
    • if you don't know how to create the module, we'll prepare it for you (in such a case, tell us about the environment variables and modules your application requires to run properly)
  8. finally, write documentation on the wiki MetaCentrum_Application_List according to Example_of_application_page
Feel free to write us in case of problems – we are here to help you!

Simple modulefile example

#! Title: gsl-1.16
#! Platforms: amd64_linux32
#! Version: 1.16
#! Description: GNU Scientific Library tools collection
#! Author: Petr Hanousek, #35289
proc ModulesHelp {} {
global ModulesCurrentModulefile
puts stdout "modulehelp $ModulesCurrentModulefile"
}

set basedir /software/gsl/1.16/gcc

prepend-path PATH ${basedir}/bin
prepend-path PKG_CONFIG_PATH ${basedir}/lib/pkgconfig
prepend-path LD_LIBRARY_PATH ${basedir}/lib
prepend-path MANPATH ${basedir}/share/man
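For illustration, the prepend-path lines above do roughly the following in plain shell terms (this is a sketch of the effect only -- the module command does this bookkeeping for you and can undo it on module rm):

```shell
# Shell equivalent of the modulefile's prepend-path lines (illustrative).
basedir=/software/gsl/1.16/gcc
export PATH="$basedir/bin${PATH:+:$PATH}"
export PKG_CONFIG_PATH="$basedir/lib/pkgconfig${PKG_CONFIG_PATH:+:$PKG_CONFIG_PATH}"
export LD_LIBRARY_PATH="$basedir/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
export MANPATH="$basedir/share/man${MANPATH:+:$MANPATH}"

# the module's bin directory is now first on the PATH:
echo "$PATH" | cut -d: -f1
```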

General programming language (C, C++, Fortran, ...)

Choosing the compiler

You should choose which compiler to use for building an application. There are three main compilers:

  • GCC compiler – a free compiler which is usually the most compatible with the software being built. It is better to use the system version than one from a module, if possible.
  • Intel compiler – commercial compiler with excellent math libraries and optimization abilities.
  • PGI compiler – another commercial compiler with optimized math libraries. Not so widely used in Meta because of the Intel architecture, but it will become important when we include some AMD clusters.

The suggested process is to first try the latest available Intel compiler (or the version from the dependencies) with "automatic CPU dispatch" set (see INTEL_CDK#Optimizing_compilation) and, if it fails, to use GCC.

Configuration tuning

Usually you need to configure the software prior to building it, and there are usually three different "configurers". Some advice for each is given here:

  • General advice for using libraries
    • use MKL (math libraries from Intel CDK) when possible
    • use MPI when possible, preferably OpenMPI but MVAPICH is also available (MVAPICH is MPICH with support of InfiniBand)
    • use Thread (OpenMP) when possible
    • use CUDA in separate compilations
    • avoid optimizations targeting the specific machine you are building on (-xHost and -fast flags), since the application will typically run on different hardware
  • configure – use ./configure --help to get the available options. If no ./configure script is present, try running the bootstrap script first (for a version newer than the system one, use the module autotools-2.26). Then refer to the general advice above.
  • cmake – first add one of the cmake modules (e.g., cmake-3.6.1). Then make a build directory (mkdir build && cd build) and run ccmake ../ to view and adjust the configuration options. All options are available after pressing the "t" key. Then refer to the general advice above. You can look the options up in ccmake and then pass them on the command line with the -D prefix, like: cmake -DCMAKE_INSTALL_PREFIX=/software/prg/version.
  • Makefile – sometimes all configuration is done only in a pre-generated Makefile. Edit it using your favourite editor (vim, nano, mcedit). Don't forget the general advice above.

Environment variables and Flags

There are some common environment variables into which you can put "flags" that influence the compilation or linking. The standard make rule for compiling a C (or C++ or Fortran) program is:

   $(Compiler) $(PreprocessorFLAGS) $(CompilerFLAGS) -c -o $@ $<

The corresponding compilers and the variables that influence their behavior are described in the following table:

Compiler                     Preprocessor FLAGS   Compiler FLAGS
C (gcc, icc, pgcc)           $CPPFLAGS            $CFLAGS
C++ (g++, icpc, pgc++)       $CPPFLAGS            $CXXFLAGS
Fortran (gfortran, ifort)    $CPPFLAGS            $FFLAGS

So if you use a C compiler (gcc, icc, pgcc) and want to influence the compilation phase, you should set flags in the $CFLAGS variable. If you use a C++ compiler, use the $CXXFLAGS variable. If you use both and want some common flags, use the $CPPFLAGS variable. So for C/C++ projects you will normally need only CPPFLAGS for the compilation.

Linker flags always go into $LDFLAGS. You should always put the capital "-L" paths before the linked libraries.

What are the flags for, with examples:

  • Preprocessor FLAGS – compiler-inspecific optimization and include paths. Example: CPPFLAGS="-I/software/prg1/include -I/software/prg2/include -O2 -msse -fPIC"
  • Compiler FLAGS – compiler-specific optimization and include paths. Example: CXXFLAGS="-I/software/prg1/include -I/software/prg2/include -O2 -msse -fPIC"
  • Linker FLAGS – linker directives and library paths. Example: LDFLAGS="-L/software/prg1/version/lib -L/software/prg2/version/lib -lcrypt -lmkl_blas95_lp64 -lpthread /software/prg1/lib/libprg.a"
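To make the division of labor concrete, here is a sketch that only prints the compile and link commands make would effectively run with these variables set (all paths and libraries are illustrative placeholders, not real installations):

```shell
# Illustrative only: print the compile and link commands that the standard
# make rule would run with these variables set. Paths are placeholders.
CC=gcc
CPPFLAGS="-I/software/prg1/include -O2 -fPIC"
CFLAGS="-msse"
LDFLAGS="-L/software/prg1/version/lib -lcrypt"

# compile step ($(Compiler) $(PreprocessorFLAGS) $(CompilerFLAGS) -c ...):
echo "$CC $CPPFLAGS $CFLAGS -c -o foo.o foo.c"
# link step: note the -L paths come before the -l libraries
echo "$CC $LDFLAGS -o foo foo.o"
```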

If the programs you are dealing with support the pkg-config mechanism, it is a good idea to set $PKG_CONFIG_PATH, usually to /software/prg/version/lib/pkgconfig.

Scripts for setting the flags

The meta-utils module provides set-* scripts for setting up certain compilation environments. Use them at least for inspiration.

Math libraries introduction

Let's describe some relationships among the linear algebra libraries BLAS, LAPACK, BLACS and ScaLAPACK. BLAS is a dependency of LAPACK, so you cannot link LAPACK without BLAS. LAPACK, BLAS and BLACS (pBLAS) are dependencies of ScaLAPACK, so you should link all of them if you are using ScaLAPACK. Note that BLACS (pBLAS) also depends on the MPI implementation: choose the right library variant depending on the MPI you are using (OpenMPI or M(VA)PICH). Math library linking examples are described on the INTEL_CDK#Linking_MKL_libraries page.
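To make the dependency chain concrete, here are sketched link lines using the reference library names (MKL uses different library names -- see the linked page; the order matters, since a library must appear before the libraries it depends on):

```shell
# Illustrative link lines only (nothing is actually compiled here).
# LAPACK needs BLAS, so -lblas follows -llapack:
LAPACK_LINK="gfortran -o app app.o -llapack -lblas"
# ScaLAPACK needs BLACS/pBLAS, LAPACK and BLAS, plus the matching MPI wrapper:
SCALAPACK_LINK="mpif90 -o app app.o -lscalapack -llapack -lblas"
echo "$LAPACK_LINK"
echo "$SCALAPACK_LINK"
```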

Perl modules

First check whether the perl package is already installed in one of our modules. New packages should be installed into the bioperl-1.6.1 or bioperl-1.6.9-gcc module.

To list all available perl modules you can use the provided script. Usage:

module add bioperl-1.6.1
Note: If you want to check the list of available modules for bioperl-1.6.1, it is necessary to replace the default perl-5.10.1 with a newer version, e.g. perl-5.20.1-gcc. Use the module rm and module add commands.

easiest way - using CPANMINUS tool

cpanm is a specialized tool for installing and uninstalling Perl packages from CPAN. It is available via modules. Use it like this:

load the module bioperl-1.6.9-gcc or bioperl-1.6.1
cpanm -l /specified/local/directory/ GD::Graph::bars – installs the Perl library and all of its dependencies into the specified directory
cpanm -L /specified/local/directory/ GD::Graph::bars – installs the Perl library and all of its dependencies, including the libraries already present in the system, into the specified directory

Afterwards, don't forget to set PATH and PERL5LIB to the bin and lib folders of the specified directory so that you can use the installed binaries and libraries.
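A minimal sketch of that step (the directory name is a placeholder for whatever you passed to cpanm -l; cpanm installs into a lib/perl5 subtree there):

```shell
# Sketch: make a "cpanm -l" tree usable from the shell.
PERL_LOCAL="$HOME/perl-local"               # placeholder for your cpanm -l dir
mkdir -p "$PERL_LOCAL/bin" "$PERL_LOCAL/lib/perl5"   # cpanm creates these itself

export PATH="$PERL_LOCAL/bin:$PATH"
export PERL5LIB="$PERL_LOCAL/lib/perl5${PERL5LIB:+:$PERL5LIB}"
```

Put the two export lines into your .profile if you want them at every login.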

easy way - using CPAN tool

Using the CPAN tool is quite an effective way to install perl packages. To work well it needs some configuration first. Example:

perl -MCPAN -e shell
o conf makepl_arg PREFIX=/afs/
o conf mbuildpl_arg "--prefix /afs/"
o conf build_dir /scratch/$LOGIN/.cpan/build
o conf commit
  • The prefix settings depend on the program directory you are trying to install the software into.
  • The build directory should be on some fast disk, so the best way is to use /scratch.
  • To get information about all CPAN tool settings, issue the o conf command without parameters.

The module installation is then simple:

m /regexp/ - list available modules matching case insensitive regexp or accurate module info
install module_name - installs the module with all dependencies
force install module_name - continues even if error occurs
? - help use the CPAN tool
q - exit the CPAN tool

Don't forget to properly set the $PERL5LIB environment variable to point to the lib directory of the installed module. In our case it is:

export PERL5LIB=/software/bioperl-1.6.1/lib/perl/5.10.1:$PERL5LIB

hard way - manual package installation

If everything else fails, you have the option to install the packages manually. This means downloading a package and all of its dependencies and then installing them in the proper order.

perl Makefile.PL PREFIX=$TOIN
make test
make install

To test the installation:

export PERL5LIB=$TOIN/lib/perl/5.10.1
perl -e 'use My::Module;'

or check which files a module pulls in:

perl -e 'use Bio::SeqIO; print join "\n", %INC; print "\n"'

Python packages


easy way - using PyPI tool

There are several pip installations available. You can use the 'pip' command installed on every frontend and tied to the system Python version.

Or you can use the PyPI that we have prepared in following modules:


You can just add one of these modules and then install the required package (see below).

If you don't like our pip installation and want your own, just get pip and then invoke

python --root /some/new/directory/with/modules

PyPI operation

pip search <module name>
pip install <module name> --root /some/python/modules/folder
pip install <module name> --prefix /some/python/modules/folder
pip install git+https://path/to/git/file

Ideally, when using one of our python26-modules modules, the root is the path /software/python26-modules.

Don't forget to properly set the $PATH and $PYTHONPATH environment variables if you are not using one of our python26-modules and are installing modules into some new directory. For details see the hard way chapter.

Sometimes it helps to specify more options, e.g.

module add python27-modules-gcc
pip install -v --upgrade dendropy --install-option="--prefix=/afs/" --root /afs/ --ignore-installed
Upgrading the python27-modules-gcc and python27-modules-intel packages

When you want to add or update a package from the python27-modules-gcc collection, the right command line is:

module add python27-modules-gcc
pip install -v --upgrade multiqc --root /software/python27-modules

If you are adding a new package / upgrading an old package from the python27-modules-intel module, use a command like this:

module add python27-modules-intel
pip install -v --upgrade multiqc --root /software/python27-modules

These commands will first uninstall all old versions of packages involved, then install the new versions.

You must use the --root switch together with the /software/python27-modules paths, otherwise the pip installer will look into wrong directories and you risk total chaos!

Detailed walkthrough - using PyPI tool

A very convenient approach is to use the --user option of pip install. This installs modules, in addition to the system python installation, into the location defined by the PYTHONUSERBASE environment variable. A convenient choice for this variable is a location visible from the NFSv4 infrastructure, which means you could use, for example, export PYTHONUSERBASE=/storage/home/<user_name>/.local

If you install modules at this location, you will also need to add them to your PATH and PYTHONPATH so that they are accessible from any folder from which you execute your code. For this purpose, export PATH=$PYTHONUSERBASE/bin:$PATH and export PYTHONPATH=$PYTHONUSERBASE/lib/python2.7/site-packages:$PYTHONPATH will do the job.

If you wish to execute such commands at each login on a frontend, add the following lines to your .profile:

module add python27-modules-intel
# Set pip path for --user option
export PYTHONUSERBASE=/storage/plzen1/home/<user_name>/.local
# set PATH and PYTHONPATH variables
export PATH=$PYTHONUSERBASE/bin:$PATH
export PYTHONPATH=$PYTHONUSERBASE/lib/python2.7/site-packages:$PYTHONPATH

With this, you can install any module you need with the following command:

pip install <module-name> --user --process-dependency-links

without any need for administrator rights, and you will be able to use it. When launching jobs from the scheduler, remember that your .profile is not executed; you will therefore need to do the module add and define the relevant environment variables before the job is actually executed.
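A sketch of what that means in practice for a batch job script (the storage path matches the walkthrough above; the script name is a placeholder):

```shell
# Write a PBS job script; the scheduler does not read .profile, so the
# module and the variables are set inside the job itself.
cat > myjob.sh <<'EOF'
#!/bin/bash
module add python27-modules-intel
export PYTHONUSERBASE=/storage/plzen1/home/$USER/.local
export PATH=$PYTHONUSERBASE/bin:$PATH
export PYTHONPATH=$PYTHONUSERBASE/lib/python2.7/site-packages:$PYTHONPATH
python my_script.py   # placeholder for your actual program
EOF
echo "job script written"
```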

another easy way - using Conda

Conda is another package manager with which the installation of Python modules is very convenient. Just download one of the clients, install it somewhere and use it. Example of a basic installation into a user-defined directory (MetaCentrum already has a conda module, so it is not necessary to install it yourself):

./ -p /user/specific/directory/
conda install package_name

To create a new environment with python37 called NAME and install SW there:

conda create --name NAME --clone py37
conda activate NAME
conda install SW

To create a user-local environment, use a prefix:

conda create --prefix ./myenv python=2.7

Later you can activate it by its full path, e.g.

conda activate /auto/brno6/home/fsbrno2/xfibich/myenv

You can also create SW profiles, called "environments", to install Python modules for different Python versions. An example which installs and activates a new Python environment in the envs subdirectory of the conda installation:

module add conda-modules-py37
conda create -n env_name python=3.6
conda activate env_name

For example

module add conda-modules-py37
conda create -n py37 python=3.7
conda activate py37

Create a copy/clone of the py37 environment:

conda create --name orgasm --clone py37
conda activate orgasm

Create a local, user-specific conda environment:

mkdir -p conda-envs/py37
conda create -p ./conda-envs/rgi --clone py37
conda activate /auto/brno2/home/xfibich/conda-envs/rgi
conda install --channel bioconda rgi=3.1.1

List the environments:

conda env list

Show the settings of the environment:

conda info

hard way - manual package installation

If everything else fails, you have the option to install the packages manually. This means downloading a package and all of its dependencies and then installing them in the proper order.

python install --install-scripts=$TOIN/bin/ --install-purelib=$TOIN/lib --install-lib=$TOIN/lib

To test the installation:

export PATH=$TOIN/bin:$PATH
python -c "import package;"

Sometimes it is necessary to export PYTHONPATH before the installation; it must definitely be exported before using the package.

Lua/Torch rocks

User specific Lua library

To add a new lua/torch rock, you can use your own directory to make a personal library of rocks, e.g.

module add torch7cuda-deb8
mkdir /storage/plzen1/home/$USER/.luarocks
luarocks install elfs --tree=/storage/plzen1/home/$USER/.luarocks

Then check it:

th> require 'elfs'

System Lua library

To add a rock to system library

module add torch7cuda-deb8
luarocks install NAME

then check it by

luarocks list | grep -A 1 NAME

or by

require 'NAME'

Lua complications and links

Some rocks need the cmake module added for the rock installation, with settings pasted into the environment, e.g.

module add cmake-3.2.3
export CMAKE_INCLUDE_PATH=/software/mkl-11.0/composer_xe_2013.0.079/mkl/include:/software/torch/20160425-deb8/include:$CMAKE_INCLUDE_PATH
export CMAKE_LIBRARY_PATH=/software/mkl-11.0/composer_xe_2013.0.079/mkl/lib/intel64:/software/torch/20160425-deb8/lib:$CMAKE_LIBRARY_PATH

If you get errors about a non-existent file in tmp or an exceeded quota, redirect the cmake directory to $SCRATCHDIR.

If you already have a rockspec, you can just run

luarocks make stnbhwd-scm-1.rockspec

To check the list of already installed rocks, run

luarocks list

To search for rocks:

luarocks search NAME


R packages

User specific R library

Everyone can easily create their own R package library; you just need some folder (ideally in the /storage tree), e.g.

mkdir /storage/brno6/home/$LOGNAME/Rpackages

($LOGNAME is your login in MetaCentrum). Then you can install a package by

install.packages("PACKAGENAME", lib="/storage/brno6/home/$LOGNAME/Rpackages")

and load such a package by

library("PACKAGENAME", lib.loc="/storage/brno6/home/$LOGNAME/Rpackages")

To set the directory as the default, you need to set the R_LIBS variable before running R. To check the installation path, just run R and the .libPaths() function; then you do not need to specify the location of the directory with your own packages any more:

export R_LIBS="/storage/brno6/home/$LOGNAME/Rpackages"
> .libPaths()
[1] "/auto/brno6/home/$LOGNAME/Rpackages"
[2] "/afs/"
> install.packages("PACKAGENAME")
> library("PACKAGENAME")

System R library

Mostly you can follow the User specific R library section, but you need to set R_LIBS to the /afs/ tree, e.g.

module add R-3.4.0-gcc
R_LIBS=/afs/ R
> install.packages("PACKAGENAME")

and then release the R.soft AFS volume.

R complications and links

Some R packages require libraries in the system (e.g. rgdal, mpi); you must then obtain them (e.g. add them as a module, download them as DEB packages, or ask MetaProvoz to install them).

Bioconductor has its own way of dealing with packages; mostly you must first load its installer script, and then you can install packages by the biocLite() function, e.g.

biocLite(c("GenomicFeatures", "AnnotationDbi"))


DEBian packages

Sometimes it is quite fast to extract the content of a DEBian package and append it to the application that requires it. Download the package by apt-get, e.g.

apt-get download libargtable2-0

then you must extract it with ar and check the content, e.g.

ar -x libargtable2-0_12-1.1_amd64.deb; ls

Now you should extract the data file. If the data.* file has the xz suffix, use

unxz data.tar.xz
tar -xvf data.tar

The files are now extracted into relative paths starting from the current directory. The last step is just to copy the files (in the example, libraries) where you need them, e.g.

cp ./usr/lib/* /afs/

Do not forget to set LD_LIBRARY_PATH to the final directory with the .so files (here export LD_LIBRARY_PATH=/software/guenomu/201308/lib:$LD_LIBRARY_PATH after releasing the soft.guenomu AFS volume)!
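The relative-path layout of that extraction can be reproduced with a toy archive (the file is fake; a real data.tar is what comes out of the ar and unxz steps above):

```shell
# Toy reconstruction of a .deb data.tar: contents live under ./usr/lib,
# exactly as inside a real package ( is a fake stand-in).
mkdir -p pkg/usr/lib
echo 'not a real library' > pkg/usr/lib/
( cd pkg && tar -cf ../data.tar . )

# extracting it recreates the ./usr/... tree under the chosen directory
mkdir -p extracted
tar -xf data.tar -C extracted
ls extracted/usr/lib/
```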

Galaxy tools

The new Meta Galaxy supports easy installation with automatic dependency handling via Conda.

  • Go to the admin interface on the frontend
  • Select Tools and Tool Shed -> Search tool Shed and find your desired tool
  • Click on the tool and select Preview and install. In the top right corner you should see the Install to Galaxy button
  • You will be redirected to a "Not Found" page -- this is a bug and will hopefully be fixed soon, because the installation of the tool is supposed to begin automatically
  • Copy the given URL into a new tab and change http to https in the URL (if it does not start the installation, try it in Mozilla)
  • Installation should begin and it should handle all dependencies

MetaSW machine

For MetaCentrum staff, two private frontends are prepared (one with Debian 7 and one with Debian 8) with the same environment as the normal frontends. Any defined user who is able to log into these machines can then run the sudo su command to become root and modify the necessary things. The machines are also configured to load the meta-utils module at login.

Login and "accounting"

  • You should be able to log into the system when you are listed in the provozmeta:RT:metasw_watchers Perun group. So if you receive mails from the metasw RT queue, you should be able to log in to the metasw machines. You don't need any admin kerberos principal to log in; use your standard credentials. When you need to become root for some reason, just use the sudo su command.
  • You should use /scratch/$USERNAME for your work. The scratch dir is a special partition, so its exhaustion won't affect the machine's stability. After finishing your work, clean your scratch to avoid disk space exhaustion for others.

meta-utils module

There are some useful scripts to ease the metasw group's operation. Run them without parameters to get some help.

  • sro, srw, slo – change the target of the /software link to the AFS read-only (/afs/, read-write (/afs/ or local location (/scratch/software).
  •, – search for modulefiles in two different ways.
  • – set the afs crypt level on or off. AFS operations are faster without encryption, so turn it off for filesystem-intensive operations.
  •,,,, – operate with AFS volumes.
  •, – grant the specified user (or the metasw group) the specified rights to all subfolders of the current directory (the second script grants all rights to the metasw group without asking).
  • septik – copies all information from the specified PBS job to the current directory. You must be root to do that.


The machines are maintained with the puppet system. It runs by default every half an hour (x:00 and x:30), installs updates and resets the environment to the defaults; for example, it changes the /software link back to the RO AFS version. You can control puppet as root with these commands:

  • puppet-stop – stop puppet operation. You are not obliged to give a reason, but you can do so.
  • puppet-start – start puppet operation.
  • puppet-status – check the current status of puppet operation.
  • puppet-test – run puppet right now. Use --noop for a dry run only (no changes are made).

General installation advice

  • If a package with binaries is already installed in the system, you don't have to compile it again just for the headers. Install the -dev package via apt, compile your program, release it, and it will work on the public machines.


  • to see whether a binary was compiled with the -Wl,rpath= option, run: objdump -x binary_name | grep -i runpath
  • to get the "-fPIC" status of some object, try: readelf --relocs foo.o | egrep '(GOT|PLT|JU?MP_SLOT)'. It should print something if the object was compiled with the -fPIC option.
  • to get the list of symbols (function names and variables) in a library, try: nm library_name.a.