How to install an application
Metacentrum wiki is deprecated after March 2023
Dear users, due to the integration of MetaCentrum into https://www.e-infra.cz/en (e-INFRA CZ service), the documentation for users will change format and site. The current wiki pages won't be updated after the end of March 2023. They will, however, be kept for a few months for backwards reference. The new documentation resides at https://docs.metacentrum.cz.
Applications in MetaCentrum are organized into so-called modules. This guide covers situations when you want to add new software, or possibly a new version of existing software.
The ways to do it
Ask us to do it for you
If you are not familiar with compiling and installing software or if you run into problems while trying to install on your own, simply ask us for help at meta@cesnet.cz.
With this option:
- You don't need to worry about pretty much anything
- Depending on the amount of other user requests and the complexity of the installation, it may take some time (days, sometimes a week or two)
Install on your own to your home directory
If you are confident about your installation skills and if you need the new software only for yourself, install into your home directory.
With this option:
- You don't need to wait for User support to create a system volume for your software, to publish it, etc.
- You don't need to interact with User support in any way (unless you wish to :)
- You can't break much when working in your home directory under your user account, and if you do, it won't hurt other users
- The installation will not be 100 % robust, i.e. if the disk arrays hosting your home directory fail, the software will be unavailable
We recommend this option also for fine-tuning of the installation process and testing of new software.
Install on your own to system directories
If you are confident about your installation skills and if the software will be used by a group of users, install into system directories.
With this option:
- You will have to ask User support to create a disk volume for the new software, to publish it to read-only copies, etc.
- In general this process takes longer, as User support may not react immediately
- You will get a robust installation, independent of your user account and of the accessibility of your home directory, which can be used by any user of MetaCentrum
- All users will have access to this software automatically
Install into home directory: basic steps
- First, make sure you are familiar with how Application modules work.
- For compilation that is not resource-consuming (i.e., not lasting several hours), the frontends can be used; if a resource-consuming compilation is needed, ask for an interactive job instead.
- If you want to let another user use this software, make sure they have the right to read your modulefiles and execute the binaries (see the sketch after this list).
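For example, a minimal sketch using plain POSIX permissions (the paths are the example ones used below; on some storages you may need ACLs via setfacl instead):
# example only: open the installation and modulefile directories for reading/execution by others
chmod -R o+rX /storage/brno2/home/melounova/paup
chmod -R o+rX /storage/brno2/home/melounova/my_modules
# the parent directories must also be traversable (o+x) for the other user to reach these paths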
The process will be illustrated on a trivial example of installing a small piece of pre-compiled software.
Install the software
$ pwd                                 # where we are
/storage/brno2/home/melounova/paup
$ wget "http://phylosolutions.com/paup-test/paup4a168_centos64.gz"   # get the precompiled binaries of software called "paup"
$ gunzip paup4a168_centos64.gz        # unzip it
$ ls
bin                                   # there is only a bin directory
$ ls bin/
paup4a168_centos64.bin                # the bin/ directory contains only the executable
$ ln bin/paup4a168_centos64.bin bin/paup   # make a link to this executable with a simpler name
Write modulefile
$ cd /storage/brno2/home/melounova/my_modules   # directory where local modulefiles reside
$ vi paup-4a168                                 # make a modulefile paup-4a168 which contains the following
$ cat paup-4a168
#%Module1.0
#!
#! Title: PAUP*
#! Platforms: amd64_linux26
#! Version: 4a168
#! Description: PAUP* is a computational phylogenetics program for inferring evolutionary trees.
#!
#! Author: Anezka Melounova
#!
proc ModulesHelp {} {
    global ModulesCurrentModulefile
    puts stdout "modulehelp $ModulesCurrentModulefile"
}
set basedir /storage/brno2/home/melounova/paup
prepend-path PATH ${basedir}/bin
Modify MODULEPATH
Normally your home directory is not included in MODULEPATH, so modules there are not found. You can either modify the path on the command line
export MODULEPATH=$MODULEPATH:/storage/brno2/home/melounova/my_modules
or put the export command to your ~/.bashrc file, so that the path is exported automatically every time you log in.
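Put together, a short sketch of the whole check; the paths and module name are the examples used above:
echo 'export MODULEPATH=$MODULEPATH:/storage/brno2/home/melounova/my_modules' >> ~/.bashrc
source ~/.bashrc
module avail paup        # the new module should now be listed
module add paup-4a168
paup                     # the binary is found via the PATH prepended by the modulefile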
Install into system directories: basic steps
Installing into system directories is very similar to the case above, except that both the software and the modulefiles are released to read-only (RO) replicas of the filesystem. This ensures a more robust installation, but also brings additional steps into the process.
- First, make sure you are familiar with how Application modules work.
Ask User support for preparation of AFS volume
- write to meta@cesnet.cz
- specify your login and/or the logins of colleagues who will manage the application with you
- specify the name (NAME), version (VERSION) and expected size of the application
We will prepare a volume and a directory for the application at /afs/.ics.muni.cz/software/NAME/VERSION.
Install the application into /afs/.ics.muni.cz/software/NAME/VERSION
- use --prefix=/software/NAME/1.1 with the configure command (see the sketch after this list)
- for compilation that is not resource-consuming (i.e., not lasting several hours), the frontends can be used; if a resource-consuming compilation is needed, ask for an interactive job instead
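A minimal sketch of such a build from source, assuming a typical autotools-based package; NAME, the version 1.1, the tarball name and the scratch path are placeholders, and depending on the AFS setup you may need to install through the writable /afs/.ics.muni.cz/software path:
cd /scratch/$USER/build                 # build in a scratch directory with enough space
tar xzf name-1.1.tar.gz && cd name-1.1
./configure --prefix=/software/NAME/1.1
make -j 4
make install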
Typical structure of the installation folder is as follows:
- bin - binaries
- lib - libraries
- include - headers
- man - man pages
- doc - documentation
- examples - examples
- src - source codes
In addition, write down the installation steps in some file, e.g. /afs/.ics.muni.cz/software/NAME/VERSION/howto_install.txt. It may become useful in case of an upgrade or re-installation.
Release the application volume
Basically this means copying the current RW volume to RO replicas, which increases the availability and reliability of the software volumes in case of certain failures. At the same time, the main software directory becomes protected from accidental changes.
remctl kdccesnet.ics.muni.cz afs release soft.NAME
You can check the sites hosting the replicas by
fs where /software/NAME
Send us a modulefile for your application or ask us to prepare it
After the installation is ready, either prepare the modulefile and make it available to us, or ask us to prepare the modulefile for you.
- check Application modules
- look at the modules in /afs/ics.muni.cz/packages/amd64_linux26/modules-2.0/modulefiles/ for examples
- if you want to add some help to the module, available later via the 'module help' command, see similar files in the /afs/ics.muni.cz/packages/amd64_linux26/modules-2.0/helpfiles directory
- in order to prepare the modulefile for you, we'll need to know about the environment variables and modules your application requires to run properly
Release the modulefile
As in the case of the installed software, the modulefiles are kept in RO replicas for higher redundancy.
remctl kdccesnet.ics.muni.cz afs release packages.amd64_linux26
Feel free to write us in case of problems – we are here to help you!
General installation tips
In what follows we have collected various tips for installing software in general. Some may be relevant to your case, others may not.
General programming languages (C, C++, Fortran, ...)
Choosing the compiler
You should choose which compiler to use for building an application. There are three main compilers:
- GCC compiler – a free compiler which is usually the most compatible with the software being built. If possible, it is better to use the system version rather than one from a module.
- Intel compiler – a commercial compiler with excellent math libraries and optimization abilities.
- PGI compiler – another commercial compiler with optimized math libraries. Not so widely used in MetaCentrum because of the Intel architecture, but it will become more important once AMD clusters are included.
The suggested process is to first try the latest available Intel compiler (or the version required by dependencies) with "automatic CPU dispatch" set (see INTEL CDK) and, when it fails, use GCC.
Configuration tuning
Usually you need to configure the software prior to building it, and there are usually three different configuration mechanisms. Some advice for each is given below:
- General advice for using libraries
- use MKL (math libraries from Intel CDK) when possible
- use MPI when possible, preferably OpenMPI, but MVAPICH is also available (MVAPICH is MPICH with support for InfiniBand)
- use threads (OpenMP) when possible
- use CUDA in separate compilations
- avoid optimizations specific to the machine where you are building your application (-xHost and -fast flags)
- configure – use ./configure --help to get the available options. If no ./configure script is present, first try ./autogen.sh (for a newer-than-system version use the module autotools-2.26). Then follow the general advice above.
- cmake – first add one of the cmake modules (e.g. cmake-3.6.1). Then make a build directory (mkdir build && cd build) and run ccmake ../ to inspect and adjust the configuration options; all options are shown after pressing the "t" key. Then follow the general advice above. You can look up options in ccmake and then pass them on the command line with the -D prefix, like cmake -DCMAKE_INSTALL_PREFIX=/software/prg/version (see the sketch after this list).
- Makefile – sometimes all configuration is done only in a pre-generated Makefile. Edit it using your favourite editor (vim, nano, mcedit). Don't forget the general advice above.
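For illustration, a minimal sketch of an out-of-source cmake build; the module name, install prefix and parallelism are examples only:
module add cmake-3.6.1
mkdir build && cd build
ccmake ../                                        # press "t" to show all options
cmake -DCMAKE_INSTALL_PREFIX=/software/prg/version ../
make -j 4
make install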
Environment variables and Flags
There are some usual environment variables in which you can put some "flags" that influence the compilation or linking. The standard make rule for compiling a C (or C++ or Fortran) program is:
%.o: %.file_type
	$(Compiler) $(PreprocessorFLAGS) $(CompilerFLAGS) -c -o $@ $<
The corresponding compilers and the variables that influence their behavior are described in the following table:
Compiler | Preprocessor FLAGS | Compiler FLAGS |
---|---|---|
C | CPPFLAGS | CFLAGS |
C++ | CPPFLAGS | CXXFLAGS |
Fortran 77 | FPPFLAGS | F77FLAGS, FFLAGS |
Fortran 90 | FPPFLAGS | F90FLAGS, FFLAGS |
So if you use a C compiler (gcc, icc, pgcc) and want to influence the compilation phase, you should set some flags in the $CFLAGS variable. If you use a C++ compiler, use the $CXXFLAGS variable. If you use both and want to have some common flags, use the $CPPFLAGS variable. So for C/C++ projects you will normally need to use only CPPFLAGS for compilation.
Linker flags always go into $LDFLAGS. You should always put the -L library paths before the libraries being linked (-l flags).
Flags | What are they for | Example |
---|---|---|
Preprocessor FLAGS | Compiler-independent optimization and include paths. | CPPFLAGS="-I/software/prg1/include -I/software/prg2/include -O2 -msse -fPIC" |
Compiler FLAGS | Compiler-specific optimization and include paths. | CXXFLAGS="-I/software/prg1/include -I/software/prg2/include -O2 -msse -fPIC" |
Linker FLAGS (LDFLAGS) | Linker directives and library paths. | LDFLAGS="-L/software/prg1/version/lib -L/software/prg2/version/lib -lcrypt -lmkl_blas95_lp64 -lpthread /software/prg1/lib/libprg.a" |
If the program you are dealing with supports the pkg-config mechanism, it is a good idea to set $PKG_CONFIG_PATH as well, usually to /software/prg/version/lib/pkgconfig.
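For example, a sketch of exporting the flags before a configure run; all paths and library names here are placeholders:
export CPPFLAGS="-I/software/prg1/include -O2 -fPIC"
export LDFLAGS="-L/software/prg1/version/lib -lprg1"
export PKG_CONFIG_PATH=/software/prg1/version/lib/pkgconfig:$PKG_CONFIG_PATH
./configure --prefix=/software/NAME/VERSION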
Scripts for setting the flags
The module meta-utils provides set-* scripts for setting up certain compilation environments. Use them at least as an inspiration.
Math libraries introduction
Let's describe some relationships among the linear algebra libraries BLAS, LAPACK, BLACS and ScaLAPACK. BLAS is a dependency of LAPACK, and you cannot link LAPACK without BLAS. LAPACK, BLAS and BLACS (pBLAS) are dependencies of ScaLAPACK, and you should link them all if you are using ScaLAPACK. Note that BLACS (pBLAS) also depends on the MPI implementation, so choose the right library according to the MPI you are using (OpenMPI or M(VA)PICH). Math library linking examples are described on the INTEL CDK page.
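As an illustration only (the exact library names depend on the installed MKL version; see the INTEL CDK page for the recommended link lines), a sequential MKL link might look like this:
# hypothetical link line against MKL's BLAS/LAPACK with the Intel compiler
icc -o myprog myprog.c -L$MKLROOT/lib/intel64 \
    -lmkl_intel_lp64 -lmkl_core -lmkl_sequential -lpthread -lm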
Perl modules
First check if there is an already installed Perl package in one of our modules. New packages should be installed into the bioperl-1.6.1 or bioperl-1.6.9-gcc module.
To list all available Perl modules you can use the script perl_installed_modules.pl. Usage:
module add bioperl-1.6.1
perl_installed_modules.pl
Note: If you want to check the list of available modules for bioperl-1.6.1, it is necessary to replace the default perl-5.10.1 with a newer version, e.g. perl-5.20.1-gcc. Use the module rm and module add commands.
easiest way - using CPANMINUS tool
cpanm is a specialized tool for installing and uninstalling Perl packages from CPAN. It is available via modules. Use it like this:
Load the module bioperl-1.6.9-gcc or bioperl-1.6.1, then:
cpanm -l /specified/local/directory/ GD::Graph::bars   # install a Perl library and all of its dependencies into the specified directory
cpanm -L /specified/local/directory/ GD::Graph::bars   # the same, but also including the libraries already present in the system
After that, don't forget to point PATH and PERL5LIB to the bin and lib subdirectories of the specified directory to be able to use the installed binaries and libraries.
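For example (a sketch only; the exact lib subdirectory depends on your Perl version and the cpanm layout):
export PATH=/specified/local/directory/bin:$PATH
export PERL5LIB=/specified/local/directory/lib/perl5:$PERL5LIB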
easy way - using CPAN tool
Using the CPAN tool is quite an effective way to install Perl packages. To work well, it needs some configuration first. Example:
perl -MCPAN -e shell
o conf makepl_arg PREFIX=/afs/.ics.muni.cz/software/bioperl/1.6.9/gcc
o conf mbuildpl_arg "--prefix /afs/.ics.muni.cz/software/bioperl/1.6.9/gcc"
o conf build_dir /scratch/$LOGIN/.cpan/build
o conf commit
- Set the prefix according to the directory you are installing the software into.
- The build directory should be on some fast disk, so the best way is to use /scratch
- To acquire information about all CPAN tool settings, issue the o conf command without parameters.
The module installation is then simple:
m /regexp/                  - list available modules matching the case-insensitive regexp, or show exact module info
install module_name         - install the module with all dependencies
force install module_name   - continue even if an error occurs
?                           - help on using the CPAN tool
q                           - exit the CPAN tool
Don't forget to properly set the $PERL5LIB environment variable pointing to lib directory of installed module. In our case it is:
export PERL5LIB=/software/bioperl-1.6.1/lib/perl/5.10.1:$PERL5LIB
hard way - manual package installation
If everything goes wrong, you have the option to install the packages manually. This means downloading a package and all of its dependencies and then installing them in the proper order.
TOIN=/software/EXPECTED_FOLDER
perl Makefile.PL PREFIX=$TOIN
make
make test
make install
To test:
export PERL5LIB=$TOIN/lib/perl/5.10.1
perl
> use My::Module;
or
perl -e 'use Bio::SeqIO; print join "\n", %INC; print "\n"'
Python packages
easy way - using PyPI tool
The pip tool (for installing packages from PyPI) is part of the pythonX-modules, for example
module add python27-modules-gcc
or newer
module add python36-modules-gcc
If you don't like our pip installation and want your own, just get the get-pip.py script from the pip project and then invoke
python get-pip.py --root /some/new/user/specific/directory
PyPI operation
pip search <module name>
pip install <module name> --root /some/user/specific/python/modules/folder    # install everything relative to this alternate root directory
pip install <module name> --prefix /some/user/specific/python/modules/folder  # installation prefix under which lib, bin and other top-level folders are placed
pip install git+https://path/to/git/file
Don't forget to properly set the $PATH and $PYTHONPATH environment variables if you are not using one of our python-modules and are installing packages into some new directory. For details see the hard way chapter.
A brief set of commands showing how to install a python package (for example nata) into a user-specific directory (/storage/brno2/home/user_name/python_pip_test):
module add python36-modules-gcc
mkdir python_pip_test    # create a new folder for python packages
pip3 install nata --root /storage/brno2/home/user_name/python_pip_test/
export PYTHONUSERBASE=/storage/brno2/home/user_name/python_pip_test/software/python-3.6.2/gcc
export PATH=$PYTHONUSERBASE/bin:$PATH
export PYTHONPATH=$PYTHONUSERBASE/lib/python3.6/site-packages:$PYTHONPATH
Sometimes it helps to specify more options, e.g.
pip install -v --upgrade nata --install-option="--prefix=/storage/brno2/home/user_name/python_pip_test/ " --root /storage/brno2/home/user_name/python_pip_test/ --ignore-installed
Detailed walkthrough - using PyPI tool
A very convenient feature is the --user option of pip install. This installs modules, in addition to those available in the system python installation, into the location defined by the PYTHONUSERBASE environment variable. A convenient choice for this variable is a location visible from the NFSv4 infrastructure, which means you could use, for example, export PYTHONUSERBASE=/storage/home/<user_name>/.local
If you install modules at this location, you will also need to add them to your PATH and PYTHONPATH so that they are accessible from any folder in which you wish to execute your code. For this purpose, export PATH=$PYTHONUSERBASE/bin:$PATH and export PYTHONPATH=$PYTHONUSERBASE/lib/pythonX.Y/site-packages:$PYTHONPATH (with your Python version substituted) will do the job.
If you wish to execute such commands at each login on a frontend, add the following lines to your .profile:
module add python27-modules-intel
# Set pip path for --user option
export PYTHONUSERBASE=/storage/plzen1/home/<user_name>/.local
# set PATH and PYTHONPATH variables
export PATH=$PYTHONUSERBASE/bin:$PATH
export PYTHONPATH=$PYTHONUSERBASE/lib/python2.7/site-packages:$PYTHONPATH
With this, you can install any module you need with the following command:
pip install <module-name> --user --process-dependency-links
without any need for administrator rights, and you will be able to use it. When launching jobs via the scheduler, remember that your .profile is not executed; you will therefore need to do the module add and define the relevant environment variables inside the job script before your code is actually executed.
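A minimal sketch of such a job script, reusing the module name and paths from the example above (the resource request and the script name my_script.py are placeholders):
#!/bin/bash
#PBS -l select=1:ncpus=1:mem=1gb
#PBS -l walltime=1:00:00
module add python27-modules-intel
export PYTHONUSERBASE=/storage/plzen1/home/<user_name>/.local
export PATH=$PYTHONUSERBASE/bin:$PATH
export PYTHONPATH=$PYTHONUSERBASE/lib/python2.7/site-packages:$PYTHONPATH
cd $PBS_O_WORKDIR          # directory from which the job was submitted
python my_script.py        # your own code goes here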
another easy way - using Conda
Conda is an open-source, cross-platform, language-agnostic package manager and environment management system. MetaCentrum users can use conda to create new environments on their own and install application tools from various channels. The most straightforward way to install a required tool is via the general module conda-modules-py37, with the environment placed in the user's home directory.
module add conda-modules-py37
conda --help
conda create --help
conda install --help
... etc
The following tutorial will briefly explain all the necessary steps on how to create and activate a new conda environment and install the selected application tool. As an example, we will install the BLAST tool from the Bioconda channel. Detailed information can be found in the official documentation.
module add conda-modules-py37
conda create --prefix /storage/city/home/user_name/my_blast_env
First of all, load the module conda-modules-py37. The second command will create a new environment (basically a new folder with some necessary components) named my_blast_env in the specified path. Absolute or relative paths can be used, and the folder name can be changed at will. The default python version is 3.6; if needed, a different python version can be requested with the python flag, e.g. conda create --prefix ... my_blast_env python=3.10. When the new environment is created, it has to be activated before the installation.
conda activate /storage/city/home/user_name/my_blast_env
conda install -c bioconda blast
After the installation, everything is ready, and a new tool can be immediately used. When the calculation is finished, the loaded environment should be deactivated.
blastn -db DATABASE_NAME -query INPUT_FASTA -out OUTPUT_NAME ...
conda deactivate
Later on, within interactive and/or batch jobs, just activate the already existing environment and start the calculation.
module add conda-modules-py37
conda activate /storage/city/home/user_name/my_blast_env
blastn ...
conda deactivate
Alternatively, the creation of a new environment and the installation can be done with a single command.
conda create -n my_blast_env -c bioconda blast
In this case, the environment can only be created without a path specification (the / character is not allowed in the name) and will be placed in the home directory in a hidden folder .conda.
All available environments (prepared by MetaCentrum admins and by user) can be listed by the command:
conda env list
If for some reason the general conda installation is not suitable, users can use a local installation of the Miniconda client. Miniconda is a free minimal installer for conda: a small, bootstrap version of Anaconda that includes only conda, Python, the packages they depend on, and a small number of other useful packages, including pip, zlib and a few others.
wget https://repo.anaconda.com/miniconda/Miniconda3-py39_4.12.0-Linux-x86_64.sh
bash Miniconda3-py39_4.12.0-Linux-x86_64.sh
# and follow the interactive installation procedure
Miniconda contains the conda package manager and Python. Once Miniconda is installed, you can use the conda command to install any other packages and create environments as usual.
/storage/city/home/user_name/miniconda3/bin/conda --help
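For illustration, a sketch of creating and using an environment with the locally installed Miniconda (paths, environment name and package are examples only):
/storage/city/home/user_name/miniconda3/bin/conda create -n my_env python=3.9
source /storage/city/home/user_name/miniconda3/bin/activate my_env
conda install -c bioconda blast
conda deactivate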
During the installation of huge and complex packages, native conda can be very slow, especially in the "Solving environment" phase. To speed up the entire installation process, users can use the mamba installer, which is a reimplementation of the conda package manager with the same functionality and syntax but much faster. There are three easy ways to use mamba (instead of conda) or micromamba (instead of Miniconda), respectively.
1) Usage of mamba in conda environments
A quite direct usage of mamba is to create a conda environment as described before and install mamba prior to the installation of the required tool.
module add conda-modules-py37
conda create --prefix /storage/city/home/user_name/my_env
conda activate /storage/city/home/user_name/my_env
conda install -c conda-forge mamba
mamba install some_tool
# perform your calculation
conda deactivate
The installation of mamba itself is fast, and the subsequent speed-up is significant. For the installation of other tools, just replace the conda command with mamba; the rest of the syntax is the same. 100% compatibility between conda and mamba cannot be guaranteed, but in most cases it works well.
2) Mamba from the module
Alternatively, users can use mamba directly from the module mambaforge-22.9.0:
module add mambaforge-22.9.0
mamba create ...
3) Local installation of micromamba
Micromamba supports a subset of all mamba or conda commands and is distributed as a stand-alone precompiled binary. Basic but fully functional usage is as follows. The initial local installation of micromamba can be skipped and replaced by the MetaCentrum module micromamba-1.1.0.
curl micro.mamba.pm/install.sh | bash
source ~/.bashrc
# this will download and configure micromamba and activate changes in the user's local bashrc
# by default a new folder micromamba (set as the variable MAMBA_ROOT_PREFIX) will be created, where new environments will be stored
# or use module add micromamba-1.1.0
micromamba info
# show information about micromamba configuration
micromamba config append channels conda-forge
micromamba config append channels bioconda
# for more convenient future usage, the user can set some default channels; conda-forge and bioconda are pretty popular
micromamba create -n my_new_env
# create a new and empty environment; the name can be changed at will
micromamba env list
# show a list of all available environments
micromamba activate my_new_env
# activate the selected environment
micromamba install blast=2.12.0
# now it is possible to install specific tool, for example blast version 2.12.0
# if the appropriate channels were not previously set as default, they would have to be additionally specified using the -c flag in micromamba install command
micromamba install -c bioconda -c conda-forge blast=2.12.0
blastn --help
# run the calculation
micromamba deactivate
# leave the activated environment
hard way - manual package installation
If everything goes wrong, you have the option to install the packages manually. This means downloading a package and all of its dependencies and then installing them in the proper order.
TOIN=/software/EXPECTED_FOLDER
python setup.py install --install-scripts=$TOIN/bin/ --install-purelib=$TOIN/lib --install-lib=$TOIN/lib
To test the installation:
export PYTHONPATH=$TOIN/lib:$PYTHONPATH
export PATH=$TOIN/bin:$PATH
python -c "import package;"
Sometimes it is necessary to export PYTHONPATH before the installation as well, but it must definitely be exported before using the package.
Lua/Torch rocks
User specific Lua library
To add a new lua/torch rock, you can use your own directory as a personal library of rocks, e.g.
module add torch7cuda-deb8
mkdir /storage/plzen1/home/$USER/.luarocks
luarocks install elfs --tree=/storage/plzen1/home/$USER/.luarocks
And then check it:
th
th> require 'elfs'
System Lua library
To add a rock to the system library:
module add torch7cuda-deb8
luarocks install NAME
then check it by
luarocks list | grep -A 1 NAME
or by
th
require 'NAME'
Lua complications and links
Some rocks need the cmake module added for their installation, together with extra settings pasted into the environment, e.g.
module add cmake-3.2.3
export CMAKE_INCLUDE_PATH=/software/mkl-11.0/composer_xe_2013.0.079/mkl/include:/software/torch/20160425-deb8/include:$CMAKE_INCLUDE_PATH
export CMAKE_LIBRARY_PATH=/software/mkl-11.0/composer_xe_2013.0.079/mkl/lib/intel64:/software/torch/20160425-deb8/lib:$CMAKE_LIBRARY_PATH
If you get errors about a non-existing file in tmp or an exceeded quota, redirect the cmake directory to $SCRATCHDIR.
If you already have a rockspec, you can just run
luarocks make stnbhwd-scm-1.rockspec
To check the list of already installed rocks, run
luarocks list
To search for rocks:
luarocks search NAME
Links
- Check our application page about Lua/Torch
R packages
User specific R library
Everyone can easily create their own R package library; you just need some folder (ideally on the /storage tree), e.g. /storage/brno6/home/$LOGNAME/Rpackages/ ($LOGNAME is your login in MetaCentrum). Then you can install a package by
R
>install.packages("PACKAGE_NAME",lib="/storage/brno6/home/$LOGNAME/Rpackages/")
and load such a package by
>library(PACKAGE_NAME,lib.loc="/storage/brno6/home/$LOGNAME/Rpackages/")
To set the directory as the default, you need to set the R_LIBS_USER variable before running R. To check the installation paths, just run R and call the .libPaths() function; you then no longer need to specify the location of the directory with your own packages:
export R_LIBS_USER="/storage/brno6/home/$LOGNAME/Rpackages"
R
> .libPaths()
[1] "/auto/brno6/home/$LOGNAME/Rpackages"
[2] "/afs/ics.muni.cz/software/R-3.1.0/lib/R/library"
> install.packages("PACKAGENAME")
> library("PACKAGENAME")
System R library
Mostly you can follow the User specific R library section, but you need to set R_LIBS to the /afs/.ics.muni.cz/... tree, e.g.
module add R-3.4.0-gcc
R_LIBS=/afs/.ics.muni.cz/software/R/3.4.0/gcc/lib/R/library R
> install.packages("PACKAGENAME")
and release R.soft AFS volume.
R complications and links
Some R packages require libraries in the system (e.g. rgdal, mpi); then you must get them (e.g. add them as a module, download them as DEB packages, or ask MetaProvoz to install them).
Bioconductor has its own way of dealing with packages; mostly you must first load its repository by
source("https://bioconductor.org/biocLite.R") biocLite()
then you can install packages with the biocLite() function, e.g.
biocLite(c("GenomicFeatures", "AnnotationDbi"))
Links
- Check our application page about R module.
- See the CRAN task view High-Performance and Parallel Computing with R (https://cran.r-project.org/web/views/HighPerformanceComputing.html)
DEBian packages
Sometimes it is quite fast to extract the content of a Debian package and append it to the application that requires it. Download the package with apt-get, e.g.
apt-get download libargtable2-0
then you must extract it with ar and check the content, e.g.
ar -x libargtable2-0_12-1.1_amd64.deb; ls
Now you should extract the data file. If the data.* file has an xz suffix, use
unxz data.tar.xz
tar -xvf data.tar
The files are now extracted into relative paths starting from the current directory. The last step is just to copy the files (in the example, libraries) where you need them, e.g.
cp ./usr/lib/libargtable2.so.0* /afs/.ics.muni.cz/software/guenomu/201308/lib/
Do not forget to set LD_LIBRARY_PATH to the final directory with the .so files (here export LD_LIBRARY_PATH=/software/guenomu/201308/lib:$LD_LIBRARY_PATH after releasing the soft.guenomu AFS volume)!
Galaxy tools
The new MetaCentrum Galaxy supports easy tool installation with automatic dependency handling via Conda.
- Go to the admin interface on frontend
- Select Tools and Tool Shed -> Search tool Shed and find your desired tool
- Click on the tool and select Preview and install. In the top right corner you should see Install to Galaxy button
- You will be redirected to a "Found" page -- this is a bug that will hopefully be fixed soon, because the installation of the tool is supposed to begin automatically
- Copy the given URL into a new tab and change http to https in the URL (if it does not start the installation, try it in Mozilla)
- Installation should begin and it should handle all dependencies
Puppet
The machines are maintained with the Puppet system. It runs by default every half an hour (at x:00 and x:30), installs updates and resets the environment to defaults; for example, it changes the /software link back to the RO AFS version. As root, you can control Puppet with these commands:
- puppet-stop – stop puppet operation. You are not obliged to give a reason, but you can do so.
- puppet-start – start puppet operation
- puppet-status – check the current status of puppet operation.
- puppet-test – run puppet right now. Use --noop for a dry run only (no changes are made).