Related topics
Available applications
Working with data
Scheduling system
[MetaCloud video tutorial]
PBS Pro Quick Start for ELIXIR-CZ [PDF]
ELIXIR CZ IT services leaflet [PDF]
Hands-on workshop on the ELIXIR CZ conference [PDF]

The VO Elixir was created to support bioinformatics researchers connected in any way with the ELIXIR project.

ELIXIR CZ operates an infrastructure of computing and storage resources. This page provides an overview of how to apply for access to the resources and where to get further support.

ELIXIR CZ clusters and storage facilities are currently being purchased by the project. They will be fully integrated into the MetaCentrum and CERIT-SC infrastructures. For the time being, by applying for the resources you will get access to the general MetaCentrum and CERIT-SC infrastructures. MetaCentrum and CERIT-SC resources will remain available to ELIXIR users even after the dedicated ELIXIR resources become available, and the principles and tools used to access them will not change either. There are several machines in MetaCentrum with prioritised and/or dedicated access for ELIXIR CZ members.

How to register to get access

Access to all available services requires registration into a special group.

  • Registration form (standard ELIXIR CZ account) – for access to ELIXIR and MetaCentrum/CERIT-SC resources: https://perun.cesnet.cz/elixircz/registrar/?vo=elixir-cz&group=cz-users, follow instructions provided there.
    • Authentication via eduID.cz or eduGAIN is recommended for full access to ELIXIR and MetaCentrum/CERIT-SC services. Use authentication via ELIXIR ID only if you cannot use eduID.cz or eduGAIN and your ELIXIR ID account is associated with an academic institution; otherwise you will get access to ELIXIR resources only (a limited account).

Who can apply, terms and conditions

Unrestricted membership in this group is available to persons from the academic environment of the Czech Republic and/or their research partners from abroad whose research objectives are directly related to ELIXIR activities. Members of the computation group are required to acknowledge the provided resources in their publications:

Acknowledgement formula

Users of the MetaCentrum/CERIT-SC resources offered to the ELIXIR community are obliged to use the following acknowledgement formula in all publications created with the support of CESNET/CERIT-SC/ELIXIR CZ:

Computational resources were provided by the ELIXIR-CZ project (LM2015047), part of the international ELIXIR infrastructure.

Optional formula when general MetaCentrum and CERIT-SC resources have been used:

Computational resources were provided by the CESNET LM2015042 and the CERIT Scientific Cloud LM2015085, provided under the programme "Projects of Large Research, Development, and Innovations Infrastructures".

Users are also free to acknowledge all the infrastructures mentioned above when appropriate.

Services available for ELIXIR CZ

NGI HPC Computing and storage resources

Running job on Elixir nodes or in MetaCentrum/CERIT-SC

A PBS Pro server (batch system) is running on all nodes.

The step-by-step tutorial How to compute/Quick start shows how to run a job on MetaCentrum/CERIT-SC machines.

See also the How to compute topic for more information, or the PBS Pro Quick Start [PDF] to run an example job. There is also a video tutorial covering this topic (it works with PuTTY on Windows).

To access the priority "elixircz" queue (https://metavo.metacentrum.cz/pbsmon2/queue/elixircz), you must specify the queue name and walltime in the qsub command:

$ qsub -q elixircz@arien-pro.ics.muni.cz -l select=1:ncpus=2:mem=2gb:scratch_local=1gb -l walltime=24:00:00 script.sh

Start with MetaCloud

Technical support

Should you run into any trouble applying for the resources or using them, or if you just need advice regarding anything not mentioned here, please feel free to write an email to the MetaCentrum/CERIT-SC/ELIXIR CZ support staff (directed into a request tracking system):

mailto: support@elixir-czech.cz

Current dedicated hardware for ELIXIR

New nodes from ELIXIR CZ project, in operation from March 2018:

  • elmo1.hw.elixir-czech.cz - 224 CPU in total, SMP, 4 nodes with 56 CPUs, 768 GB RAM (Praha UOCHB)
  • elmo2.hw.elixir-czech.cz - 96 CPU in total, HD, 4 nodes with 24 CPUs, 384 GB RAM (Praha UOCHB)
  • elmo3.hw.elixir-czech.cz - 336 CPU in total, SMP, 6 nodes with 56 CPUs, 768 GB RAM (Brno)
  • elmo4.hw.elixir-czech.cz - 96 CPU in total, HD, 4 nodes with 24 CPUs, 384 GB RAM (Brno)
  • elmo5.hw.elixir-czech.cz - 896 CPU in total, HD, 27 nodes with 24 CPUs, 192 GB RAM (Brno)

frontend: http://elmo.elixir-czech.cz/ - designed for Apache, Mascot, Samba


OLD: 3(4) nodes at UOCHB AV ČR Praha (http://www.uochb.cas.cz), in operation since 12 March 2012

  • OLD elixir.grid.cesnet.cz: front-end, 24 cores AMD Opteron(TM) Processor 6238, 128GB RAM, 2x600GB SCSI disks (with 966 GB for scratch)

- designed for apache, mascot, samba

  • OLD elixir-comp.grid.cesnet.cz: computational node, 48 cores AMD Opteron(TM) Processor 6238 (2.9GHz), 512GB RAM, 2x600GB SCSI disks (with 966 GB for scratch)
  • OLD STORAGE elixir-data.grid.cesnet.cz -- /storage/praha2-elixir/home (95TB): old fileserver, 16 cores AMD Opteron(TM) Processor 6220 @ 3.3GHz, 128GB RAM, 2x250GB SCSI disks (with 326 GB for scratch), with a 95TB RAID5 disk array connected via an Areca 1690ix-12 SAS RAID controller

- designed for databases (postgre, mysql)

  • NEW STORAGE elixir-storage1.grid.cesnet.cz (storage1.elixir-czech.cz) -- /storage/praha5-elixir/home (142TB): new fileserver, 40 cores Intel(R) Xeon(R) CPU E5-2630 v4 @ 2.20GHz, 132 GB RAM, spinning disks in RAID60 (142T, /mnt/data), SSD disks in RAID6 (OS /dev/sda 372G LV, DB /dev/sdc 372G /mnt/db)
      • Supermicro X10DRi, 2x CPU E5-2630 v4 @ 2.20GHz, 128 GB memory
        • 142 TB storage (28x 6TB raid 60), 4x 400GB SSD
      • elixir-storage1-ipmi.grid.cesnet.cz
      • elixir-switch.grid.cesnet.cz
      • elixir-nexus.grid.cesnet.cz

Other components:

  • rack
  • KVM CS-1708i, 8 ports
  • InfiniScale IV IS5023Q IB-QDR switch
  • Supermicro SSE-G24-TG4 L3 switch - 24GbE, 4xSFP+/C
  • Box (JBOD) SAS/SATA, rPS, with 62 x Seagate Constellation ES - 2TB/7200rpm/SAS

IPMI interfaces and the InfiniBand switch (addresses are {4,5,6}). Router, KVM, and switch (Supermicro SSE-G24-TG4). There is a firewall at UOCHB.

1 node in PRF.JCU Ceske Budejovice (http://www.prf.jcu.cz)

  • hagrid.prf.jcu.cz: computational node, 20 cores Intel Xeon 4850 @ 2GHz, 512GB RAM, 1.1 TB scratch (6 x 220GB SSD in RAID5), home from /storage/budejovice1 (!)

iDRAC, switch

Current usage at https://metavo.metacentrum.cz/pbsmon2/nodes/physical

Usage of space (from 2016) - elixir


Old documentation



  • Jiří Vondrášek (UOCHB AV ČR Praha)
  • Jiří Vohradský (MBÚ AV ČR)
  • Jan Pačes (UMG AV ČR Praha)
  • Miroslav Ruda (CESNET/CERIT-SC)
  • Aleš Křenek (CESNET/CERIT-SC)
  • Pavel Fibich (CESNET) - technically responsible person, MetaCentrum side
  • Jiri Polach (UOCHB) - technically responsible person, UOCHB side