Gromacs


Description

GROMACS is a program package for molecular mechanics and dynamics. It covers two basic tasks:

  • 1. Energy minimization of a system.
  • 2. Dynamic behaviour of molecular systems.

Using these basic approaches one can easily define tasks concerning the response of a system at the molecular level. The mechanical response of a system in equilibrium after a deformation is one example. Another area is drug design: for a drug to be active, its structure has to be complementary to the active site in a cell (the key-lock principle). Diffusion is an example of a dynamic-behaviour study. One may also need to obtain several properties of a system, such as pressure, compressibility factor or heat capacity. Two restrictions have to be taken into account: the real time of the simulation (nanoseconds) and the number of atoms (about 10,000).

License

The program is available under the GPL license.

Usage

Upcoming module system change alert!

Due to the large number of applications and their versions it is not practical to keep them explicitly listed on our wiki pages. Therefore an upgrade of the modulefiles is underway. A feature of this upgrade is the existence of a default module for every application. This default choice does not need a version number and loads some (usually the latest) version.

You can test the new version now by adding a line

source /cvmfs/software.metacentrum.cz/modulefiles/5.1.0/loadmodules

to your script before loading a module. Then you can list all versions of gromacs and load the default version of gromacs as

module avail gromacs/ # list available modules
module load gromacs   # load (default) module


If you wish to keep using the current system, it is still possible. Simply list all modules by

module avail gromacs

and choose the explicit version you want to use.
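For example, to load one of the explicitly versioned modules listed below:

module add gromacs-5.1.3-gpu-mpi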

For a list of newer Gromacs modules, see the output of the module avail gromacs command.

  • version 2018.6
    • gromacs-2018.6-gpu-mpi – installation with support of MPI and GPU
    • gromacs-2018.6-plumed-2.4-gpu-mpi – installation with support of MPI, GPU and PLUMED extension
  • version 2016.4
    • gromacs-2016.4-phi – installation with support of phi computing
    • gromacs-2016.4-gpu-mpi – installation with support of MPI and GPU
    • gromacs-2016.4-plumed-2.4-gpu-mpi – installation with support of MPI, GPU and PLUMED extension
    • gromacs-2016.4-plumed-2.4-phi – installation with support of phi computing and PLUMED extension
  • version 5.1.3
    • gromacs-5.1.3-mpi – installation with support of MPI computing
    • gromacs-5.1.3-gpu-mpi – installation with support of MPI and GPU
    • gromacs-5.1.3-gpu-mpi-plumed – installation with support of MPI, GPU and PLUMED extension
  • version 5.1.1
    • gromacs-5.1.1 – basic installation
    • gromacs-5.1.1-mpi – installation with support of MPI computing
    • gromacs-5.1.1-plumed – installation with support of PLUMED extension
    • gromacs-5.1.1-plumed-mpi – installation with support of MPI and PLUMED extension
  • version 5.0.5
    • gromacs-5.0.5-gpu – installation with GPU-computing support
  • version 5.0.4
    • gromacs-5.0.4-plumed-2.2-gcc-mpi – installation with MPI-computing support and PLUMED extension, compiled with gcc
    • gromacs-5.0.4-plumed-2.2-intel-gpu – installation with GPU-computing support and PLUMED extension, compiled with intel
    • gromacs-5.0.4-plumed-2.2-intel-mpi – installation with MPI-computing support and PLUMED extension, compiled with intel
  • version 4.6.1 (thanks to O. Kroutil for compilation) – modules:
    • gromacs-4.6.1
    • gromacs-4.6.1-parallel (with MPI support)
    • gromacs-4.6.1-gpu (with GPU-computing support)
    • gromacs-4.6.1-plumed (GPU-computing and MPI support, compiled with PLUMED extension)
    • gromacs-4.6.1-plumed_d (GPU-computing and MPI support with double precision, compiled with PLUMED extension)
    • gromacs-4.6.1qmmm (with MPI support)
  • version 4.6.5
    • gromacs-4.6.5 – basic installation with support of GPU computing and PLUMED extension
  • version 4.5.5
    • gromacs-4.5.5
    • gromacs-4.5.5-parallel
    • gromacs-4.5.5qm

Notice: This application supports parallel computing (MPI, OpenMP), which has specific requirements on how jobs are submitted. For more details about parallel computing visit the page Parallelization.

WARNING: Gromacs requires AVX instructions. If you get an Illegal instruction error, ask for a machine with the avx property.
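For example, with the Torque-style syntax used elsewhere on this page, the avx property might be requested roughly as follows (a sketch only; the script name is illustrative and the resource request has to be adjusted to your job):

qsub -l nodes=1:ppn=4:avx my_job.sh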

Since version 5.1.x the standalone mdrun command no longer works; use the gmx command instead. See the official page for more info.
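With version 5.1.x and newer the individual tools are run as subcommands of gmx. A minimal sketch using the default file names mentioned below:

gmx grompp -f grompp.mdp -c conf.gro -p topol.top
gmx mdrun -s topol.tpr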

Initialize the environment by loading the appropriate module (see above) and run the program. A basic GROMACS run needs two executables: grompp, which preprocesses (compiles) the task, and mdrun, which runs it. These executables in turn need input files: a file with the coordinates of all atoms (*.gro suffix), a topology file (*.top suffix) describing the bonds between the atoms, and a file with the simulation parameters (*.mdp suffix).

Example of a task:

./grompp -f ParametersOfSimulation.mdp -c Axis.gro -p Topology.top
./mdrun -s Topology.tpr -x Trajectory.xtc -o Trajectory.trr -e Energy.edr

Specifying the file names is not compulsory. If you do not supply them, the program automatically works with files called grompp.mdp, conf.gro, topol.top, topol.tpr, traj.xtc and ener.edr. The .gro, .top and .mdp files must be present in the working directory; if the program does not find them, it reports an error.
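For illustration, a minimal sketch of an *.mdp parameter file follows; the values are examples only and have to be adjusted to your system:

; illustrative minimal .mdp sketch
integrator      = md        ; leap-frog molecular dynamics
dt              = 0.002     ; time step in ps
nsteps          = 50000     ; number of steps (100 ps in total)
cutoff-scheme   = Verlet
coulombtype     = PME       ; long-range electrostatics
tcoupl          = V-rescale ; temperature coupling
tc-grps         = System
tau-t           = 0.1
ref-t           = 300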

The gromacs module gives access to the default single-precision programs and to double-precision programs with the _d suffix. The required MPI library is included in the module gromacs-2016.4-gpu-mpi. The parallel version of mdrun is also marked by the suffix _mpi (_mpi_d for double precision). It supports integration with PBS (it detects the allocated nodes itself, and when a job is killed it kills all processes on all machines) and automatically chooses the best available interconnect (InfiniBand or Myrinet, where available). So to run a job in PBS you need:

module add gromacs-parallel
mpirun mdrun_mpi ...

Limitation for parallel calculations: If your system supports threading, mdrun will be compiled with thread support and the -nt option, which can then be used to control parallel calculations. If you do not use -nt, GROMACS will use the maximum number of threads it thinks is available (see http://www.gromacs.org/Documentation/Terminology/Threading)!
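For example, to restrict a threaded mdrun to the number of CPUs requested from PBS (the value 4 is illustrative and should match your ppn/ncpus request):

mdrun -nt 4 -s topol.tpr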

Using GPU

#!/bin/bash
#PBS -l select=1:ncpus=2:ngpus=1:mpiprocs=1:mem=10gb:scratch_local=10gb
#PBS -q gpu@meta-pbs.metacentrum.cz
#PBS -l walltime=24:0:0
#PBS -N My_job_name
#PBS -j oe

trap "clean_scratch" TERM EXIT

# let's add the gromacs module
module add gromacs/gromacs-2020.3-intel-19.0.4-gpu-krnajqz

# let's set the necessary variables
DATADIR="$PBS_O_WORKDIR"

# let's copy the input data
cp $DATADIR/{pme_verlet.mdp,conf.gro,topol.top,topol.tpr} $SCRATCHDIR

# let's change the working directory
cd $SCRATCHDIR

# let's perform the computation
gmx_mpi mdrun ...

# let's gzip and copy-out the result
tar czf output.tgz ./*

cp output.tgz $DATADIR || export CLEAN_SCRATCH=false
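The script above is then submitted as an ordinary batch job (the file name is illustrative):

qsub my_gromacs_gpu_job.sh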

MPI parallelization notes

MPI and OpenMP combination

In some situations you can accelerate your computation by combining MPI and OpenMP. See the more detailed documentation for the exact cases. Usage example:

mpirun -env OMP_NUM_THREADS $TORQUE_RESC_PROC -np $TORQUE_RESC_TOTAL_NODECT -rmk user -hosts $(echo $(sort -u < $PBS_NODEFILE) |sed 's/ /,/g') mdrun_mpi -s job.tpr -maxh 23.

Large scale computations

If you need to run a large computation across many nodes, it is better to use the smallest number of identical machines rather than spreading the job across different hardware nodes. Reserve whole machines with the #excl qsub option. It is also possible to pin the threads to specific processor cores (option -pin on). On a cluster with HyperThreading you can use the doubled (logical) core count. For example:

$ qsub -l nodes=2:ppn=1:cl_luna#excl ...
mpirun -env OMP_NUM_THREADS $((2*TORQUE_RESC_PROC)) -np $TORQUE_RESC_TOTAL_NODECT -rmk user -hosts $(echo $(sort -u < $PBS_NODEFILE) |sed 's/ /,/g') mdrun_mpi -pin on -s job.tpr -maxh 23.

or

mpirun -env OMP_NUM_THREADS 1 -np $((2*TORQUE_RESC_PROC*TORQUE_RESC_TOTAL_NODECT)) -rmk user -hosts $(echo $(sort -u < $PBS_NODEFILE) |sed 's/ /,/g') mdrun_mpi -pin on -s job.tpr -maxh 23.

Known issues

The new version of Gromacs 4.5.5 uses different naming conventions (e.g. use NA instead of NA+).

Documentation

Manual pages are available on the software web page: http://www.gromacs.org/Documentation

Homepage

http://www.gromacs.org/