Gromacs

Description

GROMACS is a program package for molecular mechanics and dynamics that supports two basic types of tasks:

  • 1. Energy minimization of a system.
  • 2. Simulation of the dynamic behaviour of molecular systems.

Using these basic approaches, one can easily define tasks concerning the response of a system at the molecular level. An example is the mechanical response of a system in equilibrium after deformation. Another application area is drug design: for a drug to be active, its structure must be complementary to the active site in a cell (the key-lock principle). Diffusion is an example of studying dynamic behaviour. One may also need to obtain properties of a system such as pressure, compressibility factor, or heat capacity. Two restrictions have to be taken into account: the real time covered by the simulation (nanoseconds) and the number of atoms (on the order of 10,000).

Availability

  • version 5.1.3
    • gromacs-5.1.3-mpi – installation with support of MPI computing
    • gromacs-5.1.3-gpu-mpi – installation with support of MPI and GPU
    • gromacs-5.1.3-gpu-mpi-plumed – installation with support of MPI, GPU and PLUMED extension
  • version 5.1.1
    • gromacs-5.1.1 – basic installation
    • gromacs-5.1.1-mpi – installation with support of MPI computing
    • gromacs-5.1.1-plumed – installation with support of PLUMED extension
    • gromacs-5.1.1-plumed-mpi – installation with support of MPI and PLUMED extension
  • version 5.0.5
    • gromacs-5.0.5-gpu – installation with GPU-computing support
  • version 5.0.4
    • gromacs-5.0.4-plumed-2.2-gcc-mpi – installation with MPI-computing support and PLUMED extension, compiled with gcc
    • gromacs-5.0.4-plumed-2.2-intel-gpu – installation with GPU-computing support and PLUMED extension, compiled with intel
    • gromacs-5.0.4-plumed-2.2-intel-mpi – installation with MPI-computing support and PLUMED extension, compiled with intel
  • version 4.6.5
    • gromacs-4.6.5 – basic installation with support of GPU computing and PLUMED extension
  • version 4.6.1 (thanks to O. Kroutil for compilation) – modules:
    • gromacs-4.6.1
    • gromacs-4.6.1-parallel (with MPI support)
    • gromacs-4.6.1-gpu (with GPU-computing support)
    • gromacs-4.6.1-plumed (GPU-computing and MPI support, compiled with PLUMED extension)
    • gromacs-4.6.1-plumed_d (GPU-computing and MPI support with double precision, compiled with PLUMED extension)
  • version 4.5.5 – module gromacs-4.5.5 and module gromacs-4.5.5-parallel (MPI support)
  • version 4.0.7 – module gromacs and module gromacs-parallel (MPI support)

Use

Notice: This application supports parallel computing (MPI, OpenMP), which may behave unexpectedly if not configured correctly. For more details about parallel computing, visit the page How to compute/Parallelization.

WARNING: GROMACS requires the AVX instruction set. If you get an Illegal instruction error, request a machine with the avx property.
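
For example, a machine with the avx property can be requested at submission time as follows (the number of processors and the script name are only illustrative):

qsub -l nodes=1:ppn=8:avx myjob.sh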

The mdrun command does not work since version 5.1.x; use the gmx command instead. See the official page for more information.

Initialize the environment by loading the appropriate module (see above) and run the program. Basic use of GROMACS requires two executables: grompp for preprocessing (compiling) the task and mdrun for running it. These executables need input files: a file with the coordinates of all atoms (suffix *.gro), a topology file (suffix *.top) describing the bonds between the atoms, and a file with the simulation parameters (suffix *.mdp).

Example of task:

grompp -f ParametersOfSimulation.mdp -c Axis.gro -p Topology.top -o Topology.tpr
mdrun -s Topology.tpr -x Trajectory.xtc -o Trajectory.trr -e Energy.edr

Specifying the file names is not compulsory. If you omit them, the programs automatically use files named grompp.mdp, conf.gro, topol.top, topol.tpr, traj.trr, traj.xtc and ener.edr. The .gro, .top and .mdp files must be present in the working directory; if the programs do not find them, they report an error.
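
In GROMACS 5.1 and newer, the individual tools are invoked as subcommands of gmx (see the note above). A sketch of the equivalent commands, using the same illustrative file names as in the example:

gmx grompp -f ParametersOfSimulation.mdp -c Axis.gro -p Topology.top -o Topology.tpr
gmx mdrun -s Topology.tpr -x Trajectory.xtc -o Trajectory.trr -e Energy.edr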

The gromacs module provides the default single-precision version; the double-precision programs have the _d suffix. The required MPI library is included in the gromacs-parallel module. The parallel version of mdrun is likewise marked with the _mpi suffix (_mpi_d for double precision). It is integrated with PBS (it detects the allocated nodes itself and, when a job is killed, it kills all processes on all machines) and automatically chooses the best available interconnect (InfiniBand or Myrinet, where available). So to run a job in PBS you need:

module add gromacs-parallel
mpirun mdrun_mpi ...

Limitation for parallel calculations: if your system supports threading, mdrun is compiled with the -nt option, which can then be used to run parallel calculations. If you do not specify -nt, GROMACS will use the maximum number of threads it thinks is available (see http://www.gromacs.org/Documentation/Terminology/Threading)!
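
A minimal batch-script sketch for an MPI run might look as follows (the requested resources and the job.tpr file name are only illustrative):

#!/bin/bash
#PBS -l nodes=2:ppn=8
#PBS -l mem=8gb
#PBS -j oe

# load the MPI-enabled GROMACS module
module add gromacs-parallel

# run the parallel mdrun in the submission directory
cd $PBS_O_WORKDIR
mpirun mdrun_mpi -s job.tpr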

Using GPU

#!/bin/bash
#PBS -l nodes=1:ppn=4:gpu=1
#PBS -q gpu@arien.ics.muni.cz
#PBS -l mem=10g
#PBS -j oe

trap "clean_scratch" TERM EXIT

# let's add the GROMACS GPU module
module add gromacs-gpu

# let's set the necessary variables
DATADIR="$PBS_O_WORKDIR"

# let's copy the input data
cp $DATADIR/{pme_verlet.mdp,conf.gro,topol.top,topol.tpr} $SCRATCHDIR

# let's change the working directory
cd $SCRATCHDIR

# let's perform the computation
grompp -f pme_verlet.mdp -c conf.gro -p topol.top 
mdrun -nt $PBS_NUM_PPN -s topol.tpr

# let's gzip and copy-out the result
tar czf output.tgz ./*

cp output.tgz $DATADIR || export CLEAN_SCRATCH=false
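
Assuming the script above is saved, for example, as run_gromacs_gpu.sh (the name is only illustrative), it can be submitted simply with:

qsub run_gromacs_gpu.sh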

MPI parallelization notes

MPI and OpenMP combination

In some situations you can accelerate your computation by combining MPI and OpenMP. See the more detailed documentation for the exact cases. Usage example:

mpirun -env OMP_NUM_THREADS $TORQUE_RESC_PROC -np $TORQUE_RESC_TOTAL_NODECT -rmk user -hosts $(echo $(sort -u < $PBS_NODEFILE) |sed 's/ /,/g') mdrun_mpi -s job.tpr -maxh 23.

Large scale computations

If you need to run a large computation across many nodes, it is better to use the smallest possible number of identical machines than to spread it across nodes with different hardware. Reserve whole machines with the #excl qsub option. It is also possible to pin the threads to specific processor cores (option -pin on). On a HyperThreading-capable cluster you can also make use of the doubled (logical) core count. For example:

$ qsub -l nodes=2:ppn=1:cl_luna#excl ...
mpirun -env OMP_NUM_THREADS $((2*TORQUE_RESC_PROC)) -np $TORQUE_RESC_TOTAL_NODECT -rmk user -hosts $(echo $(sort -u < $PBS_NODEFILE) |sed 's/ /,/g') mdrun_mpi -pin on -s job.tpr -maxh 23.

or

mpirun -env OMP_NUM_THREADS 1 -np $((2*TORQUE_RESC_PROC*TORQUE_RESC_TOTAL_NODECT)) -rmk user -hosts $(echo $(sort -u < $PBS_NODEFILE) |sed 's/ /,/g') mdrun_mpi -pin on -s job.tpr -maxh 23.

Documentation

The manual pages are available on the software's web page: http://www.gromacs.org/Documentation

Licence

The program is available under the GPL licence.

Program administrator

meta@cesnet.cz

Homepage

http://www.gromacs.org/

Known issues

The new GROMACS version 4.5.5 uses different naming conventions (e.g. NA instead of NA+).
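
If an older topology or coordinate file still uses the old names, one purely illustrative way to rename the sodium ions is a simple text substitution (the file names are the defaults used above; check the result afterwards, as the substitution is not context-aware):

sed -i 's/NA+/NA/g' topol.top conf.gro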