Elk FP-LAPW



Description

Elk FP-LAPW is an all-electron full-potential linearised augmented-plane wave (FP-LAPW) code for determining the properties of crystalline solids.

Availability

  • Module elk-2.2.9: Elk FP-LAPW version 2.2.9, 64-bit version, supporting OpenMP and MPI, compiled against the Intel MKL and FFTW3 libraries.
  • Module elk-1.4.22: Elk FP-LAPW version 1.4.22, 64-bit version, supporting OpenMP and MPI, compiled against the Intel MKL library.

Use

1. Running the application -- Interactive mode:

  • ask the scheduler for an interactive job with the desired number of nodes (nodes attribute) and processors per node (ppn attribute) reserved
$ qsub -I -l nodes=X:ppn=Y -l mem=Zg
Note: Do not forget to set the amount of requested memory (mem attribute) and/or other job requirements appropriately.
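For example, a request for 2 nodes with 8 processors each and 16 GB of memory (illustrative values -- adjust them to your computation) might look like:
$ qsub -I -l nodes=2:ppn=8 -l mem=16g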
  • load the application module
$ module add elk-2.2.9
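If you are unsure which versions are installed, the available Elk modules can usually be listed with the standard module command:
$ module avail elk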
  • change the working directory to the one containing the ELK input file and run the computation
    • parallel computation on a single node (OpenMP):
$ cd $SCRATCHDIR/my_computation
$ elk >computation.log
  • Note: Check the setting of the OMP_NUM_THREADS environment variable -- for such a computation, the variable should be set to the number of dedicated processors on a node (Y -- see above).
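For instance, for a job reserved with ppn=8 (illustrative value), you can check and, if needed, set the variable as follows:
$ echo $OMP_NUM_THREADS
$ export OMP_NUM_THREADS=8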
  • distributed computation combined with parallel run (MPI + OpenMP):
$ cd $SCRATCHDIR/my_computation
$ mpirun -pernode elk >computation.log
  • Note: Check the setting of the OMP_NUM_THREADS environment variable -- for such a computation, the variable should be set to the number of dedicated processors on a node (Y -- see above).
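As a sketch, for a job reserved with nodes=2:ppn=8 (illustrative values), mpirun -pernode starts one Elk process per node and each process should use 8 OpenMP threads:
$ export OMP_NUM_THREADS=8
$ mpirun -pernode elk >computation.log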
  • distributed computation through multiple nodes (MPI only processes):
$ cd $SCRATCHDIR/my_computation
$ export OMP_NUM_THREADS=1
$ mpirun elk >computation.log
  • Note: Check the setting of the OMP_NUM_THREADS environment variable -- for such a computation, the variable should be set to 1.
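For example, with nodes=2:ppn=4 reserved (illustrative values), mpirun typically starts one single-threaded Elk process per reserved processor, i.e. 8 MPI processes in total:
$ export OMP_NUM_THREADS=1
$ mpirun elk >computation.log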

2. Running the application -- Batch mode:

  • prepare the job description script -- use a general skeleton supplemented with the following lines (a complete illustrative example is shown below):
...
# load the application module
module add elk-2.2.9

# change the working directory to the one containing the ELK input file and run the computation
cd $SCRATCHDIR/my_computation
elk >computation.log          # parallel computation on a single node
mpirun elk >computation.log   # distributed computation through multiple nodes (see other variants above)
...
  • pass the job description script to the scheduler together with (at least) the requested number of nodes, processors, and amount of memory
$ qsub -l nodes=X:ppn=Y -l mem=Zg mydescriptionscript.sh
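A minimal sketch of a complete job description script (assuming a PBS-style skeleton; the resource values, storage paths, and staging commands are only illustrative and must be adapted to your setup):
#!/bin/bash
#PBS -N elk_job
#PBS -l nodes=1:ppn=8
#PBS -l mem=16g

# load the application module
module add elk-2.2.9

# set the number of OpenMP threads to the number of reserved processors per node
export OMP_NUM_THREADS=8

# copy the input files to the scratch directory (illustrative path)
cp -r /storage/home/$USER/my_computation $SCRATCHDIR/ || exit 1

# change the working directory to the one containing the ELK input file and run the computation
cd $SCRATCHDIR/my_computation
elk >computation.log          # parallel computation on a single node

# copy the results back (illustrative path)
cp -r $SCRATCHDIR/my_computation /storage/home/$USER/ || exit 2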

Documentation

The documentation is available either on the producer's webpage (direct link to the PDF manual) or locally in the program directory (/software/elk-1.4.22/docs/).

License

Open source, distributed under the terms of the GNU General Public License version 3.

Supported platforms

amd64

Program administrator

Tom Rebok meta@cesnet.cz

Homepage

http://elk.sourceforge.net