Elk FP-LAPW
Description
Elk FP-LAPW is an All-Electron Full-Potential Linearised Augmented-Plane Wave code for determining the properties of crystalline solids.
License
Open-source under the terms of the GNU General Public License version 3.
Usage
Upcoming module system change alert!
Due to the large number of applications and their versions, it is not practical to keep them explicitly listed on our wiki pages. Therefore, an upgrade of the modulefiles is underway. A feature of this upgrade will be the existence of a default module for every application. This default choice does not need a version number and will load some (usually the latest) version.
You can test the new version now by adding a line
source /cvmfs/software.metacentrum.cz/modulefiles/5.1.0/loadmodules
to your script before loading a module. Then you can list all versions of elk and load the default one as follows:
module avail elk/   # list available modules
module load elk     # load the (default) module
If you wish to keep using the current system, that is still possible. Simply list all available modules with
module avail elk
and choose the explicit version you want to use.
1. Running the application -- Interactive mode:
- ask the scheduler for an interactive job with the desired number of nodes (nodes attribute) and the desired number of processors per node (ppn attribute) reserved
$ qsub -I -l nodes=X:ppn=Y -l mem=Zg
- Note: Do not forget to appropriately set the amount of requested memory (mem attribute) and/or other job requirements.
- load the application module
$ module add elk-2.2.9
- change the working directory to the one containing the ELK input file and run the computation
- parallel computation on a single node (OpenMP):
$ cd $SCRATCHDIR/my_computation
$ elk >computation.log
- Note: Check the setting of the OMP_NUM_THREADS environment variable -- for such a computation, it should be set to the number of dedicated processors on a node (Y -- see above).
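For example, assuming 4 processors were reserved on the node (ppn=4; the value is illustrative -- on TORQUE-based systems the $PBS_NUM_PPN variable, where defined, holds this number), the variable can be checked and set as follows:
$ echo $OMP_NUM_THREADS       # verify the current value
$ export OMP_NUM_THREADS=4    # one OpenMP thread per dedicated processor (Y)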
- distributed computation combined with a parallel run (MPI + OpenMP):
$ cd $SCRATCHDIR/my_computation
$ mpirun -pernode elk >computation.log
- Note: Check the setting of the OMP_NUM_THREADS environment variable -- for such a computation, it should be set to the number of dedicated processors on a node (Y -- see above).
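As a sketch, with 2 nodes and 4 processors per node reserved (illustrative values), the hybrid run places one MPI process on each node, each spawning Y OpenMP threads:
$ export OMP_NUM_THREADS=4                # Y threads within each node
$ mpirun -pernode elk >computation.log    # one MPI process per node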
- distributed computation across multiple nodes (MPI processes only):
$ cd $SCRATCHDIR/my_computation
$ export OMP_NUM_THREADS=1
$ mpirun elk >computation.log
- Note: Check the setting of the OMP_NUM_THREADS environment variable -- for such a computation, it should be set to 1.
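If the installed MPI is not integrated with the scheduler, the number of processes may have to be given explicitly. A hedged sketch, assuming 2 nodes with 4 processors each were reserved (the -np value must equal the total number of reserved processors):
$ export OMP_NUM_THREADS=1
$ mpirun -np 8 -machinefile $PBS_NODEFILE elk >computation.log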
2. Running the application -- Batch mode:
- prepare the job description script -- use a general skeleton supplemented by the following lines:
...
# load the application module
module add elk-2.2.9
# change the working directory to the one containing the ELK input file and run the computation
cd $SCRATCHDIR/my_computation
elk >computation.log # parallel computation on a single node (OpenMP)
mpirun elk >computation.log # distributed computation across multiple nodes (choose one variant -- see above)
...
- pass the job description file to the scheduler together with (at least) the requested number of nodes and processors and the requested amount of memory
$ qsub -l nodes=X:ppn=Y -l mem=Zg mydescriptionscript.sh
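For illustration, a complete job script for a single-node OpenMP run might look as follows (the resource values and directory name are illustrative only; adapt them to your computation):
#!/bin/bash
#PBS -l nodes=1:ppn=4
#PBS -l mem=8gb

# load the application module
module add elk-2.2.9

# one OpenMP thread per dedicated processor
export OMP_NUM_THREADS=4

# change the working directory to the one containing the ELK input file and run the computation
cd $SCRATCHDIR/my_computation
elk >computation.log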
Documentation
The documentation is available on the producer's webpage (direct link to the PDF manual).