

Compilers and MPI for HPC applications

To compile source code (written in C/C++ or Fortran) for execution on Mistral, we provide different compilers and MPI implementations. Please read the short overview below.


Intel compiler

Using the compiler option -xCORE-AVX2 causes the Intel compiler to enable full AVX2 support/vectorization (with FMA instructions), which might result in binaries that do not produce MPI-decomposition-independent results. Switching to -xAVX should solve this issue, but results in up to 15% slower runtime.
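The trade-off above can be sketched with two compile commands; the source and binary names are illustrative assumptions, not a fixed recommendation:

```shell
# Full AVX2 + FMA: fastest, but results may depend on the MPI decomposition.
ifort -O2 -xCORE-AVX2 -o model_fast model.f90

# AVX only: decomposition-independent results, up to 15% slower runtime.
ifort -O2 -xAVX -o model_safe model.f90
```

The same -x flags apply to icc/icpc for C/C++ sources.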

The optimal environment settings strongly depend on the type of application and the MPI library used. For most MPI versions installed on Mistral, we provide some recommended runtime settings.


Starting with version 2.0.0, all optimizations by BULL/ATOS that were previously implemented in bullxMPI are now included in OpenMPI. These versions are also built with the Mellanox HPC-X toolkit to benefit directly from the underlying InfiniBand architecture. The latest OpenMPI modules automatically load the appropriate hpcx modules.
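Loading one of these OpenMPI builds might look like the following; the exact module version string is an assumption and should be checked with module avail:

```shell
# List the OpenMPI builds available on the system.
module avail openmpi

# Load an HPC-X based build (version string is an assumption).
module load openmpi/2.0.2p1_hpcx-intel14

# The matching hpcx module should appear in the loaded module list.
module list
```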


Although the bullxMPI library was used throughout the benchmarks of the HLRE-3 procurement, we no longer recommend using bullxMPI with FCA. The old FCA/2.5 version depends on a central FCA manager that is no longer available. As an alternative, OpenMPI > 2.0.0 should be used in combination with HCOLL.
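Enabling HCOLL for an OpenMPI job could be sketched as below; the MCA variable name comes from the Mellanox HPC-X toolkit, and the srun invocation, task count, and binary name are assumptions about the local batch setup:

```shell
# Enable the HCOLL collective component of OpenMPI (HPC-X).
export OMPI_MCA_coll_hcoll_enable=1

# Launch the MPI program (binary name and task count are placeholders).
srun -n 64 ./model
```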


We recommend using IntelMPI versions 2017 and newer, since prior versions might get stuck in MPI_Finalize and thereby waste CPU time without doing real computation.

Libraries for HPC applications

There is no module that sets NetCDF paths for the user. If you need to specify such paths in Makefiles or similar, please use the nc-config and nf-config tools to get the needed compiler flags and libraries, for example:

# Get paths to netCDF include files
$ /sw/rhel6-x64/netcdf/netcdf_c-4.3.2-gcc48/bin/nc-config --cflags

-I/sw/rhel6-x64/netcdf/netcdf_c-4.3.2-gcc48/include \
-I/sw/rhel6-x64/sys/libaec-0.3.2-gcc48/include \

# Get options needed to link a C program to netCDF
$ /sw/rhel6-x64/netcdf/netcdf_c-4.3.2-gcc48/bin/nc-config --libs

-L/sw/rhel6-x64/netcdf/netcdf_c-4.3.2-gcc48/lib \
-Wl,-rpath,/sw/rhel6-x64/netcdf/netcdf_c-4.3.2-gcc48/lib -lnetcdf

# Get paths to Fortran netCDF include files
$ /sw/rhel6-x64/netcdf/netcdf_fortran-4.4.2-intel14/bin/nf-config --fflags


# Get options needed to link a Fortran program to netCDF
$ /sw/rhel6-x64/netcdf/netcdf_fortran-4.4.2-intel14/bin/nf-config --flibs

-L/sw/rhel6-x64/netcdf/netcdf_fortran-4.4.2-intel14/lib -lnetcdff \
-Wl,-rpath,/sw/rhel6-x64/netcdf/netcdf_fortran-4.4.2-intel14/lib \
-L/sw/rhel6-x64/netcdf/netcdf_c-4.3.2-gcc48/lib \
-Wl,-rpath,/sw/rhel6-x64/netcdf/netcdf_c-4.3.2-gcc48/lib \
-L/sw/rhel6-x64/hdf5/hdf5-1.8.14-threadsafe-gcc48/lib \
-Wl,-rpath,/sw/rhel6-x64/hdf5/hdf5-1.8.14-threadsafe-gcc48/lib \
-L/sw/rhel6-x64/sys/libaec-0.3.2-gcc48/lib \
-Wl,-rpath,/sw/rhel6-x64/sys/libaec-0.3.2-gcc48/lib \
-lnetcdf -lhdf5_hl -lhdf5 -lsz -lcurl -lz
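The flags printed above can be consumed directly via shell command substitution when compiling; the source and binary file names below are assumptions:

```shell
# Compile and link a C program against netCDF using nc-config.
NCCONFIG=/sw/rhel6-x64/netcdf/netcdf_c-4.3.2-gcc48/bin/nc-config
gcc $($NCCONFIG --cflags) -o mytool mytool.c $($NCCONFIG --libs)

# Likewise for a Fortran program using nf-config.
NFCONFIG=/sw/rhel6-x64/netcdf/netcdf_fortran-4.4.2-intel14/bin/nf-config
ifort $($NFCONFIG --fflags) -o mytool_f mytool.f90 $($NFCONFIG --flibs)
```

The same substitution works inside a Makefile, e.g. `CFLAGS += $(shell $(NCCONFIG) --cflags)`.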

You will find all NetCDF and HDF5 versions, as well as other libraries, installed under /sw/rhel6-x64/ (as shown in the paths above).



Python environments

An increasing number of pre- and post-processing steps in Earth system modelling is done using Python. Therefore, we provide different precompiled environments on Mistral, but users can also install their own environment based on Miniconda. Please read the overview on how to make use of these options.
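Setting up a personal Miniconda environment might look like the following sketch; the installer URL, install prefix, environment name, and package list are assumptions to be adapted:

```shell
# Download and install Miniconda into your home directory
# (installer name and prefix are assumptions).
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
bash Miniconda3-latest-Linux-x86_64.sh -b -p "$HOME/miniconda3"

# Activate conda and create a personal analysis environment.
source "$HOME/miniconda3/etc/profile.d/conda.sh"
conda create -y -n analysis python=3 numpy netcdf4
conda activate analysis
```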

Working with Jupyter Notebook

In addition to the Python environments that can be used on Mistral at the command line, one can also use Jupyter notebooks to execute Python code in a web browser. Different options exist to run the notebook directly on Mistral with full access to the Lustre file system. For further information, please read the documentation.
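One common pattern, given here as an illustrative sketch rather than the officially supported method, is to start the notebook server on Mistral and tunnel its port through SSH; the port number, user name, and host name are assumptions:

```shell
# On Mistral: start a notebook server without opening a browser
# (the port number is an arbitrary choice).
jupyter notebook --no-browser --port=8888

# On your local machine: forward the port through SSH
# (user and host names are assumptions).
ssh -N -L 8888:localhost:8888 user@mistral.dkrz.de

# Then open http://localhost:8888 in your local browser.
```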