
Programming

Compilers and MPI for HPC applications

To compile source code (written in C/C++ or Fortran) for execution on Mistral, we provide different compilers and MPI implementations. Please read the short overview below.

Recommendations

Intel compiler

Using the compiler option -xCORE-AVX2 causes the Intel compiler to use full AVX2 support/vectorization (with FMA instructions), which might result in binaries whose results depend on the MPI decomposition. Switching to -xAVX should solve this issue, but at the cost of up to 15% longer runtimes.
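
A minimal sketch of the two variants, reusing the compile line shown further below (program and output names are placeholders):

$ mpif90 -O2 -xCORE-AVX2 -o mpi_prog program.f90   # fastest, results may depend on the MPI decomposition
$ mpif90 -O2 -xAVX -o mpi_prog program.f90         # decomposition-independent results, up to 15% slower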

The optimal environment settings strongly depend on the type of application and the MPI library used. For most MPI versions installed on Mistral, we provide recommended runtime settings.

OpenMPI

Starting with version 2.0.0, all optimizations by BULL/ATOS that were previously implemented in bullxMPI are included in OpenMPI. These versions are also built with the Mellanox HPC-X toolkit to benefit directly from the underlying InfiniBand architecture. The latest OpenMPI modules will automatically load the appropriate hpcx modules.
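
A minimal sketch of compiling with such an OpenMPI module (the exact module version is an assumption; check module avail openmpi for the installed versions):

$ module add intel openmpi      # the OpenMPI module loads the matching hpcx modules automatically
$ mpif90 -O2 -xCORE-AVX2 -o mpi_prog program.f90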

bullxMPI

Although the bullxMPI library was used throughout the benchmarks of the HLRE-3 procurement, we no longer recommend using bullxMPI with FCA. The old FCA/2.5 version depends on a central FCA manager that has been shown to fail from time to time, causing applications to break. As an alternative, OpenMPI version 2.0.0 or newer should be used.

From BULL/ATOS' point of view, bullxMPI should be used with the Mellanox tools (i.e. MXM and FCA); hence, load the specific environment before compiling:

$ module add intel mxm/3.4.3082 fca/2.5.2431 bullxmpi_mlx/bullxmpi_mlx-1.2.9.2
$ mpif90 -O2 -xCORE-AVX2 -o mpi_prog program.f90

The modules must be loaded in this order: compiler first, then MXM/FCA, and bullxMPI afterwards. The bullxMPI modules built with the Mellanox tools (i.e. bullxmpi_mlx or the multithreaded variant bullxmpi_mlx_mt) will inform you if the required mxm and fca modules are not loaded.

IntelMPI

We recommend using IntelMPI version 2017 or newer, since prior versions might get stuck in MPI_Finalize and thereby waste CPU time without doing real computation.
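
A minimal sketch, assuming the Intel compiler is used as well (check module avail intelmpi for the installed versions):

$ module add intel intelmpi
$ mpiifort -O2 -xCORE-AVX2 -o mpi_prog program.f90   # mpiifort is Intel MPI's wrapper for ifort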

Libraries for HPC applications

There is no module that sets NetCDF paths for the user. If you need to specify such paths in Makefiles or similar, please use the nc-config and nf-config tools to obtain the required compiler flags and libraries, for example:

# Get paths to netCDF include files
$ /sw/rhel6-x64/netcdf/netcdf_c-4.3.2-gcc48/bin/nc-config --cflags

-I/sw/rhel6-x64/netcdf/netcdf_c-4.3.2-gcc48/include \
-I/sw/rhel6-x64/sys/libaec-0.3.2-gcc48/include \
-I/sw/rhel6-x64/hdf5/hdf5-1.8.14-threadsafe-gcc48/include


# Get options needed to link a C program to netCDF
$ /sw/rhel6-x64/netcdf/netcdf_c-4.3.2-gcc48/bin/nc-config --libs

-L/sw/rhel6-x64/netcdf/netcdf_c-4.3.2-gcc48/lib \
-Wl,-rpath,/sw/rhel6-x64/netcdf/netcdf_c-4.3.2-gcc48/lib -lnetcdf


# Get paths to Fortran netCDF include files
$ /sw/rhel6-x64/netcdf/netcdf_fortran-4.4.2-intel14/bin/nf-config --fflags

-I/sw/rhel6-x64/netcdf/netcdf_fortran-4.4.2-intel14/include


# Get options needed to link a Fortran program to netCDF
$ /sw/rhel6-x64/netcdf/netcdf_fortran-4.4.2-intel14/bin/nf-config --flibs

-L/sw/rhel6-x64/netcdf/netcdf_fortran-4.4.2-intel14/lib -lnetcdff \
-Wl,-rpath,/sw/rhel6-x64/netcdf/netcdf_fortran-4.4.2-intel14/lib \
-L/sw/rhel6-x64/netcdf/netcdf_c-4.3.2-gcc48/lib \
-Wl,-rpath,/sw/rhel6-x64/netcdf/netcdf_c-4.3.2-gcc48/lib \
-L/sw/rhel6-x64/hdf5/hdf5-1.8.14-threadsafe-gcc48/lib \
-Wl,-rpath,/sw/rhel6-x64/hdf5/hdf5-1.8.14-threadsafe-gcc48/lib \
-L/sw/rhel6-x64/sys/libaec-0.3.2-gcc48/lib \
-Wl,-rpath,/sw/rhel6-x64/sys/libaec-0.3.2-gcc48/lib \
-lnetcdf -lhdf5_hl -lhdf5 -lsz -lcurl -lz
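
Putting this together, a sketch of compiling and linking against netCDF by calling the tools inline (the program names are placeholders; the compilers match the builds shown above, i.e. gcc for the gcc48 build and ifort for the intel14 build):

# Compile and link a C program against netCDF
$ NC_CONFIG=/sw/rhel6-x64/netcdf/netcdf_c-4.3.2-gcc48/bin/nc-config
$ gcc -O2 -o nc_prog nc_prog.c $($NC_CONFIG --cflags) $($NC_CONFIG --libs)

# Compile and link a Fortran program against netCDF
$ NF_CONFIG=/sw/rhel6-x64/netcdf/netcdf_fortran-4.4.2-intel14/bin/nf-config
$ ifort -O2 -o nf_prog nf_prog.f90 $($NF_CONFIG --fflags) $($NF_CONFIG --flibs)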

You will find all NetCDF and HDF5 versions, as well as other libraries, installed under

/sw/rhel6-x64
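
For example, to see which NetCDF installations are available (directory names follow the pattern shown above):

$ ls /sw/rhel6-x64/netcdf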

 

Python environments

An increasing number of pre- and postprocessing steps in Earth system modelling are done using Python. Therefore, we provide different precompiled environments on Mistral, but users can also install their own environments based on Miniconda. Please read the overview on how to make use of these options.
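
A minimal sketch of setting up a personal Miniconda environment (the installer URL is the standard upstream one; the environment name and package list are placeholders):

$ wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
$ bash Miniconda3-latest-Linux-x86_64.sh -b -p $HOME/miniconda3
$ $HOME/miniconda3/bin/conda create -n myenv python=3 numpy netcdf4
$ source $HOME/miniconda3/bin/activate myenv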

Working with Jupyter Notebook

In addition to the Python environments that can be used on Mistral at the command line, one can also use Jupyter notebooks to execute Python code in a web browser. Several options exist for running a notebook directly on Mistral with full access to the Lustre file system. For further information, please read the documentation.
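
One common option is a plain SSH port forward, sketched below (user name and port are placeholders):

# On your local machine: forward a local port to Mistral
$ ssh -L 8888:localhost:8888 <user>@mistral.dkrz.de

# On Mistral: start the notebook without opening a browser
$ jupyter notebook --no-browser --port=8888

Then point your local browser to http://localhost:8888.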
