
Programming

Compilers and MPI

On Mistral we provide the Intel, GCC (GNU Compiler Collection), NAG, and PGI compilers and several Message Passing Interface (MPI) implementations: BullxMPI, IntelMPI, OpenMPI and MVAPICH2. No compiler or MPI library is loaded by default.

For most applications we recommend using the latest version of the Intel compiler, which fully supports the underlying CPU architecture. Regarding the MPI library to be used, we can provide full support (via direct contact to the vendor) for:

  • IntelMPI, all versions later than 2017.1.132
  • OpenMPI starting from version 2.0.2p1, including the Mellanox HPC-X toolkit
  • BullxMPI, including the Mellanox MXM and FCA tools, which was used for all benchmarks of the procurement but is no longer supported upstream

A compiler and an appropriate MPI library can be selected by loading the corresponding module files, for example:

# Use the "latest" versions of Intel compiler and Intel MPI
$ module load intel intelmpi

# Use a specific version of Intel compiler and OpenMPI
$ module load intel/17.0.2 openmpi/2.0.2p1_hpcx-intel14
We recommend specifying the module version number explicitly; otherwise the lexicographically highest version is loaded by default, which might not be the latest version.
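
To check which versions are actually installed before loading, the module command itself can be queried; a minimal sketch (the listed versions depend on the current software stack):

# List the installed versions of the Intel compiler modules
$ module avail intel

# Show the modules currently loaded in your session
$ module list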


The following table shows the names of the Intel compilers as well as the names of the IntelMPI and BullxMPI/OpenMPI compiler wrappers. The MPI compiler wrappers set up the MPI environment (i.e. paths to MPI include files and MPI libraries) for your compilation task automatically.

Language               Intel Compiler   IntelMPI wrapper   BullxMPI/OpenMPI wrapper
Fortran 90/95/2003     ifort            mpiifort           mpif90
Fortran 77             ifort            mpiifort           mpif77
C++                    icpc             mpiicpc            mpic++
C                      icc              mpiicc             mpicc
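
Both wrapper families can print the underlying compiler invocation, which is a simple way to see which include and library paths they add; a short sketch (the exact output depends on the loaded modules):

# Print the compile command issued by the IntelMPI Fortran wrapper
$ mpiifort -show

# Print the compile command issued by the OpenMPI Fortran wrapper
$ mpif90 --showme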

The table below lists some options that are commonly used with the Intel compiler. For further information please refer to the man pages of the compiler or the comprehensive documentation on the Intel website.

Option                   Description
-qopenmp                 Enables the parallelizer to generate multi-threaded code based on the OpenMP directives
-g                       Creates debugging information in the object files; this is necessary if you want to debug your program
-O[0-3]                  Sets the optimization level
-L<library path>         Specifies a path in which the linker searches for libraries
-D                       Defines a CPP macro
-U                       Undefines a CPP macro
-I<include directory>    Adds further directories to the include file search path
-sox                     Stores useful information like compiler version, options used, etc. in the executable
-ipo                     Enables inter-procedural optimization
-xAVX or -xCORE-AVX2     Indicates the processor for which code is created
-help                    Prints a long list of available options
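
As an illustration only, several of the options above can be combined in a single compile line; the program name, macro, and include path below are placeholders:

# Hypothetical example: optimized, AVX2-vectorized build with debug info and a CPP macro
$ ifort -O2 -g -xCORE-AVX2 -sox -DUSE_MPI -I$HOME/include -o myprog myprog.f90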

 

Compilation Examples

 

Compile a hybrid MPI/OpenMP program using the Intel Fortran compiler and OpenMPI with the Mellanox HPC-X toolkit:

$ module add intel/17.0.2 openmpi/2.0.2p1_hpcx-intel14
$ mpif90 -qopenmp -O2 -xCORE-AVX2 -fp-model source -o mpi_omp_prog program.f90
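
For completeness, a sketch of how such a hybrid binary might be launched under SLURM; the task and thread counts are placeholders and have to be adapted to your job script:

# Hypothetical launch: 8 MPI tasks with 4 OpenMP threads each
$ export OMP_NUM_THREADS=4
$ srun --ntasks=8 --cpus-per-task=4 ./mpi_omp_prog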

 

Compile an MPI program in Fortran using the Intel Fortran compiler and Intel MPI:

$ module add intel/17.0.2 intelmpi/2017.3.196
$ mpiifort -O2 -xCORE-AVX2 -fp-model source -o mpi_prog program.f90

 

Recommendations

Intel compiler

Using the compiler option -xCORE-AVX2 causes the Intel compiler to use full AVX2 support/vectorization (with FMA instructions), which might result in binaries that do not produce MPI-decomposition-independent results. Switching to -xAVX should solve this issue, but may result in up to 15% longer runtime.
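
If bit-identical results across different MPI decompositions are required, the executable can be rebuilt with -xAVX instead; a sketch based on the Intel MPI compilation example above:

# Rebuild with AVX only (no FMA) to obtain decomposition-independent results
$ mpiifort -O2 -xAVX -fp-model source -o mpi_prog program.f90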

The optimal environment settings strongly depend on the type of application and the MPI library used. For most MPI versions installed on Mistral, we provide recommended runtime settings.

 

OpenMPI

Starting with version 2.0.0, all optimizations by BULL/ATOS that were previously implemented in bullxMPI are included in OpenMPI. These versions are also built with the Mellanox HPC-X toolkit to benefit directly from the underlying InfiniBand architecture. The latest OpenMPI modules automatically load the appropriate hpcx modules.
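
One way to verify that the matching HPC-X module was pulled in is to inspect the loaded modules right after loading OpenMPI; a sketch using the module versions from the compilation examples above:

$ module load intel/17.0.2 openmpi/2.0.2p1_hpcx-intel14
$ module list    # the corresponding hpcx module should appear in this list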

bullxMPI

Although the bullxMPI library was used throughout the benchmarks of the HLRE-3 procurement, we no longer recommend using bullxMPI with FCA. The old FCA/2.5 version depends on a central FCA manager that has been shown to fail from time to time, causing the application to break. As an alternative, OpenMPI >2.0.0 should be used.

From the BULL/ATOS point of view, bullxMPI should be used with the Mellanox tools (i.e. MXM and FCA); hence, load the specific environment before compiling:

$ module add intel mxm/3.4.3082 fca/2.5.2431 bullxmpi_mlx/bullxmpi_mlx-1.2.9.2
$ mpif90 -O2 -xCORE-AVX2 -o mpi_prog program.f90

The order in which the modules are loaded must be respected: compiler first, then MXM/FCA, and afterwards bullxMPI. The bullxMPI module with Mellanox tools (i.e. bullxmpi_mlx or the multithreaded variant bullxmpi_mlx_mt) will warn you if the required mxm and fca modules are not loaded.

IntelMPI

We recommend using IntelMPI versions 2017 and newer, since prior versions might get stuck in MPI_Finalize and therefore waste CPU time without real computations.
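
To pick a suitable version, the installed IntelMPI modules can be listed first; a sketch using the version from the compilation example above:

# List the installed IntelMPI modules and load a 2017-or-newer one
$ module avail intelmpi
$ module load intel intelmpi/2017.3.196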

Libraries

There is no module to set netCDF paths for the user. If you need to specify such paths in Makefiles or similar, please use the nc-config and nf-config tools to get the needed compiler flags and libraries, for example:

# Get paths to netCDF include files
$ /sw/rhel6-x64/netcdf/netcdf_c-4.3.2-gcc48/bin/nc-config --cflags

-I/sw/rhel6-x64/netcdf/netcdf_c-4.3.2-gcc48/include \
-I/sw/rhel6-x64/sys/libaec-0.3.2-gcc48/include \
-I/sw/rhel6-x64/hdf5/hdf5-1.8.14-threadsafe-gcc48/include


# Get options needed to link a C program to netCDF
$ /sw/rhel6-x64/netcdf/netcdf_c-4.3.2-gcc48/bin/nc-config --libs

-L/sw/rhel6-x64/netcdf/netcdf_c-4.3.2-gcc48/lib \
-Wl,-rpath,/sw/rhel6-x64/netcdf/netcdf_c-4.3.2-gcc48/lib -lnetcdf


# Get paths to Fortran netCDF include files
$ /sw/rhel6-x64/netcdf/netcdf_fortran-4.4.2-intel14/bin/nf-config --fflags

-I/sw/rhel6-x64/netcdf/netcdf_fortran-4.4.2-intel14/include


# Get options needed to link a Fortran program to netCDF
$ /sw/rhel6-x64/netcdf/netcdf_fortran-4.4.2-intel14/bin/nf-config --flibs

-L/sw/rhel6-x64/netcdf/netcdf_fortran-4.4.2-intel14/lib -lnetcdff \
-Wl,-rpath,/sw/rhel6-x64/netcdf/netcdf_fortran-4.4.2-intel14/lib \
-L/sw/rhel6-x64/netcdf/netcdf_c-4.3.2-gcc48/lib \
-Wl,-rpath,/sw/rhel6-x64/netcdf/netcdf_c-4.3.2-gcc48/lib \
-L/sw/rhel6-x64/hdf5/hdf5-1.8.14-threadsafe-gcc48/lib \
-Wl,-rpath,/sw/rhel6-x64/hdf5/hdf5-1.8.14-threadsafe-gcc48/lib \
-L/sw/rhel6-x64/sys/libaec-0.3.2-gcc48/lib \
-Wl,-rpath,/sw/rhel6-x64/sys/libaec-0.3.2-gcc48/lib \
-lnetcdf -lhdf5_hl -lhdf5 -lsz -lcurl -lz
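
In Makefiles or build scripts the output of these tools is usually not copied by hand but captured via command substitution; a minimal sketch for linking a Fortran program, assuming the nf-config path shown above:

# Hypothetical: let nf-config supply the netCDF Fortran compile and link flags
$ NF_CONFIG=/sw/rhel6-x64/netcdf/netcdf_fortran-4.4.2-intel14/bin/nf-config
$ mpiifort -O2 $(${NF_CONFIG} --fflags) -o nc_prog program.f90 $(${NF_CONFIG} --flibs)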

 
