
Containers on Mistral

In some cases, there is a need to bundle or package your application(s) into isolated portable environments (containers). Some use cases include:

  • testing new code
  • reproducing and sharing results (workflows)
  • workaround for missing system libraries (see ML example)

At DKRZ we have started providing some containerization tools. Currently, Singularity is available on Mistral. On this page you will learn how to use, create, and run Singularity containers.

Container support on Mistral is at an early stage. In particular, there are a few pitfalls due to the old RHEL6 kernel, so not all common recipes from the web will work!
We encourage you to share your containers or recipes so that other Mistral users can benefit from them. We will inform you about how and where you can do this.

How to use Singularity?

Please check the official Singularity user guide here for detailed information.

To start using Singularity commands on Mistral, you need to enable it first:

module load singularity/3.5.3-gcc-9.1.0

For now, only version 3.5 is available; this will be extended once we deploy newer versions. Run the following command to list the available modules:

module avail singularity/

and to check the version:

singularity --version

For general help:

singularity --help
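
For help on a specific subcommand, you can also run, for example:

singularity help exec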

File system inside containers

Inside the container, some system-defined bind paths are automatically mounted, e.g. host directories like $HOME (user’s home directory) or /tmp are already available.
In case you want to mount other directories from the host in the container:

  • use the option --bind/-B src[:dest[:opts]]

  • src and dest are paths outside and inside of the container respectively

  • opts are mount options (ro: read-only, rw: read-write); see the example after this list
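
For example, to mount a (hypothetical) project directory from /work read-only at the same path inside the container:

$ singularity shell --bind /work/MY_PROJECT:/work/MY_PROJECT:ro CONTAINER.sif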

It is also possible to bind directories using environment variables:

$ export SINGULARITY_BINDPATH="/scratch,/work"
$ singularity shell CONTAINER.sif

More details here.

GPU/CUDA support

Requirements:

  • the version of CUDA inside the container has to be supported by the compute node.
    You can always use
module avail

to check the versions of CUDA available where you run the Singularity container.

This rule also applies to other software, e.g. MPI.

To run Singularity with CUDA/GPU support, you only need to add the --nv flag to the run/exec commands, e.g.:

singularity run --nv ...
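
For example, a quick device check inside a container on a GPU node might look like the following; the partition name and GPU request are generic Slurm placeholders here, so adapt them to your project and to the Mistral GPU documentation:

srun -p gpu --gres=gpu:1 -A ACCOUNT singularity exec --nv CONTAINER.sif nvidia-smi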

More details can be found here.

MPI support

  • the version of MPI must be the same on the host and in the container.
  • the InfiniBand libraries/devices from the host need to be mounted (available) in the container.
srun --mpi=pmi2 Options singularity exec Container_Image.sif Command
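
As a sketch, a concrete MPI run could look like the following (account, partition, node/task counts, image, and program name are placeholders):

srun --mpi=pmi2 -A ACCOUNT -p compute -N 2 -n 48 singularity exec CONTAINER.sif ./my_mpi_program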

Singularity with Batch support

It is possible and even recommended to run Singularity containers as an interactive job or as a batch job (Slurm).

k22222@mlogin1% srun --pty -A "Account" -p "PARTITION" singularity shell CONTAINER.sif 

You can also run the container as a batch job, e.g.:

#!/bin/bash
#SBATCH -J mistral_singularity
#SBATCH -o mistral_singularity.out
#SBATCH -p shared
#SBATCH -t 0-00:60

module purge
module load singularity 
module load cuda

# Singularity command line options
singularity exec container.sif bash

If you name it mistral_script.sbatch, then you can submit it with:

sbatch mistral_script.sbatch

The output log will be saved in mistral_singularity.out.
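
While the job is running, you can check its status with the usual Slurm tools, e.g.:

squeue -u $USER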

Singularity and Docker

It is even possible to pull and run Docker images:

singularity pull docker://centos:7
INFO:    Converting OCI blobs to SIF format
INFO:    Starting build...
Getting image source signatures
Copying blob ab5ef0e58194 done
Copying config 0a7908e1b9 done
Writing manifest to image destination
...
INFO:    Creating SIF file...
INFO:    Build complete: centos_7.sif

To access and run a shell within the container:

$ singularity shell centos_7.sif
Singularity> 

Now you are running in the container. You can check the kernel and OS in the container:

Singularity> uname -r 
2.6.32-754.14.2.el6.x86_64

Singularity> cat /etc/os-release 
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"

CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"


Examples

Containerized Machine learning with PyTorch/GPU/CUDA

PyTorch GPU Singularity

Build your own

Currently, it is not possible to build container images directly on Mistral because building requires sudo, which is not granted to Mistral users.
However, you can build the image on your laptop, copy it (scp) to Mistral, and run it there.
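
A minimal sketch of that workflow on a machine where you have sudo rights could look like the following; the definition file contents, image name, user ID, project, and target directory are only examples:

# my_container.def -- example definition file based on CentOS 7
Bootstrap: docker
From: centos:7

%post
    yum -y install epel-release
    yum -y install python3

Build the image locally and copy it to Mistral:

$ sudo singularity build my_container.sif my_container.def
$ scp my_container.sif USERID@mistral.dkrz.de:/work/PROJECT/USERID/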

CACHE and TMP directories

To avoid overwhelming your $HOME with Singularity cache and tmp files, you can override the default locations by exporting the following variables before running singularity commands:

  • SINGULARITY_TMPDIR: used with the build command to set a temporary location for the build.
  • SINGULARITY_CACHEDIR: specifies the directory in which downloaded images are cached.

e.g.

  1. mkdir -p /SCRATCH/$USER/singularity/{cache,tmp}
  2. export SINGULARITY_TMPDIR=/SCRATCH/$USER/singularity/tmp
  3. export SINGULARITY_CACHEDIR=/SCRATCH/$USER/singularity/cache

Troubleshooting

  • FATAL: kernel too old: this error is due to the kernel version on the host where the container runs.
    The Operating System on the Mistral cluster is Red Hat Enterprise Linux release 6.4 (RHEL6). Since there is no plan to upgrade the OS on Mistral, a workaround is to use an older OS in the container. For example, containers with the following OS can be used (see below for a quick way to check the OS of an existing image):
    • CentOS 7
    • Ubuntu 14.04 (trusty) or Ubuntu 16.04 (xenial)
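
If you are unsure which OS an existing image is based on, you can check it without opening a shell, e.g.:

$ singularity exec CONTAINER.sif cat /etc/os-release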

Tips & Tricks

Mounting software tree

To use software/modules from Mistral in your container, there are some specific directories that need to be mounted first:

  • /sw/spack-rhel6
  • /sw/rhel6-x64
  • /mnt/lustre01/spack-workplace

To bind the above paths when you shell into the container:

$ singularity shell --bind /mnt/lustre01/spack-workplace/spack-rhel6/:/sw/spack-rhel6 --bind /mnt/lustre01 --bind /sw/rhel6-x64/:/sw/rhel6-x64 CONTAINER.sif  

Once these directories are mounted in the container, you can set up the spack environment:

$ . /sw/spack-rhel6/spack/share/spack/setup-env.sh 

To check whether spack and the modules are available:

$ module av

Or 

$ spack list
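
If the setup worked, you should be able to load host modules inside the container, for example (the module name is only an illustration; pick one from the module av output):

$ module load cdo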

Contact

Please report any issue using one of our contact channels.
