
Visualization on Mistral

Our new supercomputer Mistral has 21 GPU nodes, which can be used for 3D visualization, data analysis, and pre- and post-processing of data. This page explains how to reserve and access a GPU node, and how to run the 3D visualization software for the analysis of your data.

Access and reservation of a GPU node

Our supercomputer Mistral currently includes 21 GPU nodes. These built-in visualization nodes are described in more technical detail here. Unlike the nodes of our former visualization cluster Halo, they are equipped with the same CPUs and software as the rest of the supercomputer.

The complete reservation/login procedure is described below, but if you are working on a personal Linux or Mac system, you can use this script to speed up this procedure. Please complete the script by adding your personal project and account numbers, as well as the correct path to your local vncviewer installation. A public key based login to Mistral is also recommended. This script automatically connects to Mistral, reserves a GPU node, starts a VNC server and connects to this server using your local vncviewer. More information can be found here.
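
The following is only a minimal sketch of what such a script does, assuming a public key based login; the project id, user name, node name (mg100) and vncviewer path are placeholders you have to adapt, and the actual script is more elaborate (it determines the allocated node automatically).

#!/bin/bash
# Sketch only - adapt the placeholders before use.
PROJECT="<your project id>"
USER_NAME="<your user name>"
VNCVIEWER="/usr/bin/vncviewer"   # path to your local vncviewer installation

# Reserve a GPU node and start a TurboVNC server on it; the trailing "sleep"
# keeps the allocation alive while you work (12 hours = 43200 seconds).
ssh ${USER_NAME}@mistral.dkrz.de \
  "salloc -N 1 -n 12 --mem=128000 -p gpu -A ${PROJECT} -t12:00:00 -- \
   /bin/bash -c 'ssh \$SLURM_JOB_NODELIST /opt/TurboVNC/bin/vncserver -geometry 1920x1200; sleep 43200'" &

# Give the VNC server some time to start, then connect the local viewer
# (replace mg100 by the node actually reported in the salloc output).
sleep 20
${VNCVIEWER} mg100.dkrz.de:1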

The reservation has to be done by hand using a console on one of Mistral's login nodes. Furthermore, the VNC server required for remote 3D rendering needs to be started by hand. You should also clean up your session after your work is done (an example is given at the end of this page).

To access Mistral, simply ssh into the machine.

somewhere:~> ssh <username>@mistral.dkrz.de

Reservation/allocation of a GPU node: On Mistral, we have to allocate a GPU node. This is done using the SLURM command "salloc", for which you have to provide your account group ("-A <your project id>"), the number of nodes ("-N 1"), the maximum number of parallel tasks ("-n 24"), as well as the partition ("-p gpu"). The option "-t hh:mm:ss" sets the duration of the session. Currently, the maximum time allowed is 12 hours and is set automatically. If 12 hours are not sufficient for your specific purposes, please contact our user consultancy.

More information on SLURM and salloc can be found in our SLURM documentation.
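
For example, before reserving a node you can check the current state of the gpu partition from a login node (standard SLURM commands, shown here only as an illustration):

mistral:~> sinfo -p gpu       # list the GPU nodes and their current state
mistral:~> squeue -p gpu      # list the jobs currently running in the gpu partition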

After logging in to Mistral, you can allocate a GPU node and automatically ssh into the reserved node.

mistral:~> salloc -N 1 -n 12 --mem=128000 -p gpu -A <project> -t12:00:00 -- /bin/bash -c 'ssh -X $SLURM_JOB_NODELIST'
salloc: Granted job allocation 284896

user_name@mg100's password:

Shared/exclusive usage: The option "-n 12" requests SLURM to allocate 12 physical cores. A total of 24 physical cores is available in the phase-1 nodes (mg100-mg111), and 36 physical cores in the phase-2 nodes (mg200-mg208). The fraction of the node's main memory you can use is limited according to the fraction of the physical cores you request, but the default memory reserved per core is only 1.25 GB. With the option "--mem=<MB>" you can request a larger fraction of the memory, e.g. 128 GB with "--mem=128000". By requesting only 12 cores on a phase-1 node, you can use up to half of the total memory, and the GPU node can be shared, i.e. another user might work on the same node at the same time. If you need more resources, e.g. the full memory of the node, you can allocate all physical cores or simply use the "--exclusive" option.

mistral:~> salloc --exclusive -N 1 -p gpu -A <project> -t12:00:00 -- /bin/bash -c 'ssh -X $SLURM_JOB_NODELIST'
salloc: Granted job allocation 284896

user_name@mg100's password:

As soon as you have made your allocation, the project given in the salloc command will be charged according to the resources allocated for the duration of your session.
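
To see how long your allocation has already been running (and is therefore being charged), you can, for example, query it with squeue from a login node:

mistral:~> squeue -u $USER -l    # long format, including elapsed time and time limit of your jobs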

Starting a VNC server: Now you have to start a VNC server to connect to the virtual desktop. Otherwise it is not possible to use any of the visualization applications!

mg100:~> /opt/TurboVNC/bin/vncserver -geometry 1920x1200
Desktop 'TurboVNC: mg100:1 (user_name)' started on display mg100:1

The first time you start the VNC server, you have to supply a password that you will later need to access your VNC session remotely. (Hint: although you might choose the same password as your LDAP password, it is encrypted and saved separately in your .vnc directory and therefore will not automatically change when you change your LDAP password!)
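
If you want to change this VNC password later, you can do so on the GPU node using TurboVNC's vncpasswd tool (assuming it is installed in the same directory as the server used above):

mg100:~> /opt/TurboVNC/bin/vncpasswd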

Starting a VNC viewer on the client: If everything is set, start a vncviewer in a console on your workstation (Linux), or use a Windows client application such as TurboVNC.

somewhere:~> vncviewer mg100.dkrz.de:1

Now a window opens, showing you the virtualized X11 session running on the GPU node. The system runs a Gnome desktop environment. Here you need to open a console window to start your visualization application.

Running visualization applications

On Mistral, all our visualization software is made available using modules. The command

mg100:~> module avail

provides you with a list of modules that are available. For data analysis and 3D visualization you can use any of the following modules:

  • avizo/9.1.1 - For running AvizoGreen 9.1.1
  • paraview/5.1.0 - For running ParaView 5.1.0
  • vapor/2.4 - For running Vapor 2.4
  • simvis/3.4.4 - For running SimVis 3.4.4

 

Several 2D visualization solutions are available as well; they can also be used on nodes without a GPU (see the example below the list):

  • ncl/6.3 - NCAR's visualization command language
  • grads/2.1.a3
  • ferret/6.9.3
  • gmt/5.1.2
  • idl/8.5 - Interactive Data Language (IDL) version 8.5
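
These modules are loaded in the same way as the 3D packages, e.g. for NCL, which you can also start on a node without a GPU:

mistral:~> module load ncl
mistral:~> ncl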

 

The applications can be loaded and run very easily, as in this example of Vapor 2.4 showing a visualization of ICON ocean data.

mg100:~> module load vapor
mg100:~> vapor

Figure: Vapor 2.4 on Mistral, with ICON ocean data showing a 3D volume rendering of temperature.
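
When your work is done, please clean up your session as mentioned above. A minimal sketch, assuming your VNC server runs on display :1 of node mg100 as in the examples:

mg100:~> /opt/TurboVNC/bin/vncserver -kill :1    # shut down the VNC server
mg100:~> exit                                    # leave the GPU node; this also ends the salloc session and releases the allocation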
