
We do not provide a complete list of available software here. To see what is currently available, please also check the output of

module avail

Compiler & MPI

We currently provide the following compilers:

Name                     Version    Module                            Notes
gcc                      8.5.0      -                                 system default
gcc                      12.1.0     gcc/12.1.0                        support for offloading to NVIDIA GPUs (nvptx) activated
intel-oneapi-compilers   2022.1.0   intel-oneapi-compilers/2022.1.0
nvhpc                    22.3       nvhpc/22.3                        NVIDIA HPC Software Development Kit (SDK)
aocc                     3.2.0      aocc/3.2.0                        AMD Optimizing C/C++ and Fortran Compilers ("AOCC")

We also provide the following MPI implementations:

Name                     Version    Module
openmpi                  4.1.3      openmpi/4.1.3
intel-oneapi-mpi         2021.6.0   intel-oneapi-mpi/2021.6.0
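As a quick check that a compiler/MPI pairing works, you can load the corresponding modules and build a small MPI program. This is only a sketch: the module names are taken from the tables above, and hello.c stands for your own test source file.

```shell
# Load a compiler and a matching MPI implementation
module load gcc/12.1.0 openmpi/4.1.3

# Compile and run a minimal MPI test program (hello.c is a placeholder)
mpicc -O2 -o hello hello.c
mpirun -n 4 ./hello
```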

Compiler options for optimization

Some remarks from Natalja with respect to the Intel compiler and the FESOM2 benchmark (used by NEC in their albedo offer):

  1. Do not use -xHost, because the Intel compiler does not "recognize" AMD CPUs (officially for security reasons). Use -xcore-avx2 instead.
  2. These options were used by NEC during the FESOM2 benchmark:
    PT = -O3 -qopt-report5 -no-prec-div -ip -fp-model=fast=2 -implicitnone -march=core-avx2 -fPIC -qopenmp -qopt-malloc-options=2 -qopt-prefetch=5 -unroll-aggressive
    These options are (at least partially) quite important for good performance. However, we do not yet have the experience to say which of them are more or less critical. Be careful: some options might break reproducibility (e.g., -fp-model=fast=2).
  3. Natalja is now responsible for this: https://docs.dkrz.de/doc/levante/running-jobs/runtime-settings.html#open-mpi-4-0-0-and-lat so let's still benefit from her knowledge.
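As an illustration, the NEC flag set from point 2 could be passed to the Intel Fortran compiler roughly as follows. This is a sketch: the variable name FCFLAGS and the source file mymodule.f90 are placeholders, not part of the benchmark setup.

```shell
# Sketch: compiling a Fortran source with the NEC benchmark flags (Intel compiler)
FCFLAGS="-O3 -qopt-report5 -no-prec-div -ip -fp-model=fast=2 -implicitnone \
         -march=core-avx2 -fPIC -qopenmp -qopt-malloc-options=2 \
         -qopt-prefetch=5 -unroll-aggressive"
ifort $FCFLAGS -c mymodule.f90   # mymodule.f90 is a placeholder
```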

Spack

On albedo we mainly use spack to install software and provide module files.

Users can also use it to

  • install their own software into their $HOME
  • load specific packages (similarly to what environment modules do)

Simply load the spack module:

module load spack

This version is configured to make use of the global software tree installed by the admins; your own installations go into $HOME/.spack.

Please consult the official documentation on how to use spack: https://spack.readthedocs.io
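A typical user workflow might look like the following sketch. The package name netcdf-c is only an example; check the official documentation linked above for details.

```shell
module load spack

# Inspect a package and its available versions
spack info netcdf-c

# Install into your own tree under $HOME/.spack ...
spack install netcdf-c

# ... and make it available in the current shell (similar to an environment module)
spack load netcdf-c
```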

Python, R, conda, Jupyter

Python

We provide Python modules (currently only 3.10) with a series of useful packages pre-installed. This Python module is actually a conda environment that simply sets the correct shell variables for you. You can also use it as a starting point to create your own environments. The definition file can be found here: /albedo/soft/install_templates/conda/plotting/requirements.yml

If you would like additional globally available python environments, just ask!
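If you prefer a personal environment based on the provided definition file, a sketch (the environment name myenv is a placeholder):

```shell
# Create a personal environment from the provided definition file
module load conda
conda env create -n myenv -f /albedo/soft/install_templates/conda/plotting/requirements.yml
conda activate myenv
```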

R

Similar to the Python modules, the R modules are also conda environments. The module r/4.2 also includes RStudio, which you can open from the terminal and then use via X.

Conda

Conda is a package manager for Python, R, and Julia software. You can use it on our HPC system by:

$ module load conda

Thereafter, you should be able to use conda to manage your Python/R/Julia environments. Typically you will want to install software via:

$ conda install -c conda-forge <PACKAGE_NAME>

Note that this allows you to install both Python as well as R packages. Full documentation for conda is provided here: https://docs.conda.io/en/latest/ 
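For example, a Python package and an R package can be installed from the same channel in one step (the package names are only examples):

```shell
# Install a Python package and an R package into the active environment
conda install -c conda-forge numpy r-ggplot2
```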

A useful cheat sheet for using conda can be found here: https://docs.conda.io/projects/conda/en/4.6.0/_downloads/52a95608c49671267e40c689e0bc00ca/conda-cheatsheet.pdf

Jupyter

Jupyter is an interactive computing environment which lets you execute notebooks which can mix code, text, graphics, and LaTeX all in a single document. To start a local Jupyter server, ensure that you have Jupyter installed in your currently activated conda environment, and then run:

$ jupyter lab --no-browser --ip 0.0.0.0 /path/to/start/from

The printed output will direct you to a website where you can then open up the Jupyter interface. 
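If the server runs on a login node, you typically reach it from your workstation through an SSH tunnel. This is a sketch: the user name, host name, and port are assumptions; use the port and token shown in the printed output.

```shell
# On your local machine: forward local port 8888 to the Jupyter port on the login node
ssh -N -L 8888:localhost:8888 username@albedo0
# Then open http://localhost:8888/... in your local browser (token from the printed output)
```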

We are currently working on an experimental JupyterHub which will allow you to log on to the login nodes and run notebooks directly from the browser. If you want to test this out, you can try here: http://paleosrv3.dmawi.de/jupyterhub-hpc. Note that VPN is required!

Singularity

Singularity support is still under construction!


Singularity (now renamed Apptainer) is a containerization software similar to Docker but with several additional security features which make it feasible to use on HPC systems. You can find more information about the software here: https://apptainer.org/docs/user/main/

We offer Apptainer/Singularity as a module which can be loaded with:

$ module load apptainer

Thereafter, you should have both the singularity and apptainer executables available to you so you can download and run containers. These programs are interchangeable.
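For example, pulling a container from Docker Hub and running a command inside it might look like this (the image name is only an example):

```shell
module load apptainer

# Pull an image from Docker Hub into a local .sif file
apptainer pull ubuntu.sif docker://ubuntu:22.04

# Run a command inside the container
apptainer exec ubuntu.sif cat /etc/os-release
```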

Important note about building containers from scratch: building requires root privileges! However, the generated container files are portable and can be copied from (e.g.) your personal laptop to the HPC system. Alternatively, you can consider using the "remote builder" hosted at Sylabs.io: https://cloud.sylabs.io/builder

Matlab

Currently the most recent version (R2022b) of Matlab is available on albedo.

We offer the software as a module accessible by

module load matlab

After loading the module the program is started by

matlab
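For non-interactive use (e.g., in scripts), Matlab can also be started without the desktop. A sketch, assuming the module is already loaded:

```shell
# Run a single command without the GUI and exit afterwards
matlab -batch "disp(2+2)"
```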

Usage on the compute nodes is still being set up, so for the time being please run Matlab on the login nodes albedo0 or albedo1.

In addition, the Live Editor features are not yet working properly; for the time being, please use an external editor to create or modify MATLAB scripts.

Please activate your personal license if you have one. This is done just like on any other platform by running

activate_matlab.sh

(after loading the module). A GUI will guide you through the necessary steps to activate your personal license. Please note that you need to activate the license on both albedo0 and albedo1 in case you would like to use Matlab on both login nodes.

Currently the nodes fat-00[3,4] are reserved for Matlab users to activate their personal licenses. These nodes can be accessed via the slurm partition matlab.

Please note that this might change again in the future. We will keep you informed!



IDE/ENVI

We have received reports that idlde, the graphical interface for IDL, might crash because it can require more virtual memory than we allow. The IDL support provided the following solution:

I have shared your case with the team and would like to share the additional information:
1) First of all, when using older IDL version such as IDL 8.6, you can try to start IDLDE with the below steps to try to workaround the issue:
To be sure everything works fine, please first delete your ".idl" folder which can be found in the user home directory: "/home/user/"
Then run your IDLDE session with the following command:   " idlde -outofprocess "
(This will separate the java process which is running in the background.)

2) The virtual memory is somewhat alarming, but the good thing is that we are sure it’s not actually using that much memory.
However, that can still cause problems, like you are currently experiencing.

First of all, can you please confirm that you're not using an IDL startup script that is pre-allocating a bunch of array space? 

In addition, we noticed that you are using a non-standard system and it might be due to the used kernel and glibc version.
Theoretically, given the right kernel and glibc version, IDL should be able to run.
But you might need to do some tweaking (like that environment variable below) to get IDL to run properly on this specific OS.
We have found a thread on the web, which says that it isn’t Eclipse but could be related to glibc: https://www.eclipse.org/forums/index.php/t/1082034/

They recommend setting some flag, MALLOC_ARENA_MAX=4.
We have never heard of that but it could be worth trying on this specific system.
Here is another thread that also mentions that same environment variable: https://stackoverflow.com/questions/561245/virtual-memory-usage-from-java-under-linux-too-much-memory-used
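Applied to the suggestion above, starting IDLDE with that environment variable could then look like:

```shell
# Limit glibc malloc arenas as suggested by IDL support, then start IDLDE
export MALLOC_ARENA_MAX=4
idlde -outofprocess
```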
