Login

  • You have to be a member of HPC_user (membership can be applied for on id.awi.de).
  • The login nodes can be accessed via
    ssh albedo0.dmawi.de and ssh albedo1.dmawi.de
    (see the optional SSH configuration sketch after this list).
    If you are not familiar with ssh and/or bash, start with the basic introduction at https://spaces.awi.de/x/-q1-Fw.
  • Please do not use the login nodes for computing; use the compute nodes instead (see section Compute nodes & Slurm below).
  • For security reasons, HPC resources are not available from outside the AWI network (VPN is possible, https://spaces.awi.de/x/Smr-Eg).
  • By using albedo you accept our HPC data policy https://spaces.awi.de/x/GgrXFw
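
  If you log in frequently, a host alias in your SSH client configuration can shorten the command. The following is an optional, minimal sketch of a standard OpenSSH ~/.ssh/config entry; the alias name "albedo" and the user name placeholder are illustrative and not part of the official setup:

    # ~/.ssh/config -- optional convenience entry (adjust User to your AWI account)
    Host albedo
        HostName albedo0.dmawi.de
        User <your-awi-username>

    # afterwards you can simply run:
    ssh albedo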

...

  • You can access your online space on the Isilon in Bremerhaven (see https://spaces.awi.de/x/a13-Eg for more information) via the NFS mount points
    /isibhv/projects
    /isibhv/projects-noreplica
    /isibhv/netscratch
    /isibhv/platforms
    /isibhv/home
  • albedo is connected to the AWI backbone (including the Isilon and the HSM) via four 100 Gb Ethernet interfaces.
    Each individual albedo node has a 10 Gb interface.

Local (temporary) node storage on NVMe disks

Node type | /tmp    | /scratch
prod      | 314 GB  | --
fat       | 6.5 TB  | --
gpu       | 3.2 TB  | 7.0 TB

Note: /tmp is cleared on a regular basis, and /scratch is cleared whenever necessary (fill level exceeds 95% or data is older than 60 days).

Compute nodes & Slurm

  • To work interactively on a compute node use salloc. All options described below (more CPUs, RAM, time, partition, qos, ...) can be used; see the interactive example after this list.
  • To submit a job from the login nodes to the compute nodes you need Slurm (job scheduler, batch queueing system and workload manager).
  • A submitted job needs the following information/resources (an example batch script is shown after this list):

    What                              | Use                                   | Default                      | Comment
    Name                              | -J or --job-name=                     |                              |
    Account                           | -A or --account=                      | primary section, e.g. clidyn.clidyn or computing.computing | New on albedo (was not necessary on ollie); see the note below
    Number of nodes                   | -N or --nodes=                        | 1                            |
    Number of (MPI) tasks (per node)  | -n or --ntasks= / --ntasks-per-node=  | 1                            | Needed for MPI
    Number of cores/threads per task  | -c or --cpus-per-task=                | 1                            | Needed for OpenMP; if -n N and -c C are given, you get N x C cores
    Memory/RAM per CPU                | --mem=                                | ntasks x nthreads x 1.6 GB   | Only needed for smp jobs; mpp jobs get whole nodes (cores and memory)
    Maximum walltime                  | -t or --time=                         | 01:00                        |
    Partition                         | -p or --partition=                    | smp                          |
    qos (quality of service)          | -q or --qos=                          | normal                       |

    Note on the account: you can add a project (defined in eResources; you must be a member of the project) with -A <section>.<project>. This is helpful for reporting, e.g. clidyn.clidyn, computing.tsunami, clidyn.p_fesom.

  • Please take a look at our example scripts (from ollie): SLURM Example Scripts
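
  A minimal interactive session sketch using salloc with the option names and defaults from the table above; the partition and resource values are only illustrative, and "my_program" is a placeholder:

    # request an interactive allocation on a compute node (illustrative values)
    salloc --partition=smp --ntasks=1 --cpus-per-task=4 --time=01:00:00
    # once the allocation is granted, run your program inside it
    srun ./my_program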
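
  A minimal batch script sketch built from the options in the table above; the job name, account, resource values and program name are placeholders and have to be adapted (e.g. the account to one of the <section>.<project> strings listed above):

    #!/bin/bash
    #SBATCH --job-name=my_job               # -J
    #SBATCH --account=<section>.<project>   # -A, e.g. clidyn.clidyn
    #SBATCH --nodes=1                       # -N
    #SBATCH --ntasks-per-node=4             # MPI tasks per node
    #SBATCH --cpus-per-task=1               # -c, threads per task
    #SBATCH --time=01:00:00                 # -t, maximum walltime
    #SBATCH --partition=smp                 # -p
    #SBATCH --qos=normal                    # -q

    srun ./my_program                       # "my_program" is a placeholder

  Submit the script from a login node with sbatch <scriptname>.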

...