Login

  • The login nodes can be accessed via
    ssh albedo0.awi.de and ssh albedo1.awi.de
    (see the login example below).
  • Please do not use the login nodes for computing; use the compute nodes instead (see the section Compute nodes & Slurm below).
  • For security reasons, the HPC resources are not directly accessible from outside the AWI network (access via VPN is possible).
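
For example, from inside the AWI network (or via VPN) a login could look like this; jdoe is a placeholder for your AWI user name:

    # log in to one of the two login nodes (albedo0 or albedo1)
    ssh jdoe@albedo0.awi.de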

Storage

Local user storage


|              | HOME (personal)    | WORK (personal)         | SCRATCH (personal)         | WORK (project)                 | SCRATCH (project)                 | BURST             |
| Mountpoint   | /albedo/home/$USER | /albedo/work/user/$USER | /albedo/scratch/user/$USER | /albedo/work/projects/$PROJECT | /albedo/scratch/projects/$PROJECT | /albedo/burst     |
| Quota (soft) | 100 GB             | 3 TB                    | 50 TB                      | variable, 30 €/TB/yr           | variable, 10 €/TB/yr              | --                |
| Quota (hard) | 100 GB             | 15 TB for 90 days       | --                         | 2x soft quota for 90 days      | --                                | --                |
| Delete       | 90 days after the user account expired | 90 days after the user account expired | all data after 90 days | 90 days after the project expired | all data after 90 days | after 10 days |
| Security     | Snapshots for 6 months | Snapshots for 6 months | --                        | Snapshots for 6 months         | --                                | --                |
| Owner        | $USER:hpc_user     | $USER:hpc_user          | $USER:hpc_user             | $OWNER:$PROJECT                | $OWNER:$PROJECT                   | root:root         |
| Permission   | 2700 → drwx--S---  | 2700 → drwx--S---       | 2700 → drwx--S---          | 2770 → drwxrws---              | 2770 → drwxrws---                 | 1777 → drwxrwxrwt |
| Focus        | many small files   | large files, large bandwidth | large files, large bandwidth | large files, large bandwidth | large files, large bandwidth | low latency, huge bandwidth |


If you need additional storage space, please contact hpc@awi.de.
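
The table above lists limits, not your current usage; a quick check with standard tools (the exact quota commands on albedo may differ) could look like this:

    # show the filesystem behind your personal work directory and its overall usage
    df -h /albedo/work/user/$USER

    # list the largest items in your personal scratch directory
    # (remember: scratch data is deleted after 90 days)
    du -sh /albedo/scratch/user/$USER/* | sort -h | tail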

Remote user storage

  • You can access your online storage on the Isilon in Bremerhaven (see https://spaces.awi.de/x/a13-Eg for more information) via the following mountpoints (see the staging example below):
    /isibhv/projects
    /isibhv/projects-noreplica
    /isibhv/netscratch
    /isibhv/platforms
    /isibhv/home
  • albedo is connected to the AWI backbone (and thereby to the Isilon) via four 100 Gb/s interfaces.
    Each individual albedo node has a 10 Gb/s interface.
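
Since these shares are mounted directly on albedo, data can be staged with ordinary copy tools; myproject below is a placeholder project name:

    # stage input data from an Isilon project share into your personal work directory
    rsync -av /isibhv/projects/myproject/input/ /albedo/work/user/$USER/input/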

Compute nodes & Slurm

  • To work interactively on a compute node, use salloc (see the interactive example below). You can use all options (more CPUs, RAM, time, partition, qos, ...) described below.
  • To submit a job from the login nodes to the compute nodes you use Slurm (the job scheduler, batch queueing system and workload manager).
  • A submitted job needs the following information/resources (a complete batch script sketch is given at the end of this section):

    | What                             | Use                                  | Default                    | Comment                                                                   |
    | A name                           | --job-name=                          |                            |                                                                           |
    | Number of nodes                  | -N or --nodes=                       | 1                          |                                                                           |
    | Number of (MPI) tasks (per node) | -n or --ntasks= / --ntasks-per-node= | 1                          | Needed for MPI                                                            |
    | Number of cores/threads per task | -c or --cpus-per-task=               | 1                          | Needed for OpenMP. With -n N and -c C you get N x C cores.                |
    | Memory (RAM)                     | --mem=                               | ntasks x nthreads x 1.6 GB | Only needed for smp jobs; mpp jobs get whole nodes (all cores and memory) |
    | A maximum walltime               | --time=                              | 01:00                      |                                                                           |
    | A partition                      | -p or --partition=                   | smp                        |                                                                           |
    | A qos (quality of service)       | --qos=                               | normal                     |                                                                           |
  • Please take a look at our example scripts (from ollie): SLURM Example Scripts
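
As a sketch (the values are illustrative, not recommendations), an interactive allocation could combine the options from the table above like this:

    # request one task with 4 cores and 8 GB of memory for 30 minutes
    # on the default partition (smp) with the default qos (normal)
    salloc --ntasks=1 --cpus-per-task=4 --mem=8G --time=00:30:00 --partition=smp --qos=normal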
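
Similarly, a minimal batch script sketch with the options above might look like this; my_program is a placeholder for your own executable, and the script would be submitted with sbatch jobscript.sh:

    #!/bin/bash
    #SBATCH --job-name=example        # a name for the job
    #SBATCH --nodes=1                 # number of nodes
    #SBATCH --ntasks-per-node=4       # number of (MPI) tasks per node
    #SBATCH --cpus-per-task=1         # cores/threads per task
    #SBATCH --time=01:00:00           # maximum walltime (hh:mm:ss)
    #SBATCH --partition=smp           # partition (default: smp)
    #SBATCH --qos=normal              # quality of service (default: normal)

    srun ./my_program                 # placeholder for your MPI executable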
