...

  • To work interactively on a compute node, use salloc. All options described below (more CPUs, RAM, time, partition, qos, ...) can be used with it; see the interactive example after this list.
  • To submit a job from the login nodes to the compute nodes you use Slurm (the job scheduler, batch queueing system, and workload manager); a sketch of a batch script is shown after this list.
  • A submitted job needs the following information/resources:

    What                             | Use                                  | Default                                                   | Comment
    ---------------------------------|--------------------------------------|-----------------------------------------------------------|-------------------------------------------------------------------
    Name                             | -J or --job-name=                    |                                                           |
    Account                          | -A or --account=                     | primary section, e.g. clidyn.clidyn, computing.computing  | New on albedo (was not necessary on ollie); see the note below
    Number of nodes                  | -N or --nodes=                       | 1                                                         |
    Number of (MPI-)tasks (per node) | -n or --ntasks= / --ntasks-per-node= | 1                                                         | Needed for MPI
    Number of cores/threads per task | -c or --cpus-per-task=               | 1                                                         | Needed for OpenMP; if -n N and -c C are given, you get N x C cores
    Memory/RAM per CPU               | --mem=                               | ntasks x nthreads x 1.6 GB                                | Only needed for smp jobs; mpp jobs get whole nodes (cores and memory)
    Maximum walltime                 | -t or --time=                        | 01:00                                                     |
    Partition                        | -p or --partition=                   | smp                                                       |
    qos (quality of service)         | -q or --qos=                         | normal                                                    |

    Note on accounts: you can add a project (defined in eResources; you must be a member of the project) with -A <section>.<project>, e.g. clidyn.clidyn, computing.tsunami, clidyn.fesom. This is helpful for reporting.
  • Please take a look at our example scripts (from ollie): SLURM Example Scripts
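
As a minimal sketch of how the options above fit together in a batch script (the job name, account, resource values, and program name are placeholders, not an official template; adapt them to your project and partition):

    #!/bin/bash
    #SBATCH -J my_job                 # job name (-J / --job-name=)
    #SBATCH -A clidyn.clidyn          # account: <section> or <section>.<project>
    #SBATCH -N 1                      # number of nodes
    #SBATCH --ntasks-per-node=4       # MPI tasks per node
    #SBATCH -c 1                      # cores/threads per task (OpenMP)
    #SBATCH -t 01:00:00               # maximum walltime
    #SBATCH -p smp                    # partition
    #SBATCH -q normal                 # quality of service
    
    srun ./my_program                 # my_program is a placeholder for your executable

Save it, e.g., as my_job.sh and submit it from a login node with: sbatch my_job.sh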

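For interactive work, the same options can be passed to salloc. A hedged example (account, partition, and resource values are placeholders):

    salloc -A clidyn.clidyn -p smp -n 1 -c 4 --mem=8G -t 01:00:00

Once the allocation is granted, you can run your commands inside it, e.g. with srun.
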
...