Albedo's documentation


Support

You can open a support ticket on helpdesk.awi.de or send an email to hpc@awi.de. Please do not send personal emails to individual admins!

Albedo's documentation can be found here:

http://hpc-docs.awi.de/

Questions/Suggestions

hpc@awi.de

Hardware

  • 2x interactive login nodes
    • 64-bit processors: AMD Rome (Epyc 7702)
    • 2 sockets per node
    • 64 cores per socket, cTDP reduced from 200 W to 165 W
    • → 128 cores/node
    • 2 GHz (3.3 GHz boost)
    • 512 GiB RAM (8 DIMMs per socket with 32 GiB DDR4-3200)
    • 2x 480 GB SATA SSD
    • 1x 1.92 TB SSD
  • 1x interactive GPU node
    • 2x NVIDIA A40
    • 2x AMD Rome (Epyc 7702)
    • 512 GiB RAM (8 DIMMs per socket with 32 GiB DDR4-3200)
    • for testing, Jupyter notebooks, Matlab, ...
  • 240x standard compute nodes (four per NEC HPC2824Ri-2 chassis)
    • 256 GiB RAM (8 DIMMs per socket with 16 GiB DDR4-3200)
    • 512 GiB NVMe per node
  • 4x fat nodes (each in a NEC HPC2104Ri-1)
    • 4 TiB RAM (16 DIMMs per socket with 128 GiB DDR4-3200)
    • 512 GiB NVMe
    • 7.68 TiB NVMe
  • 1x GPU node (1x HPC22G8Ri)
    • 4x NVIDIA A100 (80 GB)
    • 2x AMD Rome (Epyc 7702)
    • 512/1024 GiB RAM
    • 3x 3.84 TiB = 11.52 TiB NVMe
    • More GPU nodes will follow later, once we have gained initial experience of what users really need, and to offer the most recent hardware.
  • 3x Management nodes
    • one AMD Epyc 7302P (NEC HPC2104Ri-1) each
    • 1 socket
    • 16 cores
    • 128 GiB RAM (8 DIMMs per socket with 16 GiB DDR4-3200)
    • 2x 960 GiB SATA SSD
    • 1x 1.92 TiB NVMe
  • File Storage:
    • IBM Spectrum Scale (GxFS)
    • Tier 1: 220 TiB NVMe as fast cache and/or burst buffer
    • Tier 2: ~5.38 PiB NL-SAS HDD (NetApp EF300)
    • future extension of both tiers (capacity, bandwidth) is possible
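
For orientation, the short Python sketch below prints the CPU, memory, and GPU resources a node actually reports, so you can compare them with the figures listed above. It is an illustrative example, not part of the cluster's official tooling; it assumes a Linux node with /proc/meminfo and, on the GPU nodes, the nvidia-smi command line tool in the PATH.

# Minimal sketch for comparing a node's reported resources with the list above.
# Assumptions (not from the official docs): a Linux node with /proc/meminfo and,
# on GPU nodes, the nvidia-smi command line tool available in the PATH.
import os
import shutil
import subprocess


def logical_cpus():
    # os.cpu_count() counts logical CPUs: 128 per standard node
    # (2x 64-core Epyc 7702) if SMT is disabled, 256 if it is enabled.
    return os.cpu_count()


def total_ram_gib():
    # MemTotal in /proc/meminfo is given in kB (KiB); convert to GiB.
    with open("/proc/meminfo") as meminfo:
        for line in meminfo:
            if line.startswith("MemTotal:"):
                return int(line.split()[1]) / 1024 ** 2
    raise RuntimeError("MemTotal not found in /proc/meminfo")


def gpu_names():
    # GPU model names reported by nvidia-smi, or an empty list on nodes
    # without GPUs (or without the tool installed).
    if shutil.which("nvidia-smi") is None:
        return []
    result = subprocess.run(
        ["nvidia-smi", "--query-gpu=name", "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    )
    return [name.strip() for name in result.stdout.splitlines() if name.strip()]


if __name__ == "__main__":
    print("logical CPUs:", logical_cpus())
    print("RAM: %.0f GiB" % total_ram_gib())
    print("GPUs:", ", ".join(gpu_names()) or "none")

On the interactive GPU node, for example, the last line should list the two A40 devices; on the A100 GPU node it should list four A100 devices.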

...