Support
You can open a support ticket at helpdesk.awi.de or send an email to hpc@awi.de. Please do not send personal emails to individual admins!
Hardware
...
64-bit processor, AMD Rome (Epyc 7702)
- 2 sockets per node
- 64 cores per socket, cTDP reduced from 200 W to 165 W
- → 128 cores/node
- 2 GHz (3.3 GHz boost)
- 512 GiB RAM (8 DIMMs per socket with 32 GiB DDR4-3200)
- 2x SSD 480 GB SATA
- 1x SSD 1.92 TB
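As a quick sanity check, the per-node figures above follow directly from the per-socket numbers. This is only illustrative arithmetic on the values quoted in this section, not an official sizing tool:

```python
# Derive the per-node totals from the per-socket specs quoted above.
sockets_per_node = 2
cores_per_socket = 64   # Epyc 7702
dimms_per_socket = 8
gib_per_dimm = 32       # DDR4-3200

cores_per_node = sockets_per_node * cores_per_socket
ram_gib_per_node = sockets_per_node * dimms_per_socket * gib_per_dimm

print(cores_per_node)    # 128 cores/node
print(ram_gib_per_node)  # 512 GiB RAM/node
```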
...
- 2x NVIDIA A40
- 2x AMD Rome (Epyc 7702)
- 512 GiB RAM (8 DIMMs per socket with 32GiB DDR4-3200)
- for testing, Jupyter notebooks, Matlab, ...
...
240x standard compute nodes (four per NEC HPC2824Ri-2 chassis)
- 256 GiB RAM (8 DIMMs per socket with 16 GiB DDR4-3200)
- 512 GiB NVMe per node
...
- 4 TiB RAM (16 DIMMs per socket with 128 GiB DDR4-3200)
- 512 GiB NVMe
- 7.68 TiB NVMe
...
- 4x NVIDIA A100/80
- 2x AMD Rome (Epyc 7702)
- 512/1024 GiB RAM
- 3x3.84 TiB = 11.52 TiB NVMe
- More GPU nodes will follow later, once we have gained initial experience with what you really need, and to offer the most recent hardware
...
- IBM Spectrum Scale (GPFS)
- Tier 1: 220 TiB NVMe as fast cache and/or burst buffer
- Tier 2: ~5.38 PiB NL-SAS HDD (NetApp EF300)
- future extension of both tiers (capacity, bandwidth) is possible
- Fast interconnect: HDR InfiniBand (100 Gb/s)
- All nodes connected to /isibhv (NFS, 10 Gb Ethernet)
- Alma Linux ("free RedHat", version 8.x)
In the FESOM2 benchmark used for the procurement, 240 Albedo nodes match the performance of 800 Ollie nodes.
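The benchmark figures imply a rough per-node ratio between the two systems. A minimal sketch of that arithmetic (just the two node counts from this section; actual speedup will vary by workload):

```python
# Per-node performance ratio implied by the FESOM2 procurement benchmark:
# 240 Albedo nodes matched 800 Ollie nodes.
ollie_nodes = 800
albedo_nodes = 240

per_node_ratio = ollie_nodes / albedo_nodes
print(f"One Albedo node ~ {per_node_ratio:.2f} Ollie nodes")  # ~3.33
```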
Preliminary schedule for the transition Ollie → Albedo
...