Hardware
- 2x interactive login nodes
  - 512 GiB RAM
  - 2x 480 GB SATA SSD
  - 1x 1.92 TB SSD
- 1x interactive GPU node
  - 2x NVIDIA A40
  - for testing, Jupyter notebooks, Matlab, ...
- 240x standard compute nodes (four each in one NEC HPC2824Ri-2 chassis)
  - 2x 64-bit AMD Rome Epyc 7702 processors per node, 64 cores each, 2 GHz (3.3 GHz boost), cTDP reduced from 200 W to 165 W
  - 256 GB RAM
  - 500 GB SSD
- 4 "fat" nodes as above, 4TB RAM, 500GB SSD + 7.5TB SSD
- 1 GPU node with 1TB RAM, 4x NVIDIA A100/80
- More GPU nodes will follow later, after we gained first experience of what you really need, and to offer most recent hardware
- Our small test node with NEC's new vector engine "SX-Aurora TSUBASA" can be integrated
- Fast interconnect: HDR Infinband or OmniPath
- RAM: 8 DIMMs per processor with 16 GB DDR4-3200 → 128 cores and 256 GiB RAM per standard node (a short capacity sketch after this list tallies the cluster-wide totals)
- 512 GiB NVMe per node
- 4x fat nodes (each in one NEC HPC2104Ri-1)
  - 4 TiB RAM
  - 512 GiB + 7.68 TiB NVMe (per node)
- 4x GPU nodes (HPC22G8Ri)
  - NVIDIA A100
  - 512 GiB RAM
  - 3x 3.84 TiB = 11.52 TiB NVMe
- File Storage:
  - IBM Spectrum Scale (GPFS)
  - Tier 1: 220 TB NVMe SSD as fast cache and/or burst buffer
  - Tier 2: ~5.38 PB NL-SAS HDD (NetApp EF300)
  - future extension of both tiers (capacity, bandwidth) is possible
- Fast interconnect: HDR InfiniBand
- All nodes connected to /isibhv (NFS, 10GbE)
- AlmaLinux ("free Red Hat", version 8.x)
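
The per-node and cluster-wide totals follow directly from the figures above. As a minimal sketch, the following Python snippet tallies them using only the node counts, DIMM layout, and storage-tier capacities quoted on this page; it introduces no numbers beyond those:

```python
# Tally per-node and cluster-wide capacity from the figures quoted above.
DIMM_GIB = 16          # 16 GB DDR4-3200 DIMMs
DIMMS_PER_CPU = 8
CPUS_PER_NODE = 2
CORES_PER_CPU = 64     # AMD Epyc 7702
STANDARD_NODES = 240

cores_per_node = CPUS_PER_NODE * CORES_PER_CPU                # 128
ram_per_node_gib = CPUS_PER_NODE * DIMMS_PER_CPU * DIMM_GIB   # 256 GiB

print(f"standard node: {cores_per_node} cores, {ram_per_node_gib} GiB RAM")
print(f"cluster total: {STANDARD_NODES * cores_per_node} cores, "
      f"{STANDARD_NODES * ram_per_node_gib / 1024:.0f} TiB RAM")

# File-storage tiers (decimal TB/PB, as quoted above).
tier1_tb = 220   # NVMe cache / burst buffer
tier2_pb = 5.38  # NL-SAS HDD (NetApp EF300)
print(f"storage: {tier1_tb} TB NVMe + {tier2_pb} PB HDD, "
      f"about {tier1_tb / 1000 + tier2_pb:.2f} PB total")
```

Running this prints 30,720 cores and 60 TiB RAM across the 240 standard nodes, and roughly 5.6 PB of file storage in total.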
In the FESOM2 benchmark we used for the procurement, 240 Albedo nodes compare to 800 Ollie nodes.
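
That comparison implies a node-for-node speedup, made explicit in this minimal sketch that uses only the two node counts quoted above:

```python
# Node-for-node speedup implied by the FESOM2 procurement benchmark:
# 240 Albedo nodes matched the throughput of 800 Ollie nodes.
albedo_nodes = 240
ollie_nodes = 800
speedup = ollie_nodes / albedo_nodes
print(f"one Albedo node replaces about {speedup:.2f} Ollie nodes")  # ~3.33
```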
...