Here we collect first user feedback on albedo's performance.
With/without Hyperthreading (SMT)
Model | User | Pro SMT | Contra SMT |
---|---|---|---|
-- | admin | ~2500 kJ (∑ Esocket[0-7] according to lm_sensors), ~30% less | Nodes need more power (~3500 kJ) and get warmer in idle state |
FESOM | NEC | Using 128 threads per node: 3% faster (probably because the (buggy) GPFS daemon can use a virtual core) | Using 256 threads per node: 10% slower |
Python AI | vhelm | no impact/difference | no impact/difference |
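The admin's energy figures above can be reproduced by snapshotting the per-socket energy counters that lm_sensors exposes before and after a job and summing the difference. A minimal parsing sketch, assuming the `EsocketN: … kJ` label format of the AMD energy hwmon driver (the sample strings below are made up for illustration):

```python
import re

# Hypothetical snippets of `sensors` output taken before and after a job;
# the exact labels and units are an assumption about the amd_energy driver.
SAMPLE_BEFORE = """\
Esocket0:      1.20 kJ
Esocket1:      1.10 kJ
"""
SAMPLE_AFTER = """\
Esocket0:      2.50 kJ
Esocket1:      2.30 kJ
"""

def socket_energy_kj(sensors_output: str) -> float:
    """Sum all Esocket* energy counters (in kJ) found in `sensors` output."""
    return sum(
        float(value)
        for value in re.findall(r"Esocket\d+:\s+([\d.]+)\s+kJ", sensors_output)
    )

# Energy consumed by the job = counter sum after minus counter sum before.
consumed = socket_energy_kj(SAMPLE_AFTER) - socket_energy_kj(SAMPLE_BEFORE)
print(f"energy consumed: {consumed:.2f} kJ")  # → energy consumed: 2.50 kJ
```

Note that these counters wrap around eventually, so very long jobs may need intermediate snapshots.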
GPU nodes (A40 vs. A100)
Model | User | A40 vs. A100 |
---|---|---|
tensorflow-gpu AI application | vhelm | no difference |
Disk Access
Application | User | node-internal NVMe: /tmp | 100 Gbit/s InfiniBand GPFS: /albedo | 10 Gbit/s Ethernet: /isibhv |
---|---|---|---|---|
- ...
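One way to populate this table is a quick single-stream write test run against each filesystem. A minimal sketch (the target directories in the comment are placeholders; a real comparison should also cover reads, small files, and parallel access):

```python
import os
import tempfile
import time

def write_throughput(directory: str, size_mib: int = 32) -> float:
    """Write `size_mib` MiB to a scratch file in `directory`, return MiB/s."""
    block = b"\0" * (1 << 20)  # 1 MiB buffer
    with tempfile.NamedTemporaryFile(dir=directory, delete=False) as f:
        path = f.name
        start = time.perf_counter()
        for _ in range(size_mib):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())  # include the flush-to-storage in the timing
        elapsed = time.perf_counter() - start
    os.remove(path)
    return size_mib / elapsed

# Run the same test on each filesystem, e.g. (paths are hypothetical):
# for d in ("/tmp", "/albedo/scratch/<user>", "/isibhv/<project>"):
#     print(d, f"{write_throughput(d):.0f} MiB/s")
print(f"{write_throughput(tempfile.gettempdir()):.0f} MiB/s")
```

A single sequential write mainly probes bandwidth; GPFS and NFS-style filesystems often differ much more on metadata-heavy workloads, so results from this sketch are only a first indication.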
Runtime compared with ollie
...