Here we collect initial user feedback on albedo's performance.
## With vs. without Hyperthreading (SMT)
Model | User | Pro SMT | Contra SMT |
---|---|---|---|
idle | admin | - | ∑ E_socket[0-7] according to lm_sensors: |
stress-ng stream | admin | -- | ~13% slower |
FESOM | NEC | Using 128 threads per node: 3% faster (probably because the (buggy) GXFS daemon can use a virtual core) | Using 256 threads per node: 10% slower |
Python AI | vhelm | no impact/difference | no impact/difference |
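To reproduce comparisons like the ones above, one could toggle SMT per job via Slurm's standard `--hint` option instead of rebooting nodes. This is only a sketch: the walltime is arbitrary and `stress-ng` is assumed to be available; `--stream 0` starts one memory-bandwidth worker per online CPU.

```shell
#!/bin/bash
# Hedged sketch of a single-node SMT on/off comparison; adjust to albedo.
#SBATCH --nodes=1
#SBATCH --exclusive
#SBATCH --time=00:10:00

# Physical cores only (SMT siblings left idle):
srun --hint=nomultithread stress-ng --stream 0 --timeout 60s --metrics-brief

# All hardware threads (SMT siblings used as well):
srun --hint=multithread stress-ng --stream 0 --timeout 60s --metrics-brief
```

Comparing the reported memory rates of the two runs gives the kind of percentage quoted in the table.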
## GPU nodes (A40 vs. A100)
Model | User | A40 vs. A100 |
---|---|---|
tensorflow-gpu AI application | vhelm | no difference |
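For such comparisons, the same job can be submitted once per GPU type. A minimal sketch, assuming the GRES type names `a40`/`a100` and a job script `train.sh` (both placeholders; check what is actually configured):

```shell
# Hedged sketch: run the identical job on each GPU type and compare runtimes.
# GRES type names and train.sh are assumptions -- list the configured GRES
# with: sinfo -o "%N %G"
sbatch --gres=gpu:a40:1  train.sh
sbatch --gres=gpu:a100:1 train.sh
```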
## Disk Access
Application | User | node-internal NVMe: /tmp | 100 Gbit/s InfiniBand GPFS: /albedo | 10 Gbit/s Ethernet: /isibhv |
---|---|---|---|---|
- ...
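A rough way to fill this table is a sequential-write test per tier. This is only a coarse sketch (`fio` would be more rigorous), and writing directly under /albedo and /isibhv top-level directories is an assumption; use your own scratch or home subdirectory there.

```shell
# Hedged sketch: 256 MiB sequential write to each storage tier from the
# table above. conv=fsync forces the data to storage so the page cache
# cannot hide the real throughput; tiers not mounted/writable are skipped.
for dir in /tmp /albedo /isibhv; do
    [ -d "$dir" ] && [ -w "$dir" ] || continue
    f="$dir/ddtest.$$"
    dd if=/dev/zero of="$f" bs=1M count=256 conv=fsync 2>&1 | tail -n 1
    rm -f "$f"
done
```

The last line of dd's output reports the achieved MB/s for that tier.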
## Runtime compared with ollie
- ...
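When collecting numbers for this comparison, wall-clock times of finished jobs can be read from Slurm accounting on each machine; `<jobid>` below is a placeholder for a job that ran the same setup on albedo and on ollie.

```shell
# Hedged sketch: pull elapsed time and size of a finished job from the
# accounting database for an albedo-vs-ollie comparison.
sacct -j <jobid> --format=JobID,JobName,Elapsed,NNodes,NCPUS,State
```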