Here we collect initial user feedback on albedo's performance.
## With/without hyperthreading (SMT)

| Model | User | Pro SMT | Contra SMT |
|---|---|---|---|
| idle | admin | - | ∑ E_socket[0-7] according to lm_sensors: … |
| stress-ng stream | admin | -- | ~13% slower |
| FESOM | NEC | Using 128 threads per node: 3% faster (probably because the (buggy) GXFS daemon can use a virtual core) | Using 256 threads per node: 10% slower |
| Python AI | vhelm | no difference | no difference |
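The exact commands behind these results were not recorded; a minimal sketch of how such an SMT comparison can be reproduced (the `--stream 0` worker count, the 60 s runtime, and the sysfs SMT toggle are assumptions, not the invocation actually used):

```bash
# Check whether SMT is currently enabled (Linux >= 4.19):
cat /sys/devices/system/cpu/smt/control        # prints "on" or "off"

# Per-socket energy/power readings as reported by lm_sensors (driver-dependent):
sensors

# STREAM-like memory-bandwidth stressor, one worker per online CPU ("0" = all):
stress-ng --stream 0 --timeout 60s --metrics-brief

# Disable SMT (requires root) and rerun the stressor for comparison:
echo off | sudo tee /sys/devices/system/cpu/smt/control
stress-ng --stream 0 --timeout 60s --metrics-brief
```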
## GPU nodes (A40 vs. A100)

| Model | User | A40 vs. A100 |
|---|---|---|
| tensorflow-gpu AI application | vhelm | no difference |
| python3, matrix operations with numpy (fat node) vs. cupy (GPU) | sviquera | |
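No numbers have been reported for the numpy/cupy row yet; a minimal sketch of the kind of comparison it describes (matrix size, float32 dtype, and the warm-up step are assumptions):

```bash
python3 - <<'PY'
import time
import numpy as np

n = 8192
a = np.random.rand(n, n).astype(np.float32)

# CPU: dense matrix multiply with numpy
t0 = time.perf_counter()
(a @ a).sum()
print(f"numpy (CPU): {time.perf_counter() - t0:.2f} s")

# GPU: the same operation with cupy, if available
try:
    import cupy as cp
    b = cp.asarray(a)
    b @ b                                  # warm-up: initializes CUDA, compiles kernels
    cp.cuda.Stream.null.synchronize()
    t0 = time.perf_counter()
    (b @ b).sum()
    cp.cuda.Stream.null.synchronize()      # kernels run async, so sync before reading the timer
    print(f"cupy (GPU): {time.perf_counter() - t0:.2f} s")
except ImportError:
    print("cupy not available")
PY
```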
## Disk Access

| Application | User | albedo: node-internal /tmp (NVMe) | albedo: /albedo, 100 Gb Infiniband (GPFS) | albedo: /isibhv, 10 Gb Ethernet (NVMe) | ollie: node-internal | ollie: /work, 100 Gb Omnipath (BeeGFS) | ollie: 10 Gb Ethernet |
|---|---|---|---|---|---|---|---|
| idm | vhelm | ~9 sec | 10~13 sec | 8~11 sec, spikes up to 181 sec | 27~29 sec | 27~37 sec | 29~60 sec, spikes up to 98 sec |
| `ls -f` on a directory with 30000 entries | | 0.08 sec | 0.04 sec | 0.03 sec | 0.1 sec | 0.2 sec | 0.08 sec |
| `ls` (default, with stat/color) on the same directory | | 0.19 sec | 6~15 sec | 0.2 sec | 0.4 sec | 1.6 sec | 0.3~0.7 sec |
- ...
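For reference, a minimal sketch of how the `ls` timings above can be reproduced on any of the filesystems (the scratch path and file names are assumptions; output is redirected to /dev/null to exclude terminal rendering):

```bash
# Create a scratch directory with 30000 entries:
mkdir -p /tmp/lstest && cd /tmp/lstest
seq -f "file%g" 1 30000 | xargs touch

# Raw listing: readdir only, no sorting and no per-file stat():
time ls -f > /dev/null

# Default interactive ls: stat()s every entry for color/classification:
time ls --color=always > /dev/null
```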
## Runtime compared with ollie
- ...