
Here we collect first user feedback on albedo's performance.

With/without Hyperthreading (SMT)

| Model | User | Pro SMT | Contra SMT |
| --- | --- | --- | --- |
| idle | admin | - | ∑ E_socket[0-7] according to lm_sensors: nodes need 30% more power and get warmer with SMT (3500 kJ vs. ~2500 kJ without) |
| stress-ng stream | admin | - | ~13% slower |
| FESOM | NEC | using 128 threads per node: 3% faster (probably because the (buggy) GXFS daemon can use a virtual core) | using 256 threads per node: 10% slower |
| Python AI | vhelm | no impact/difference | no impact/difference |
| matlab (`#SBATCH tasks=8`, `#SBATCH cpus-per-task=16`) | vhelm | - | runtime 1440 s instead of 1366 s → ~5% slower |
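When collecting such measurements it helps to record whether SMT was actually enabled on the node for each run. A minimal sketch, assuming a recent Linux kernel (the sysfs flag exists since kernel 4.18; older kernels do not expose it):

```python
from pathlib import Path

def smt_active(sysfs_path="/sys/devices/system/cpu/smt/active"):
    """Return True if the kernel reports SMT as enabled, False if
    disabled, or None if this kernel does not expose the flag."""
    p = Path(sysfs_path)
    if not p.exists():
        return None
    return p.read_text().strip() == "1"
```

Calling `smt_active()` at job start and logging the result alongside the timings makes the with/without-SMT comparison reproducible.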


GPU nodes (A40 vs. A100)

| Model | User | A40 vs. A100 |
| --- | --- | --- |
| tensorflow-gpu AI application | vhelm | no difference |
| python3, matrix operations with numpy (fat) vs. cupy (gpu) | sviquera | |
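A numpy-vs-cupy matrix-operation comparison like the one above can be reproduced with a short script. A sketch, assuming cupy is installed on the GPU nodes (matrix size and dtype are illustrative, not the values used in the test above):

```python
import time
import numpy as np

def bench_matmul(xp, n=512, repeats=3):
    """Time an n x n matrix multiply with the given array module
    (numpy on the CPU, cupy on the GPU); return best time and result."""
    a = xp.random.rand(n, n).astype(xp.float32)
    b = xp.random.rand(n, n).astype(xp.float32)
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        c = a @ b
        # cupy launches kernels asynchronously; wait before stopping the clock
        if hasattr(xp, "cuda"):
            xp.cuda.Stream.null.synchronize()
        best = min(best, time.perf_counter() - t0)
    return best, c

cpu_time, _ = bench_matmul(np)
try:
    import cupy as cp  # only available on the GPU partitions
    gpu_time, _ = bench_matmul(cp)
    print(f"numpy: {cpu_time:.4f} s, cupy: {gpu_time:.4f} s")
except ImportError:
    print(f"numpy: {cpu_time:.4f} s (cupy not installed)")
```

Running the same script on an A40 and an A100 node gives a directly comparable per-card number.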







Disk Access



| Application | User | albedo: node-internal /tmp (NVMe) | albedo: /albedo (GPFS, 100 Gb Infiniband) | albedo: /isibhv (NVMe, 10 Gb Ethernet) | ollie: node-internal /tmp (SSD) | ollie: /work (BeeGFS, 100 Gb Omnipath) | ollie: /isibhv (NVMe, 10 Gb Ethernet) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| idm | vhelm | ~9 sec | 10~13 sec | 8~11 sec, spikes up to 181 sec | 27~29 sec | 27~37 sec | 29~60 sec, spikes up to 98 sec |
| `ls -f` / `ls` (default, with stat/color) on a directory with 30000 entries | | 0.08 sec / 0.19 sec | 0.04 sec / ~16 sec | 0.03 sec / 0.2 sec | 0.1 sec / 0.4 sec | 0.2 sec / 1.6 sec | 0.08 sec / 0.3~0.7 sec |
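The gap between `ls -f` and a plain `ls` comes from the per-entry stat() that a colorized listing needs; on a parallel filesystem every stat() is an extra metadata round trip. A sketch of the two access patterns in Python (using os.scandir rather than ls itself, so absolute numbers will differ from the table):

```python
import os
import tempfile
import time

def list_names(path):
    """Like `ls -f`: read the directory entries only, no per-file stat()."""
    return [entry.name for entry in os.scandir(path)]

def list_with_stat(path):
    """Like a colorized `ls`: stat() every entry to classify it.
    On GPFS/BeeGFS each stat() costs a metadata request."""
    names = []
    for entry in os.scandir(path):
        entry.stat()
        names.append(entry.name)
    return names

# Tiny demo directory; on /albedo the effect only shows with many entries.
with tempfile.TemporaryDirectory() as d:
    for i in range(1000):
        open(os.path.join(d, f"file{i}"), "w").close()
    t0 = time.perf_counter(); list_names(d); t1 = time.perf_counter()
    list_with_stat(d); t2 = time.perf_counter()
    print(f"names only: {t1 - t0:.4f} s, with stat: {t2 - t1:.4f} s")
```

On local NVMe the stat() calls are nearly free, which matches the table: the penalty is largest on the network filesystems.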








  • ...

Runtime compared with ollie

  • ...

