...
The first three mountpoints below are personal directories; the two $PROJECT mountpoints are project directories.

| Mountpoint | /albedo/home/$USER | /albedo/work/user/$USER | /albedo/scratch/user/$USER | /albedo/work/projects/$PROJECT | /albedo/scratch/projects/$PROJECT | /albedo/burst |
|---|---|---|---|---|---|---|
| Quota (soft) | 100 GB | 3 TB | 50 TB | variable | variable | -- |
| Quota (hard) | 100 GB | 15 TB for 90 days | -- | 2x soft quota for 90 days | -- | -- |
| Delete | 90 days after the user account expired | 90 days after the user account expired | all data after 90 days | 90 days after the project expired | all data after 90 days | after 10 days |
| Security | Snapshots for 6 months | Snapshots for 6 months | -- | Snapshots for 6 months | -- | -- |
| Owner | $USER:hpc_user | $USER:hpc_user | $USER:hpc_user | $OWNER:$PROJECT | $OWNER:$PROJECT | root:root |
| Permission | 2700 → drwx--S--- | 2700 → drwx--S--- | 2700 → drwx--S--- | 2770 → drwxrws--- | 2770 → drwxrws--- | 1777 → drwxrwxrwt |
| Focus | many small files | large files, large bandwidth | large files, large bandwidth | large files, large bandwidth | large files, large bandwidth | low latency, huge bandwidth |
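The octal modes in the Permission row encode the setgid bit (leading 2) and the sticky bit (leading 1) in addition to the usual rwx bits. As a quick, hypothetical illustration in a writable location of your choice:

```bash
# Reproduce the home-directory mode from the table on a test directory.
mkdir permdemo
chmod 2700 permdemo      # setgid + rwx for the owner only
stat -c '%A' permdemo    # prints: drwx--S---
```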
System storage
System-wide software is installed and maintained in /albedo/home/soft/:
...
If you need space here, please contact hpc@awi.de.
Compute nodes & Slurm
- To work interactively on a compute node, use `salloc`; see the example after this list. You can use all the options (more CPUs, RAM, time, partition, qos, ...) described below.
- To submit a job from the login nodes to the compute nodes you need Slurm (the job scheduler, batch queueing system and workload manager).
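A minimal interactive session could look like this (the resource values are placeholders; `smp` and `normal` are the defaults listed below):

```bash
# One task with four cores for 30 minutes on the default partition and qos.
salloc --ntasks=1 --cpus-per-task=4 --time=00:30:00 -p smp --qos=normal
```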
A submitted job has/needs the following information/resources:
| What | Use | Default | Comment |
|---|---|---|---|
| A name | `--job-name=` | | |
| Number of nodes | `-N` or `--nodes=` | 1 | |
| Number of (MPI-)tasks (per node) | `-n` or `--ntasks=`, `--ntasks-per-node=` | 1 | Needed for MPI |
| Number of cores/threads per task | `-c` or `--cpus-per-task=` | 1 | Needed for OpenMP. If `-n N` and `-c C` are given, you get N×C cores. |
| Memory/RAM | `--mem=` | ntasks × nthreads × 1.6 GB (1.6 GB per CPU) | Only needed for SMP jobs; MPP jobs get whole nodes (cores and memory) |
| A maximum walltime | `--time=` | 01:00 | |
| A partition | `-p` | smp | |
| A qos (quality of service) | `--qos=` | normal | |

- Please take a look at our example scripts (from Ollie): SLURM Example Scripts
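As a rough sketch (not one of the official example scripts), a batch script combining the options above could look like the following; `my_program` is a placeholder:

```bash
#!/bin/bash
#SBATCH --job-name=demo        # a name
#SBATCH --nodes=1              # number of nodes
#SBATCH --ntasks=4             # MPI tasks; with -c 2 this allocates 4x2 = 8 cores
#SBATCH --cpus-per-task=2      # threads per task (OpenMP)
#SBATCH --time=01:00:00        # maximum walltime
#SBATCH -p smp                 # partition
#SBATCH --qos=normal           # quality of service

# Let OpenMP use exactly the cores Slurm assigned to each task.
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}

srun ./my_program              # placeholder executable
```

Submit it with `sbatch jobscript.sh` and monitor it with `squeue -u $USER`.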
...