
HSM at AWI

The HSM (Hierarchical Storage Management) provides for your data:

  1. unlimited storage space,
  2. two replicas on tape (in different buildings), and
  3. a third copy on disk (for selected data, smaller files).

However, there are two caveats:

  1. Your data is archived on tape, so retrieving it takes a while (unless it is online or has a disk copy).
  2. You need a project to archive your data (apply for a project here: eResources).

Four domains of archiving/storing at AWI


Domain | Purpose
A      | Permanent archive for irrecoverable data. Metadata is required for this domain. Please contact Stefanie Schumacher or Janine Felden for more information about PANGAEA and how to submit your data.
P      | Long-term project data with a predefined lifetime. A project can be created with eResources (https://cloud.awi.de/#/projects). The setgid bit on directories ensures that new (sub-)directories automatically belong to the project (POSIX) group.
IB, IP | Automatic replication of project data (when selected in eResources) in Bremerhaven and Potsdam, respectively.
D      | For internal IT use (additional backups and storage of expired user/project data).

Domain | File systems           | Disk copy | How to apply / notes
A      | /hs/platforms          | yes       | Archive of sensor.awi.de. Only possible with sufficient metadata.
A      | /hs/usera              | yes       | Archive of individual projects.
A      | /hs/usero, /hs/pangaea | yes       | PANGAEA.
P      | /hs/Proj-...           | yes       | Project data. Please use eResources to create a project.
C      | /hs/userc, /hs/userm   | no        | Has vanished since early 2020.
IB, IP | /hs/isirep-...         | no        | Project replicas from the Isilons (Bhv, Pot); disaster recovery only.
D      | /hs/backup             | yes       | samfsdumps and logfiles; IT internal use only.
D      | /hs/platf-back         | no        | Selection of /hs/platforms for disaster recovery.
D      | /hs/store              | no        | 10-year storage of expired user and project data.

A disk archive is available for specific (smaller) files on some file systems. This allows fast access to offline files. The availability of this disk archive depends on the actual resources/usage and may change.

Access to hssrv1

Read Access

Files might be offline for several reasons. If you access them, they are read from tape automatically, but this takes some time. If you want to read/copy more than one file, staging is strongly recommended (background: see "Some general Information about SamFS & HSM" below; commands: see "User commands").

The Windows Way

Connect to \\hssrv1.awi.de\ within Windows Explorer (use the right mouse button and add a network drive). You can either browse the shared directories or connect directly to e.g., \\hssrv1.awi.de\csys\<project>.

The Linux Way

All HSM file systems are shared and mounted automatically on most Linux clients. A simple ls /hs/csys/ should do. If any directory is missing, please contact hsm-support@awi.de
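
For example (the project name below is a made-up placeholder):

ls /hs/                      # list all mounted HSM file systems
ls -l /hs/csys/myproject     # browse a (hypothetical) project directory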

Read/Write Access

The HSM system is shared/mounted read-only. To write data into the tape archive, use one of the following options:

Best choice: rsync

rsync -e ssh -Pauv <file|dir> <username>@hssrv1.awi.de:<destination-dir>

rsync is the most versatile way of transferring data. E.g., it allows updates with the -u option, which ensures that only new files are copied (and changed files overwritten); existing unchanged files are not touched. This is important to reduce tape access. You do not want to use -c (checksum), because this would stage all files from tape to the disk cache for a complete file comparison. When copying directories you need -r (recursive, already included in -a).

Fast choice: sftp (e.g., filezilla)

sftp provides a fast way of transferring large amounts of data. Use your favourite ftp client. Note, however, that only two connections per user are allowed; if you request more, your connection will be terminated. sftp uses the secure ssh protocol and should be preferred. Use port 22 for sftp.

Do not use: scp

scp <file|dir> <username>@hssrv1.awi.de:<destination-dir>

scp seems convenient, but it is slightly slower than ftp and/or rsync when transferring data. It also simply overwrites existing files; an update mode (like rsync -u) is not possible. Overwriting creates new tape copies, and you do not want that!
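
A concrete call might look like this (username, project, and directory names are invented for illustration):

# Upload a local directory into a project archive; -u skips unchanged files:
rsync -e ssh -Pauv results/ jdoe@hssrv1.awi.de:/hs/Proj-demo/results/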

Note: If you have to archive many (>100,000) small (<100 MB) files, this stresses the system more than necessary. Please zip or tar[.gz] your directories and upload the compressed files instead.
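
For example (archive and project names are made up):

tar czf results-2019.tar.gz results/     # pack many small files into one archive
rsync -e ssh -Puv results-2019.tar.gz jdoe@hssrv1.awi.de:/hs/Proj-demo/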

Execute commands on hssrv1

Direct access (login) to hssrv1 is not possible. However, you can execute remote commands on hssrv1 in a restricted shell to get information about your data; e.g., you can release and stage your data if necessary. See the "User commands" section for some useful commands. They are executed with: ssh <username>@hssrv1 <command>
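
For example, to inspect the archive status of a single file (username and path are hypothetical; sls is described under "User commands"):

ssh jdoe@hssrv1 "sls -D /hs/csys/myproject/data.nc"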

Change the permissions

Linux way:
As the file systems are mounted read-only, you have to execute chmod on the server. Since permissions for directories and files differ slightly (most files should not have the 'x' bit), the following suggestion might be useful to restore the default permissions:

ssh hssrv1.awi.de find /hs/<DIR> -type d -exec chmod 2775 {} \;   # directories get drwxrwsr-x
ssh hssrv1.awi.de find /hs/<DIR> -type f -exec chmod 0664 {} \;   # files get -rw-rw-r--
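
Depending on your local shell, the trailing \; may be consumed before it reaches the remote find. If the commands above fail with an -exec error, quoting the whole remote command is a defensive alternative (not an official recommendation):

ssh hssrv1.awi.de 'find /hs/<DIR> -type d -exec chmod 2775 {} \;'
ssh hssrv1.awi.de 'find /hs/<DIR> -type f -exec chmod 0664 {} \;'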

Windows way
File permissions can be changed with FileZilla or another SFTP program.

Create an ssh-key for hssrv1

Your $HOME on hssrv1 is the standard UNIX home directory, so you can use an ssh key for hssrv1. Execute these commands in a terminal (e.g., PuTTY on Windows):

  • Execute ssh-keygen -t rsa on your computer/client and just press enter three times (confirming the key location and an empty passphrase).
  • You cannot use ssh-copy-id for hssrv1, because login on hssrv1 is not possible; hence you need one of the following approaches:
    1. If you have access to another Linux server, e.g., linsrv1.awi.de ($HOME is identical): ssh-copy-id -i ~/.ssh/id_rsa.pub linsrv1.awi.de
    2. If you want to add your key to an existing authorized_keys file:
      • scp hssrv1.awi.de:.ssh/authorized_keys /tmp/$$
      • cat ~/.ssh/*pub >> /tmp/$$
      • scp /tmp/$$ hssrv1.awi.de:.ssh/authorized_keys
    3. If this is your first ssh key:
      • ssh hssrv1.awi.de mkdir .ssh
      • scp ~/.ssh/id_rsa.pub hssrv1.awi.de:~/.ssh/authorized_keys
      • ssh hssrv1.awi.de "chmod 700 ~/.ssh && chmod 600 ~/.ssh/authorized_keys"

Note: Starting with the 7.0 release of OpenSSH, support for ssh-dss (DSA) keys has been disabled by default. You can re-enable support locally by updating your ssh client configuration (e.g., /etc/ssh/ssh_config, /opt/local/etc/ssh/ssh_config, or ~/.ssh/config) with:

PubkeyAcceptedKeyTypes=+ssh-dss

Some general Information about SamFS & HSM

Principle Idea

  • SamFS stands for (S)torage (a)rchive (m)anager (F)ile (S)ystem.
  • SamFS is a (H)ierarchical (S)torage (M)anagement system (HSM). The HSM consists of two storage layers: the cache, to speed up access, and the hierarchy. Based on a set of rules, data is stored on the connected storage devices (tapes and possibly disks).

The Circle of Life

Archiving

  • When a file is created in SamFS (e.g., by rsync, scp, ftp), the data is stored on a fast cache system (a hard drive).
  • Depending on predefined policies (e.g., when the file has not been modified for a certain amount of time), the file is automatically archived on slower (and much cheaper) tapes.
  • A file that has just been created is online.

Releasing

  • The metadata (filename, size, ownership, permissions, etc.) of a file always stays on the cache system and remains visible, but
  • when the cache system fills up (e.g., 90% capacity), the data of large files and of files that have not been touched for some time is released.
  • If the data of a file has been released, the file is called offline.
  • At first glance, the user does not see any difference between an online and an offline file.

Staging

  • When offline data is accessed, SamFS intercepts the call and automatically retrieves the data from the archive media, using the metadata to locate the media.
  • In the meantime, reads from this file are blocked, so the process accessing the data blocks, too.
  • When accessing more than a few files, prior staging is strongly recommended (see User commands)!
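
Putting this together, a typical staging workflow uses the user commands listed below (<DIR> is a placeholder):

ssh hssrv1 "sfind /hs/<DIR> -offline"   # which files are offline?
ssh hssrv1 "stage -r /hs/<DIR>"         # start staging; returns immediately so SamFS can optimize tape access
ssh hssrv1 "stage -r -w /hs/<DIR>"      # optional second call: wait until everything is online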

Recycling

  • If the content of a file changes, a new archive copy has to be produced. (You cannot modify just the relevant bits on the tape.)
  • The previous archive copy becomes useless (aside from serving as an additional backup of the previous version).
  • If a file is deleted, its archive copy becomes useless, too.
  • Both processes result in unused (invalid) sections on a tape.
  • Eventually only a small part of a tape contains relevant (up-to-date) information. The remaining valid data is re-archived on other tapes; the old tape is erased and can be reused for future archive copies.
  • This happens through the following tasks:
    1. The recycler marks the tape and/or files with R.
    2. The next archiver run finds these files and starts re-archiving; the R flag of each file vanishes and a new vsn (volume serial name) is set for the copy.
    3. The recycler recognises that all files have been copied elsewhere, because they have a new unique vsn. The old tape gets the status c (candidate) and, depending on the settings in /etc/opt/SUNWsamfs/recycler.delay, an at job is scheduled for /etc/opt/SUNWsamfs/scripts/recycler.sh.



Contact

If you have problems, please contact: hsm-support@awi.de

User commands

To be executed as:

ssh hssrv1 <command>

mkdir
  Create a new directory, e.g., ssh hssrv1 mkdir /hs/csys/<project>/newdir

stage
  Stage a file before you access it. If you use stage -w <file>, the command returns when the file is online. If you want to access more than one file (e.g., a complete directory), use stage -r <dir> (recursive) and, optionally, stage -r -w <dir> afterwards if the terminal should wait until all files are staged. NOTE: Never use -w in your first stage command, because that would prevent SamFS from optimizing the tape access.

sls -D (or sls -2)
  Equivalent to ls; shows detailed information about a file and its archive status (see ssh hssrv1 man sls for more information).

sdu -h
  Equivalent to du; reports sizes including data on tape, not only on the disk cache (see ssh hssrv1 man sdu for more information).

release
  Release the disk-cache space of archived files if the online quota reaches its limit. Try release -r <dir> to recursively release the files in (sub-)directories.

sfind
  Equivalent to find; shows correct information about file sizes on tape, not only on the disk cache (see ssh hssrv1 man sfind for more information).

saminfo.sh -q
  Get the quota of all groups on the HSM.

saminfo.sh -s
  Show the staging status.

saminfo.sh -t
  Show the tape drive status.
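
A quick status check before staging or copying large amounts of data (commands from the table above):

ssh hssrv1 "saminfo.sh -q"   # are we within the group quota?
ssh hssrv1 "saminfo.sh -s"   # what is currently being staged?
ssh hssrv1 "saminfo.sh -t"   # are the tape drives busy?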

A useful combination, if you want to bring all netcdf files in a specific directory online, would be something like:

ssh hssrv1 "sfind <absolute-path> -offline -name '*.nc' -exec stage.sh {} \;"


