
HSM at AWI

The HSM provides for your data:

  1. unlimited storage space and
  2. two backups on tape

However, there are two caveats:

  1. Your data is archived on tape, so it will take a while to get it back (unless it is in the cache or has a disk copy).
  2. You need a project to archive your data (apply for a project here: https://cloud.awi.de/#/projects).

Two domains of archiving at AWI


Domain A: Permanent archive for irrecoverable data. Metadata is required for this domain. Please contact Stefanie Schumacher for more information about Pangaea.

Domain B: Long-term project data with a predefined lifetime. A project can be created with eResources (https://cloud.awi.de/#/projects). A sticky bit in the project area protects files from being deleted by anyone other than the admin or the owner. The setgid bit on directories ensures that new (sub-)directories automatically belong to the UNIX group.

Domain C: Deprecated, will vanish soon. This former personal user data will be deleted after the user leaves AWI.
Domain   File System    Tape Archive   Disk Archive   Purpose / How to apply
A        /hs/usera      Yes            Yes            Archive of individual projects
A        /hs/usero      Yes            Yes            Pangaea
B        /hs/bsys       Yes            Yes            Biological science
B        /hs/csys       Yes            Yes            Climate science
B        /hs/gsys       Yes            Yes            Geophysics science
B        /hs/tech       Yes            Yes            Technical science
B        /hs/potsdam    Yes            No             Data from Potsdam
C        /hs/userc      Yes            No             Deprecated user data
C        /hs/userm      Yes            No             Deprecated user data

For projects in domain B, please use eResources (https://cloud.awi.de/#/projects) to create a project.

A disk archive is available for specific (smaller) files in domains A and B. This allows fast access to offline files. The availability of this disk archive depends on the actual resources/usage and might change.

Access to hssrv1

Read Access

Files might be offline for several reasons. If you access such files, they are read from tape automatically, but this takes some time. If you want to read/copy more than one file, staging is strongly recommended (see below).

The Windows Way

Connect to \\hssrv1.awi.de\ within Windows Explorer (use the right mouse button and map a network drive). You can either browse the shared directories or connect directly, e.g., to \\hssrv1.awi.de\csys\<project>.

The *nix Way

All HSM file systems are shared and mounted automatically on all AWI computers. A simple ls /hs/userm/ should do.

Read/Write Access

Best choice :-)
Command: rsync -e ssh -uvP[r] <file|dir> <username>@hssrv1.awi.de:<destination-dir>
Notes: rsync is the most versatile way of transferring data. E.g., it allows updates with the -u option. This ensures that only new files are copied (and overwritten); existing (unchanged) files are not touched. This is important to reduce tape access. You do not want to use -a, because this would stage all files from tape to the disk cache for a complete file comparison. When copying directories you need -r (recursive).

Fast choice :-)
Command: sftp (or an sftp client such as FileZilla)
Notes: sftp provides a fast way of transferring large amounts of data. Use your favourite ftp client. However, note that only two connections per user are allowed; if you open more, your connection will be terminated. sftp uses the secure ssh protocol and should be preferred. Use port 22 for sftp.

Do not use! :-(
Command: scp <file|dir> <username>@hssrv1.awi.de:<destination-dir>
Notes: scp seems convenient, but it is slightly slower than ftp and/or rsync when transferring data. It also simply overwrites existing files; an update (like rsync -u) is not possible. This would also create new tape copies, which you do not want!
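
As an illustration, a typical recursive, update-only upload with rsync could look like the following (the local directory, project name, and username are placeholders):

rsync -e ssh -uvPr mydata/ <username>@hssrv1.awi.de:/hs/csys/<project>/mydata/

The -u option avoids rewriting files that already exist unchanged in the archive and therefore avoids unnecessary new tape copies.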

Note: If you have to archive many (>100 000) small (<100 MB) files, this will stress the system more than necessary. Please zip or tar your directories and upload the compressed files.
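
For such a directory, a minimal sketch (file and directory names are only examples) is to pack it first and upload the single archive file:

tar -czf mydata.tar.gz mydata/
rsync -e ssh -uvP mydata.tar.gz <username>@hssrv1.awi.de:/hs/csys/<project>/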

Execute commands on hssrv1

Direct access (login) to hssrv1 is not possible. However, you can execute remote commands on hssrv1 in a restricted shell to get information about your data. E.g., you can release and stage your data if necessary. See the User commands section below for some useful commands. They are executed with: ssh <username>@hssrv1 <command>
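
For example, to show the archive status of a single file (the path is only a placeholder):

ssh <username>@hssrv1 sls -D /hs/csys/<project>/myfile.nc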

Create an ssh-key for hssrv1

Your $HOME on hssrv1 is the standard UNIX home directory. You can use an ssh key for hssrv1. Execute these commands in a terminal (e.g., PuTTY on Windows):

  • Execute ssh-keygen -t rsa on your computer/client and just press Enter three times (to confirm the key location and an empty passphrase).
  • Use ssh-copy-id (or another method) to append the content of ~/.ssh/id_rsa.pub to <username>@hssrv1.awi.de:~/.ssh/authorized_keys (see the example below).
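
A minimal example of these two steps, run on your client (the username is a placeholder):

ssh-keygen -t rsa
ssh-copy-id <username>@hssrv1.awi.de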

Note: Starting with the 7.0 release of OpenSSH, support for ssh-dss keys has been disabled by default. You can re-enable support locally by updating your ssh configuration (in /etc/ssh, /opt/local/etc/ssh, or ~/.ssh/config) with:

PubkeyAcceptedKeyTypes=+ssh-dss

Some general information about SamFS & HSM

Principle Idea

  • SamFS stands for (S)torage (a)rchive (m)anager (F)ile (S)ystem.
  • SamFS is a (H)ierarchical (S)torage (M)anagement system (HSM). The HSM consists of two storage systems: the cache, which speeds up access, and the hierarchy. Based on a set of rules, data is stored on certain connected storage devices (tapes and possibly disks).

The Circle of Life

Archiving

  • When a file is created in SamFS (e.g., by rsync, scp, ftp), the data is stored on a fast cache system (a hard drive).
  • Depending on predefined policies (e.g., when the file has not been modified for a specific amount of time), the file is automatically archived on slower (and much cheaper) tapes.
  • A file that has just been created is online.

Releasing

  • The metadata (filename, size, ownership, permissions, etc.) of a file always stays on the cache system and remains visible, but
  • when the cache system fills up (e.g., to 90% capacity), the data of large files and of files that have not been touched for some time is released.
  • If the data of a file is released, the file is called offline.
  • In the first instance, the user does not see any difference between an online and an offline file.
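
You can also free cache space yourself with the release user command (see User commands below); the path here is only a placeholder:

ssh hssrv1 release -r /hs/csys/<project>/olddir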

Staging

  • When offline data is accessed, SamFS intercepts the call and automatically gathers the data from the archive media. SamFS uses information from the metadata to find the media.
  • In the meantime, reads from this file are blocked, and thus the process accessing the data blocks, too.
  • When accessing more than a few files, prior staging is strongly recommended (see User commands and the example below)!
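
For example, to stage a complete directory and afterwards wait until everything is online (the path is only a placeholder):

ssh hssrv1 stage -r /hs/csys/<project>/mydir
ssh hssrv1 stage -r -w /hs/csys/<project>/mydir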

Recycling

  • If the content of a file changes, a new archive copy has to be produced. (You cannot modify just the relevant bits on the tape.)
  • The previous archive copy becomes useless (aside from serving as an additional backup of a previous version).
  • If a file is deleted, the archive copy becomes useless, too.
  • Both processes result in unused (invalid) sections on a tape.
  • Eventually only a small part of a tape contains relevant (up-to-date) information. The remaining valid data is re-archived on other tapes; the old tape is erased and can be used for future archive copies.
  • This happens through the following tasks:
    1. The recycler marks the tape and/or files with R.
    2. The next archiver run finds these files and starts re-archiving; the R flag of the file vanishes and a new vsn (volume serial name) is set for this copy.
    3. The recycler recognises that all files have been copied somewhere else, because they have a new unique vsn. The old tape gets the status c (old candidate) and, depending on the settings in /etc/opt/SUNWsamfs/recycler.delay, an at job is scheduled for /etc/opt/SUNWsamfs/scripts/recycler.sh.



Contact

If you have problems, please contact hsm-support@awi.de (Tel. -1828).

User commands

To be executed as:

ssh hssrv1 <command>

mkdir: Create a new directory, e.g., ssh hssrv1 mkdir /hs/csys/<project>/newdir

stage: You can stage a file before you access it. If you use stage -w <file>, the command returns when the file is online. If you want to access more than one file (e.g., a complete directory), use stage -r <dir> (recursive) and, additionally (optional), stage -r -w <dir> if you want the terminal to wait until all files are staged. NOTE: Never use -w in your first stage command, because it would prevent SamFS from optimizing the tape access.

sls -D (or sls -2): Equivalent to ls; shows detailed information about a file and its archive status (see ssh hssrv1 man sls for more information).

sdu -h: Equivalent to du; shows detailed information about a file and its archive status (see ssh hssrv1 man sdu for more information).

release: Releases the disk space of archived files if the online quota reaches its limit. Try release -r <dir> to recursively release files in (sub-)directories.

sfind: Equivalent to find; shows correct information about the file size on tape (and not only on the disk cache; see ssh hssrv1 man sfind for more information).

saminfo.sh -q: Get the quota of all groups on the HSM.

saminfo.sh -s: Show the staging status.

saminfo.sh -t: Show the tape drive status.

A useful combination, if you want to get all NetCDF files in a specific directory online, would be something like:

ssh hssrv1 "sfind <absolute-path> -offline -name '*.nc' -exec stage.sh {} \;"

