
Jupyter is an interactive computing environment that lets you run notebooks mixing code, text, graphics, and LaTeX in a single document. To start a local Jupyter server, make sure Jupyter is installed in your currently activated conda environment, and then run:

$ jupyter lab --no-browser --ip 0.0.0.0 /path/to/start/from

The printed output will direct you to a website where you can open the Jupyter interface.
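If Jupyter is not installed yet, you can add it to the active conda environment first. A minimal sketch (the environment name myenv is just a placeholder):

$ conda activate myenv
$ conda install -c conda-forge jupyterlab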

There are different ways to use Jupyter from Albedo, listed below.

JupyterHub


Info

We are currently working on an experimental JupyterHub that will allow you to log on to the login nodes and run notebooks directly from the browser. If you want to test it out, try: http://albedo0.dmawi.de:8000 (note that VPN is required!). During the testing phase, access is provided only by request; please open a ticket at hpc@awi.de to get on the list.


You will be presented with a login page and, after logging in, with a selection of job profiles. You can run your notebook on a login node (not recommended), a compute node, or a GPU node. For a compute node or a GPU node, you need to specify which computing account SLURM should use; a list of available computing accounts is provided for you. Additionally, for a GPU node, you need to specify which type of GPU you want to use (A40 or A100) and how many GPUs you wish to use.
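If you are unsure which computing accounts you belong to, you can also query SLURM yourself from a shell on Albedo. A sketch using the standard sacctmgr client:

$ sacctmgr show associations user=$USER format=Account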


On the JupyterLab page (the interface you are provided after SLURM launches your job), you can select the Python3 kernel (bare-bones Python only) or the Analysis Toolbox kernel (most common scientific analysis and plotting packages). If you want to install your own kernel, you can do the following:

$ jupyter kernelspec install /albedo/soft/sw/conda-sw/analysis-toolbox/03.2023/share/jupyter/kernels/python3 --name "analysis-toolbox_03.2023"

Ensure you replace the path and name with values appropriate for your setup! Your conda environment needs to have the ipykernel package installed.
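Alternatively, you can register a kernel directly from your own conda environment. A sketch assuming a hypothetical environment called myenv that already has ipykernel installed:

$ conda activate myenv
$ python -m ipykernel install --user --name myenv --display-name "Python (myenv)"

The kernel then shows up in the JupyterLab launcher under the display name you chose.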

SLURM-enabled JupyterHub jobs are restricted to 12 hours.

JupyterLab from a login node

Load the analysis-toolbox:

[mandresm@albedo1:~]$ module load analysis-toolbox
[mandresm@albedo1:~]$ jupyter notebook --no-browser --ip=0.0.0.0
...
[I 15:26:36.310 NotebookApp] Jupyter Notebook 6.5.3 is running at:
[I 15:26:36.310 NotebookApp] http://albedo1:8891/?token=asdasdads
[I 15:26:36.310 NotebookApp]  or http://127.0.0.1:8891/?token=asdasdasd
[I 15:26:36.310 NotebookApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).
[C 15:26:36.313 NotebookApp] 
    
    To access the notebook, open this file in a browser:
        file:///albedo/home/mandresm/.local/share/jupyter/runtime/nbserver-3890270-open.html
    Or copy and paste one of these URLs:
        http://albedo1:8891/?token=asdasdaas
     or http://127.0.0.1:8891/?token=asdasda

On your local machine, paste the URL containing albedo0 or albedo1 into your browser, replacing albedo0 or albedo1 with albedo0.dmawi.de or albedo1.dmawi.de, respectively.
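For example, if Jupyter prints http://albedo1:8891/?token=asdasdads, you would open http://albedo1.dmawi.de:8891/?token=asdasdads in your local browser.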

JupyterLab from a COMPUTE or a GPU node

This example covers how to request a GPU node; doing the same with a COMPUTE node is almost identical, you just need to omit the GPU options from the salloc call, as sketched below.
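For reference, a compute-node allocation might look like the following sketch (the partition name here is an assumption; check sinfo to see which partitions are available to you):

$ salloc --partition=smp -A computing.computing --time=00:30:00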

Background: By default, Jupyter Notebook uses /run/user/<uid> as the directory for small files like notebook_cookie_secret. If you log in via ssh, /run/user/<uid> is created, and it is removed when you close your last login session on that computer. However, if you enter a node via Slurm sbatch, salloc, or srun, /run/user/<uid> is not available. Setting XDG_RUNTIME_DIR points Jupyter to a different path.

mandresm@albedo1:~$ salloc --partition=gpu --gpus=1 -A computing.computing --time=00:30:00
salloc: Pending job allocation 6526219
salloc: job 6526219 queued and waiting for resources
salloc: job 6526219 has been allocated resources
salloc: Granted job allocation 6526219
salloc: Waiting for resource configuration
salloc: Nodes gpu-001 are ready for job

mandresm@gpu-001:~$ export XDG_RUNTIME_DIR="/tmp/tmp_$SLURM_JOBID"
mandresm@gpu-001:~$ module load analysis-toolbox
mandresm@gpu-001:~$ jupyter notebook --no-browser --ip=0.0.0.0
...
[I 15:37:11.953 NotebookApp] Serving notebooks from local directory: /albedo/home/mandresm
[I 15:37:11.953 NotebookApp] Jupyter Notebook 6.5.3 is running at:
[I 15:37:11.953 NotebookApp] http://gpu-001:8888/?token=123
[I 15:37:11.953 NotebookApp]  or http://127.0.0.1:8888/?token=123
[I 15:37:11.953 NotebookApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).
[C 15:37:11.958 NotebookApp] 
    
    To access the notebook, open this file in a browser:
        file:///albedo/home/mandresm/.local/share/jupyter/runtime/nbserver-698858-open.html
    Or copy and paste one of these URLs:
        http://gpu-001:8888/?token=123
     or http://127.0.0.1:8888/?token=123

Now we have to establish an SSH tunnel from your PC to the compute node, in this case gpu-001, to forward the Jupyter Notebook port. Check the port number: it is usually 8888 for Jupyter Notebook, but it might differ. Open a new local terminal and execute:

mandresm@blik0256:~$ ssh -NL localhost:8888:gpu-001:8888 mandresm@albedo1.dmawi.de

If your ssh is already configured to connect to Albedo automatically, the process will simply idle at this point. If you are asked for your password, enter it; again, there will be no output. You don't need to do anything else here, just leave this local terminal open.
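If you tunnel regularly, you can also persist the forwarding in your local ~/.ssh/config instead of retyping the command. A sketch using the example username and node from above (the node name changes with every allocation, so adjust it each time):

Host albedo-jupyter
    HostName albedo1.dmawi.de
    User mandresm
    LocalForward 8888 gpu-001:8888

Then ssh -N albedo-jupyter establishes the same tunnel.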

Now copy the address that looks like http://gpu-001:8888/?token=123, paste it in your browser, and substitute gpu-001 with localhost. Jupyter should open now!

Singularity

Warning

Singularity support is still under construction!
