
Access

HAL head nodes

Connection to the IPSL ESPRI Mesocenter is made over SSH to the cluster head nodes. The head nodes are reachable worldwide via SSH, but only with ED25519 or RSA (4096-bit) key types, and no password is asked upon connection (a passphrase may still be asked to decipher your private key).

Head nodes addresses

The fully qualified address of the head nodes of HAL:

  • hal.obs.uvsq.fr or 194.254.41.50

The head nodes of HAL are only for access and file transfers; no scientific software is installed on them.

hal.obs.uvsq.fr is meant for submitting jobs to hal1 through hal6. See this page about the computing nodes of HAL.

Warning

The head nodes are meant for short, low-memory administrative commands. Read the interactive jobs page to learn how to run long interactive jobs, with or without GPU support.

HAL access

The HAL cluster is composed of the machine hal.obs.uvsq.fr (or hal0.obs.uvsq.fr), called hal0, and the computing nodes hal[1-6].obs.uvsq.fr. hal0 is the head node of the cluster, which accepts connections from the internet with SSH key authentication. The hal[1-6] machines can only be reached by bouncing through hal0, provided you have a running job on the target machine.

Connection without JupyterLab/Jupyter Notebook

After requesting an account on the computing and data centre of the IPSL, and once the request is accepted by the IPSL, the connection to the cluster is established over SSH using the key declared to the centre (more details at this page). If you want graphical output (graphical windows forwarded to your computer), add the -X option to the ssh command (X forwarding). Feel free to create connection aliases using a config file (more details at this page).

Example from a terminal on your machine:

ssh -X <cluster_login>@hal.obs.uvsq.fr
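As a sketch, a connection alias in your ~/.ssh/config could look like this (the alias name hal and the key path are placeholders, not prescribed by the centre):

```
Host hal
    HostName hal.obs.uvsq.fr
    User <cluster_login>
    IdentityFile ~/.ssh/id_ed25519
    ForwardX11 yes
```

With such an entry in place, ssh hal is equivalent to the full command above.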

Connection with Jupyter Lab/Notebook

The idea is to create an SSH tunnel that forwards the traffic of the Jupyter server running on the cluster to your machine. We propose two methods; both create an interactive job (more information here) on the cluster and an SSH tunnel.

Warning

At the end of your work session, close all the terminals connected to the cluster (press CTRL+C several times) in order to free the allocated resources (simply closing the notebook does not stop your interactive job). If some sessions remain stuck, you can cancel all your jobs with this command executed on hal0: scancel -u <cluster_login>.
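Before cancelling, you may want to check which of your jobs are still running. A minimal sketch, to be executed on hal0:

```shell
# List your pending/running jobs (the NODELIST column shows the allocated node).
squeue -u <cluster_login>

# Cancel all of them if some sessions remain stuck.
scancel -u <cluster_login>
```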

Method 1 (semi-automatic)

Example from a terminal on your machine:

ssh <cluster_login>@hal.obs.uvsq.fr
srun --gres=gpu:1 /net/nfs/tools/bin/jupytercluster.sh 'pytorch/1.10.1' # pytorch/1.10.1 or another ML module containing JupyterLab!

Info

The jupytercluster.sh script displays a command line in the terminal, to be executed in another terminal on your machine (not on hal0). Example: ssh -N -L 30920:hal4.obs.uvsq.fr:30920 delcambre@hal.obs.uvsq.fr. This command produces no output, which is normal. Then, in a web browser on your machine, copy/paste the URL displayed at the very end of the first terminal, which starts with http://127.0.0.1:........ Example: http://127.0.0.1:30920/?token=c3a46af69f1eb10fc0e3b8aa270050d7d3046a30508d9376. Keep both terminals open until the end of your Jupyter session.

By default, JupyterLab is started. If you prefer Jupyter Notebook, add the -n option to jupytercluster.sh. Example:

ssh <cluster_login>@hal.obs.uvsq.fr
srun --gres=gpu:1 /net/nfs/tools/bin/jupytercluster.sh -n 'pytorch/1.10.1' # pytorch/1.10.1 or another ML module containing JupyterLab!

If you want to start Tensorboard, add the -t option followed by the path to the directory that will contain the training log files. The script will give you an additional URL to copy into your web browser to display Tensorboard. Example:

ssh <cluster_login>@hal.obs.uvsq.fr
srun --gres=gpu:1 /net/nfs/tools/bin/jupytercluster.sh -t "$HOME/my_logs_dir" 'pytorch/1.10.1' # pytorch/1.10.1 or another ML module containing JupyterLab!

Info

Of course, the script options can be combined. To see all the options, run /net/nfs/tools/bin/jupytercluster.sh -h from hal0.
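For instance, the Notebook and Tensorboard options described above can be combined in a single call (the module name and log directory are just example values):

```shell
ssh <cluster_login>@hal.obs.uvsq.fr
# -n: start Jupyter Notebook instead of JupyterLab
# -t: also start Tensorboard on the given training-log directory
srun --gres=gpu:1 /net/nfs/tools/bin/jupytercluster.sh -n -t "$HOME/my_logs_dir" 'pytorch/1.10.1'
```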

Method 2 (manual)

Read the general instructions found at this page first. The following instructions give you a practical example for the HAL GPU cluster.

From a terminal on your machine:

ssh <cluster_login>@hal.obs.uvsq.fr
srun --gres=gpu:1 --pty bash
# Slurm connects you to one of the hal[1-6] machines.
# Note the number of the allocated HAL machine for later.
module load pytorch/1.10.1 # pytorch/1.10.1 or another ML module containing JupyterLab!
jupyter lab --no-browser --ip=0.0.0.0 --port=<number between 10000 and 15000>

From another terminal on your machine:

ssh -N -L <the chosen port number>:hal<allocated machine number>.obs.uvsq.fr:<the chosen port number> <cluster_login>@hal.obs.uvsq.fr

The allocated machine number can be found in the output of the squeue command. Then copy and paste the JupyterLab connection URL that was displayed in the first terminal into your web browser.
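If you forgot to note the allocated node, a quick way to recover it (a sketch, run on hal0) is:

```shell
# Show only your own jobs; the last column (NODELIST) names the hal node,
# e.g. hal4, which gives the <allocated machine number> for the tunnel.
squeue -u <cluster_login>
```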