
HAL cluster documentation

Date: 15/10/2021
Authors: Yann Delcambre, Sébastien Gardoll
Keywords: anaconda conda cluster deep learning gpu hal jupyter module neuron neuronal nvidia python pytorch sbatch scheduler slurm srun ssh tensorflow

Info

The current Anaconda module is python/3.8-anaconda2020-11. This module contains 403 Python 3.8.5 packages: those of Anaconda (list here), plus Basemap, Cartopy, Catboost, Captum, Chainer (7.7.0), Cupy, Cuda (10.1), Cudnn (7.6.5), Esmtools, Ignite, Keras-tuner, Lime, Mahotas, Mpi4py, Netcdf4, Pipenv, Plotly, Pydot, Pytorch (1.7.1), Pytorch-lightning, Pytorch-model-summary, Tensorflow (2.2.0), Tensorflow-datasets, Theano, Torchvision, Virtualenv and Xarray. Get the complete list with this command line: module load python/3.8-anaconda2020-11; pip list

1. Preamble

Some recommendations on how to read this document:

  • [1-4] means 1 to 4, inclusive.
  • <cluster_login>: replace the text and the two angle brackets (less than and greater than signs) with a value that matches the meaning of the placeholder.

2. Connection to the cluster

The HAL cluster is composed of the machine hal.obs.uvsq.fr (or hal0.obs.uvsq.fr), called hal0, and the machines hal[1-4].obs.uvsq.fr. hal0 is the head node of the cluster, to which connections from the internet are possible with SSH key authentication. The hal[1-4] machines can only be accessed by bouncing from hal0, and only if you have a running job on the target machine. More information about the head nodes of the computing centre of the IPSL is available at this page.

2.1 Connection without Jupyter Lab/Notebook

After requesting the creation of an account on the computing and data centre of the IPSL, and once it has been accepted by the IPSL, the connection to the cluster is established with the SSH protocol, using the SSH key declared to the centre (more details at this page). If you want graphical output (graphical windows displayed on your computer), add the -X option to the ssh command (X forwarding). Feel free to create connection aliases using a config file (more details at this page).

Example from a terminal on your machine:

ssh -X <cluster_login>@hal.obs.uvsq.fr
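As an illustration, a connection alias can be declared in the ~/.ssh/config file on your machine; the alias name and the key path below are examples to adapt:

Host hal
    HostName hal.obs.uvsq.fr
    User <cluster_login>
    ForwardX11 yes
    IdentityFile ~/.ssh/id_rsa

You can then simply type ssh hal.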

2.2 Connection with Jupyter Lab/Notebook

The idea is to create an SSH tunnel that forwards the output of the Jupyter server to your machine. We propose two methods; both create an interactive job on the cluster (see section 6) and an SSH tunnel. More information about Jupyter at the computing centre of the IPSL is available at this address.

Warning

At the end of your work session, close all your terminals connected to HAL (press CTRL and C several times) in order to free the allocated resources (simply leaving the notebook does not stop your interactive job). If some jobs remain stuck, you can cancel all your jobs with this command executed on hal0: scancel -u <cluster_login>.

Method 1 (semi-automatic)

Example from a terminal on your machine:

ssh <cluster_login>@hal.latmos.ipsl.fr
srun --gres=gpu:1 /net/nfs/tools/bin/jupytercluster.sh 'python/3.8-anaconda2020-11' # Or another Conda module/environment, as long as it contains Jupyter Lab!

Info

The jupytercluster.sh script displays a command line in the terminal, to be executed in another terminal on your machine (not on hal0). Example: ssh -N -L 30920:hal4.obs.uvsq.fr:30920 delcambre@hal.obs.uvsq.fr. This command produces no output, which is normal. Then, in a web browser on your machine, copy/paste the URL displayed at the very end of the first terminal, which starts with http://127.0.0.1:........ Example: http://127.0.0.1:30920/?token=c3a46af69f1eb10fc0e3b8aa270050d7d3046a30508d9376. Keep both terminals open until the end of your Jupyter session.

By default, Jupyter Lab is started. If you prefer Jupyter Notebook, add the -n option to jupytercluster.sh. Example:

ssh <cluster_login>@hal.latmos.ipsl.fr
srun --gres=gpu:1 /net/nfs/tools/bin/jupytercluster.sh -n 'python/3.8-anaconda2020-11'

If you want to start Tensorboard, add the -t option followed by the path to the directory that will contain the training log files. The script will give you an additional URL to copy into your web browser to display Tensorboard. Example:

ssh <cluster_login>@hal.latmos.ipsl.fr
srun --gres=gpu:1 /net/nfs/tools/bin/jupytercluster.sh -t "${HOME}/my_logs_dir" 'python/3.8-anaconda2020-11'

Info

Of course, the script options are cumulative. To see all the options, run /net/nfs/tools/bin/jupytercluster.sh -h from hal0.

Method 2 (manual)

Example from a terminal on your machine:

ssh <cluster_login>@hal.latmos.ipsl.fr
srun --gres=gpu:1 --pty bash
# Slurm connects you to one of the hal[1-4] machines.
# Note the number of the allocated HAL machine for later.
module load python/3.8-anaconda2020-11 # Or another Conda module/environment, as long as it contains Jupyter Lab!
jupyter lab --no-browser --ip=0.0.0.0 --port=<number between 10000 and 15000> 

From another terminal on your machine:

ssh -N -L <the chosen port number>:hal<allocated machine number>.latmos.ipsl.fr:<the chosen port number> <cluster_login>@hal.latmos.ipsl.fr

Warning

It is not possible to allocate the same port number twice on the same machine. Choose another port number if Jupyter refuses to start because of the port. Afterwards, keep the same number in the commands that follow.

Info

Copy the connection address that Jupyter displays in the terminal (it starts with http://127.0.0.1) into a web browser.

Info

There is no space character in the expression <the chosen port number>:hal<allocated machine number>.latmos.ipsl.fr:<the chosen port number>
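As a concrete illustration of Method 2, with a hypothetical port 12345 and a job allocated on hal2, the two commands would look like this:

jupyter lab --no-browser --ip=0.0.0.0 --port=12345 # On hal2, after the srun and module load commands.
ssh -N -L 12345:hal2.latmos.ipsl.fr:12345 <cluster_login>@hal.latmos.ipsl.fr # In another terminal on your machine.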

3. Disk spaces

You have a personal disk space (your home directory): /home/${USER} or ${HOME}. You also have fast storage disk spaces, shared with the other users of the cluster, where you can create your personal directories (see the example after the list):

  • /net/nfs/ssd1
  • /net/nfs/ssd2
  • /net/nfs/ssd3
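For example, to create your personal directory on one of these shared spaces:

mkdir -p /net/nfs/ssd1/${USER}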

Warning

Remember to copy your results to Ciclad or to your PC, because there is currently no backup on the HAL cluster.

Moreover, the disk spaces of the computing centre of the IPSL are mounted at their usual mount points, currently in read-only mode (more details at this page):

  • /bdd (read only)
  • /data (beware no backup; read only)

4. Data transfer

Data can be imported from elsewhere thanks to the SFTP protocol, using tools such as FileZilla (select the SFTP mode), scp or the sftp command, when connecting to hal.obs.uvsq.fr.

Example of transferring all sub-directories and files from the "mydir" directory of a local machine to the "/net/nfs/ssd3" directory on HAL:

scp -r mydir <cluster_login>@hal.obs.uvsq.fr:/net/nfs/ssd3
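If rsync is installed on both your machine and the cluster (to be verified), it can be a convenient alternative for large or resumable transfers; a possible sketch:

rsync -av --progress mydir <cluster_login>@hal.obs.uvsq.fr:/net/nfs/ssd3/ # Recursive transfer with progress display.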

Warning

Remember to copy your results to Ciclad or to your PC, because there is currently no backup on the HAL cluster.

5. Slurm

Once connected to hal.obs.uvsq.fr, the head node, you can submit interactive or batch jobs to access the computational resources.

Warning

If you are using development IDEs like Spyder, it is necessary to close the application in order to really free the memory of the GPU card.

Info

During the development phase of your code, it is not necessary to use all your data or a GPU card. You can therefore omit the GPU card reservation in your job submissions (remove the option --gres=gpu:1), which avoids needlessly tying up scarce resources that others would like to use. You also have the possibility to develop your code on the other clusters of the computing centre of the IPSL (Ciclad and Climserv). Finally, you can sub-sample your data beforehand.

5.1 Job management

List of commands to manage jobs, executable on all hal[0-4] machines, but preferably on hal0 (a typical usage example follows the list):

  • sinfo # Display the state of the partitions and their nodes.
  • sinfo -Nl # Display detailed, node-oriented information.
  • squeue # List the current jobs.
  • scancel <job_id> # Cancel the job with the identifier job_id.
  • srun # Submit a job (interactive or batch).
  • sbatch # Submit a batch job.
  • scontrol show job <job_id> # Display the details of a job (omit the identifier to list all jobs).
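For example, a typical monitoring sequence (the job identifier 12345 is purely illustrative):

squeue -u ${USER} # List only your jobs, with their identifiers.
scontrol show job 12345 # Inspect the state and resources of a given job.
scancel 12345 # Cancel it if needed.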

5.2 GPU Monitoring

List of GPU monitoring commands, executable only on the hal[1-4] machines (not on hal0):

  • nvtop: the GPU equivalent of the top and htop commands. A well-made ASCII-art tool for monitoring the GPU cards in real time (computation activity, memory, data transfers, etc.).
module load nvtop
nvtop
  • nvidia-smi: text-mode command for monitoring the GPU cards, which can be used in an automated process.
watch nvidia-smi # Monitoring refreshed every two seconds.
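If you prefer a compact, scriptable output, nvidia-smi can also be queried directly; for example (the selected fields are only an illustration):

nvidia-smi --query-gpu=index,utilization.gpu,memory.used,memory.total --format=csv -l 2 # CSV output refreshed every two seconds.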

5.3 Sessions

In order to share the resources fairly, sessions are limited as follows:

  • GPU = 2
  • Threads = 4 (i.e. 2 CPU cores)
  • RAM = 20 GB
  • Node = 1
  • Maximum running jobs = 2
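For example, within these limits you may request up to two GPU cards for a single interactive job (the srun options are detailed in section 6):

srun --gres=gpu:2 --pty bash # Request the per-session maximum of two GPU cards.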

Warning

Slurm kills the jobs that exceed the maximum RAM.

5.4 Partitions (queues)

When submitting jobs to Slurm, it is necessary to define the maximum elapsed time of the job; that is the purpose of the cluster partitions. Each partition defines this time limit as follows:

  • short: 2h
  • std: 6h - default -
  • h12: 12h
  • day: 24h
  • day3 | threedays: 72h
  • week
  • week2
  • infinite
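For example, to start an interactive job on the day partition (the srun options are detailed in section 6):

srun -p day --pty bash # Interactive shell with a maximum elapsed time of 24 hours.

In a batch script, the equivalent is the #SBATCH --partition=day instruction shown in section 7.1.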

Warning

Slurm kills the jobs that exceed the maximum elapsed time of their partition.

6. Interactive Job

The interactive job allows you to stay connected to the job so as to interact with it, typically for a Jupyter Lab/Notebook/Console session. This solution is to be preferred during the development phase or when processing requires a graphical output consulted in real time (for example: training monitoring). Once the job is submitted, you are connected to a machine that meets your resource request: one of the hal[1-4] machines.

The main options of srun, to be combined as needed, from hal0:

  • -p <partition-name> # Select the partition (maximum elapsed time).
  • --x11 # Forward the display (X forwarding); requires a connection with ssh -X.
  • -w <nodename> # Select a particular node: hal[1-4].
  • --gres=gpu:1 # Request the use of one GPU card.
  • -n <core number> # Request CPU cores (limited to 4).
  • --mail-user=<email> # Request email notifications.
  • -N <node number> # Select the number of nodes.
  • --pty bash # Request an interactive job with the Bash shell.

Example of submitting an interactive job on the default partition (std), with a single GPU card, executed on one node whose choice is left to Slurm, without graphical feedback and without email notifications. From hal0:

srun --gres=gpu:1 --pty bash

Once connected to one of the hal[1-4] machines, you can run the scripts and binaries in the shell that is presented to you. Exit the shell to end the interactive job (exit command or CTRL + D shortcut).

Info

If your processing requires loading modules, read the following page.

Info

To extend a module, e.g. to install missing packages or specific versions, it is possible to create Python virtual environments on top of modules. See this section (section 9) for more details.

Example of loading an Anaconda module, from hal[1-4], after submitting an interactive job:

module load python/3.8-anaconda2020-11 # Or any other module.
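Once the module is loaded on a node with a GPU allocation, you can optionally check that the deep learning frameworks see the card; for example:

python -c "import torch; print('GPU available:', torch.cuda.is_available())"
python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"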

7. Batch job

The batch job allows you to submit processing in a detached way for long-running treatments: without interaction and without maintaining a connection between your machine and the cluster (the jobs keep running even if you disconnect). Several batch jobs can be submitted and run at the same time; however, jobs that cannot fit into the available resources are queued and run later.

There are two methods to submit a job in batch mode, depending on the command used for submission: sbatch or srun.

7.1 The sbatch method

The first method consists of creating an executable Bash file (chmod +x bootstrap.sh) containing configuration options that start with #SBATCH and are placed at the beginning of the file.

Example of a Bash script named bootstrap.sh located in the directory ${HOME}/mydir. The purpose of this script is to execute the Python script ${HOME}/mydir/mycode.py.

To submit the job to Slurm, from hal0:

sbatch ${HOME}/mydir/bootstrap.sh

Any arguments given after the script path are passed to the Python script:

sbatch ${HOME}/mydir/bootstrap.sh <arg1> <arg2>

Contents of the bootstrap.sh file:

#!/bin/bash

# Instructions SBATCH always at the beginning of the script!

# Change the working directory before the execution of the job.
# Warning: the environment variables, e.g. $HOME,
# are not interpreted for the SBATCH instructions.
# Writing absolute paths is recommended.
#SBATCH -D /home/me/mydir 

# The job partition (maximum elapsed time of the job).
#SBATCH --partition=day  

# The name of the job.
#SBATCH -J myjobname

# The number of GPU cards requested.
#SBATCH --gres=gpu:1

# Email notifications (e.g. the beginning and the end of the job).
#SBATCH --mail-user=me@myprovider.com
#SBATCH --mail-type=all

# The path of the job log files.
# The error and the output logs can be merged into the same file.
# %j is replaced by the Slurm job identifier.
#SBATCH --error=slurm-%j.err
#SBATCH --output=slurm-%j.out

# Remove the system limit on the stack size.
ulimit -s unlimited

# Load the system-wide profile.
source /etc/profile

# Unload all modules previously loaded.
module purge

# Change to the given working directory.
# This supersedes the #SBATCH -D instruction, but at runtime.
cd ${HOME}/mydir

# Load the required modules.
# Change the module names as you need.
modules=('python/3.8-anaconda2020-11' 'intel/15.0.6.233')
for mod in "${modules[@]}" ; do
  module load "${mod}"
done

################ OPTIONAL ENVIRONMENT ##################

##### CONDA ENVIRONMENT ACTIVATION
## source activate must be used instead of conda activate.
# source "path/to/anaconda/bin/activate" <myenv>

##### PYTHON VIRTUAL ENVIRONMENT ACTIVATION
## A Python module should be loaded before (e.g. python/3.8-anaconda2020-11). 
# source "path/to/myenv/bin/activate" 

########################################################

# Time pretty printer.
# $1 : Time in seconds.
display_duration()
{
  local duration=$1
  local secs=$((duration % 60)) ; duration=$((duration / 60));
  local mins=$((duration % 60)) ; duration=$((duration / 60));
  local hours=$duration

  local splur; if [ $secs  -eq 1 ]; then splur=''; else splur='s'; fi
  local mplur; if [ $mins  -eq 1 ]; then mplur=''; else mplur='s'; fi
  local hplur; if [ $hours -eq 1 ]; then hplur=''; else hplur='s'; fi

  if [[ $hours -gt 0 ]]; then
    display="$hours hour$hplur, $mins minute$mplur, $secs second$splur"
  elif [[ $mins -gt 0 ]]; then
    display="$mins minute$mplur, $secs second$splur"
  else
    display="$secs second$splur"
  fi
  echo "$display"
  return 0
}

# Disable the buffering of the Python standard and error outputs (logs are written in real time).
export PYTHONUNBUFFERED=1

# Run your process. The optional command line arguments of the bootstrap.sh script
# are passed to your Python script ("$@").
python mycode.py "$@"

returned_code=$?
echo "> script completed with exit code ${returned_code}"
echo "> elapsed time: `display_duration ${SECONDS}`"

####################### DEBUG #####################
# Optional instructions.
echo "===== my job information ==== "
echo "> module list:"
module list
echo "> node list: " $SLURM_NODELIST
echo "> my job id: " $SLURM_JOB_ID
echo "> job name: " $SLURM_JOB_NAME
echo "> partition: " $SLURM_JOB_PARTITION
echo "> submit host:" $SLURM_SUBMIT_HOST
echo "> submit directory:" $SLURM_SUBMIT_DIR
echo "> current directory: `pwd`"
echo "> executed as user: `whoami`"
echo "> executed as slurm user: " $SLURM_JOB_USER
echo "> user $PATH: " $PATH
####################################################

exit ${returned_code}

Info

The instructions of the "debug" block are optional, but can be useful to understand why a job does not work.

Info

If your Python script requires the activation of a Conda environment or a Python virtual environment, un-comment the instructions in the "OPTIONAL ENVIRONMENT" block and add the name of your environment.
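To follow the execution while the job is running, you can, for example, read the log files declared in the #SBATCH instructions as they are written (the job identifier is the one returned by sbatch):

tail -f slurm-<job_id>.out # Follow the standard output (CTRL + C to stop).
tail -f slurm-<job_id>.err # Follow the error output.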

7.2 srun method

The second method uses the same two scripts (Bash and Python), but the #SBATCH options are given as arguments to the srun command instead. This method is preferable when the options are generated by a meta-script.

List of additional srun options for batch job submission. The options seen for interactive job submission remain valid, except for --pty bash, which specifically requests an interactive job:

  • --output # Specifies the file path where the standard output will be logged.
  • --error # Specifies the path of the file where the standard error will be logged.

Example of submitting a batch job using srun, from hal0:

srun --gres=gpu:1 --mail-user='me@myprovider.com' --output='job.log' --error='job.log' "${HOME}/mydir/bootstrap.sh" <arg1> <arg2> &

Info

Note the & symbol at the end of the line: it runs the command in the background, so you immediately get your prompt back while the job runs.

8. Software environments

The cluster has a set of installed modules; to list them with their versions and to load them, use the module command. More details at this page.

  • module avail # List the available modules.
  • module load # Load the modules given as arguments.
  • module list # List the loaded modules.
  • module unload # Unload the modules given as arguments.
  • module purge # Unload all loaded modules.

Info

The module command is executable on hal[0-4] machines. If you want to test a module, don't forget that you have the possibility to submit an interactive job without GPU card allocation: srun --pty bash

Info

When using Anaconda modules or Python virtual environments, it is recommended to list the packages of the environment for reproducibility purposes. On hal[0-4]: module load <module_name>; pip list
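To keep a reusable record of the exact package versions of an environment, you can, for example, export them to a file (the file name is only an illustration):

module load <module_name>
pip freeze > my_experiment_packages.txt # Exact version of every installed package.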

9. Extending an Anaconda module

The Anaconda modules available on HAL contain hundreds of Python packages. However, you may not find the package you need for your project or the package may not be at the right version. This section describes a procedure to extend an Anaconda module. Extending an Anaconda module is done by overlaying a virtual Python environment on top of the module.

The packages of the Anaconda module are always accessible from the Python virtual environment. Indeed, the packages installed in the Python virtual environment come in addition to or replace those of the module.

This technique is not without restrictions: Python virtual environments only manage Python packages, unlike Conda. For example, you cannot install the low-level libraries (cudatoolkit, cudnn, etc.) required by some Python packages (Tensorflow, Pytorch, etc.). However, if the libraries present in the module satisfy the packages you want to install (list the packages of a module with pip list), the technique described in this section is sufficient. Try this procedure first. If it fails, we advise you to create a Conda environment matching your specifications (see also here, and the sketch after the warning below).

Warning

Conda environments take more disk space than Python virtual environments. Don't forget that your disk space is restricted (quota). If possible, use Python virtual environments.
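As an indication only, here is a minimal sketch of creating such a Conda environment in your home directory; the environment path, the Python version and the package names are examples to adapt, and "path/to/anaconda" refers to the Anaconda installation of the loaded module:

module load python/3.8-anaconda2020-11
conda create --prefix "${HOME}/conda_envs/myenv" python=3.8 # Create the environment.
source "path/to/anaconda/bin/activate" "${HOME}/conda_envs/myenv" # Activate it (source activate, not conda activate).
conda install <package1> <package2> # Install your packages in the environment.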

Python virtual environments

This example shows how to extend the Anaconda module python/3.8-anaconda2020-11 by creating a Python virtual environment named myenv, located in the directory ${HOME}/virtual_envs. From hal.obs.uvsq.fr:

module load python/3.8-anaconda2020-11 # Load the module.
mkdir -p "${HOME}/virtual_envs" # Create the environment parent dir, if it is not already done.
python -m venv --system-site-packages "${HOME}/virtual_envs/myenv" # Create the virtual environment named myenv.
source "${HOME}/virtual_envs/myenv/bin/activate" # Activate the virtual environment.
pip install -U pip # Update pip.
pip install -U <package_name1> <package_name2> # Install or upgrade your packages.

Example of the activation of the Python virtual environment named myenv:

module load python/3.8-anaconda2020-11
source "${HOME}/virtual_envs/myenv/bin/activate"

De-activation of the current Python virtual environment:

deactivate

Deletion of the Python virtual environment named myenv:

rm -fr "${HOME}/virtual_envs/myenv"

Warning

Code that depends on packages installed in the myenv environment must always run in an environment where the module extended by myenv is loaded (example: module load python/3.8-anaconda2020-11) and the myenv environment is activated (example: source "${HOME}/virtual_envs/myenv/bin/activate"). Load the module first, then activate the virtual environment.

Warning

Installing packages with the pip command may result in upgrading or downgrading previously installed packages (due to dependencies). Be careful, this can impact your experiments! It can also lead to conflicts. In order to check for them, run pip check and resolve them yourself, as pip will not do it for you!

Info

Find Python packages at this address.

Here is an example of creating a virtual environment with Tensorflow 2.3.x, which is compatible with the cudnn and cudatoolkit libraries present in the python/3.8-anaconda2020-11 module (whose Tensorflow version is only 2.2.0):

module load python/3.8-anaconda2020-11 # Load the module.
mkdir -p "${HOME}/virtual_envs" # Create the environment parent dir, if it is not already done.
python -m venv --system-site-packages "${HOME}/virtual_envs/myenv" # Create the virtual environment.
source "${HOME}/virtual_envs/myenv/bin/activate" # Activate the virtual environment.
pip install -U pip # Update pip.
pip install -U 'tensorflow-gpu==2.3.*' 'tensorflow==2.3.*' # Install Tensorflow 2.3.x.
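Once this environment is activated on a node with a GPU allocation, an optional quick check confirms the new Tensorflow version and that the GPU is visible:

python -c "import tensorflow as tf; print(tf.__version__); print(tf.config.list_physical_devices('GPU'))"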