
About the Job Scheduler

The job scheduler allows you to submit your analyses to be executed on the compute nodes of the cluster.

Each submission is referred to as a 'job'.

First, you need to create a job script, or adapt an existing one to fit your current job requirements.

A job is a script (Shell, Perl, Python, etc.), not a binary file.

The script specifies the actions/commands that will be executed on the compute node to fulfill your job. These commands are similar to those you type and execute on the interactive nodes.
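As an illustration, here is a minimal sketch of such a job script, assuming a SLURM-style scheduler; the directive syntax on your cluster may differ (PBS/Torque, for example, uses #PBS lines instead), and the module and analysis names below are hypothetical:

    #!/bin/bash
    #SBATCH --job-name=my_analysis        # a name for the job (assumed SLURM syntax)
    #SBATCH --output=my_analysis.%j.out   # file for standard output (%j = job ID)

    # The commands below run on the compute node,
    # just as you would type them on an interactive node
    module load python                    # load the software environment (site-dependent)
    python analysis.py input.dat          # run the actual analysis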

Once your job script is ready, you submit it to the scheduler, and it is placed in a queue along with other jobs.

When you submit your job, you can request a specific set of resources (CPU, RAM, etc.) and/or a specific queue to use. This can be done on the submission command line or inside your job script.
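For example, with a SLURM-style scheduler the same resource requests could be expressed either way; the option names below are assumptions based on SLURM, and the queue name "short" is hypothetical, so check your site's documentation for the actual values:

    # Requesting resources on the command line at submission time
    sbatch --ntasks=4 --mem=8G --time=02:00:00 --partition=short my_job.sh

    # ...or equivalently inside the job script itself:
    #SBATCH --ntasks=4            # number of CPU tasks
    #SBATCH --mem=8G              # total memory
    #SBATCH --time=02:00:00       # wall-clock time limit (hh:mm:ss)
    #SBATCH --partition=short     # queue/partition name (hypothetical)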

Each queue orders the submitted jobs by priority and processes them in that order. By default, the queues work in FIFO (First In, First Out) mode, prioritizing jobs by time of submission. Other rules may apply in specific cases (DocInProgress).

As soon as the scheduler detects that the resources required to execute your job are available, it reserves them and launches your job automatically.

After submission and during execution, you can (and should) monitor your job.

To ease this monitoring, the batch system can inform you by email of changes in the state of your jobs: submitted, launched, finished, and "error detected".
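For instance, with a SLURM-style batch system, monitoring and email notification might look like the following sketch; the command names, the job ID, and the email address are assumptions for illustration only:

    # Check the state of your pending/running jobs
    squeue -u $USER                        # list your jobs still queued or running
    sacct -j 12345                         # status of a finished job (12345 = example job ID)

    # Email notification directives inside the job script
    #SBATCH --mail-user=you@example.org    # address to notify (hypothetical)
    #SBATCH --mail-type=BEGIN,END,FAIL     # notify on launch, completion, and error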

The different commands, options, and explanations are detailed in the following pages.