Computing nodes

Computing nodes can only be accessed through the job manager.
All nodes have 4 GB of memory per core.
To check the state of all nodes, use the check-cluster command (a minimal submission example follows).
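
The job manager itself is not named on this page; as a minimal sketch, assuming a PBS/Torque-style scheduler (qsub), a batch script could look like the following. The directives, job name and executable are illustrative placeholders, not site-confirmed values.

    #!/bin/bash
    # Minimal batch job sketch, assuming a PBS/Torque-style job manager.
    #PBS -N test_job            # job name (placeholder)
    #PBS -l nodes=1:ppn=4       # 1 node, 4 cores (about 16 GB with 4 GB per core)
    #PBS -l walltime=01:00:00   # 1 hour of walltime

    cd $PBS_O_WORKDIR           # start from the submission directory
    ./my_program                # placeholder for the actual executable

Such a script would be submitted with qsub script.sh; check-cluster then shows the state of the nodes.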

Ciclad cluster (2021/01/20)

  • 20 computing nodes, 1152 cores in total
  • 4 nodes with 32 cores and 128 GB memory (AMD Opteron)
  • 16 nodes with 64 cores and 256 GB memory (AMD Opteron)

The submission limit per user is 1000 jobs.

Standard per-user limits for running jobs (if any of these limits is exceeded, further jobs wait until usage drops back below the limits): MAXNODE=2, MAXPROC=66, MAXJOBS=66, MAXMEM=252G.

Parallel jobs: maximum walltime 1 week (168 h). Interactive jobs: maximum walltime 6 h.
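
As an illustration of how these limits combine, and still assuming a PBS-style resource request (syntax not confirmed by this page), a large parallel job on Ciclad has to stay within 2 nodes, 66 cores, 252 GB of memory and one week of walltime, for example:

    #PBS -l nodes=2:ppn=32      # 2 nodes x 32 cores = 64 cores, within MAXNODE=2 and MAXPROC=66
    #PBS -l mem=250gb           # below the MAXMEM=252G limit
    #PBS -l walltime=168:00:00  # the 1-week maximum for parallel jobs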

External users (not from IPSL) have one additional limit:

MAXPROC=172 for all external users.

Climserv cluster (2021/01/20)

  • 14 computing nodes, 896 cores in total
  • 14 nodes with 64 cores and 256 GB memory (AMD Opteron)

The submission limit per user is 1000 jobs.

Standard per-user limits for running jobs (if any of these limits is exceeded, further jobs wait until usage drops back below the limits): MAXNODE=4, MAXPROC=128, MAXJOBS=66, MAXMEM=504G.

Parallel jobs: maximum walltime 1 week (168 h). Interactive jobs: maximum walltime 6 h.
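
Interactive jobs are capped at 6 hours; assuming the same PBS/Torque-style interface (an assumption, not stated on this page), such a session might be started with:

    # Request an interactive session on one node, within the 6 h interactive walltime cap.
    qsub -I -l nodes=1:ppn=8 -l walltime=06:00:00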

External users (not from IPSL) have one additional global limit:

MAXJOB=48, MAXNODE=3, MAXPROC=66.

Hal GPU cluster

  • 4 computing nodes, each with 2 GPU cards
  • 8 cores per node: Intel(R) Xeon(R) Silver 4112 CPU @ 2.60GHz
  • 64 GB memory per node
  • 2 NVIDIA GeForce RTX 2080 Ti per node
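
How GPUs are requested depends on the local scheduler configuration, which this page does not describe; the sketch below assumes a Torque-style GPU request, and the gpus= resource keyword is an assumption rather than a documented option.

    #!/bin/bash
    # Sketch of a GPU job on Hal, assuming a Torque-style scheduler.
    # The gpus= keyword is an assumption; the actual resource name is site-specific.
    #PBS -l nodes=1:ppn=8:gpus=2    # one node: 8 cores and both of its GPU cards
    #PBS -l walltime=12:00:00

    cd $PBS_O_WORKDIR
    nvidia-smi                      # list the GPUs visible to the job
    ./my_gpu_program                # placeholder for the actual GPU workload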