The aim of the Consortium is to provide researchers with access to powerful computing equipment (clusters). Clusters are installed and managed locally at the different universities taking part in the Consortium, but they are accessible to all researchers from the member universities. A single account is used to access all clusters through SSH. For information on how to request an account, see the getting started page.

All clusters run Linux and use Slurm as the job manager. Basic parallel computing libraries (OpenMP, MPI, etc.) are installed, as well as optimized computing subroutines (e.g. BLAS, LAPACK). Common interpreters such as R, Octave and Python are also installed. See each cluster's page below for more details.
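As a rough illustration of the kind of programs these libraries support, here is a minimal hybrid MPI + OpenMP "hello world" in C. It is only a sketch: compiler wrappers, available modules and Slurm options differ between clusters, so check the page of the cluster you intend to use.

    /* Minimal MPI + OpenMP example; toolchain details vary per cluster. */
    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Each MPI rank opens an OpenMP parallel region. */
        #pragma omp parallel
        {
            printf("Hello from thread %d of rank %d (of %d ranks)\n",
                   omp_get_thread_num(), rank, size);
        }

        MPI_Finalize();
        return 0;
    }

Such a program would typically be compiled with an MPI compiler wrapper (e.g. mpicc -fopenmp hello.c -o hello) and run through Slurm, either in a batch script submitted with sbatch or interactively with srun.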

Decommissioned Clusters

Summary of the clusters

| Cluster | CPU count | GPU count | Global scratch | Max time | Preferred jobs |
|---|---|---|---|---|---|
| | 1280 | 40 | 500 TB | 5 days | serial, shared memory parallel, GPU |
| | 5120 | None | 320 TB | 2 days | large MPI jobs |
| | 4672 | None | 520 TB | 2 days | medium MPI jobs |
| | 2800 | 12 | 250 TB | 15 days | serial, shared memory parallel, high-memory |
| | 592 | 4 | 3.3 TB | 21 days | serial, shared memory parallel |
| | 416 (26 × 16) + 32 (2 × 16) | 8 | 1.1 TB | 41 days | serial / SMP |
| | 1872 (78 × 24) + 112 (4 × 28) | None | 440 TB | 2 days 6 hours | MPI |
| | 2048 (120 × 16 + 8 × 16) | None | 144 TB | 3 days | MPI |
| | 896 (14 × 64) | None | 70 TB | 14 days | serial / SMP / MPI |
| | 512 (32 × 16) | None | 20 TB | 63 days | serial / SMP |
| | 1380 (115 × 12) | 3 | 120 TB | 3 days | MPI |
| | 816 (17 × 48) | None | 30 TB | 15 days | SMP |

CÉCI clusters capabilities comparison

[Figure: normalized comparison of the clusters across all capabilities.]

Timeline of availability

The clusters have been installed gradually since early 2011, first at UCL, with HMEM serving as a proof of concept. At that time, the whole account infrastructure was designed and deployed so that every researcher from any member university could create an account and log in to HMEM. Then, LEMAITRE2 was set up as the first cluster entirely funded by the F.N.R.S. for the CÉCI. DRAGON1, HERCULES, VEGA and NIC4 followed, in that order, as shown in the timeline below.

[Timeline figure: availability of HERC2, DRG2, NIC5, LUCIA, LEM4 and LYRA from 2019 to 2026.]