The aim of the Consortium is to provide researchers with access to powerful computing equipment (clusters). The clusters are installed and managed locally at the different universities taking part in the Consortium, but they are accessible to all researchers from the member universities. A single account is used to access all clusters through SSH, as sketched below. For information on how to request an account, see the getting started page.
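As a minimal sketch of what a connection looks like, assuming a CÉCI login `jdoe`, a private key stored as `~/.ssh/id_rsa.ceci`, and a placeholder hostname `cluster.example.be` (the actual hostnames and key setup are documented on the getting started page and each cluster's page):

```
# Hypothetical entry in ~/.ssh/config; hostname, alias and username are placeholders.
Host myceci
    HostName cluster.example.be
    User jdoe
    IdentityFile ~/.ssh/id_rsa.ceci
```

With such an entry in place, `ssh myceci` opens a session on the chosen cluster; the same account and key pair work on all CÉCI clusters.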
All clusters run Linux and use Slurm as the job manager. Basic parallel computing libraries (OpenMP, MPI, etc.) are installed, as well as optimized computing subroutine libraries (e.g. BLAS and LAPACK). Common interpreters such as R, Octave, and Python are also installed. See each cluster's page below for more details.
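To illustrate the kind of software stack this implies, here is a minimal MPI "hello world" in C. It is a generic sketch rather than CÉCI-specific code, and assumes the usual MPI wrapper compiler `mpicc` is available on the cluster:

```c
/* hello_mpi.c -- minimal MPI example (generic sketch, not CÉCI-specific). */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, size;

    MPI_Init(&argc, &argv);                 /* start the MPI runtime */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's rank */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes */

    printf("Hello from rank %d of %d\n", rank, size);

    MPI_Finalize();                         /* shut down the MPI runtime */
    return 0;
}
```

Compiled with `mpicc hello_mpi.c -o hello_mpi`, such a program would typically be launched through Slurm (e.g. with `srun` inside a job submitted via `sbatch`); see the individual cluster pages for site-specific instructions.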
Decommissioned clusters
Summary of the clusters
| Cluster | CPU count (cores) | GPU count | GlobalScratch | Max time | Preferred jobs |
|---|---|---|---|---|---|
| | 1280 | 40 | 500 TB | 5 days | serial, shared memory parallel, GPU |
| | 5120 | None | 320 TB | 2 days | large MPI jobs |
| | 4672 | None | 520 TB | 2 days | medium MPI jobs |
| | 2800 | 12 | 250 TB | 15 days | serial, shared memory parallel, high-memory |
| | 592 | 4 | 3.3 TB | 21 days | serial, shared memory parallel |
| | 416 (26 × 16) + 32 (2 × 16) | 8 | 1.1 TB | 41 days | serial / SMP |
| | 1872 (78 × 24) + 112 (4 × 28) | None | 440 TB | 2 days 6 hours | MPI |
| | 2048 (120 × 16 + 8 × 16) | None | 144 TB | 3 days | MPI |
| | 896 (14 × 64) | None | 70 TB | 14 days | serial / SMP / MPI |
| | 512 (32 × 16) | None | 20 TB | 63 days | serial / SMP |
| | 1380 (115 × 12) | 3 | 120 TB | 3 days | MPI |
| | 816 (17 × 48) | None | 30 TB | 15 days | SMP |
CÉCI clusters capabilities comparison

(Figure: normalized comparison of the clusters across all capabilities.)
Timeline of availability
The clusters have been installed gradually since early 2011, first at UCL, with HMEM serving as a proof of concept. At that time, the whole account infrastructure was designed and deployed so that every researcher from any member university could create an account and log in to HMEM. Then, LEMAITRE2 was set up as the first cluster entirely funded by the F.N.R.S. for the CÉCI. DRAGON1, HERCULES, VEGA and NIC4 followed, in that order, as shown in the timeline below.
(Timeline: availability of the CÉCI clusters since early 2011.)