The aim of the Consortium is to provide researchers with access to powerful computing equipment (clusters). Clusters are installed and managed locally at the different sites of the universities taking part in the Consortium, but they are accessible to all researchers from the member universities. A single account is used to access all clusters through SSH. For information on how to request an account, see the getting started page.
All clusters run Linux and use Slurm as the job manager. Basic parallel computing libraries (OpenMP, MPI, etc.) are installed, as well as optimized computing subroutines (e.g. BLAS, LAPACK). Common interpreters such as R, Octave, and Python are also installed. See each cluster's page below for more details.
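As an illustration, here is a minimal sketch of the kind of job this common software stack supports, assuming the mpi4py and NumPy packages are available on the cluster you use (this is an assumption; check the cluster's page for its exact software stack):

```python
# Minimal sketch (not an official CÉCI example): one MPI process per core,
# each doing dense linear algebra that NumPy delegates to the installed
# BLAS/LAPACK. Assumes mpi4py and NumPy are available on the cluster.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # index of this MPI process
size = comm.Get_size()   # total number of MPI processes in the job

# Each rank diagonalizes its own random symmetric matrix (LAPACK under the hood).
a = np.random.default_rng(rank).random((500, 500))
eigenvalues = np.linalg.eigvalsh(a + a.T)

print(f"rank {rank}/{size}: largest eigenvalue = {eigenvalues[-1]:.3f}")
```

Under Slurm, such a script would typically be launched from a batch script with `srun python script.py` (or `mpirun`), keeping in mind the per-cluster limits listed in the table below.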
Summary of the clusters
| Best suited for | Host | CPU type | CPU count | RAM/node | Network | Filesystem | GPU | Max time | Particularities |
|---|---|---|---|---|---|---|---|---|---|
| serial jobs, GPU | ULB | Genoa 3.25 GHz | 1280 (40 × 32) | 128 GB | 50 GbE | CephFS 500 TB | 40x NVIDIA RTX 6000 Ada | 5 days | Max 24 cores/node for non-GPU nodes |
| large MPI jobs | UCLouvain | Genoa 2.4 GHz | 5120 (40 × 128) | 766 GB | HDR IB | BeeGFS 320 TB | None | 2 days | Max 6 GB RAM/core |
| medium MPI jobs | ULiège | Rome 2.9 GHz | 4672 (73 × 64) | 256 GB…1 TB | HDR IB | BeeGFS 520 TB | None | 2 days | Max CPUs/user: 384, max jobs: 201 |
| high-memory jobs | UNamur | Naples / Genoa 2.0 / 2.75 GHz | 1024 (30 × 32 + 2 × 64) + 1776 (35 × 48 + 1 × 96) | 256 GB…3 TB | 10 GbE / 25 GbE | NFS / Ceph 80 TB / 250 TB | 8x NVIDIA A40, 4x NVIDIA A6000 | 15 days | Max CPUs/user: 1024 |
| serial jobs | UMons | SkyLake 2.60 GHz | 592 (17 × 32 + 2 × 24) | 192…384 GB | 10 GbE | RAID0 3.3 TB | 4x Volta V100 | 21 days | Max 1 GPU/user |
| serial / SMP | UMons | SandyBridge 2.60 GHz | 416 (26 × 16) + 32 (2 × 16) | 128 GB | GbE | RAID0 1.1 TB | 4x Tesla C2075, 4x Tesla Kepler K20m | 41 days | Decommissioned since 2025 |
| MPI | UCLouvain | SkyLake / Haswell 2.3 / 2.6 GHz | 1872 (78 × 24) + 112 (4 × 28) | 95 GB / 64 GB | Omni-Path | BeeGFS 440 TB | None | 2 days 6 hours | Decommissioned since 2024 |
| MPI | ULiège | SandyBridge / IvyBridge 2.0 GHz | 2048 (120 × 16 + 8 × 16) | 64 GB | QDR IB | FHGFS 144 TB | None | 3 days | Decommissioned since 2021 |
| serial / SMP / MPI | ULB | Bulldozer 2.1 GHz | 896 (14 × 64) | 256 GB | QDR IB | GPFS 70 TB | None | 14 days | Decommissioned since 2020 |
| serial / SMP | UNamur | SandyBridge 2.20 GHz | 512 (32 × 16) | 64…128 GB | GbE | NFS 20 TB | None | 63 days | Decommissioned since 2019 |
| MPI | UCLouvain | Westmere 2.53 GHz | 1380 (115 × 12) | 48 GB | QDR IB | Lustre 120 TB | 3x Quadro Q4000 | 3 days | Decommissioned since 2018 |
| SMP | UCLouvain | MagnyCours 2.2 GHz | 816 (17 × 48) | 128…512 GB | QDR IB | FHGFS 30 TB | None | 15 days | Decommissioned since 2020 |
* Decommissioned clusters are greyed out.
CÉCI cluster capabilities comparison
Normalized comparison across all capabilities.
Timeline of availability
The clusters have been installed gradually since early 2011, first at UCL, with HMEM serving as a proof of concept. At that time, the whole account infrastructure was designed and deployed so that every researcher from any member university could create an account and log in to HMEM. Then, LEMAITRE2 was set up as the first cluster entirely funded by the F.N.R.S. for the CÉCI. DRAGON1, HERCULES, VEGA and NIC4 followed, in that order, as shown in the timeline below.