The aim of the Consortium is to provide researchers with access to powerful computing equipment (clusters). Clusters are installed and managed locally at the different sites of the universities taking part in the Consortium, but they are accessible to all researchers from the member universities. A single account is used to access all clusters through SSH. For information on how to request an account, see the getting started page.

All clusters run Linux and use Slurm as the job manager. Basic parallel computing libraries (OpenMP, MPI, etc.) are installed, as well as optimized computing subroutines (BLAS, LAPACK, etc.). Common interpreters such as R, Octave and Python are also installed. See each cluster's page below for more details.
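
As a rough, non-authoritative illustration of this common software stack, the short Python sketch below runs a matrix product through NumPy (which delegates to the optimized BLAS/LAPACK routines) and, if the mpi4py package happens to be available in the loaded environment, reports the MPI rank. The script name and the availability of NumPy and mpi4py are assumptions; the exact modules to load differ from cluster to cluster.

    # check_stack.py -- minimal probe of the common software stack described above.
    # Assumes NumPy is installed; the MPI part only runs if mpi4py is available.
    import numpy as np

    # NumPy hands dense linear algebra to the underlying BLAS/LAPACK libraries.
    a = np.random.rand(512, 512)
    b = np.random.rand(512, 512)
    print("BLAS-backed matmul OK, trace =", np.trace(a @ b))

    try:
        from mpi4py import MPI  # present only if an MPI-enabled Python stack is loaded
        comm = MPI.COMM_WORLD
        print(f"MPI rank {comm.Get_rank()} of {comm.Get_size()}")
    except ImportError:
        print("mpi4py not available in this environment")

The sketch can be run as a plain serial script (python check_stack.py) or under the cluster's Slurm launcher (e.g. srun python check_stack.py) to print one line per MPI rank; the launch command is only an example and depends on the local configuration.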


Summary of the clusters

Cluster | Host | CPU type | CPU count | RAM/node | Network | Filesystem | GPU | Max time | Particularities
serial jobs, GPU | ULB | Genoa 3.25 GHz | 1280 (40 × 32) | 128 GB | 50 GbE | CephFS 500 TB | 40x NVIDIA RTX 6000 ADA | 5 days | Max 24 cores/node for non-GPU nodes
large MPI jobs | UCLouvain | Genoa 2.4 GHz | 5120 (40 × 128) | 766 GB | HDR IB | BeeGFS 320 TB | None | 2 days | Max 6 GB RAM/core
medium MPI jobs | ULiège | Rome 2.9 GHz | 4672 (73 × 64) | 256 GB…1 TB | HDR IB | BeeGFS 520 TB | None | 2 days | Max CPUs/user: 384, max jobs: 201
high-memory jobs | UNamur | Naples / Genoa 2.0 / 2.75 GHz | 1024 (30 × 32 + 2 × 64) + 1776 (35 × 48 + 1 × 96) | 256 GB…3 TB | 10 GbE / 25 GbE | NFS / Ceph 80 TB / 250 TB | 8x NVIDIA A40, 4x NVIDIA A6000 | 15 days | Max CPUs/user: 1024
serial jobs | UMons | SkyLake 2.60 GHz | 592 (17 × 32 + 2 × 24) | 192…384 GB | 10 GbE | RAID0 3.3 TB | 4x Volta V100 | 21 days | Max 1 GPU/user
serial / SMP | UMons | SandyBridge 2.60 GHz | 416 (26 × 16) + 32 (2 × 16) | 128 GB | GbE | RAID0 1.1 TB | 4x Tesla C2075, 4x Tesla Kepler K20m | 41 days | decommissioned since 2025
MPI | UCLouvain | SkyLake / Haswell 2.3 / 2.6 GHz | 1872 (78 × 24) + 112 (4 × 28) | 95 GB / 64 GB | Omnipath | BeeGFS 440 TB | None | 2 days 6 hours | decommissioned since 2024
MPI | ULiège | SandyBridge / IvyBridge 2.0 GHz | 2048 (120 × 16 + 8 × 16) | 64 GB | QDR IB | FHGFS 144 TB | None | 3 days | decommissioned since 2021
serial / SMP / MPI | ULB | Bulldozer 2.1 GHz | 896 (14 × 64) | 256 GB | QDR IB | GPFS 70 TB | None | 14 days | decommissioned since 2020
serial / SMP | UNamur | SandyBridge 2.20 GHz | 512 (32 × 16) | 64…128 GB | GbE | NFS 20 TB | None | 63 days | decommissioned since 2019
MPI | UCLouvain | Westmere 2.53 GHz | 1380 (115 × 12) | 48 GB | QDR IB | Lustre 120 TB | 3x Quadro Q4000 | 3 days | decommissioned since 2018
SMP | UCLouvain | MagnyCours 2.2 GHz | 816 (17 × 48) | 128…512 GB | QDR IB | FHGFS 30 TB | None | 15 days | decommissioned since 2020

* Decommissioned clusters are greyed out.

CÉCI cluster capabilities comparison

[Figure: normalized comparison of the clusters across all capabilities.]

Timeline of availability

The clusters have been installed gradually since early 2011, first at UCL, with HMEM serving as a proof of concept. At that time, the whole account infrastructure was designed and deployed so that every researcher from any member university could create an account and log in to HMEM. Then, LEMAITRE2 was set up as the first cluster entirely funded by the F.N.R.S. for the CÉCI. DRAGON1, HERCULES, VEGA and NIC4 followed, in that order, as shown in the timeline below.

[Timeline, 2019–2026: HERC2, DRG2, NIC5, LUCIA, LEM4, LYRA.]