Hosted at UMons, the Dragon1 cluster consists of 28 compute nodes:
- 26 nodes with two Intel Xeon E5-2670 CPUs (Sandy Bridge, 2×8 cores @ 2.6 GHz)
- 2 nodes with two Intel Xeon E5-2650 CPUs (Sandy Bridge, 2×8 cores @ 2.0 GHz)
All nodes feature 128 GB RAM and 1.1 TB local scratch.
The cluster is interconnected with Gigabit Ethernet (10 Gigabit for the 36 TB NFS file server).
GPU resources include:
- 2 nodes with 2× NVIDIA Tesla M2075 (512 GFLOPS float64 each)
- 2 nodes with 2× NVIDIA Tesla K20m (1.1 TFLOPS float64 each)
Suitable for:
Long shared-memory parallel jobs (OpenMP, Pthreads) or resource-intensive sequential workloads.
Max wall times:
- Long queue: 41 days
- Batch queue: 5 days
- GPU queues: 15–21 days depending on GRES
Resources
- Home directory (20 GB per user)
- Local working directory: /scratch ($LOCALSCRATCH)
- No internet access on compute nodes
- Queues:
- Long queue: Max 41 days, 40 CPUs/user, 500 jobs/user
- Default queue (batch): Max 5 days, 40 CPUs/user, 500 jobs/user
- GPU GRES:
  - gpu: Max 15 days, gres=gpu:kepler:1 or gres=gpu:tesla:1
  - lgpu: Max 21 days, gres=gpu:1
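As a sketch of how these queue and GRES limits map onto a Slurm submission, the script below requests one Kepler GPU under the gpu GRES and stages data through the node-local scratch. The job name, memory request, input file, and executable are illustrative assumptions, not taken from the cluster documentation; adapt them to your workload.

```shell
#!/bin/bash
# Hypothetical job script sketch -- file and program names are assumptions.
#SBATCH --job-name=gpu-example
#SBATCH --time=15-00:00:00        # gpu GRES allows up to 15 days
#SBATCH --gres=gpu:kepler:1       # one K20m; use gpu:tesla:1 for an M2075
#SBATCH --cpus-per-task=8
#SBATCH --mem=64G

# Stage input to the node-local scratch. Compute nodes have no internet
# access, so inputs must already be on the shared file system.
cp "$HOME/input.dat" "$LOCALSCRATCH/"
cd "$LOCALSCRATCH"

./my_program input.dat            # hypothetical executable

# Copy results back before the job ends; local scratch is not shared.
cp results.out "$HOME/"
```

Keeping I/O on $LOCALSCRATCH avoids hammering the Gigabit-attached NFS server during the run; only the final copy back touches the shared file system.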
Access / Support
This cluster has been decommissioned.

