Department of Mathematical Sciences

SAMBa invests in high performance computing system

Wed Jul 08 15:27:00 BST 2015

[Image: Atmospheric model for climate prediction]

The University’s new high performance computing (HPC) system, Balena, went live in April 2015.

SAMBa was able to fund extra capability on the system, thanks to an award of almost £186,000 from EPSRC to invest in the hardware.

Balena's processing power and storage will be of great benefit to our researchers, who tackle real-world challenges such as climate prediction (see image). These challenges involve complex processes and vast quantities of data.

Training SAMBa students in HPC

To equip them with the HPC skills they need, all SAMBa students take a course in scientific computing. Many will go on to computing-intensive PhD projects, developing novel research methods on HPC equipment; for this work, Balena will complement ARCHER, the EPSRC-provided national facility.

Students and staff associated with SAMBa will have priority access to the EPSRC-funded part of Balena (a 36-node Ivy Bridge cluster, 130TB of storage and one GPU-enabled node). Other researchers will benefit too, including those from the EPSRC-funded CDTs in Condensed Matter Physics, Water Informatics and Sustainable Chemical Technologies.

Visit the HPC website or contact Steven Chapman for more information and for details of how to access Balena.
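For readers new to HPC, work on clusters of this class is normally submitted through a batch scheduler rather than run interactively. A minimal sketch of a job script follows, assuming a SLURM-style scheduler; the module name and executable are illustrative assumptions, not Balena documentation:

```shell
#!/bin/bash
# Minimal batch-job sketch. The scheduler directives below are standard
# SLURM options; the module and program names are hypothetical.
#SBATCH --job-name=climate-run
#SBATCH --nodes=1             # one compute node (16 cores on Balena)
#SBATCH --ntasks-per-node=16  # one MPI task per core
#SBATCH --time=01:00:00       # one hour wall-clock limit

module load openmpi           # hypothetical module name
mpirun ./my_model             # hypothetical executable
```

Such a script would typically be submitted with `sbatch job.sh`; check the HPC website for the scheduler and partition names actually in use on Balena.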


Balena provides:

  • 192 compute nodes providing 3,072 cores
  • 88 nodes with 4GB of memory per core (64GB per compute node)
  • 80 nodes with 8GB of memory per core (128GB per compute node)
  • Two high-memory nodes, each with 512GB (32GB per core)
  • Six nodes, each with one NVIDIA K20x Kepler GPU card
  • Four nodes, each with four NVIDIA K20x Kepler GPU cards
  • One node with two AMD S10000 GPU cards
  • Three nodes, each with a single Xeon Phi 5110p card
  • Eight nodes, each with four Xeon Phi 5110p cards
  • Four interactive test and development (ITD) nodes with NVIDIA K20x or Xeon Phi 5110p cards and 128GB of memory
  • High-end visualisation service
  • Shared home-area storage (5GB quota per user)
  • 0.22PB of non-archivable storage on a BeeGFS/FhGFS parallel filesystem for high-I/O workloads
  • Remote storage mapping
  • Web-based metrics and accounting information
  • Energy reporting
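As a quick consistency check, the node groups in the list above sum exactly to the headline figure of 192 compute nodes (the four ITD nodes are counted separately), which also fixes the per-node core count. A short sketch, using only the numbers quoted above:

```python
# Sanity-check the Balena node breakdown against the headline figures.
# All counts are taken directly from the specification list above.
node_groups = {
    "64GB standard": 88,
    "128GB standard": 80,
    "512GB high-memory": 2,
    "1x NVIDIA K20x": 6,
    "4x NVIDIA K20x": 4,
    "2x AMD S10000": 1,
    "1x Xeon Phi 5110p": 3,
    "4x Xeon Phi 5110p": 8,
}

total_nodes = sum(node_groups.values())
cores_per_node = 3072 // total_nodes  # headline core count / node count

print(total_nodes)     # → 192
print(cores_per_node)  # → 16
```

So each compute node carries 16 cores, consistent with the 64GB (4GB/core) and 128GB (8GB/core) memory configurations listed.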