About our High Performance Computing facility
Find out about what HPC is and how it's helping University researchers process and analyse very large data sets and perform complex computational tasks.
High Performance Computing (HPC) refers to making many individual computers work together. As a group (or 'cluster') they can break a large task into smaller pieces and each handle one of them. In an age of Big Data, this means HPC can rapidly deliver answers from data sets far too big for a single desktop computer to handle.
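The divide-up-the-work idea can be sketched in a few lines of Python. This is a toy single-machine analogy using processes on one computer's cores, not the cluster's actual software stack; a real HPC system spreads the same pattern across many networked servers.

```python
# Toy illustration of the HPC idea: split a large task into chunks
# and let each worker handle one piece, then combine the results.
from concurrent.futures import ProcessPoolExecutor

def partial_sum(chunk):
    """Each worker computes its share of the overall result."""
    return sum(x * x for x in chunk)

def parallel_sum_of_squares(data, n_workers=4):
    # Split the data into roughly equal chunks, one per worker.
    size = max(1, len(data) // n_workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ProcessPoolExecutor(max_workers=n_workers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    numbers = list(range(1_000_000))
    print(parallel_sum_of_squares(numbers))
```

On a cluster the chunks would be distributed over separate machines (for example via MPI), but the principle of partitioning work and combining partial results is the same.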
HPC as a tool for research
All researchers and academics at the University can use the HPC service. A wide range of applications and software is available on the central machine. You can use the service to get through masses of data quickly (capacity computing) or to run large simulations or models over a number of servers (capability computing). Typical problems include:
- computational fluid dynamics
- finite element analysis
- large-scale statistical analyses
- numerical weather and climate prediction
- molecular interaction simulations
- RNA sequencing analysis
- simulations of quasi-brittle structures
- X-ray crystallography
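Jobs on clusters like this are typically submitted through a batch scheduler rather than run interactively. The sketch below assumes a Slurm scheduler for illustration; the directive names are standard Slurm, but the job name, module, and program are hypothetical and do not describe the University's actual configuration.

```shell
#!/bin/bash
# Hypothetical Slurm batch script (illustrative only).
#SBATCH --job-name=my-simulation
#SBATCH --nodes=4              # capability computing: one job, many servers
#SBATCH --ntasks-per-node=16   # MPI ranks per node
#SBATCH --time=02:00:00        # wall-clock limit

module load openmpi            # load a pre-configured software module
mpirun ./my_model input.dat    # run the model across all allocated cores
```

Capacity computing follows the same pattern, but instead of one large parallel job you submit many small independent jobs, each working on its own slice of the data.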
Researchers benefit from our specialist knowledge and support, and from a large range of scientific software pre-configured and ready to use.
HPC service at the University
View the full technical specification of the University's centralised HPC cluster. The core system is known as Balena. It runs a version of the Linux operating system and provides 3,072 x86 compute cores, a range of accelerator cards, a parallel file system and visualisation services.