Iridis research computing facility

About the Iridis research computing facility

Iridis, the University's High Performance Computing system, has dramatically increased the computational performance and capacity available to our researchers. Iridis is currently in its fifth generation and remains one of the largest computational facilities in the UK. In 2017, Iridis 5 entered the list of the world's top 500 supercomputers and is over four times more powerful than its predecessor.

In 2024, we will welcome the sixth generation of Iridis (Iridis 6), adding over 26,000 CPU cores. This upgrade effectively doubles the computational capacity of the entire Iridis facility, marking a significant advance in our computing capability. Iridis 6 replaces Iridis 4, which had just over 12,000 CPU cores, and will run alongside Iridis 5 as part of a versatile computing ecosystem.

Highlights

  • Iridis 5 comprises 25,000+ processor cores, 74 GPU cards and 2.2 PB of storage on the IBM Spectrum Scale file system (see the storage-check sketch after this list).
  • Iridis 6 will comprise 26,000+ AMD CPU cores, alongside high-memory and login nodes with up to 3 TB of memory and 15 TB of local storage each.
  • Dedicated nodes for visualisation software applications.
  • Management of private research facilities, including the School of Engineering's deep learning computing cluster.
  • Dedicated research computing systems engineers providing user support and training, within an inclusive HPC facility supporting both research and teaching activities at the University.
  • High-performance InfiniBand network infrastructure for high-speed data transfer.
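
As a simple illustration of working with the shared storage, the sketch below checks the free space under a project directory before staging data. It is a minimal Python example: the directory path is hypothetical, and the real mount point on the Spectrum Scale file system will differ.

    import shutil

    # Point this at your project directory on the shared file system;
    # "/scratch/my_project" is a hypothetical example path.
    project_dir = "/scratch/my_project"

    total, used, free = shutil.disk_usage(project_dir)
    print(f"{project_dir}: {free / 1024**4:.2f} TB free of {total / 1024**4:.2f} TB")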

Technical specification

Iridis 5

Compute nodes

  • 464 Intel compute nodes with dual 2.0 GHz Intel Xeon Gold 6138 processors.

  • 76 AMD compute nodes with dual 2.35 GHz AMD 7452 processors.

  • 16 AMD compute nodes with dual 2.5 GHz AMD 7502 processors.

  • Each Intel compute node has 40 CPU cores with 192 GB of DDR4 memory, while the AMD compute nodes have 64 CPU cores with 256 GB of DDR4 memory (see the single-node sketch after this list).
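
As a minimal sketch of using the many cores within a single compute node, the Python example below spreads independent tasks across the local CPUs with the standard library; the worker function and task list are purely illustrative.

    import os
    from multiprocessing import Pool

    def simulate(seed):
        # Placeholder for a real per-task computation.
        return sum(i * i for i in range(seed % 1000))

    if __name__ == "__main__":
        # On an Iridis 5 node this reports 40 (Intel) or 64 (AMD) cores.
        n_cores = os.cpu_count()
        with Pool(processes=n_cores) as pool:
            results = pool.map(simulate, range(1000))
        print(f"Ran {len(results)} tasks across {n_cores} cores")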

Graphics cards

  • 20 NVIDIA Tesla V100 graphics cards, each with 16 GB of VRAM, spread across 10 nodes (node specification: 40 CPU cores with 192 GB of DDR4 memory)

  • 40 GTX 1080 Ti graphics cards, each with 11 GB of VRAM, spread across 10 nodes (node specification: 28 CPU cores with 128 GB of DDR4 memory)

  • 14 A100 graphics cards, each with 80 GB of VRAM, spread across 7 nodes (node specification: 48 CPU cores with 192 GB of DDR4 memory); see the GPU-check sketch after this list
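
For work on the GPU nodes, the sketch below reports which accelerator a job can see. It assumes PyTorch is available in the user's environment, which is an assumption about tooling rather than a statement about the installed software stack.

    import torch

    # Report the GPU(s) visible to this process; on Iridis 5 this would be a
    # Tesla V100, GTX 1080 Ti or A100 depending on the node allocated.
    if torch.cuda.is_available():
        for i in range(torch.cuda.device_count()):
            props = torch.cuda.get_device_properties(i)
            print(f"GPU {i}: {props.name}, {props.total_memory / 1024**3:.0f} GB VRAM")
    else:
        print("No CUDA device visible to this process.")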

High Memory nodes

  • 16 high-memory nodes, each with 64 cores, 1.5 TB of RAM and 20 TB of SAS HDD (see the memory-sizing sketch after this list).
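
A quick way to decide whether a job needs one of these high-memory nodes is to estimate its in-memory footprint first. The sketch below does this for a dense double-precision array; the dimensions are purely illustrative.

    # Estimate the footprint of a dense float64 matrix to decide between a
    # standard node (192-256 GB of RAM) and a high-memory node (1.5 TB).
    rows, cols = 400_000, 100_000      # illustrative problem size
    bytes_needed = rows * cols * 8     # 8 bytes per float64 element
    gb_needed = bytes_needed / 1024**3

    print(f"Dense matrix needs about {gb_needed:.0f} GB of RAM")
    if gb_needed > 256:
        print("This exceeds a standard node: request a high-memory node.")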

Visualisation nodes, login nodes and more

  • 2 data visualisation nodes, each with 32 usable cores, 384 GB of RAM and an NVIDIA M60 GPU. These run both Windows and Linux VMs.

  • 3 Intel login nodes, each with 40 CPU cores and 384 GB of memory.

  • 1 AMD login node with 64 CPU cores and 512 GB of memory.

  • In total, more than 20,000 processor cores providing 1,305 TFLOPS of peak performance.

  • InfiniBand HDR100 interconnect for high-speed data transfer (see the MPI sketch after this list).
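
To illustrate multi-node work over the InfiniBand interconnect, the sketch below is a minimal MPI "hello" in Python. It assumes the mpi4py package and an MPI launcher are available on the system, which is an assumption rather than part of the specification above.

    from mpi4py import MPI

    # Each MPI rank reports where it is running; launched across several
    # nodes (for example with "mpirun -n 80 python hello_mpi.py"), the
    # ranks communicate over the InfiniBand fabric.
    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    size = comm.Get_size()
    host = MPI.Get_processor_name()

    print(f"Rank {rank} of {size} running on {host}")
    comm.Barrier()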

Iridis 6

  • 26,000+ AMD CPU cores for batch computing

  • 4 high-memory nodes, each with 3 TB of memory, 6.6 TB of local storage and 192 AMD CPU cores

  • 3 login nodes, each with 3 TB of memory, 15 TB of local storage and 192 AMD CPU cores (a sketch for checking these resources follows this list)

  • InfiniBand HDR100 interconnect for high-speed data transfer
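
As a small sketch of confirming the resources of the node a session lands on (for example one of the 192-core nodes above), the example below uses only the Python standard library; reading /proc/meminfo is Linux-specific.

    import os

    # Report the CPU and memory resources of the current node; on an
    # Iridis 6 login or high-memory node this would show 192 cores and
    # roughly 3 TB of RAM.
    print(f"CPU cores visible: {os.cpu_count()}")

    # /proc/meminfo is Linux-specific; MemTotal is reported in kB.
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith("MemTotal:"):
                kb = int(line.split()[1])
                print(f"Total memory: {kb / 1024**2:.0f} GB")
                break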

Contact us

The University provides administrative, training and software engineering support to help users unlock the full potential of our high performance computing clusters.

University of Southampton Research Data Centre

HPC Administrative team

University of Southampton Research Software Group

Useful links