High performance computing touches many aspects of modern business, but what exactly is a supercomputer? IBM's David Turek sheds some light on the technology driving advanced industry and research
High performance computing is the driving force behind many advanced processes in industry and scientific research. At the ISC High Performance conference in Frankfurt last week, IBM’s vice president of exascale systems David Turek sat down with Andrew Fawthrop to discuss what a supercomputer is, and how it creates value for modern businesses
Computers are such commonplace features of modern life that it can be easy to forget the smartphone nestled snugly in your pocket is in fact a powerful data processing machine capable of astonishing feats of information analysis.
Whether planning the optimum route for a long car journey while avoiding real-time traffic build-up, checking the latest weather forecast or managing personal finances, harnessing the problem-solving capabilities of computer processors is an integral aspect of daily life for many around the world.
But beyond all these now routine applications of computer technology lurks the less familiar world of high performance computing (HPC) – essentially the practice of linking multiple computers together so they operate collectively, achieving processing power far beyond that of any individual machine.
These “supercomputers” are capable of processing staggering amounts of data extremely quickly, with performance measured in unusual-sounding terms like teraflops, petaflops and exaflops – “flops” standing for floating point operations per second.
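To put those unusual-sounding units in perspective, here is a minimal sketch of what the prefixes mean as plain numbers. The 200-petaflop figure is roughly the published peak performance of Summit (mentioned below); the variable names are illustrative, not from the article.

```python
# The "flops" units above, expressed as plain numbers.
# These prefixes are standard SI multipliers.
TERAFLOPS = 10**12   # one trillion floating point operations per second
PETAFLOPS = 10**15   # one quadrillion
EXAFLOPS = 10**18    # one quintillion

# A machine with a peak of roughly 200 petaflops (around Summit's
# published peak) performs this many operations every second:
peak = 200 * PETAFLOPS
print(f"{peak:.1e} operations per second")  # prints "2.0e+17 operations per second"
```

An exaflop machine – the “exascale” in Mr Turek’s job title – would be five times faster still.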
The aggregated power of these machines can be used to solve highly complex information problems, far beyond the capabilities of even the most sophisticated of desktop computers, and has a range of potential applications across science and industry.
Here we take a closer look at the world of HPC, with IBM’s vice president of Exascale Systems David Turek discussing its evolution, utilisation and future directions.
What is a supercomputer?
The act of connecting multiple computer processors together in parallel clusters – combining the power of each unit, or node, into a single machine – is not a new procedure: supercomputers have in fact been built this way for several decades.
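The clustering idea can be sketched in miniature: split a job into chunks, hand each chunk to a separate worker, then combine the partial results. This toy Python example uses processes on one machine in place of real cluster nodes, and the job – summing a range of integers – is a stand-in for genuine scientific workloads.

```python
from multiprocessing import Pool

def partial_sum(bounds):
    """Sum the integers in [start, stop) -- one node's share of the job."""
    start, stop = bounds
    return sum(range(start, stop))

if __name__ == "__main__":
    n = 1_000_000
    # Split the range into four equal chunks, one per "node".
    chunks = [(i * n // 4, (i + 1) * n // 4) for i in range(4)]
    with Pool(4) as pool:
        total = sum(pool.map(partial_sum, chunks))
    print(total)  # same answer as sum(range(n)), computed collectively
```

Real HPC systems work on the same divide-and-combine principle, but coordinate thousands of nodes over dedicated high-speed interconnects rather than processes on a single machine.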
Mr Turek explains: “In a certain sense, HPC is the employment and utilisation of mathematics writ large, to represent or model the physical world, to give a representation of it.
“The nature of the mathematics can be fairly exotic, but generally speaking it’s the use of computing and software algorithms based on mathematics that allows you to explore different dimensions of the world.
“Every discipline you can think of will have a use for HPC within it.
“The reason for that is that as a world economy over the last several decades, we’ve been moving away from experiments in vivo to experiments in silico – a move from the analogue world to the digital world.
“Just about anything you touch in your day-to-day life has been influenced by high performance computing.”
Where is high performance computing used?
The ability to process and inspect vast amounts of data at very high speeds enables companies and research institutions to create virtual knowledge maps and simulations as they look to develop new technologies and methodologies based on highly complex data.
The world’s two fastest supercomputers – Summit and Sierra, both built by IBM – are used in research centres in the US for programmes largely funded by the US Department of Energy and National Nuclear Security Administration.
Summit is housed at the Oak Ridge National Laboratory in Tennessee, while Sierra can be found at the Lawrence Livermore National Laboratory in California – with the pair used for research in areas of nuclear science, systems biology and national security among others.
Last week, IBM also announced Pangea III, the world’s most powerful supercomputer built specifically for commercial purposes, which will be used by French petroleum company Total to aid its search for new resources.
Mr Turek says Pangea III uses the same architecture as Summit and Sierra and puts Total in a position where, regardless of how the machine is used today, it can “bring AI features into play in concert with the classical ways Total has previously looked at things like seismic processing or even reservoir modelling”.
But he explains HPC has potential applications across a very broad range of industries, from analysing risk in financial services to designing the cars of the future.
“A trucking company could use this technology to determine the outcome of trucking routes through the UK to find out the best route to optimise fuel consumption,” says Mr Turek.
“Formula 1 racers will use it to model race oils for fuel efficiency to give themselves a competitive advantage in the race.
“In financial services, people use this technology to explore the quantification of risk and the optimisation of investment portfolios.
“In the automotive industry, it’s used to model the physical behaviour of automobiles or to optimise fuel efficiency.
“You can use this technology for fraud discovery, aeroplane design, drug design or genomics.”
AI is bringing a new dimension to the traditional supercomputer
While high performance computing may have been around for many years, recent advancements in AI technology have brought a new dimension to the way supercomputers are built and used.
Advanced machine learning and deep learning software is paired with the processing power of HPC hardware, enabling new ways of optimising the machines and working with the data they create.
Mr Turek explains: “What has happened in the last couple of years that has been different is the simplification of AI and its incorporation in two ways.
“First, to better orchestrate the way conventional HPC operates in terms of equations and mathematics, and second to present a completely alternative means to solve problems.
“For example, if the problem is understanding where to drill to find the next major oil deposit, the conventional way to do that has been to take a mathematical approach.
“But the AI approach might tackle the problem completely differently.
“It might look at a history of geological pictures of where oil has historically been discovered and compare those using visualisation techniques to the images we have today.
“That’s worthwhile if the results are comparable because it’s cheaper and faster to do it that way.
“High performance computing is evolving to incorporate AI capabilities.”