Scientist: Software development is the foundation of supercomputer success
The field of astrophysics is a major consumer of supercomputer time and has a long tradition of using powerful computing machines, a tradition that extends all the way back to the 1950s and the world's first computers.
The research area is characterised by very large scales in time and space, which places massive demands on computing infrastructure.
“You can't just build a star in the basement and make adjustments and changes to it. We want the opportunity to work in three dimensions, and it requires a supercomputer to build a visual laboratory,” says Troels Haugbølle, astrophysicist and associate professor at the University of Copenhagen.
Images and data from astronomical observations can only provide a two-dimensional view, while a computer simulation can add the extra dimension, and this creates a natural appetite for computing power.
According to the latest usage report from PRACE, "Universe Science", which covers everything from plasma and solar physics to cosmology, accounts for approximately 20 percent of the overall use of the shared European supercomputer resources.
PRACE (Partnership for Advanced Computing in Europe) is a collaboration between a number of European countries providing access to data processing on the largest supercomputers in Europe.
Therefore, Troels Haugbølle also sees the LUMI project as a new and interesting tool in his work.
LUMI (Large Unified Modern Infrastructure) is a consortium of nine countries, including Denmark, which is right now working at full throttle to build one of the world’s most powerful supercomputers in order to provide the research landscape in Europe with a digital quality boost.
The supercomputer is scheduled to be operational by early 2021.
GPU / CPU: Almost alike, but not quite
But there are also challenges that need to be addressed before work on LUMI can begin. LUMI uses GPUs as the basis for its calculations, while the CPU model is much more widely used in both Denmark and the rest of the world.
Briefly explained, large computer systems use either Central Processing Unit (CPU) or Graphics Processing Unit (GPU) technology.
The CPU is general-purpose and has access to a large working memory, which suits all types of computing problems. GPU technology, on the other hand, requires the calculations to be expressed as vector operations and offers a smaller working memory, so programming this type of architecture requires special expertise.
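The difference can be illustrated with a minimal sketch (the functions below are illustrative, not taken from any of the research codes mentioned in the article). An elementwise operation like SAXPY has no dependency between iterations, so it maps naturally onto a GPU's thousands of threads, whereas a loop where each step needs the previous result cannot be split up as-is:

```c
#include <stddef.h>

/* SAXPY: y[i] = a*x[i] + y[i]. Every element is independent of the
 * others, so the loop is a vector operation that a GPU can spread
 * across thousands of threads. */
void saxpy(size_t n, float a, const float *x, float *y) {
    for (size_t i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];
}

/* A running sum written this way has a loop-carried dependency:
 * y[i] needs y[i-1], so this form cannot simply be handed to a GPU
 * thread per element -- it must first be restructured. */
void running_sum(size_t n, const float *x, float *y) {
    y[0] = x[0];
    for (size_t i = 1; i < n; ++i)
        y[i] = y[i - 1] + x[i];
}
```

Restructuring the second kind of loop into a GPU-friendly form is exactly the sort of rewriting work the article refers to.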
“It is a political decision for LUMI to use the GPU model, which has its advantages. Nevertheless, that means there's a lot of code to rewrite. It is precisely this challenge that we are currently facing,” says Troels Haugbølle, and continues:
"If this is not done quickly enough, we will have unused computing capacity at LUMI, and that is really unfortunate for both the economy and the research."
"We are scientists, not programmers"
Converting the existing CPU code into GPU code for LUMI can be achieved along several different paths, but they all have one thing in common: they take time away from the actual scientific work and require expertise.
“We expect to have our new code ready so we can use LUMI from day one. But if everyone wants the maximum benefit from LUMI, then there must be a broader focus and more funding for software development,” says Troels Haugbølle.
One image of the transition is switching from a family car to a Formula 1 racer: the racer is much harder to drive, but it is also far faster.
Among the challenges is that the compilers that can help with the code conversion are so new that they do not always give good results. The alternative is a more manual conversion, which is obviously very resource-intensive.
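One common compiler-assisted path is directive-based offloading, where the original CPU loop is kept and an annotation tells an offload-capable compiler to run it on the GPU. The article does not specify which tools the researchers use, so this is only a hedged sketch of the general approach, here with a standard OpenMP target directive:

```c
#include <stddef.h>

/* Directive-based port: the loop body is unchanged CPU code.
 * With an offload-capable compiler the pragma moves the loop to
 * the GPU; without one the pragma is ignored and the loop simply
 * runs on the CPU. That graceful fallback is what makes this path
 * incremental compared to a full manual rewrite. */
void scale(size_t n, double a, double *v) {
    #pragma omp target teams distribute parallel for map(tofrom: v[0:n])
    for (size_t i = 0; i < n; ++i)
        v[i] *= a;
}
```

In practice, as the article notes, such compilers are still maturing, and performance-critical kernels may still need hand-tuning.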
“We are scientists, after all, and not everyone can handle code development. Appropriations are always allocated to laboratories and people, but there is no tradition of funding for software development. That is really needed now,” says Troels Haugbølle.
Calculations spanning 40 million years
For the past four years, Troels Haugbølle has, together with colleagues, used the largest computers in Europe and the US to calculate how stars are formed in the Milky Way.
For the first time, it has been possible to calculate how tens of thousands of stars are born and evolve over a period of 40 million years.
That is enough time for the heaviest of the stars to explode as supernovae.
Thus LUMI is a big part of the scientific race against other researchers around the world, and it may play an important role in the future of Danish science, but the code must be in place first.
“There is a competitive element in the field of research on who comes first with the results, and of course we would like to be first. But it requires some funds right now, which hopefully will come back many times in the long run,” says Troels Haugbølle.
He also points out that the HPC (High Performance Computing) capabilities can attract talented scientists from abroad to the Danish universities.
Star formation is a dynamic and chaotic process in which feedback from stars drives supersonic turbulence in the cold gas. Shockwaves can compress the gas enough for gravity to take over; the gas collapses under its own weight and a new star is born. This is what is called a "multi-scale problem", where the dynamics very close to a newly formed star can affect the gas across a large region.
The image shows a computer model of a star-forming region and illustrates how the latest computer models resolve details up to 500 million times smaller than the entire model while following the dynamics over millions of years.
The models have helped the research team at the Niels Bohr Institute in Denmark to interpret observations and to understand which physical processes drive the formation of new stars, and which, in its time, drove the formation of our own solar system.