Using artificial intelligence techniques to rethink the development of numerical methods on high-performance computers, so that the computer independently learns the optimal numerical solution method for a given simulation problem.
Helmholtz AI project call showcase: Improving simulations on high-performance computers
Wouldn’t it be great if the computer could learn to use the “optimal” numerical solution method by itself? Scientists at Forschungszentrum Jülich (FZJ) and the Karlsruhe Institute of Technology (KIT) are using reinforcement learning techniques to tackle exactly this problem. Read more in today’s Helmholtz AI project showcase.
Could you introduce yourself, giving your affiliation, area of work, and of course, the project title?
I’m Robert Speck from the Jülich Supercomputing Centre (JSC) at FZJ. I’m leading JSC’s division “Mathematics and Education” and a research group on parallel-in-time integration, where the Helmholtz AI project “AlphaNumerics Zero” (αN0) is also located. In my research, I’m focusing on the development, implementation and application of parallel-in-time integration methods for high-performance computing (HPC) systems. Together with my team at JSC, I work in particular on the development and analysis of time-multigrid methods, high-order iterative time-stepping schemes and algorithm-based fault tolerance. The project is a collaboration with Martin Frank from the Steinbuch Centre for Computing (SCC) at KIT.
In simple words, what specifically is your project about? And, how and why do you think it is a high risk, high gain endeavour?
Most simulation methods in the natural sciences and engineering rely on differential equation solvers. These numerical methods are constructed for individual sub-problems by hand, typically on paper. The individual building blocks are then put together to obtain a solver for a complex problem. On supercomputers, however, substantial performance engineering is necessary in addition: the individual building blocks have to be combined in such a way that they exploit the hardware as efficiently as possible. Extreme-scale supercomputers are becoming ever more heterogeneous in themselves, which makes engineering the optimal solver even more challenging and laborious.
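To illustrate what such a hand-built building block looks like, here is a minimal Python sketch of an explicit Euler step composed into a full ODE solver. The names and the example problem are purely illustrative, not taken from the project:

```python
import numpy as np

def euler_step(f, t, y, dt):
    """One explicit Euler step for y' = f(t, y) -- a basic building block."""
    return y + dt * f(t, y)

def solve(f, y0, t0, t1, n_steps):
    """Compose the building block into a solver by stepping through time."""
    t = t0
    y = np.asarray(y0, dtype=float)
    dt = (t1 - t0) / n_steps
    for _ in range(n_steps):
        y = euler_step(f, t, y, dt)
        t += dt
    return y

# Example: y' = -y with y(0) = 1; the exact solution at t = 1 is exp(-1).
approx = solve(lambda t, y: -y, 1.0, 0.0, 1.0, 1000)
```

Swapping in a higher-order step, a different quadrature, or a parallel decomposition means re-deriving and re-tuning this composition by hand, which is exactly the manual effort the project aims to automate.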
The idea of AlphaNumerics Zero is to use reinforcement learning techniques so that, for a specified simulation problem, the computer learns the “optimal” numerical solution method by itself. As a particular problem, the team at JSC together with Martin Frank’s team at KIT focuses on iterative time-stepping schemes. These methods have gained quite a bit of interest recently, in particular with respect to high-order splitting methods and parallel-in-time integration techniques. They are particularly suited to exploit parallelism on supercomputers. More importantly, this class of methods serves as a prototype for many different areas: stationary iterative solvers, preconditioning, parallel multigrid techniques, time integrators, and resilient numerical methods. Progress made here can be converted into progress in these other fields, with a very broad impact. Yet, to put it bluntly: the goal is to beat decades of mathematical research with AI in this particular field. It is far from obvious whether AI approaches will be able to find methods that are better, efficient, and practical at the same time.
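To give a flavor of the class of methods mentioned, an iterative time-stepper refines a provisional solution over several sweeps within each time step. Below is a minimal Picard-iteration sketch in Python, based on the integral form y(s) = y0 + ∫ f(τ, y(τ)) dτ with trapezoidal quadrature on equidistant nodes. This is a textbook illustration under simplifying assumptions, not the project’s actual code:

```python
import numpy as np

def picard_step(f, t, y0, dt, n_nodes=5, n_sweeps=10):
    """One time step of y' = f(t, y) via Picard iteration: repeatedly
    refine the solution on [t, t+dt] using the integral form of the ODE,
    discretized with cumulative trapezoidal quadrature on n_nodes nodes."""
    nodes = np.linspace(t, t + dt, n_nodes)
    y = np.full(n_nodes, y0, dtype=float)  # initial guess: constant in time
    for _ in range(n_sweeps):
        rhs = f(nodes, y)
        # cumulative trapezoidal rule: integral from t to each node
        integral = np.concatenate(
            ([0.0], np.cumsum(0.5 * np.diff(nodes) * (rhs[:-1] + rhs[1:])))
        )
        y = y0 + integral  # one sweep updates all nodes at once
    return y[-1]

# Example: y' = -y with y(0) = 1, integrated to t = 1 in 100 steps.
t, y = 0.0, 1.0
for _ in range(100):
    y = picard_step(lambda s, v: -v, t, y, 0.01)
    t += 0.01
```

Because each sweep updates all quadrature nodes of a step at once, such iterations expose parallelism across nodes (and, in parallel-in-time variants, across steps); the number of nodes, sweeps, and the quadrature rule are exactly the kind of design choices a learning algorithm could tune.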
How important has the Helmholtz AI funding and platform been to carry out this project?
The idea of using AI to find better, more parallelizable numerical methods has been with us for quite a while. Yet, since AI research is not the key focus of our work, we had not been able to actually start working in this direction. Only through the Helmholtz AI funding are we able to pursue this idea thoroughly. In addition, the help of the Helmholtz AI consultants is very valuable to us. Being distinguished AI experts (in contrast to us), they help us choose the right tools and use them correctly. Their continuous collaboration, input and dedication are highly appreciated.
Image: Logo of the Alpha Numerics Zero (αN0) Helmholtz AI project.