

Prediction of Optimal Solvers for Sparse Linear Systems Using Deep Learning

AI consultant Markus Götz teamed up with Yannick Funk (KIT) and Hartwig Anzt (KIT, ICL) to evaluate deep learning approaches for predicting the optimal solver for sparse linear systems.

Yannick Funk, Markus Götz and Hartwig Anzt from the Karlsruhe Institute of Technology, the Helmholtz AI local unit of Energy @ KIT, the Helmholtz Young Investigator Group ‘Fixed-Point Methods for Numerics at Exascale (FiNE)’ and the Innovative Computing Laboratory (ICL) at the University of Tennessee have investigated the use of deep learning approaches to predict optimal algorithms for sparse linear systems.


Picture: Yannick Funk (left), Markus Götz (center), and Hartwig Anzt (right)

Solving sparse linear systems is a key task in many computational problems, such as data analysis and simulations, and largely determines the overall execution time. Choosing a suitable iterative solver algorithm can significantly improve time-to-completion. We have investigated a deep learning approach designed to predict the optimal iterative solver for a given sparse linear problem. For this, we identified useful linear system features to drive the prediction process, defined metrics to quantify the iterative solvers’ time-to-approximation performance, and conducted a comprehensive experimental evaluation of the neural network’s prediction quality. Using hyperparameter optimization and an ablation study on the SuiteSparse matrix collection, we inferred the importance of distinct features, achieving a top-1 classification accuracy of 60% and a top-4 accuracy of 90% on a set of seven solvers.
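To make the reported numbers concrete, here is a minimal sketch of how top-k accuracy is computed for such a solver classifier: a prediction counts as correct if the empirically fastest solver is among the k highest-scored candidates. The scores and labels below are random placeholders, not data from the study.

    import numpy as np

    def top_k_accuracy(scores, labels, k):
        """Fraction of systems whose fastest solver is among the k highest-scored ones."""
        # sort candidate solvers by score (descending) and keep the top k per sample
        top_k = np.argsort(scores, axis=1)[:, ::-1][:, :k]
        return np.mean([labels[i] in top_k[i] for i in range(len(labels))])

    # toy example: scores for 3 linear systems over 7 candidate solvers
    rng = np.random.default_rng(0)
    scores = rng.random((3, 7))
    labels = np.array([2, 5, 0])  # index of the empirically fastest solver per system
    print(top_k_accuracy(scores, labels, k=1), top_k_accuracy(scores, labels, k=4))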

What are the involved challenges?

The major issue of this study was how to pass the sparse linear systems to a neural network. How do you encode a single example? As an image? As a graph? Ultimately, a set of hand-crafted features fed into a multi-layer perceptron proved to be the most effective approach. In a subsequent ablation study, some of the features that we had expected to be most useful turned out to be redundant, a somewhat surprising finding.
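As an illustration, the sketch below derives a few hand-crafted features from a SciPy sparse matrix and feeds them into a small multi-layer perceptron. The chosen features and hidden-layer sizes are illustrative assumptions, not the exact set used in the paper, and the labels are placeholders standing in for measured solver timings.

    import numpy as np
    import scipy.sparse as sp
    from sklearn.neural_network import MLPClassifier

    def matrix_features(A):
        """Hand-crafted features of a sparse matrix (illustrative choices)."""
        A = sp.csr_matrix(A)
        n = A.shape[0]
        row_nnz = np.diff(A.indptr)  # nonzeros per row
        diag = A.diagonal()
        # relative deviation from symmetry: 0 for symmetric matrices
        sym_err = abs(A - A.T).sum() / max(abs(A).sum(), 1e-30)
        return np.array([
            n,                            # dimension
            A.nnz / (n * n),              # density
            row_nnz.mean(),               # average nonzeros per row
            row_nnz.max(),                # maximum nonzeros per row
            np.count_nonzero(diag) / n,   # fraction of nonzero diagonal entries
            sym_err,
        ])

    # X: one feature vector per system; y: index of the fastest of seven solvers
    # (real training labels would come from timing runs, e.g. on SuiteSparse matrices)
    X = np.stack([matrix_features(sp.random(100, 100, density=0.05, random_state=i)
                                  + sp.eye(100))
                  for i in range(50)])
    y = np.random.default_rng(0).integers(0, 7, size=50)  # placeholder labels

    clf = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500).fit(X, y)
    print(clf.predict(X[:3]))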

What did others do?

Prior work, mainly by Sood et al., has investigated classical machine learning approaches for predicting the optimal algorithm for a sparse linear system. Using an elaborate ensemble of classifiers based on support vector machines (SVMs), decision trees (DTs) and the k-nearest neighbours (kNN) algorithm, they achieved a top-1 prediction accuracy of up to 87%. This approach is more precise than ours. However, the massive amount of computation it requires eats up all the benefit of suggesting a better solver. In contrast, our proposed deep learning approach has a much lower inference time, so the gain from a better solver prediction is preserved.
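The trade-off can be made tangible with a rough timing sketch: a soft-voting ensemble of SVM, decision tree and kNN classifiers versus a single MLP, both built with scikit-learn and evaluated on the same placeholder feature vectors. The data and model sizes are assumptions chosen for illustration only; the kNN and SVM components in particular tend to dominate inference cost as the training set grows.

    import time
    import numpy as np
    from sklearn.ensemble import VotingClassifier
    from sklearn.svm import SVC
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(1)
    X, y = rng.random((500, 6)), rng.integers(0, 7, size=500)  # placeholder features/labels

    ensemble = VotingClassifier([
        ("svm", SVC(probability=True)),
        ("dt", DecisionTreeClassifier()),
        ("knn", KNeighborsClassifier()),
    ], voting="soft").fit(X, y)
    mlp = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500).fit(X, y)

    for name, model in [("ensemble", ensemble), ("mlp", mlp)]:
        t0 = time.perf_counter()
        model.predict(X)  # batch inference over all systems
        print(name, f"{time.perf_counter() - t0:.4f}s")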

Why is the optimal prediction of sparse linear solvers important?

Selecting an optimal sparse linear solver algorithm for a computational problem can drastically impact the time-to-solution. For large simulations with billions to trillions of necessary operations, this not only allows more runs to be performed in a shorter amount of time, but also drastically reduces the environmental footprint through lower power consumption.


Figure: Confusion matrix for full (left), reduced (center) and final (right) feature set.

More information