Efficient learning and probabilistic inference for science
Combining Bayesian inference with powerful deep learning models for challenging applications
Vincent Fortuin’s Helmholtz AI young investigator group, located at the Helmholtz Center Munich, works at the interface between Bayesian inference and deep learning, with the goal of improving the robustness, data efficiency, and uncertainty estimation of modern machine learning approaches. While deep learning often achieves impressive performance, it can be over-confident in its predictions and typically requires large training datasets. Especially in scientific applications, where training data is scarce but detailed prior knowledge is available, insights from Bayesian statistics can drastically improve these models. Important research questions include how to effectively specify priors in deep Bayesian models, how to harness unlabeled data to learn reusable representations, how to transfer knowledge between tasks using meta-learning, and how to guarantee generalization performance using PAC-Bayesian bounds.
- Bayesian deep learning: Combining Bayesian inference with deep neural networks promises great advantages but still poses many challenges regarding effective prior specification and efficient inference.
- Deep generative modeling: While deep generative models have achieved impressive performance on images and text, it is still unclear how to use them most effectively on scientific data, where datasets are much smaller but more specific prior knowledge is available.
- Meta-learning: Most tasks in real-world applications, including in science, are not solved in isolation but in the context of related similar tasks. How to transfer knowledge between these tasks most effectively remains an impactful research question.
- PAC-Bayesian theory: While overfitting is a constant threat to machine learning models, PAC-Bayesian bounds can provide probabilistic guarantees on the generalization performance and thus enable more robust and trustworthy models for critical applications.
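The interplay of priors, data, and uncertainty described above can be illustrated with a minimal sketch (not the group's code): Bayesian linear regression with a Gaussian prior on the weights, where the closed-form posterior yields predictive uncertainty that grows away from the observed data. The hyperparameters `alpha` (prior precision) and `beta` (noise precision) are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def posterior(X, y, alpha=1.0, beta=25.0):
    """Closed-form Gaussian posterior N(m, S) over the weights."""
    S_inv = alpha * np.eye(X.shape[1]) + beta * X.T @ X
    S = np.linalg.inv(S_inv)
    m = beta * S @ X.T @ y
    return m, S

def predict(x, m, S, beta=25.0):
    """Predictive mean and variance at inputs x (rows are feature vectors)."""
    mean = x @ m
    # Total variance = observation noise + uncertainty about the weights.
    var = 1.0 / beta + np.sum((x @ S) * x, axis=1)
    return mean, var

# Toy data: a small, noisy sample, as in data-scarce scientific settings.
X = np.column_stack([np.ones(10), rng.uniform(-1, 1, 10)])  # bias + input
w_true = np.array([0.5, -1.0])
y = X @ w_true + rng.normal(0.0, 0.2, 10)

m, S = posterior(X, y)
# Uncertainty is larger far from the training inputs than near them:
_, var_near = predict(np.array([[1.0, 0.0]]), m, S)
_, var_far = predict(np.array([[1.0, 5.0]]), m, S)
```

In a Bayesian neural network the same principle applies, but the posterior is intractable and must be approximated, e.g. with Laplace approximations or ensembles, as in several of the publications below.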
- Fortuin, Vincent. "Priors in Bayesian deep learning: A review." International Statistical Review (2022)
- Immer, Alexander, Tycho FA van der Ouderaa, Gunnar Rätsch, Vincent Fortuin, and Mark van der Wilk. "Invariance Learning in Deep Neural Networks with Differentiable Laplace Approximations." Advances in Neural Information Processing Systems 35 (2022)
- Nabarro, Seth, Stoil Ganev, Adrià Garriga-Alonso, Vincent Fortuin, Mark van der Wilk, and Laurence Aitchison. "Data augmentation in Bayesian neural networks and the cold posterior effect." In Uncertainty in Artificial Intelligence, pp. 1434-1444. PMLR, 2022
- Fortuin, Vincent, Adrià Garriga-Alonso, Sebastian W. Ober, Florian Wenzel, Gunnar Rätsch, Richard E. Turner, Mark van der Wilk, and Laurence Aitchison. "Bayesian Neural Network Priors Revisited." In International Conference on Learning Representations. 2021
- Fortuin, Vincent, Adrià Garriga-Alonso, Mark van der Wilk, and Laurence Aitchison. "BNNpriors: A library for Bayesian neural network inference with different prior distributions." Software Impacts 9 (2021): 100079
- D'Angelo, Francesco, and Vincent Fortuin. "Repulsive deep ensembles are Bayesian." Advances in Neural Information Processing Systems 34 (2021): 3451-3465
- Rothfuss, Jonas, Vincent Fortuin, Martin Josifoski, and Andreas Krause. "PACOH: Bayes-optimal meta-learning with PAC-guarantees." In International Conference on Machine Learning, pp. 9116-9126. PMLR, 2021
- Immer, Alexander, Matthias Bauer, Vincent Fortuin, Gunnar Rätsch, and Mohammad Emtiyaz Khan. "Scalable marginal likelihood estimation for model selection in deep learning." In International Conference on Machine Learning, pp. 4563-4573. PMLR, 2021
- Manduchi, Laura, Matthias Hüser, Martin Faltys, Julia Vogt, Gunnar Rätsch, and Vincent Fortuin. "T-DPSOM: An interpretable clustering method for unsupervised learning of patient health states." In Proceedings of the Conference on Health, Inference, and Learning, pp. 236-245. 2021
- Fortuin, Vincent, Dmitry Baranchuk, Gunnar Rätsch, and Stephan Mandt. "GP-VAE: Deep Probabilistic Time Series Imputation." In International Conference on Artificial Intelligence and Statistics, pp. 1651-1661. PMLR, 2020
- Fortuin, Vincent, Matthias Hüser, Francesco Locatello, Heiko Strathmann, and Gunnar Rätsch. "SOM-VAE: Interpretable Discrete Representation Learning on Time Series." In International Conference on Learning Representations. 2019
Google Scholar: Vincent Fortuin