Reliable machine learning

Building robust, theoretically substantiated and socially beneficial machine learning systems
Our Helmholtz AI young investigator group broadly investigates machine learning systems that interact with humans, for example by making consequential decisions, affecting our behavior, or challenging our privacy. We focus on reliable, fair, and privacy-preserving algorithms for these settings. As a key tool for trustworthy machine learning, we also aim to make causal inference techniques more applicable to systems involving humans.
Visit Niki Kilbertus' personal website
- Causality
- Reliable ML
- Methodology
Selected publications
- Exploration in two-stage recommender systems. Jiri Hron*, Karl Krauth*, Michael I. Jordan, NK (* equal contribution). ACM RecSys 2020 Workshop on Bandit and Reinforcement Learning from User Interactions (REVEAL 2020), NeurIPS 2020 Workshop on Consequential Decisions in Dynamic Environments, NeurIPS 2020 Workshop on Challenges of Real-World RL [paper] [short talk video]
- A class of algorithms for general instrumental variable models. NK, Matt J. Kusner, Ricardo Silva. NeurIPS 2020 [paper] [code] [talk video]
- On Disentangled Representations Learned From Correlated Data. Frederik Träuble, Elliot Creager, NK, Francesco Locatello, Andrea Dittadi, Anirudh Goyal, Bernhard Schölkopf, Stefan Bauer [paper]
- Fair decisions despite imperfect predictions. NK, Manuel Gomez-Rodriguez, Bernhard Schölkopf, Krikamol Muandet, Isabel Valera. AISTATS 2020 [paper] [bibtex] [code]
- The sensitivity of counterfactual fairness to unmeasured confounding. NK, Philip Ball, Matt J. Kusner, Adrian Weller, Ricardo Silva. UAI 2019 [paper] [bibtex] [code]
- Convolutional neural networks: a magic bullet for gravitational-wave detection? Timothy Gebhard*, NK*, Ian Harry, Bernhard Schölkopf (* equal contribution). Physical Review D, 2019 [paper] [bibtex] [code] [data generation] [DOI]
- Improving consequential decision making under imperfect predictions. NK, Manuel Gomez-Rodriguez, Bernhard Schölkopf, Krikamol Muandet, Isabel Valera. KDD 2019 Workshop on Data Collection, Curation, and Labeling for Mining and Learning (DCCL) [paper]
- Generalization in anti-causal learning. NK*, Giambattista Parascandolo*, Bernhard Schölkopf (* equal contribution). NeurIPS 2018 Workshop on Critiquing and Correcting Trends in Machine Learning [paper]
- Blind Justice: Fairness with Encrypted Sensitive Attributes. NK, Adrià Gascón, Matt J. Kusner, Michael Veale, Krishna P. Gummadi, Adrian Weller. ICML 2018, also at: FATML 2018 [talk] and PIMLAI 2018 [paper] [bibtex] [poster] [code]
- Learning Independent Causal Mechanisms. Giambattista Parascandolo, NK, Mateo Rojas-Carulla, Bernhard Schölkopf. ICML 2018, also at: NeurIPS 2017 Workshop on Learning Disentangled Representations [paper] [bibtex]
- Avoiding Discrimination Through Causal Reasoning. NK, Mateo Rojas-Carulla, Giambattista Parascandolo, Moritz Hardt, Dominik Janzing, Bernhard Schölkopf. NeurIPS 2017 [paper] [bibtex] [poster]
- ConvWave: Searching for Gravitational Waves with Fully Convolutional Neural Nets. Timothy Gebhard*, NK*, Giambattista Parascandolo, Ian Harry, Bernhard Schölkopf (* equal contribution). NeurIPS 2017 Workshop on Deep Learning for Physical Sciences [paper] [bibtex] [code] [poster]
Team
- Elisabeth Ailer (PhD Candidate)
- Kirtan Padh (PhD Candidate)
- Zhufeng Li (PhD Candidate)
- Alexander Reisach (external Master's student)