Helmholtz AI consultants and colleagues from HZDR presented modern techniques for reproducing uncertainty measures for state-of-the-art results in image classification at ICLR 2022
As the race for state-of-the-art ML research continues, we need uncertainties as our referees
As the race for faster, more precise and more data-efficient pattern recognition algorithms continues, progress has recently stagnated to some degree. The days of 5-10% accuracy improvements from deep learning on the popular ImageNet benchmark are over. As a consequence, identifying genuinely groundbreaking results in the field becomes more and more challenging. For this reason, the ML community is moving to establish uncertainty estimation on reported accuracies as a required ingredient of analyses and benchmarks. One milestone on this journey was the “ML Evaluation Standards” workshop at the International Conference on Learning Representations (ICLR).
Contributing to this effort, Peter Steinbach (Helmholtz AI consultant), Felicita Gernhardt (Helmholtz-Zentrum Dresden-Rossendorf), Mahnoor Tanveer (Helmholtz AI consultant), Steve Schmerler (Helmholtz AI consultant) and Sebastian Starke (Helmholtz AI consultant) demonstrated how such uncertainties can be obtained in a reproducible and resource-efficient manner. Concretely, they showed that one important source of uncertainty can be robustly approximated with a binomial distribution. To support this, they co-published an automated workflow that scales to HPC infrastructure and allows other scientists to plug in their own models in order to obtain uncertainties for their research.
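To illustrate the binomial idea in its simplest form (this is a minimal sketch of the underlying statistics, not the authors' published workflow): if each test-set prediction is treated as an independent Bernoulli trial, the number of correct predictions is binomially distributed, and a confidence interval for the accuracy follows directly. The function name and the normal (Wald) approximation used here are illustrative choices.

```python
from math import sqrt

def binomial_accuracy_ci(correct: int, total: int, z: float = 1.96):
    """Normal-approximation (Wald) confidence interval for test accuracy.

    Treats each test prediction as an independent Bernoulli trial, so the
    count of correct predictions follows a binomial distribution. z=1.96
    corresponds to a 95% interval.
    """
    p = correct / total
    half_width = z * sqrt(p * (1 - p) / total)
    return p, max(0.0, p - half_width), min(1.0, p + half_width)

# Hypothetical example: 9200 correct predictions on a 10000-image test set.
acc, lo, hi = binomial_accuracy_ci(9200, 10000)
print(f"accuracy = {acc:.3f}, 95% CI ≈ [{lo:.3f}, {hi:.3f}]")
```

Even for a 10,000-image test set, the interval spans about one percentage point, which puts many small "state-of-the-art" improvements within the noise. Exact intervals (e.g. Clopper-Pearson) are preferable for accuracies close to 0 or 1.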
- P. Steinbach, F. Gernhardt, M. Tanveer, S. Schmerler, S. Starke: Machine Learning State-of-the-Art with Uncertainties. Published as a conference paper at ICLR 2022.
- Code repository for reproduction: GitHub