Pitfalls of In-Domain Uncertainty Estimation and Ensembling in Deep Learning

Published: 20 Dec 2019, Last Modified: 22 Oct 2023
ICLR 2020 Conference Blind Submission
TL;DR: We highlight the problems with common metrics of in-domain uncertainty and perform a broad study of modern ensembling techniques.
Abstract: Uncertainty estimation and ensembling methods go hand in hand. Uncertainty estimation is one of the main benchmarks for assessing ensembling performance. At the same time, deep learning ensembles have provided state-of-the-art results in uncertainty estimation. In this work, we focus on in-domain uncertainty for image classification. We explore the standards for its quantification and point out pitfalls of existing metrics. Avoiding these pitfalls, we perform a broad study of different ensembling techniques. To provide more insight into this study, we introduce the deep ensemble equivalent score (DEE) and show that many sophisticated ensembling techniques are equivalent to an ensemble of only a few independently trained networks in terms of test performance.
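For intuition, here is a minimal sketch of how a DEE-style score could be computed. It assumes test performance is summarized by a single scalar per ensemble (the paper measures it via calibrated log-likelihood) and that the deep ensemble's score grows monotonically with its size; the function name and the linear-interpolation scheme below are illustrative, not taken verbatim from the authors' repository.

```python
import numpy as np

def deep_ensemble_equivalent(method_score, deep_ensemble_scores):
    """Estimate how many independently trained networks a method is
    'worth': the (interpolated) size of a deep ensemble whose mean
    test score matches `method_score`.

    deep_ensemble_scores[i] is the mean test score (e.g. calibrated
    log-likelihood) of a deep ensemble of i + 1 networks, assumed to
    increase with ensemble size.
    """
    sizes = np.arange(1, len(deep_ensemble_scores) + 1)
    # np.interp clips at the endpoints: scores below a single network
    # map to DEE = 1, scores above the largest ensemble map to its size.
    return float(np.interp(method_score, deep_ensemble_scores, sizes))

# Example (hypothetical numbers): a method scoring between the
# 3- and 4-network deep ensembles gets a DEE between 3 and 4.
dee = deep_ensemble_equivalent(-0.850, [-1.00, -0.90, -0.86, -0.84, -0.83])
print(dee)  # 3.5
```

A fractional DEE reads directly as "this technique matches a deep ensemble of roughly that many independently trained networks" on the chosen test metric.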
Keywords: uncertainty, in-domain uncertainty, deep ensembles, ensemble learning, deep learning
Code: https://github.com/bayesgroup/pytorch-ensembles
Slides: pdf
Spotlight Video: https://iclr.cc/virtual_2020/poster_BJxI5gHKDr.html
Community Implementations: https://www.catalyzex.com/paper/arxiv:2002.06470/code