Estimating Fréchet bounds for validating programmatic weak supervision

Published: 27 Oct 2023, Last Modified: 28 Dec 2023 · OTML 2023 Poster
Keywords: weak supervision, model assessment, Fréchet bounds
TL;DR: This paper introduces a method to estimate bounds on performance metrics such as accuracy, recall, and precision in the weak supervision setting when no labels are observed.
Abstract: We develop methods for estimating Fréchet bounds on (possibly high-dimensional) distribution classes in which some variables are continuous-valued. We establish the statistical correctness of the computed bounds under uncertainty in the marginal constraints and demonstrate the usefulness of our algorithms by evaluating the performance of machine learning (ML) models trained with programmatic weak supervision (PWS). PWS is a framework for principled learning from weak supervision inputs (e.g., crowdsourced labels, knowledge bases, pre-trained models on related tasks, etc.), and it has achieved remarkable success in many areas of science and engineering. Unfortunately, it is generally difficult to validate the performance of ML models trained with PWS due to the absence of labeled data. Our algorithms address this issue by estimating sharp lower and upper bounds for performance metrics such as accuracy, recall, and precision, drawing on connections to tools from computational optimal transport.
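To illustrate the connection to optimal transport, here is a minimal sketch of the discrete special case: bounding accuracy, i.e. P(Y = Ŷ), over all joint distributions consistent with given marginals of the true label and the prediction. This is not the paper's full algorithm (which handles continuous variables and uncertainty in the marginal constraints); the marginals below are hypothetical and the use of the POT library is an assumption made for illustration.

```python
# Minimal sketch (not the paper's full method): Fréchet bounds on accuracy
# for a discrete label space, given only the marginals of the true label Y
# and the model prediction Yhat. The joint is unknown; we bound P(Y = Yhat)
# over all couplings with these marginals, which is an optimal-transport LP.
import numpy as np
import ot  # POT: Python Optimal Transport

# Hypothetical marginals over K = 3 classes (illustrative values only).
p_y = np.array([0.5, 0.3, 0.2])      # marginal of the unobserved true label Y
p_yhat = np.array([0.4, 0.4, 0.2])   # marginal of the model prediction Yhat

K = len(p_y)
agree = np.eye(K)                    # "reward" matrix: 1 exactly where Y == Yhat

# Lower bound: minimize expected agreement over all couplings
# (ot.emd2 minimizes <coupling, cost>, so pass the agreement matrix as the cost).
acc_lower = ot.emd2(p_y, p_yhat, agree)

# Upper bound: maximize expected agreement = 1 - minimum expected disagreement.
acc_upper = 1.0 - ot.emd2(p_y, p_yhat, 1.0 - agree)

print(f"accuracy in [{acc_lower:.3f}, {acc_upper:.3f}]")
```

With these illustrative marginals, the interval reflects how far apart the best- and worst-case couplings can be; analogous constructions with class-conditional cost matrices yield bounds on recall and precision.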
Submission Number: 75