Learning via Wasserstein-Based High Probability Generalisation Bounds

Published: 27 Oct 2023, Last Modified: 28 Dec 2023
OTML 2023 Poster
Keywords: Wasserstein, PAC-Bayes, Generalization Bound, Algorithm
Abstract: Minimising upper bounds on the population risk or on the generalisation gap is widely used in structural risk minimisation (SRM); it is, in particular, at the core of PAC-Bayes learning. Despite its successes and the sustained interest it has attracted in recent years, a limitation of the PAC-Bayes framework is that most bounds involve a Kullback-Leibler (KL) divergence term (or one of its variants), which may exhibit erratic behaviour and fail to capture the underlying geometric structure of the learning problem, thereby restricting its use in practical applications. As a remedy, recent studies have attempted to replace the KL divergence in PAC-Bayes bounds with the Wasserstein distance. Although these bounds alleviate the aforementioned issues to some extent, they either hold only in expectation, apply only to bounded losses, or are nontrivial to minimise within an SRM framework. In this work, we contribute to this line of research and prove novel Wasserstein-distance-based PAC-Bayes generalisation bounds, both for batch learning with independent and identically distributed (i.i.d.) data and for online learning with potentially non-i.i.d. data. In contrast to prior work, our bounds are stronger in the sense that (i) they hold with high probability, (ii) they apply to unbounded (potentially heavy-tailed) losses, and (iii) they lead to optimisable training objectives that can be used in SRM. As a result, we derive novel Wasserstein-based PAC-Bayes learning algorithms and illustrate their empirical advantage in a variety of experiments.
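For context (this bound is not stated on the page itself), a classical KL-based PAC-Bayes bound takes the form below; the Wasserstein-based bounds described in the abstract replace the KL term with a Wasserstein distance W(rho, pi), under Lipschitz-type assumptions on the loss. The exact statements and constants appear only in the paper.

```latex
% Classical KL-based PAC-Bayes bound (Maurer's refinement of McAllester's):
% for a loss in [0,1], with probability at least 1 - \delta over an i.i.d.
% sample of size n, simultaneously for all posteriors \rho,
\mathbb{E}_{h \sim \rho}\, R(h)
  \;\le\; \mathbb{E}_{h \sim \rho}\, \widehat{R}_n(h)
  \;+\; \sqrt{\frac{\mathrm{KL}(\rho \,\|\, \pi) + \ln \frac{2\sqrt{n}}{\delta}}{2n}}.
% The bounds in this work instead control the generalisation gap through a
% Wasserstein distance W(\rho, \pi); the precise form is given in the paper.
```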
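To illustrate point (iii), here is a minimal, hypothetical sketch of an SRM-style objective with a Wasserstein penalty; it is not the paper's algorithm. It learns a diagonal-Gaussian posterior over the weights of a toy linear model and penalises the posterior's 2-Wasserstein distance to a fixed Gaussian prior, using the closed form for Gaussians. The data, the trade-off weight lam, and the prior scale sigma_p are all assumptions made for the sketch.

```python
# Hypothetical sketch of SRM with a Wasserstein penalty (NOT the paper's
# algorithm): learn a diagonal-Gaussian posterior over linear-model weights
# and penalise its 2-Wasserstein distance to a fixed Gaussian prior. For
# diagonal Gaussians, W2^2 = ||mu_q - mu_p||^2 + ||sigma_q - sigma_p||^2.
import torch

torch.manual_seed(0)

# Toy regression data (assumed for illustration only).
X = torch.randn(128, 5)
y = X @ torch.randn(5, 1) + 0.1 * torch.randn(128, 1)

# Posterior parameters over the linear model's weights.
mu_q = torch.zeros(5, 1, requires_grad=True)
rho_q = torch.full((5, 1), -2.0, requires_grad=True)  # sigma_q = softplus(rho_q)

# Fixed Gaussian prior N(mu_p, sigma_p^2 I).
mu_p, sigma_p = torch.zeros(5, 1), 0.5

lam = 0.1  # trade-off between empirical risk and the Wasserstein penalty
opt = torch.optim.Adam([mu_q, rho_q], lr=1e-2)

for step in range(500):
    sigma_q = torch.nn.functional.softplus(rho_q)
    # One-sample reparameterised draw from the posterior.
    w = mu_q + sigma_q * torch.randn_like(sigma_q)
    emp_risk = torch.mean((X @ w - y) ** 2)
    # Closed-form W2^2 between diagonal Gaussians.
    w2_sq = torch.sum((mu_q - mu_p) ** 2) + torch.sum((sigma_q - sigma_p) ** 2)
    loss = emp_risk + lam * torch.sqrt(w2_sq)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Because the Wasserstein term has a closed form for Gaussians, the whole objective is differentiable and can be minimised end-to-end with any stochastic optimiser, which is the sense in which such bounds yield directly optimisable training objectives.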
Submission Number: 69