PAC-Bayes Un-Expected Bernstein Inequality

Abstract

We present a new PAC-Bayesian generalization bound. Standard bounds contain a $\sqrt{L_n \cdot \mathrm{KL}/n}$ complexity term which dominates unless $L_n$, the empirical error of the learning algorithm’s randomized predictions, vanishes. We manage to replace $L_n$ by a term which vanishes in many more situations, essentially whenever the employed learning algorithm is sufficiently stable on the dataset at hand. Our new bound consistently beats state-of-the-art bounds both on a toy example and on UCI datasets (with large enough $n$). Theoretically, unlike existing bounds, our new bound can be expected to converge to 0 faster whenever a Bernstein/Tsybakov condition holds, thus connecting PAC-Bayesian generalization and excess risk bounds; for the latter it has long been known that faster convergence can be obtained under Bernstein conditions. Our main technical tool is a new concentration inequality which is like Bernstein’s but with $X^2$ taken outside its expectation.
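For context, one common relaxation of the standard PAC-Bayes-kl bound of Langford-Seeger / Maurer (stated here only to make the $\sqrt{L_n \cdot \mathrm{KL}/n}$ term concrete; this is not the paper's new result) reads as follows, where $\pi$ is a prior, $\rho$ a posterior, $L(\rho)$ and $L_n(\rho)$ the expected and empirical risks of the randomized predictor, and $\delta$ a confidence level:

$$
L(\rho) \;\le\; L_n(\rho) \;+\; \sqrt{\frac{2\, L_n(\rho)\,\bigl(\mathrm{KL}(\rho \,\|\, \pi) + \ln\tfrac{2\sqrt{n}}{\delta}\bigr)}{n}} \;+\; \frac{2\bigl(\mathrm{KL}(\rho \,\|\, \pi) + \ln\tfrac{2\sqrt{n}}{\delta}\bigr)}{n}, \quad \text{with probability at least } 1-\delta.
$$

The middle, square-root term dominates at the slow rate $\sqrt{\mathrm{KL}/n}$ unless $L_n(\rho)$ is close to zero; the bound in this paper instead multiplies the square-root term by a quantity that can vanish whenever the learning algorithm is sufficiently stable, even when $L_n(\rho)$ does not vanish.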

Publication
Advances in Neural Information Processing Systems 32 (NeurIPS 2019)
Zak Mhammedi