Exponential Stochastic Inequality


We develop the concept of the exponential stochastic inequality (ESI), a novel notation that simultaneously captures high-probability and in-expectation statements. It is especially well suited to succinctly state, prove, and reason about excess-risk and generalization bounds in statistical learning, in particular, though not exclusively, those of the PAC-Bayesian type. We show that the ESI satisfies transitivity and other properties that allow it to be used like a standard, nonstochastic inequality. We substantially extend the original definition of Koolen et al. (2016) and show that general ESIs satisfy a host of useful additional properties, including a novel Markov-like inequality. We show how ESIs relate to, and clarify, PAC-Bayesian bounds, sub-centered sub-gamma random variables, and fast-rate conditions such as the central and Bernstein conditions. We also show how the ideas can be extended to random scaling factors (learning rates).
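To make the abstract concrete, here is a sketch of the core notation as defined in Koolen et al. (2016); the symbol $\trianglelefteq_\eta$ and the exact form are taken from that line of work, and the two consequences follow from Jensen's and Markov's inequalities:

```latex
% ESI: for random variables X, Y and a scaling factor (learning rate) eta > 0,
%   X \trianglelefteq_\eta Y  iff  E[ exp( eta (X - Y) ) ] <= 1.
X \trianglelefteq_\eta Y
  \quad :\Longleftrightarrow \quad
  \mathbb{E}\bigl[\, e^{\eta (X - Y)} \,\bigr] \;\le\; 1 .

% A single ESI statement yields both flavors of bound:
% (i)  in expectation, by Jensen's inequality:
\mathbb{E}[X] \;\le\; \mathbb{E}[Y] ;
% (ii) with high probability, by Markov's inequality applied to e^{\eta(X-Y)}:
\Pr\bigl( X \ge Y + \varepsilon \bigr) \;\le\; e^{-\eta \varepsilon}
  \quad \text{for all } \varepsilon > 0 .
```

This is why the abstract says the notation "simultaneously captures high-probability and in-expectation statements": one inequality in the ESI sense implies both.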

arXiv preprint arXiv:2304.14217
Zak Mhammedi