Abstract
We present an argument based on the multidimensional and uniform central limit theorems, proving that, under some geometric assumptions on the target function $T$ relative to the learning class $F$, the excess risk of the empirical risk minimization algorithm is lower bounded by $$\frac{\mathbb{E}\sup_{q\in Q}G_{q}}{\sqrt{n}}\delta,$$ where $(G_q)_{q\in Q}$ is the canonical Gaussian process associated with $Q$ (a well-chosen subset of $F$) and $\delta$ is a parameter governing the oscillations of the empirical excess risk function over a small ball in $F$.
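The quantity $\mathbb{E}\sup_{q\in Q}G_q$ appearing in the lower bound is the Gaussian complexity of the indexing set $Q$: the canonical Gaussian process satisfies $\mathrm{Cov}(G_q, G_{q'}) = \langle q, q'\rangle$, so for a finite $Q$ it can be realized as $G_q = \langle g, q\rangle$ with $g$ a standard Gaussian vector. The following is a minimal Monte Carlo sketch of this quantity for a hypothetical finite class (the class `Q`, its size, and the dimension are illustrative assumptions, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical finite class Q: each row represents a function q,
# identified with its coordinate vector and normalized to unit L2 norm,
# so each G_q = <g, q> is standard Gaussian and Cov(G_q, G_q') = <q, q'>.
d = 50   # ambient dimension (assumed for illustration)
m = 20   # number of functions in Q (assumed for illustration)
Q = rng.standard_normal((m, d))
Q /= np.linalg.norm(Q, axis=1, keepdims=True)

def gaussian_complexity(Q, n_mc=5000, rng=rng):
    """Monte Carlo estimate of E sup_{q in Q} G_q for the canonical
    Gaussian process G_q = <g, q>, g ~ N(0, I)."""
    g = rng.standard_normal((n_mc, Q.shape[1]))   # n_mc draws of g
    return (g @ Q.T).max(axis=1).mean()           # average of sup over Q

est = gaussian_complexity(Q)
print(est)
```

For unit-norm vectors the estimate is controlled by the classical bound $\mathbb{E}\sup_{q\in Q}G_q \le \sqrt{2\ln|Q|}$, which offers a quick sanity check on the output.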
Citation
Guillaume Lecué and Shahar Mendelson. "Sharper lower bounds on the performance of the empirical risk minimization algorithm." Bernoulli 16(3), 605–613, August 2010. https://doi.org/10.3150/09-BEJ225