Tag Archives: Probability

Kullback's version of the minimax lower bound with two hypotheses

Photos of (left to right) Solomon Kullback, Richard A. Leibler and Lucien Le Cam. Sources: NSA Cryptologic Hall of Honor (1, 2) and MacTutor. Last time we saw how to find lower bounds using the total variation distance. Even so, conditions involving the Kullback-Leibler divergence are easier to verify than those involving the total variation distance, and …
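For context, the two notions are linked by Pinsker's inequality (a standard fact added here for reference, not part of the excerpt): for probability measures $P$ and $Q$,

$$\mathrm{TV}(P,Q)\leq\sqrt{\tfrac{1}{2}\,\mathrm{KL}(P\Vert Q)},$$

so an upper bound on the Kullback-Leibler divergence between two hypotheses immediately yields an upper bound on their total variation distance, which is exactly what the two-hypothesis lower bound method needs.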

A general reduction scheme for minimax lower bounds

In the last post, we defined a minimax lower bound as $\mathcal{R}^{*}\geq cs_{n}$, where $\mathcal{R}^{*}\triangleq\inf_{\hat{f}_{n}}\sup_{f\in\mathcal{F}}\mathbb{E}\left[d^{2}(\hat{f}_{n},f)\right]$ and $s_{n}\rightarrow0$. The difficulty with this definition is that it requires taking the supremum over a massive set $\mathcal{F}$ and then the infimum over all possible estimators of $f$.
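To see why a reduction helps (a standard two-point sketch, not the post's exact statement): pick $f_{0},f_{1}\in\mathcal{F}$ with $d(f_{0},f_{1})\geq 2s_{n}^{1/2}$. Restricting the supremum to these two hypotheses and applying Markov's inequality together with the triangle inequality gives

$$\mathcal{R}^{*}\geq\inf_{\hat{f}_{n}}\max_{j\in\{0,1\}}\mathbb{E}_{f_{j}}\left[d^{2}(\hat{f}_{n},f_{j})\right]\geq s_{n}\,\inf_{\psi}\max_{j\in\{0,1\}}\mathbb{P}_{f_{j}}\left(\psi\neq j\right),$$

where the last infimum runs over all tests $\psi$ taking values in $\{0,1\}$. The intractable supremum over $\mathcal{F}$ is replaced by a maximum over two hypotheses, and the problem reduces to lower bounding the error of the best test.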

The probability versions of the big-O and little-o notations

We introduce here some notation that is very useful in probability and statistics. Definition 1. For a given sequence of random variables $R_{n}$: $(i)$ $X_{n}=o_{P}(R_{n})$ means $X_{n}=Y_{n}R_{n}$ with $Y_{n}$ converging to $0$ in probability; $(ii)$ $X_{n}=O_{P}(R_{n})$ means $X_{n}=Y_{n}R_{n}$ with the family $(Y_{n})_{n}$ uniformly tight.
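A quick illustration of how this notation works in practice (a classical example, added for concreteness): if $X_{1},\dots,X_{n}$ are i.i.d. with mean $\mu$ and finite variance, the central limit theorem shows that $\sqrt{n}(\bar{X}_{n}-\mu)$ converges in distribution and is therefore uniformly tight, so

$$\bar{X}_{n}-\mu=O_{P}(n^{-1/2}),$$

while the law of large numbers only gives the weaker statement $\bar{X}_{n}-\mu=o_{P}(1)$.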