Introduction Cross-validation is a common technique to calibrate the bin width of a histogram. Histograms are the simplest and, sometimes, the most effective tool to describe the density of a dataset.
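The calibration idea can be sketched in code. The sketch below is my own illustration, assuming Rudemo's closed-form leave-one-out risk estimate as the cross-validation criterion and equal-width bins; the function names are hypothetical, not from the post.

```python
import numpy as np

def cv_risk(data, n_bins):
    """Leave-one-out cross-validation risk estimate for a histogram
    with n_bins equal-width bins (Rudemo's closed-form criterion)."""
    n = len(data)
    counts, edges = np.histogram(data, bins=n_bins)
    h = edges[1] - edges[0]        # bin width
    p_hat = counts / n             # empirical bin probabilities
    # Closed-form LOO estimate of the integrated squared error (up to a constant)
    return 2.0 / ((n - 1) * h) - (n + 1) / ((n - 1) * h) * np.sum(p_hat ** 2)

rng = np.random.default_rng(0)
data = rng.normal(size=500)

# Choose the number of bins (hence the bin width) minimizing the CV risk
candidates = range(1, 50)
best = min(candidates, key=lambda b: cv_risk(data, b))
```

Minimizing `cv_risk` over a grid of candidate bin counts gives a data-driven bin width without assuming anything about the underlying density.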

The other day I was bounding some inequalities for my thesis. I knew that some functions were Hölder, but I did not have their explicit derivatives. I spent an hour or so wondering whether I could bound my expression correctly without having the derivatives. The answer is yes, and I will show how. Hölder …

We are now going to apply our version of Kullback’s theorem based on two hypotheses to the non-parametric regression model. Assume first the following conditions:

Summary of the “Journées de Statistiques 2012” in Brussels. Related to: JdS 2012: Efficient estimation of conditional covariance matrices for dimension reduction

The mathematics behind deblurring images http://yuzhikov.com/articles/BlurredImagesRestoration1.htm

Photos of (left to right) Solomon Kullback, Richard A. Leibler and Lucien Le Cam. Sources: NSA Cryptologic Hall of Honor (1, 2) and MacTutor. We saw last time how to find lower bounds using the total variation divergence. Even so, conditions with the Kullback–Leibler divergence are easier to verify than those with the total variation divergence and …

Remember that we have supposed two hypotheses $latex {\left\{ f_{0},f_{1}\right\} }&fg=000000$, elements of $latex {\mathcal{F}}&fg=000000$. Denote by $latex {P_{0}}&fg=000000$ and $latex {P_{1}}&fg=000000$ the two probability measures on $latex {(\mathcal{X},\mathcal{A})}&fg=000000$ induced by $latex {f_{0}}&fg=000000$ and $latex {f_{1}}&fg=000000$ respectively. If $latex {P_{0}}&fg=000000$ and $latex {P_{1}}&fg=000000$ are very “close”, then it is hard to distinguish $latex {f_{0}}&fg=000000$ from $latex {f_{1}}&fg=000000$ and …

Photos of Johann Radon and Otto Nikodym. Sources: Apprendre les Mathématiques and Wikipedia. Consider the simplest case, $latex {M=1}&fg=000000$, with two hypotheses $latex {\{f_{1},f_{2}\}}&fg=000000$ belonging to $latex {\mathcal{F}}&fg=000000$. According to the last post, we only need to find lower bounds for the minimax probability of error $latex {p_{e,1}}&fg=000000$. Today, we will find a bound using …

In the last publication, we defined a minimax lower bound as $latex \displaystyle \mathcal{R}^{*}\geq cs_{n} &fg=000000$ where $latex {\mathcal{R}^{*}\triangleq\inf_{\hat{f}}\sup_{f\in\mathcal{F}}\mathbb E\left[d^{2}(\hat{f}_{n},f)\right]}&fg=000000$ and $latex {s_{n}\rightarrow0}&fg=000000$. The big issue with this definition is that we must take the supremum over a massive set $latex {\mathcal{F}}&fg=000000$ and then the infimum over all possible estimators of $latex {f}&fg=000000$.
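One standard way around this difficulty is the reduction to two hypotheses; here is a sketch, stated informally in the notation above. Pick $latex {f_{0},f_{1}\in\mathcal{F}}&fg=000000$ with $latex {d(f_{0},f_{1})\geq2\sqrt{s_{n}}}&fg=000000$. Then, by Markov’s inequality and the triangle inequality,

$latex \displaystyle \mathcal{R}^{*}\geq\inf_{\hat{f}}\max_{j\in\{0,1\}}\mathbb{E}_{j}\left[d^{2}(\hat{f}_{n},f_{j})\right]\geq s_{n}\inf_{\hat{f}}\max_{j\in\{0,1\}}P_{j}\left(d(\hat{f}_{n},f_{j})\geq\sqrt{s_{n}}\right)\geq s_{n}\,p_{e,1}, &fg=000000$

so lower-bounding the probability of error $latex {p_{e,1}}&fg=000000$ of a test between just two well-separated hypotheses already yields a minimax lower bound, with no supremum over all of $latex {\mathcal{F}}&fg=000000$.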

In my most recent research, I’m working on finding “Minimax Lower Bounds” for some kinds of estimators. Therefore, to learn a little more and get my ideas clear, I’m going to start a series of posts about the topic. I intend to review the general method and introduce some bounds depending on …