## Paper’s review: Zhu & Fang, 1996. Asymptotics for kernel estimate of sliced inverse regression.

For $latex { Y\in {\mathbb R} }&fg=000000$ and $latex { \mathbf{X} \in {\mathbb R}^{p} }&fg=000000$, consider the regression problem $latex \displaystyle Y = f(\mathbf{X}) + \varepsilon. &fg=000000$ When $latex { p }&fg=000000$ is large relative to the data available, the well-known curse of dimensionality arises. Richard E. Bellman …
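The curse of dimensionality can be glimpsed with a quick back-of-the-envelope computation (a minimal sketch of my own, not from the paper): in the unit cube in $latex { {\mathbb R}^{p} }&fg=000000$, a cubic neighborhood of side $latex { h }&fg=000000$ captures only a fraction $latex { h^{p} }&fg=000000$ of uniformly scattered observations, so the local averaging behind a kernel estimate starves for data as $latex { p }&fg=000000$ grows.

```python
# Fraction of a unit cube in R^p covered by a cubic neighborhood of
# side h = 0.5: it is h**p, which collapses geometrically with p.
h = 0.5
for p in (1, 2, 10, 50):
    print(p, h ** p)
```

With $latex { h = 0.5 }&fg=000000$, already at $latex { p = 10 }&fg=000000$ the neighborhood holds under 0.1% of the data, so far larger samples are needed to keep local estimates stable.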

## The return

Photo from Paolo Dala. Hola, Hello, Bonjour! It’s good to return to the blogging stream. I’ve been somewhat disconnected these last months: a little distracted, with a drought of ideas, and a little unmotivated… I think that happens to all of us. At least it didn’t happen to me like it did to poor Chuck….

## Project Euler: Problem #1 – Multiples of 3 and 5

Last Saturday I went to a Python workshop organized by Toulibre in Toulouse. The experience was so great that I decided to start Project Euler to learn this amazing language. I will try to publish one problem each Monday. To make the challenge a little more interesting, I am going to …
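For reference, a minimal Python sketch of Problem 1 (find the sum of all multiples of 3 or 5 below 1000); the function name `sum_multiples` is mine, not Project Euler's:

```python
def sum_multiples(limit=1000):
    """Sum of all natural numbers below `limit` divisible by 3 or 5."""
    return sum(n for n in range(limit) if n % 3 == 0 or n % 5 == 0)

print(sum_multiples())  # 233168
```

The generator expression keeps it a one-liner; a closed form via the arithmetic-series sum for multiples of 3, 5, and 15 avoids the loop entirely.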

## Example with two hypotheses: Regression case

We are now going to apply our version of Kullback’s theorem based on two hypotheses to the non-parametric regression model. Assume first the following conditions:

## Summary JdS 2012

Summary of the “Journées de Statistiques 2012” in Bruxelles. Related to: JdS 2012: Efficient estimation of conditional covariance matrices for dimension reduction

## How to keep yourself easily updated in statistics (or in any subject)

Keeping yourself updated on new theories and discoveries in the scientific world is crucial. Talking with a friend from university, we agreed that reading recent articles in our areas of interest is comparable to reading the local newspaper. Sometimes I am a little forgetful and may fail to check all the new advances in statistics very often. …

## The mathematics behind deblurring images

http://yuzhikov.com/articles/BlurredImagesRestoration1.htm

## An illustrative explanation of manifolds

Today I was reading about manifolds and I found this illustrative image about them. “[…] To find global information, the being would have to walk around both surfaces and be very careful to check angles and distances. If the being were nearsighted and could not check distances and angles, then its examination of the local vicinity, or …

## Kullback’s version of the minimax lower bound with two hypotheses

Photos of (left to right) Solomon Kullback, Richard A. Leibler and Lucien Le Cam. Sources: NSA Cryptologic Hall of Honor (1, 2) and MacTutor. We saw last time how to find lower bounds using the total variation divergence. Even so, conditions based on the Kullback-Leibler divergence are easier to verify than those based on total variation, and …
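A sketch of why the Kullback-Leibler route helps, under the usual conventions (total variation taken as half the $latex { L_{1} }&fg=000000$ distance): Pinsker’s inequality bounds the total variation divergence by the Kullback-Leibler divergence,

$latex \displaystyle V(P_{0},P_{1}) \le \sqrt{\tfrac{1}{2}K(P_{0},P_{1})}, &fg=000000$

so any lower bound stated in terms of $latex { V }&fg=000000$ immediately yields one in terms of $latex { K }&fg=000000$, which is typically much easier to compute — for instance, $latex { K }&fg=000000$ tensorizes over product measures while $latex { V }&fg=000000$ does not.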

## Minimax Lower Bounds using the Total Variation Divergence

Remember that we have supposed two hypotheses $latex {\left\{ f_{0},f_{1}\right\} }&fg=000000$, elements of $latex {\mathcal{F}}&fg=000000$. Denote by $latex {P_{0}}&fg=000000$ and $latex {P_{1}}&fg=000000$ two probability measures on $latex {(\mathcal{X},\mathcal{A})}&fg=000000$ under $latex {f_{0}}&fg=000000$ and $latex {f_{1}}&fg=000000$ respectively. If $latex {P_{0}}&fg=000000$ and $latex {P_{1}}&fg=000000$ are very “close”, then it is hard to distinguish $latex {f_{0}}&fg=000000$ from $latex {f_{1}}&fg=000000$ and …
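In this two-point setting, the standard bound (a sketch under the usual conventions, stated for the testing error) is Le Cam’s inequality:

$latex \displaystyle \inf_{\hat{f}} \max_{j\in\{0,1\}} P_{j}\left(\hat{f}\neq f_{j}\right) \ge \frac{1-V(P_{0},P_{1})}{2}, &fg=000000$

where $latex { V }&fg=000000$ denotes the total variation divergence. The closer $latex { P_{0} }&fg=000000$ and $latex { P_{1} }&fg=000000$ are, the larger the right-hand side, formalizing the intuition that nearby measures make the two hypotheses hard to tell apart.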