Paper review: Zhu & Fang (1996), Asymptotics for kernel estimate of sliced inverse regression.

It is well known that, for $latex { Y\in {\mathbb R} }&fg=000000$ and $latex { \mathbf{X} \in {\mathbb R}^{p} }&fg=000000$, in the regression problem $latex \displaystyle Y = f(\mathbf{X}) + \varepsilon, &fg=000000$ the curse of dimensionality arises whenever $latex { p }&fg=000000$ is large relative to the amount of data available. Richard E. Bellman …
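For context, sliced inverse regression is usually formulated under the dimension-reduction model of Li (1991),

$latex \displaystyle Y = f\left(\beta_{1}^{\top}\mathbf{X},\ldots,\beta_{K}^{\top}\mathbf{X},\varepsilon\right), &fg=000000$

with $latex { K\ll p }&fg=000000$ unknown directions $latex { \beta_{1},\ldots,\beta_{K}\in{\mathbb R}^{p} }&fg=000000$, so that $latex { Y }&fg=000000$ depends on $latex { \mathbf{X} }&fg=000000$ only through a few linear projections and the curse of dimensionality can be sidestepped.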

How to keep yourself easily updated in statistics (or in any subject)

Keeping up with new theories and discoveries in the scientific world is crucial. Talking with a friend from university, we agreed that reading recent articles in our areas of interest is comparable to reading the local newspaper. Sometimes I am a little forgetful and can go too long without checking the latest advances in statistics. …

Kullback’s version of the minimax lower bound with two hypotheses

Photos of (left to right) Solomon Kullback, Richard A. Leibler and Lucien Le Cam. Sources: NSA Cryptologic Hall of Honor (1, 2) and MacTutor. Last time we saw how to find lower bounds using the total variation divergence. Even so, conditions based on the Kullback-Leibler divergence are easier to verify than those based on the total variation divergence, and …
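For reference, with $latex { V(P_{0},P_{1}) }&fg=000000$ denoting the total variation divergence, the Kullback-Leibler divergence of $latex { P_{0} }&fg=000000$ with respect to $latex { P_{1} }&fg=000000$ (assuming $latex { P_{0}\ll P_{1} }&fg=000000$) and Pinsker’s inequality read

$latex \displaystyle K(P_{0},P_{1}) = \int \log\frac{dP_{0}}{dP_{1}}\,dP_{0}, \qquad V(P_{0},P_{1}) \le \sqrt{\tfrac{1}{2}K(P_{0},P_{1})}, &fg=000000$

the second relation being what allows a lower bound stated in terms of the total variation divergence to be verified through the Kullback-Leibler divergence instead.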

Minimax Lower Bounds using the Total Variation Divergence

Remember that we have assumed two hypotheses $latex {\left\{ f_{0},f_{1}\right\} }&fg=000000$, elements of $latex {\mathcal{F}}&fg=000000$. Denote by $latex {P_{0}}&fg=000000$ and $latex {P_{1}}&fg=000000$ the two probability measures on $latex {(\mathcal{X},\mathcal{A})}&fg=000000$ associated with $latex {f_{0}}&fg=000000$ and $latex {f_{1}}&fg=000000$, respectively. If $latex {P_{0}}&fg=000000$ and $latex {P_{1}}&fg=000000$ are very “close”, then it is hard to distinguish $latex {f_{0}}&fg=000000$ from $latex {f_{1}}&fg=000000$ and …
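This intuition can be quantified by the standard two-point argument: for any test $latex { \psi }&fg=000000$ taking values in $latex { \{0,1\} }&fg=000000$,

$latex \displaystyle \max_{j\in\{0,1\}} P_{j}\left(\psi\neq j\right) \;\ge\; \frac{1-V(P_{0},P_{1})}{2}, \qquad V(P_{0},P_{1}) = \sup_{A\in\mathcal{A}}\left|P_{0}(A)-P_{1}(A)\right|, &fg=000000$

so when the total variation divergence $latex { V(P_{0},P_{1}) }&fg=000000$ is small, every test errs with probability close to $latex { 1/2 }&fg=000000$ under at least one of the two hypotheses.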