
Paper review: Zhu & Fang, 1996. Asymptotics for kernel estimate of sliced inverse regression.

It is well known that for $Y \in \mathbb{R}$ and $\mathbf{X} \in \mathbb{R}^{p}$, the regression problem

$$Y = f(\mathbf{X}) + \varepsilon$$

suffers from the curse of dimensionality when $p$ is large relative to the available data. Richard E. Bellman …
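To make the dimensionality issue concrete, here is one standard way to quantify it (sketched under a Hölder-smoothness assumption on $f$; this is general background, not taken from Zhu & Fang's paper). The minimax risk for estimating a $\beta$-smooth regression function on $[0,1]^p$ from $n$ observations scales as

$$\inf_{\hat f} \sup_{f \in \Sigma(\beta, L)} \mathbb{E}\,\| \hat f - f \|_2^2 \asymp n^{-\frac{2\beta}{2\beta + p}},$$

so reaching squared error $\epsilon^2$ requires $n \asymp \epsilon^{-(2\beta + p)/\beta}$ samples: for fixed smoothness, the sample size needed for a given accuracy grows geometrically in the dimension $p$. This is one formalization of Bellman's curse of dimensionality.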

How to keep yourself easily updated in statistics (or in any subject)

Keeping yourself updated on new theories and discoveries in the scientific world is crucial. Talking with a friend from university, we agreed that reading recent articles in our areas of interest is comparable to reading the local newspaper. Sometimes I am a little forgetful, though, and I might go too long without checking the latest advances in statistics. …

The Kullback version of the minimax lower bound with two hypotheses

Photos (left to right) of Solomon Kullback, Richard A. Leibler and Lucien Le Cam. Sources: NSA Cryptologic Hall of Honor and MacTutor. We saw last time how to find lower bounds using the total variation divergence. Even so, conditions involving the Kullback-Leibler divergence are easier to verify than those involving the total variation, and …
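For reference, here is a standard statement of the Kullback-Leibler variant of the two-hypothesis bound (in Tsybakov's form, sketched as background rather than quoted from the post, with $K(\cdot,\cdot)$ denoting the Kullback-Leibler divergence). If $P_0$ and $P_1$ satisfy $K(P_1, P_0) \le \alpha < \infty$, then every test $\psi$ of the two hypotheses obeys

$$\inf_{\psi} \max_{j \in \{0,1\}} P_j(\psi \neq j) \;\ge\; \max\!\left( \frac{1}{4}\, e^{-\alpha},\; \frac{1 - \sqrt{\alpha/2}}{2} \right).$$

The second term follows from combining the total-variation bound of the previous post with Pinsker's inequality, $V(P_0, P_1) \le \sqrt{K(P_0, P_1)/2}$. This conversion is one reason KL conditions are often easier to check: the Kullback-Leibler divergence is additive over independent samples, while the total variation is not.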