## Regularization on high dimensional covariance matrices

**Motivation.** I remember a lecture about linear models at the University of Costa Rica. Before presenting a classic method, the professor said: "This only works if you have more data than variables." At that moment it seemed very reasonable, and I could not imagine any real case with more variables ($p$) than observations ($n$).
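To see why the professor's warning matters for covariance estimation in particular, here is a minimal numerical sketch (my own illustration, not from any paper): when $p > n$, the sample covariance matrix of $n$ observations has rank at most $n - 1$, so it is singular and cannot be inverted without some form of regularization.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 20, 50  # fewer observations than variables: p > n
X = rng.standard_normal((n, p))

# p x p sample covariance matrix estimated from only n observations
S = np.cov(X, rowvar=False)
rank = np.linalg.matrix_rank(S)

print(S.shape)  # (50, 50)
print(rank)     # at most n - 1 = 19 < p, so S is singular
```

Since `S` is singular, any classic procedure that needs its inverse (e.g. computing a precision matrix or Mahalanobis distances) breaks down, which is exactly the regime where regularized covariance estimators come in.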

## Paper’s review: Zhu & Fang, 1996. Asymptotics for kernel estimate of sliced inverse regression.

It is well known that for $latex { Y\in {\mathbb R} }&fg=000000$ and $latex { \mathbf{X} \in {\mathbb R}^{p} }&fg=000000$, in the regression problem $latex \displaystyle Y = f(\mathbf{X}) + \varepsilon, &fg=000000$ the curse of dimensionality arises when $latex { p }&fg=000000$ is large relative to the available data. Richard E. Bellman […]
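One way to see the curse of dimensionality concretely is through distance concentration: as $p$ grows, the distances between random points become nearly indistinguishable, so local smoothing methods (such as the kernel estimates discussed in the paper) have essentially no "near" neighbors to average over. A small numerical sketch of this effect (my own illustration, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
ratios = {}
for p in (2, 10, 100, 1000):
    # n uniform points in the unit cube [0, 1]^p
    X = rng.random((n, p))
    # distances from the first point to all the others
    d = np.linalg.norm(X[0] - X[1:], axis=1)
    # relative spread: how much farther the farthest point is than the nearest
    ratios[p] = (d.max() - d.min()) / d.min()
    print(p, round(ratios[p], 3))
```

The ratio shrinks rapidly as $p$ increases: in high dimensions the nearest and farthest points are almost equally far away, which is why dimension-reduction methods such as sliced inverse regression are attractive.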