Tag Archives: Sliced Inverse Regression

Paper review: Zhu & Fang (1996), Asymptotics for kernel estimate of sliced inverse regression.

For $latex { Y\in {\mathbb R} }&fg=000000$ and $latex { \mathbf{X} \in {\mathbb R}^{p} }&fg=000000$, consider the regression problem $latex \displaystyle Y = f(\mathbf{X}) + \varepsilon. &fg=000000$ When $latex { p }&fg=000000$ is large relative to the amount of data available, it is well known that the curse of dimensionality arises. Richard E. Bellman …
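To give a concrete feel for how sliced inverse regression (Li, 1991) sidesteps the curse of dimensionality by estimating a low-dimensional projection of $latex { \mathbf{X} }&fg=000000$, here is a minimal sketch of the classical (non-kernel) SIR estimator in Python. The function name and parameters are my own illustrative choices, not code from the paper under review:

```python
import numpy as np

def sir_directions(X, y, n_slices=10, n_directions=1):
    """Minimal sketch of Sliced Inverse Regression (Li, 1991).

    Estimates effective dimension-reduction directions by
    eigendecomposing the covariance of slice means of X.
    """
    n, p = X.shape
    # Standardize X: Z = (X - mean) Sigma^{-1/2}
    Xc = X - X.mean(axis=0)
    Sigma = Xc.T @ Xc / n
    vals, vecs = np.linalg.eigh(Sigma)
    inv_sqrt = vecs @ np.diag(vals ** -0.5) @ vecs.T
    Z = Xc @ inv_sqrt
    # Slice the sample according to the order statistics of y
    order = np.argsort(y)
    slices = np.array_split(order, n_slices)
    # Weighted covariance of the within-slice means of Z
    M = np.zeros((p, p))
    for idx in slices:
        m = Z[idx].mean(axis=0)
        M += (len(idx) / n) * np.outer(m, m)
    # Leading eigenvectors of M, mapped back to the original scale
    _, evecs = np.linalg.eigh(M)
    B = inv_sqrt @ evecs[:, ::-1][:, :n_directions]
    return B / np.linalg.norm(B, axis=0)
```

Under Li's design condition (e.g. elliptically distributed predictors), the columns of the returned matrix span an estimate of the central subspace, so $latex { f }&fg=000000$ only needs to be estimated on a space of much smaller dimension than $latex { p }&fg=000000$.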

JdS 2012: Efficient estimation of conditional covariance matrices for dimension reduction

In the framework of the Journées de Statistique 2012 in Bruxelles, I presented the paper “Efficient estimation of conditional covariance matrices”, written under the direction of Jean-Michel Loubes and Clement Marteau. You can check the program and the slides of the presentation. Today I will present some ideas about the problem studied and the solution we found …