## Review of the paper: Zhu & Fang (1996), Asymptotics for kernel estimate of sliced inverse regression.

For $latex { Y\in {\mathbb R} }&fg=000000$ and $latex { X \in {\mathbb R}^{p} }&fg=000000$, consider the regression problem $latex \displaystyle Y = f(\mathbf{X}) + \varepsilon. &fg=000000$ When $latex { p }&fg=000000$ is large relative to the available data, the well-known curse of dimensionality arises. Richard E. Bellman …

## The Delta method: Main Result

Let $latex {T_{n}}&fg=000000$ be an estimator of $latex {\theta}&fg=000000$, and suppose we want to estimate the parameter $latex {\phi(\theta)}&fg=000000$, where $latex {\phi}&fg=000000$ is a known function. It is natural to estimate $latex {\phi(\theta)}&fg=000000$ by $latex {\phi(T_{n})}&fg=000000$. We can then ask: how can the asymptotic properties of $latex {T_{n}}&fg=000000$ be transferred to $latex {\phi(T_{n})}&fg=000000$?
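A quick numerical sketch of what the delta method predicts (my own example, not from the post): take $latex {T_{n}}&fg=000000$ to be the sample mean of Exp(1) data, so $latex {\theta = 1}&fg=000000$, and take $latex {\phi(x) = x^{2}}&fg=000000$. The delta method says $latex {\sqrt{n}(\phi(T_{n}) - \phi(\theta))}&fg=000000$ is asymptotically normal with variance $latex {\phi'(\theta)^{2}\sigma^{2} = 4}&fg=000000$ here.

```python
import numpy as np

# Simulation sketch of the delta method (assumed setup: iid Exp(1) data,
# theta = 1, phi(x) = x**2; none of these choices come from the post).
rng = np.random.default_rng(0)
n, reps = 2000, 5000

# reps independent copies of the estimator T_n (sample means of n draws)
T_n = rng.exponential(1.0, size=(reps, n)).mean(axis=1)

# sqrt(n) * (phi(T_n) - phi(theta)); its variance should approach
# phi'(theta)^2 * Var(X) = 2^2 * 1 = 4
scaled = np.sqrt(n) * (T_n**2 - 1.0)

print(scaled.var())  # should be close to 4
```

The empirical variance of the rescaled quantity lands near 4, matching the first-order Taylor expansion $latex {\phi(T_{n}) \approx \phi(\theta) + \phi'(\theta)(T_{n}-\theta)}&fg=000000$ that drives the result.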

## Weak Law of Large Numbers and Central Limit Theorem via Lévy’s continuity theorem

Lévy’s continuity theorem is a very important tool in the statistical machinery. For example, it yields simple proofs of two classical results: the Law of Large Numbers and the Central Limit Theorem.
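Both conclusions are easy to see numerically. In this sketch (my example; the distribution and constants are assumptions, not from the post) the data are iid Uniform(0,1), so the WLLN says the sample mean tends to $latex {1/2}&fg=000000$ and the CLT says $latex {\sqrt{n}(\bar{X}_{n} - 1/2)}&fg=000000$ is approximately $latex {N(0, 1/12)}&fg=000000$.

```python
import numpy as np

# WLLN / CLT sanity check on iid Uniform(0,1) data (assumed example).
rng = np.random.default_rng(1)
n, reps = 5000, 4000
X = rng.uniform(size=(reps, n))
means = X.mean(axis=1)

# WLLN: the sample means concentrate around the true mean 1/2
print(abs(means.mean() - 0.5))

# CLT: the rescaled means have variance close to Var(U) = 1/12 ≈ 0.0833
z = np.sqrt(n) * (means - 0.5)
print(z.var())
```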

## Characteristic functions and Lévy’s continuity theorem

Photo of Paul Lévy. Source: MacTutor and Ra-bird. Using $latex {(ii)}&fg=000000$ of the Portmanteau lemma, it is possible to establish convergence in distribution for a sequence of random vectors via a “transformation”. The most important such transform is the characteristic function
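The characteristic function is easy to approximate from data. A small sketch (my own example; the empirical-characteristic-function estimator and the choice of $latex {N(0,1)}&fg=000000$ data are assumptions, not from the post): for standard normal data, $latex {\hat{\varphi}(t) = n^{-1}\sum_{j} e^{itX_{j}}}&fg=000000$ should be uniformly close to the true characteristic function $latex {e^{-t^{2}/2}}&fg=000000$.

```python
import numpy as np

# Empirical characteristic function of N(0,1) data vs. the exact one.
rng = np.random.default_rng(2)
X = rng.standard_normal(100_000)

t = np.linspace(-2.0, 2.0, 9)                     # a few evaluation points
phi_hat = np.exp(1j * np.outer(t, X)).mean(axis=1)  # (1/n) sum exp(i t X_j)
phi_true = np.exp(-t**2 / 2)                        # char. function of N(0,1)

err = np.max(np.abs(phi_hat - phi_true))
print(err)  # small, of order 1/sqrt(n)
```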

## The probability versions for the Big-O and little-o notations

We introduce here some notation that is very useful in probability and statistics.

Definition 1. For a given sequence of random variables $latex {R_{n}}&fg=000000$,

$latex {(i)}&fg=000000$ $latex {X_{n}=o_{P}(R_{n})}&fg=000000$ means $latex {X_{n}=Y_{n}R_{n}}&fg=000000$ with $latex {Y_{n}}&fg=000000$ converging to $latex 0&fg=000000$ in probability;

$latex {(ii)}&fg=000000$ $latex {X_{n}=O_{P}(R_{n})}&fg=000000$ means $latex {X_{n}=Y_{n}R_{n}}&fg=000000$ with the family $latex {(Y_{n})_{n}}&fg=000000$ uniformly tight.
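The canonical illustration (my example, not from the post): for iid data with mean $latex {\mu}&fg=000000$ and finite variance, $latex {\bar{X}_{n}-\mu = O_{P}(1/\sqrt{n})}&fg=000000$, i.e. $latex {Y_{n} = \sqrt{n}(\bar{X}_{n}-\mu)}&fg=000000$ is uniformly tight: $latex {P(|Y_{n}|>M)}&fg=000000$ stays small for all $latex {n}&fg=000000$ once $latex {M}&fg=000000$ is large.

```python
import numpy as np

# Tightness sketch: sqrt(n)(X_bar - mu) for iid Exp(1) data (mu = 1).
# For each n, estimate P(|Y_n| > 3); it should stay small across all n,
# which is what "uniformly tight" (i.e. O_P(1)) means in practice.
rng = np.random.default_rng(3)
reps = 1000
fracs = []
for n in (100, 1000, 5000):
    means = rng.exponential(1.0, size=(reps, n)).mean(axis=1)
    Y_n = np.sqrt(n) * (means - 1.0)
    fracs.append((np.abs(Y_n) > 3.0).mean())

print(fracs)  # all entries small, uniformly in n
```

By contrast, $latex {\bar{X}_{n}-\mu}&fg=000000$ itself is $latex {o_{P}(1)}&fg=000000$: without the $latex {\sqrt{n}}&fg=000000$ rescaling the same quantity shrinks to zero in probability.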

## Distribution function, weak convergence and the Portmanteau lemma.

1. Preliminaries. Given a random variable $latex {X}&fg=000000$, we define the cumulative distribution function (or distribution function) as follows,
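The function being defined is $latex {F(x) = P(X \leq x)}&fg=000000$. A quick empirical companion (my own sketch; the Uniform(0,1) example is an assumption, not from the post): the empirical distribution function $latex {F_{n}(x) = n^{-1}\#\{X_{i}\leq x\}}&fg=000000$ of an iid sample is uniformly close to the true $latex {F}&fg=000000$, which for Uniform(0,1) is just $latex {F(x)=x}&fg=000000$.

```python
import numpy as np

# ECDF vs. true CDF for Uniform(0,1) data (Glivenko-Cantelli flavor).
rng = np.random.default_rng(4)
n = 50_000
X = np.sort(rng.uniform(size=n))

# The ECDF evaluated at the sorted sample points is simply i/n.
F_n = np.arange(1, n + 1) / n

# Gap between ECDF and the true CDF F(x) = x at the jump points.
sup_gap = np.max(np.abs(F_n - X))
print(sup_gap)  # small, of order 1/sqrt(n)
```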