For $latex { Y\in {\mathbb R} }&fg=000000$ and $latex { \mathbf{X} \in {\mathbb R}^{p} }&fg=000000$, consider the regression problem $latex \displaystyle Y = f(\mathbf{X}) + \varepsilon. &fg=000000$ It is well known that when $latex { p }&fg=000000$ is large relative to the available data, the curse of dimensionality arises. Richard E. Bellman …
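
As a quick numerical illustration of the phenomenon (a minimal sketch assuming numpy, not part of the original post): when points are sampled uniformly in $latex {[0,1]^{p}}&fg=000000$, pairwise distances concentrate as $latex {p}&fg=000000$ grows, so the notion of a “nearest” point loses meaning.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100  # sample size is held fixed while the dimension p grows

for p in (1, 10, 100, 1000):
    X = rng.uniform(size=(n, p))
    # all pairwise Euclidean distances between the n points
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    dist = dist[np.triu_indices(n, k=1)]
    # relative spread of the distances; it shrinks as p grows,
    # so nearest and farthest neighbors become indistinguishable
    print(p, (dist.max() - dist.min()) / dist.min())
```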

Let $latex {T_{n}}&fg=000000$ be an estimator of $latex {\theta}&fg=000000$, and suppose we want to estimate the parameter $latex {\phi(\theta)}&fg=000000$, where $latex {\phi}&fg=000000$ is a known function. It is natural to estimate $latex {\phi(\theta)}&fg=000000$ by $latex {\phi(T_{n})}&fg=000000$. We can then ask: how can the asymptotic properties of $latex {T_{n}}&fg=000000$ be transferred to $latex {\phi(T_{n})}&fg=000000$?
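
To make the plug-in idea concrete, here is a minimal simulation sketch; the choices $latex {\theta=2}&fg=000000$, $latex {T_{n}=\bar{X}_{n}}&fg=000000$ and $latex {\phi(x)=x^{2}}&fg=000000$ are illustrative assumptions, not from the original post. Since $latex {\phi}&fg=000000$ is continuous and $latex {\bar{X}_{n}}&fg=000000$ is consistent, the continuous mapping theorem already suggests that $latex {\phi(T_{n})}&fg=000000$ should settle near $latex {\phi(\theta)}&fg=000000$:

```python
import numpy as np

rng = np.random.default_rng(1)
theta = 2.0      # hypothetical true parameter: the mean of the data
phi = np.square  # hypothetical known function, phi(theta) = theta**2

for n in (10, 1_000, 100_000):
    sample = rng.normal(loc=theta, scale=1.0, size=n)
    T_n = sample.mean()  # a consistent estimator of theta
    # phi(T_n) settles down near phi(theta) = 4 as n grows
    print(n, phi(T_n))
```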

Lévy’s continuity theorem is a very important tool in the statistical machinery. For example, it gives us simple proofs of two classical statistical results: the Law of Large Numbers and the Central Limit Theorem.
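
To give the flavor of the argument, here is a one-line sketch of the (weak) Law of Large Numbers along these lines, under the simplifying assumption of i.i.d. variables $latex {X_{1},X_{2},\dots}&fg=000000$ with mean $latex {\mu}&fg=000000$: the characteristic function of $latex {\bar{X}_{n}}&fg=000000$ factors as $latex \displaystyle \varphi_{\bar{X}_{n}}(t)=\left(\varphi_{X_{1}}(t/n)\right)^{n}=\left(1+\frac{i\mu t}{n}+o(1/n)\right)^{n}\rightarrow e^{i\mu t}, &fg=000000$ which is the characteristic function of the constant $latex {\mu}&fg=000000$; Lévy’s continuity theorem then gives $latex {\bar{X}_{n}\rightsquigarrow\mu}&fg=000000$.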

Photo of Paul Lévy. Source: MacTutor and Ra-bird.

Using $latex {(ii)}&fg=000000$ of the Portmanteau lemma, it is possible to establish convergence in distribution of a sequence of random vectors via a single “transform”. The most important transform is the characteristic function.
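
For reference, for a random vector $latex {X}&fg=000000$ in $latex {{\mathbb R}^{k}}&fg=000000$ the characteristic function is defined as $latex \displaystyle \varphi_{X}(t)=\mathbb E\left[e^{i\,t^{T}X}\right],\qquad t\in{\mathbb R}^{k}, &fg=000000$ so convergence in distribution can be checked through pointwise convergence of these functions, which is the content of Lévy’s continuity theorem discussed above.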

We introduce here some notation that is very useful in probability and statistics.

Definition 1. For a given sequence of random variables $latex {R_{n}}&fg=000000$: $latex {(i)}&fg=000000$ $latex {X_{n}=o_{P}(R_{n})}&fg=000000$ means $latex {X_{n}=Y_{n}R_{n}}&fg=000000$ with $latex {Y_{n}}&fg=000000$ converging to $latex 0&fg=000000$ in probability; $latex {(ii)}&fg=000000$ $latex {X_{n}=O_{P}(R_{n})}&fg=000000$ means $latex {X_{n}=Y_{n}R_{n}}&fg=000000$ with the family $latex {(Y_{n})_{n}}&fg=000000$ uniformly tight.
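
A standard example (with $latex {X_{1},X_{2},\dots}&fg=000000$ i.i.d. with mean $latex {\mu}&fg=000000$ and finite variance): the Law of Large Numbers gives $latex {\bar{X}_{n}-\mu=o_{P}(1)}&fg=000000$, while the Central Limit Theorem sharpens this to $latex {\bar{X}_{n}-\mu=O_{P}(1/\sqrt{n})}&fg=000000$.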

## Slutsky’s lemma as an application of the continuous mapping theorem and uniform weak convergence

Photo of Evgeny Evgenievich Slutsky. Sources: MacTutor and Bomkj.

Applying the continuous mapping theorem and $latex {(v)}&fg=000000$ from the last post, we get the following result. Lemma (Slutsky). Let $latex {X_{n}}&fg=000000$, $latex {X}&fg=000000$ and $latex {Y_{n}}&fg=000000$ be random vectors and $latex {c}&fg=000000$ a constant vector. If $latex {X_{n}\rightsquigarrow X}&fg=000000$ and $latex {Y_{n}\rightsquigarrow c}&fg=000000$, then …
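
A classical use of Slutsky’s lemma is the Student statistic: by the Central Limit Theorem $latex {\sqrt{n}(\bar{X}_{n}-\mu)\rightsquigarrow N(0,\sigma^{2})}&fg=000000$, while the sample standard deviation $latex {S_{n}}&fg=000000$ converges to the constant $latex {\sigma}&fg=000000$, so the lemma lets us divide the two. A minimal simulation sketch, assuming numpy; the exponential sample is an illustrative choice, not from the original post:

```python
import numpy as np

rng = np.random.default_rng(2)
n, reps = 1_000, 10_000
# illustrative data: exponential(1) has mean 1 and variance 1
x = rng.exponential(scale=1.0, size=(reps, n))

xbar = x.mean(axis=1)
s = x.std(axis=1, ddof=1)          # S_n converges in probability to 1
t = np.sqrt(n) * (xbar - 1.0) / s  # by Slutsky, t is asymptotically N(0,1)

# the quantiles are close to the standard normal ones (-1.96, 0, 1.96)
print(np.quantile(t, [0.025, 0.5, 0.975]))
```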

We are going to show some relations between the different modes of convergence. These results are very important in practical examples, and in the next post we will explain some of them. To prove this theorem, we shall use the Portmanteau lemma several times.

Photo of (left to right) Henry Berthold Mann and Abraham Wald. Sources: Mathematics Dept. Ohio State and MacTutor.

Let $latex {d(x,y)}&fg=000000$ be the Euclidean distance on $latex {{\mathbb R}^{k}}&fg=000000$, $latex \displaystyle d(x,y)=\Vert x-y\Vert=\left(\sum_{i=1}^{k}(x_{i}-y_{i})^{2}\right)^{1/2}. &fg=000000$ A sequence of random variables $latex {X_{n}}&fg=000000$ is said to converge in probability to $latex {X}&fg=000000$ if, for all $latex {\varepsilon>0}&fg=000000$, $latex \displaystyle \mathbb P(d(X_{n},X)>\varepsilon)\rightarrow0. &fg=000000$ …
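
To see the definition in action, here is a minimal Monte Carlo sketch; the construction $latex {X_{n}=X+Z_{n}}&fg=000000$ with $latex {Z_{n}\sim N(0,1/n)}&fg=000000$ is a hypothetical example, assuming numpy:

```python
import numpy as np

rng = np.random.default_rng(3)
eps, reps = 0.1, 100_000
X = rng.normal(size=reps)  # draws of the limit variable X

for n in (10, 100, 1_000, 10_000):
    # hypothetical sequence X_n = X + Z_n with Z_n ~ N(0, 1/n)
    X_n = X + rng.normal(scale=1.0 / np.sqrt(n), size=reps)
    # Monte Carlo estimate of P(d(X_n, X) > eps); it tends to 0
    print(n, np.mean(np.abs(X_n - X) > eps))
```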

1. Preliminaries. Given a random variable $latex {X}&fg=000000$, we define the cumulative distribution function (or distribution function) as follows: $latex \displaystyle F(x)=\mathbb P(X\leq x),\qquad x\in{\mathbb R}. &fg=000000$
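
As an illustration of the definition, a minimal sketch comparing the empirical distribution function of a hypothetical exponential sample with its theoretical counterpart $latex {F(x)=1-e^{-x}}&fg=000000$ (assuming numpy):

```python
import numpy as np

rng = np.random.default_rng(4)
# illustrative sample from an exponential(1) distribution
sample = rng.exponential(scale=1.0, size=100_000)

def ecdf(sample, x):
    """Empirical distribution function: fraction of the sample <= x."""
    return np.mean(sample <= x)

for x in (0.5, 1.0, 2.0):
    # the empirical values match F(x) = 1 - exp(-x) closely
    print(x, ecdf(sample, x), 1.0 - np.exp(-x))
```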