This week I am going to present three applications of the Delta method theorem. The first is a direct one, concerning the limiting distribution of the sample variance. The second is a hypothesis test on the variance when the sample is normal. Finally, the third is an interesting application in …

Let $latex {T_{n}}&fg=000000$ be an estimator of $latex {\theta}&fg=000000$, and suppose we want to estimate the parameter $latex {\phi(\theta)}&fg=000000$, where $latex {\phi}&fg=000000$ is a known function. It is natural to estimate $latex {\phi(\theta)}&fg=000000$ by $latex {\phi(T_{n})}&fg=000000$. We can then ask: how are the asymptotic properties of $latex {T_{n}}&fg=000000$ transferred to $latex {\phi(T_{n})}&fg=000000$?
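As a quick numerical sanity check on this transfer (my own sketch, not part of the original post), here is a minimal Python simulation with the illustrative choices $latex {\phi(x)=x^{2}}&fg=000000$ and Exp(1) data, so that $latex {\mu=\sigma^{2}=1}&fg=000000$ and the Delta method predicts $latex {\sqrt{n}(\phi(\bar{X}_{n})-\phi(\mu))\rightsquigarrow N(0,\phi'(\mu)^{2}\sigma^{2})=N(0,4)}&fg=000000$:

```python
import math
import random
import statistics

random.seed(0)

def delta_method_demo(n=1000, reps=2000):
    # T_n = sample mean of Exp(1) draws, so mu = 1 and sigma^2 = 1.
    # With phi(x) = x^2, the Delta method predicts
    #   sqrt(n) * (phi(T_n) - phi(mu)) ~ N(0, phi'(mu)^2 * sigma^2) = N(0, 4).
    mu = 1.0
    stats = []
    for _ in range(reps):
        xbar = sum(random.expovariate(1.0) for _ in range(n)) / n
        stats.append(math.sqrt(n) * (xbar**2 - mu**2))
    return statistics.mean(stats), statistics.variance(stats)

m, v = delta_method_demo()
print(round(m, 3), round(v, 3))  # mean near 0, variance near 4
```

Both the distribution and the function $latex {\phi}&fg=000000$ here are arbitrary illustrative choices; any sufficiently smooth $latex {\phi}&fg=000000$ with $latex {\phi'(\mu)\neq0}&fg=000000$ would behave the same way.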

Lévy’s continuity theorem is a very important tool in the statistical machinery. For example, it will give us simple proofs of two classical statistical results: the Law of Large Numbers and the Central Limit Theorem.

The newest edition of the Costa Rican mathematical journal “Revista de Matemática: Teoría y Aplicaciones”, Vol. 19, No. 2 (2012), is available here: http://bit.ly/PDfXDK

Photo of Paul Lévy. Source: MacTutor and Ra-bird. Using $latex {(ii)}&fg=000000$ of the Portmanteau lemma, it is possible to show convergence in distribution for a sequence of random vectors via a “transformation”. The most important such transform is the characteristic function
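To make the characteristic function concrete, here is a small Python sketch (my own illustration, not from the post) comparing the empirical characteristic function of N(0,1) samples with the exact one, which for a standard normal is real: $latex {e^{-t^{2}/2}}&fg=000000$.

```python
import cmath
import math
import random

random.seed(1)

def empirical_cf(sample, t):
    # (1/n) * sum_j exp(i t X_j): a consistent estimator of E[exp(itX)].
    return sum(cmath.exp(1j * t * x) for x in sample) / len(sample)

sample = [random.gauss(0.0, 1.0) for _ in range(20000)]
# The characteristic function of N(0,1) is exp(-t^2 / 2).
errs = {t: abs(empirical_cf(sample, t) - math.exp(-t * t / 2.0))
        for t in (0.5, 1.0, 2.0)}
for t, e in errs.items():
    print(t, round(e, 4))  # errors shrink at rate ~ 1/sqrt(n)
```

The sample size 20000 and the evaluation points are arbitrary choices for the demonstration.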

We introduce here some notation that is very useful in probability and statistics. Definition 1. For a given sequence of random variables $latex {R_{n}}&fg=000000$, $latex {(i)}&fg=000000$ $latex {X_{n}=o_{P}(R_{n})}&fg=000000$ means $latex {X_{n}=Y_{n}R_{n}}&fg=000000$ with $latex {Y_{n}}&fg=000000$ converging to $latex 0&fg=000000$ in probability; $latex {(ii)}&fg=000000$ $latex {X_{n}=O_{P}(R_{n})}&fg=000000$ means $latex {X_{n}=Y_{n}R_{n}}&fg=000000$ with the family $latex {(Y_{n})_{n}}&fg=000000$ uniformly tight.
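A quick way to see the two notions side by side (my own simulation sketch, not from the post): for i.i.d. N(0,1) data, $latex {\bar{X}_{n}-\mu=o_{P}(1)}&fg=000000$, so the deviations themselves shrink, while $latex {\bar{X}_{n}-\mu=O_{P}(n^{-1/2})}&fg=000000$, so the rescaled deviations $latex {\sqrt{n}|\bar{X}_{n}-\mu|}&fg=000000$ stay bounded in probability.

```python
import math
import random
import statistics

random.seed(2)

def abs_deviations(n, reps=2000):
    # Monte Carlo replicates of |Xbar_n - mu| for N(0,1) data (mu = 0).
    return [abs(sum(random.gauss(0.0, 1.0) for _ in range(n)) / n)
            for _ in range(reps)]

results = {}
for n in (100, 400, 1600):
    devs = abs_deviations(n)
    # ~95th percentile of |Xbar_n - mu|: shrinks with n (o_P(1)) ...
    q_raw = statistics.quantiles(devs, n=20)[-1]
    # ... while that of sqrt(n)|Xbar_n - mu| is stable near 1.96 (O_P(n^{-1/2})).
    q_scaled = statistics.quantiles([math.sqrt(n) * d for d in devs], n=20)[-1]
    results[n] = (q_raw, q_scaled)
    print(n, round(q_raw, 4), round(q_scaled, 3))
```

The sample sizes and the 95th-percentile summary are arbitrary choices; any fixed quantile level would show the same pattern.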

## Slutsky’s lemma as an application of the continuous mapping theorem and uniform weak convergence

Photo of Evgeny Evgenievich Slutsky. Sources: MacTutor and Bomkj. Applying the continuous mapping theorem and $latex {(v)}&fg=000000$ from the last post, we get the following result. Lemma (Slutsky). Let $latex {X_{n}}&fg=000000$, $latex {X}&fg=000000$ and $latex {Y_{n}}&fg=000000$ be random vectors and $latex {c}&fg=000000$ a constant vector. If $latex {X_{n}\rightsquigarrow X}&fg=000000$ and $latex {Y_{n}\rightsquigarrow c}&fg=000000$, then $latex …
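The canonical use of Slutsky’s lemma is the Studentized mean: the CLT gives $latex {\sqrt{n}(\bar{X}_{n}-\mu)\rightsquigarrow N(0,\sigma^{2})}&fg=000000$, the sample standard deviation satisfies $latex {S_{n}\rightarrow\sigma}&fg=000000$ in probability, and Slutsky lets us divide, so the t-statistic is asymptotically $latex {N(0,1)}&fg=000000$ even for non-normal data. A minimal Python check (my own sketch, with Exp(1) data as an arbitrary non-normal example):

```python
import math
import random
import statistics

random.seed(3)

def t_statistic(n):
    # Data from Exp(1), so mu = sigma = 1; the data are clearly non-normal.
    xs = [random.expovariate(1.0) for _ in range(n)]
    return math.sqrt(n) * (statistics.mean(xs) - 1.0) / statistics.stdev(xs)

# By Slutsky, these replicates should look approximately N(0,1).
tstats = [t_statistic(500) for _ in range(3000)]
m, s = statistics.mean(tstats), statistics.stdev(tstats)
print(round(m, 3), round(s, 3))  # mean near 0, standard deviation near 1
```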

We are going to show some relations between the different modes of convergence. These results are very important in practical examples, and in the next post we will explain some of them. To prove this theorem, we shall use the Portmanteau lemma several times.

From left to right: Eduard Helly, Yurii Vasilevich Prokhorov and Andrei Andreyevich Markov. Source: MacTutor (1, 2, 3) and TellOfVisions. Let me start with a technical lemma that will be very useful for showing the equivalence between weak convergence and uniform tightness (Prohorov’s theorem). 1. Helly’s lemma. Lemma (Helly’s Lemma). Let $latex {(F_{n})_{n}}&fg=000000$ be a …

Photo of (left to right) Henry Berthold Mann and Abraham Wald. Sources: Mathematics Dept. Ohio State and MacTutor. Let $latex {d(x,y)}&fg=000000$ be the Euclidean distance in $latex {{\mathbb R}^{k}}&fg=000000$, $latex \displaystyle d(x,y)=\Vert x-y\Vert=\left(\sum_{i=1}^{k}(x_{i}-y_{i})^{2}\right)^{1/2}. &fg=000000$ A sequence of random variables $latex {X_{n}}&fg=000000$ is said to converge in probability to $latex {X}&fg=000000$ if for all $latex {\varepsilon>0}&fg=000000$ $latex \displaystyle \mathbb P(d(X_{n},X)>\varepsilon)\rightarrow0. …
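This definition can be checked directly by Monte Carlo (my own sketch, not from the post): taking $latex {X=\mu}&fg=000000$ a constant and $latex {X_{n}=\bar{X}_{n}}&fg=000000$ the mean of Uniform(0,1) draws, the estimated probability $latex {\mathbb P(|\bar{X}_{n}-\mu|>\varepsilon)}&fg=000000$ should decrease toward $latex 0&fg=000000$ as $latex {n}&fg=000000$ grows. The distribution, $latex {\varepsilon=0.1}&fg=000000$, and the sample sizes are arbitrary illustrative choices.

```python
import random

random.seed(4)

def prob_large_dev(n, eps=0.1, reps=2000):
    # Monte Carlo estimate of P(|Xbar_n - mu| > eps) for Uniform(0,1), mu = 0.5.
    hits = 0
    for _ in range(reps):
        xbar = sum(random.random() for _ in range(n)) / n
        if abs(xbar - 0.5) > eps:
            hits += 1
    return hits / reps

# Convergence in probability: these estimates decrease toward 0.
probs = [prob_large_dev(n) for n in (5, 20, 80)]
print(probs)
```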