Tag Archives: Convergence of random variables

Paper review: Zhu & Fang, 1996. Asymptotics for kernel estimate of sliced inverse regression.

For $latex { Y\in {\mathbb R} }&fg=000000$ and $latex { \mathbf{X} \in {\mathbb R}^{p} }&fg=000000$, consider the regression problem $latex \displaystyle Y = f(\mathbf{X}) + \varepsilon. &fg=000000$ When $latex { p }&fg=000000$ is large relative to the available data, the well-known curse of dimensionality arises. Richard E. Bellman …

The Delta method: Main Result

Let $latex {T_{n}}&fg=000000$ be an estimator of $latex {\theta}&fg=000000$, and suppose we want to estimate the parameter $latex {\phi(\theta)}&fg=000000$, where $latex {\phi}&fg=000000$ is a known function. It is natural to estimate $latex {\phi(\theta)}&fg=000000$ by $latex {\phi(T_{n})}&fg=000000$. We can then ask: how do the asymptotic properties of $latex {T_{n}}&fg=000000$ transfer to $latex {\phi(T_{n})}&fg=000000$?
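As orientation for the main result (this is the standard first-order delta method, stated here for reference rather than quoted from the post): in the real-valued case, if $latex {\phi}&fg=000000$ is differentiable at $latex {\theta}&fg=000000$ and $latex {\sqrt{n}(T_{n}-\theta)\rightsquigarrow T}&fg=000000$, then $latex \displaystyle \sqrt{n}\left(\phi(T_{n})-\phi(\theta)\right)\rightsquigarrow\phi^{\prime}(\theta)\,T. &fg=000000$ In particular, if $latex {T}&fg=000000$ is $latex {N(0,\sigma^{2})}&fg=000000$, the limit is $latex {N(0,\phi^{\prime}(\theta)^{2}\sigma^{2})}&fg=000000$.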

The probability versions of the big-O and little-o notations

We introduce here some notation that is very useful in probability and statistics. Definition 1. For a given sequence of random variables $latex {R_{n}}&fg=000000$: $latex {(i)}&fg=000000$ $latex {X_{n}=o_{P}(R_{n})}&fg=000000$ means $latex {X_{n}=Y_{n}R_{n}}&fg=000000$ with $latex {Y_{n}}&fg=000000$ converging to $latex 0&fg=000000$ in probability; $latex {(ii)}&fg=000000$ $latex {X_{n}=O_{P}(R_{n})}&fg=000000$ means $latex {X_{n}=Y_{n}R_{n}}&fg=000000$ with the family $latex {(Y_{n})_{n}}&fg=000000$ uniformly tight.
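A quick illustration (a standard fact added for concreteness, not part of the excerpt): for i.i.d. variables with mean $latex {\mu}&fg=000000$ and finite variance $latex {\sigma^{2}}&fg=000000$, the central limit theorem gives $latex {\sqrt{n}(\bar{X}_{n}-\mu)\rightsquigarrow N(0,\sigma^{2})}&fg=000000$, so that $latex \displaystyle \bar{X}_{n}-\mu=O_{P}(n^{-1/2}), \qquad\text{and in particular}\qquad \bar{X}_{n}-\mu=o_{P}(1). &fg=000000$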

Slutsky’s lemma as an application of the continuous mapping theorem and uniform weak convergence

Photo of Evgeny Evgenievich Slutsky. Sources: MacTutor and Bomkj. Applying the continuous mapping theorem and $latex {(v)}&fg=000000$ from the last post, we obtain the following result. Lemma (Slutsky). Let $latex {X_{n}}&fg=000000$, $latex {X}&fg=000000$, and $latex {Y_{n}}&fg=000000$ be random vectors and $latex {c}&fg=000000$ a constant vector. If $latex {X_{n}\rightsquigarrow X}&fg=000000$ and $latex {Y_{n}\rightsquigarrow c}&fg=000000$, then $latex …
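The excerpt cuts off at the conclusion; for reference, the standard statement of Slutsky’s lemma (not necessarily the post’s exact wording) concludes that $latex \displaystyle X_{n}+Y_{n}\rightsquigarrow X+c, \qquad Y_{n}X_{n}\rightsquigarrow cX, \qquad Y_{n}^{-1}X_{n}\rightsquigarrow c^{-1}X, &fg=000000$ where the products require $latex {Y_{n}}&fg=000000$ to be scalars or matrices, and the last assertion holds provided $latex {c}&fg=000000$ is invertible (nonzero in the scalar case).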

Convergence in probability, convergence almost surely and the continuous mapping theorem

Photo of (left to right) Henry Berthold Mann and Abraham Wald. Sources: Mathematics Dept. Ohio State and MacTutor. Let $latex {d(x,y)}&fg=000000$ be the Euclidean distance on $latex {{\mathbb R}^{k}}&fg=000000$: $latex \displaystyle d(x,y)=\Vert x-y\Vert=\left(\sum_{i=1}^{k}(x_{i}-y_{i})^{2}\right)^{1/2}. &fg=000000$ A sequence of random variables $latex {X_{n}}&fg=000000$ is said to converge in probability to $latex {X}&fg=000000$ if for all $latex {\varepsilon>0}&fg=000000$, $latex \displaystyle \mathbb P(d(X_{n},X)>\varepsilon)\rightarrow0. &fg=000000$ …
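A standard example of this definition (added for illustration, not part of the excerpt) is the weak law of large numbers: for i.i.d. $latex {X_{1},X_{2},\ldots}&fg=000000$ with $latex {\mathbb{E}|X_{1}|<\infty}&fg=000000$, the sample mean $latex {\bar{X}_{n}=n^{-1}\sum_{i=1}^{n}X_{i}}&fg=000000$ converges in probability to $latex {\mathbb{E}[X_{1}]}&fg=000000$, that is, $latex \displaystyle \mathbb{P}\left(\left|\bar{X}_{n}-\mathbb{E}[X_{1}]\right|>\varepsilon\right)\rightarrow 0 \quad\text{for every }\varepsilon>0. &fg=000000$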