The Delta Method: Main Result

Let $latex {T_{n}}&fg=000000$ be an estimator of $latex {\theta}&fg=000000$, and suppose we want to estimate the parameter $latex {\phi(\theta)}&fg=000000$, where $latex {\phi}&fg=000000$ is a known function. It is natural to estimate $latex {\phi(\theta)}&fg=000000$ by $latex {\phi(T_{n})}&fg=000000$. We can then ask:

How can the asymptotic properties of $latex {T_{n}}&fg=000000$ be transferred to $latex {\phi(T_{n})}&fg=000000$?

The continuous mapping theorem gives a first answer to this question. If $latex {T_{n}}&fg=000000$ converges in probability to $latex {\theta}&fg=000000$ and $latex {\phi}&fg=000000$ is continuous, then $latex {\phi(T_{n})}&fg=000000$ converges in probability to $latex {\phi(\theta)}&fg=000000$. However, this does not answer the following question: if $latex {\sqrt{n}\left(T_{n}-\theta\right)}&fg=000000$ converges weakly to a distribution $latex {T}&fg=000000$, is it true that $latex {\sqrt{n}\left(\phi(T_{n})-\phi(\theta)\right)}&fg=000000$ converges weakly to some distribution $latex {F}&fg=000000$? Notice that if $latex {\phi}&fg=000000$ is linear, the result holds with $latex {F=\phi(T)}&fg=000000$.

This remark suggests that the interesting part of $latex {\phi}&fg=000000$ is its linear part, namely its differential. Informally, for $latex {\phi}&fg=000000$ differentiable we have

$latex \displaystyle \sqrt{n}(\phi(T_{n})-\phi(\theta))\approx\phi^{\prime}(\theta)\sqrt{n}(T_{n}-\theta). &fg=000000$
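To see this approximation at work, here is a minimal simulation sketch (assuming NumPy; taking $latex {T_{n}}&fg=000000$ to be the sample mean of exponential draws and $latex {\phi(x)=x^{2}}&fg=000000$ is only an illustrative choice) comparing the two sides of the display above.

[sourcecode language="python"]
import numpy as np

# Illustrative setup: T_n is the sample mean of n Exp(1) draws, so theta = 1,
# and phi(x) = x^2 with derivative phi'(theta) = 2 * theta.
rng = np.random.default_rng(0)
n, reps, theta = 500, 10_000, 1.0
phi = lambda x: x ** 2
dphi = 2.0 * theta

samples = rng.exponential(scale=1.0, size=(reps, n))
T_n = samples.mean(axis=1)

lhs = np.sqrt(n) * (phi(T_n) - phi(theta))   # sqrt(n) * (phi(T_n) - phi(theta))
rhs = dphi * np.sqrt(n) * (T_n - theta)      # phi'(theta) * sqrt(n) * (T_n - theta)

# The difference between the two sides should be negligible (it is o_P(1)),
# while each side keeps a non-degenerate spread.
print(np.abs(lhs - rhs).mean(), lhs.std(), rhs.std())
[/sourcecode]

Rerunning this with a larger $latex {n}&fg=000000$ shrinks the first number towards zero while the two standard deviations stay close to each other.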

The following theorem states this approximation formally.

Theorem 1 (The Delta Method) Let $latex {\phi:\mathbb{D}_{\theta}\subset{\mathbb R}^{k}\rightarrow{\mathbb R}^{m}}&fg=000000$ be a map differentiable at $latex {\theta}&fg=000000$. Let $latex {T_{n}}&fg=000000$ be random vectors in $latex {{\mathbb R}^{k}}&fg=000000$ taking their values in the domain of $latex {\phi}&fg=000000$. If $latex {r_{n}(T_{n}-\theta)\rightsquigarrow T}&fg=000000$ for some sequence $latex {r_{n}\rightarrow\infty}&fg=000000$, then

$latex \displaystyle r_{n}(\phi(T_{n})-\phi(\theta))\rightsquigarrow\phi^{\prime}(\theta)(T). &fg=000000$

Moreover, the difference between $latex {r_{n}\left(\phi(T_{n})-\phi(\theta)\right)}&fg=000000$ and $latex {\phi^{\prime}(\theta)\left(r_{n}(T_{n}-\theta)\right)}&fg=000000$ goes to zero in probability.

Proof: Because the sequence $latex {r_{n}(T_{n}-\theta)}&fg=000000$ converges in distribution, it is uniformly tight by Prohorov's theorem. Moreover, since $latex {r_{n}\rightarrow\infty}&fg=000000$, Slutsky's theorem gives $latex {(T_{n}-\theta)\rightsquigarrow0}&fg=000000$. Let $latex {R(h)=\phi(\theta+h)-\phi(\theta)-\phi^{\prime}(\theta)(h)}&fg=000000$. By the definition of the differential, $latex {R(h)=o(\|h\|)}&fg=000000$ as $latex {h\rightarrow0}&fg=000000$. We apply Lemma 2 from an earlier post to replace the fixed $latex {h}&fg=000000$ by a random sequence,

$latex \displaystyle \phi(T_{n})-\phi(\theta)-\phi^{\prime}(\theta)(T_{n}-\theta)=R(T_{n}-\theta)=o_{P}(\|T_{n}-\theta\|). &fg=000000$

Multiplying both sides of this equality by $latex {r_{n}}&fg=000000$, and noting that $latex {r_{n}(T_{n}-\theta)}&fg=000000$ is uniformly tight, we deduce that $latex {o_{P}(r_{n}\|T_{n}-\theta\|)=o_{P}(1)}&fg=000000$ (write $latex {o_{P}(r_{n}\|T_{n}-\theta\|)=r_{n}\|T_{n}-\theta\|Z_{n}}&fg=000000$ with $latex {Z_{n}=o_{P}(1)}&fg=000000$; then, fixing $latex {\epsilon>0}&fg=000000$ and choosing $latex {M>0}&fg=000000$ such that $latex {\mathbb P\left(r_{n}\|T_{n}-\theta\|>M\right)<\epsilon}&fg=000000$, one checks that $latex {\mathbb P\left(r_{n}\|T_{n}-\theta\|Z_{n}>\eta\right)\rightarrow0}&fg=000000$ for every $latex {\eta>0}&fg=000000$). This proves the second part of the theorem.

Moreover, $latex \phi^{\prime}(\theta)&fg=000000$ is linear, hence continuous. By the continuous mapping theorem we have

$latex \displaystyle r_{n}\phi^{\prime}(\theta)(T_{n}-\theta)\rightsquigarrow\phi^{\prime}(\theta)(T). &fg=000000$

Apply Slutsky’s lemma to conclude that the sequence $latex {r_{n}\left(\phi(T_{n})-\phi(\theta)\right)}&fg=000000$ has the same weak limit. $latex \Box&fg=000000$
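As a first concrete instance of the theorem (a standard one-dimensional illustration), take $latex {r_{n}=\sqrt{n}}&fg=000000$, $latex {T_{n}=\bar{X}_{n}}&fg=000000$ a sample mean with $latex {\sqrt{n}\left(\bar{X}_{n}-\mu\right)\rightsquigarrow T}&fg=000000$, and $latex {\phi(x)=1/x}&fg=000000$ with $latex {\mu\neq0}&fg=000000$. Then $latex {\phi^{\prime}(\mu)=-1/\mu^{2}}&fg=000000$ and the theorem gives

$latex \displaystyle \sqrt{n}\left(\frac{1}{\bar{X}_{n}}-\frac{1}{\mu}\right)\rightsquigarrow-\frac{1}{\mu^{2}}T. &fg=000000$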

Remark 1 It is usual to apply the Delta method when $latex {T_{n}}&fg=000000$ is an estimator of $latex {\theta}&fg=000000$ and the law of $latex {T}&fg=000000$ is Gaussian. In this framework, if $latex {\phi}&fg=000000$ is differentiable at $latex {\theta}&fg=000000$, $latex {\Sigma}&fg=000000$ is the covariance matrix of $latex {T}&fg=000000$, and

$latex \displaystyle \sqrt{n}\left(T_{n}-\theta\right)\rightsquigarrow\mathcal{N}(0,\Sigma), &fg=000000$

then

$latex \displaystyle \sqrt{n}\left(\phi(T_{n})-\phi(\theta)\right)\rightsquigarrow\mathcal{N}(0,\phi^{\prime}(\theta)\Sigma\phi^{\prime}(\theta)^{T}), &fg=000000$

where $latex \phi^{\prime}(\theta)&fg=000000$ is the matrix of partial derivatives (the Jacobian matrix) of $latex {\phi}&fg=000000$ at $latex {\theta}&fg=000000$.
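As a hedged numerical sketch of this remark (assuming NumPy; the bivariate sample mean with $latex {\phi(x,y)=x/y}&fg=000000$ is only an illustrative choice), one can compare the delta-method variance $latex {\phi^{\prime}(\theta)\Sigma\phi^{\prime}(\theta)^{T}}&fg=000000$ with a Monte Carlo estimate of the variance of $latex {\sqrt{n}\left(\phi(T_{n})-\phi(\theta)\right)}&fg=000000$.

[sourcecode language="python"]
import numpy as np

# Illustrative bivariate example: T_n is the mean of n i.i.d. Gaussian vectors
# with mean theta and covariance Sigma, and phi(x, y) = x / y.
rng = np.random.default_rng(1)
n, reps = 1_000, 5_000
theta = np.array([2.0, 4.0])
Sigma = np.array([[1.0, 0.3],
                  [0.3, 2.0]])

phi = lambda t: t[..., 0] / t[..., 1]
# Jacobian (matrix of partial derivatives) of phi at theta: [1/y, -x/y^2].
J = np.array([[1.0 / theta[1], -theta[0] / theta[1] ** 2]])

# Asymptotic variance predicted by the delta method.
asym_var = (J @ Sigma @ J.T).item()

# Monte Carlo estimate of the variance of sqrt(n) * (phi(T_n) - phi(theta)).
samples = rng.multivariate_normal(theta, Sigma, size=(reps, n))
T_n = samples.mean(axis=1)                    # shape (reps, 2)
stat = np.sqrt(n) * (phi(T_n) - phi(theta))
print(asym_var, stat.var())                   # the two numbers should be close
[/sourcecode]

With these illustrative values the predicted asymptotic variance is $latex {0.075}&fg=000000$, and the simulated variance should land close to it for moderately large $latex {n}&fg=000000$.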

Next week we will see some examples and applications of this theorem, such as the non-robustness of the chi-square test for normal variances and variance-stabilizing transformations.

 
