This week I will present three applications of the Delta method theorem. The first is a direct one, concerning the behavior in distribution of the sample variance. The second is a hypothesis test on the variance when the sample is normal. Finally, the third is an interesting application to variance stabilization.

Sample Variance

Let us start by recalling the Multidimensional Central Limit Theorem.

Theorem 1 Let $latex {Y_{1},Y_{2},\ldots}&fg=000000$ be i.i.d. random vectors on $latex {{\mathbb R}^{k}}&fg=000000$ with mean $latex {\mu}&fg=000000$ and covariance matrix $latex {\Sigma=\mathbb E\left((Y_{1}-\mu)(Y_{1}-\mu)^{\top}\right)}&fg=000000$. Then $latex {\sqrt{n}(\overline{Y}_{n}-\mu)\rightsquigarrow\mathcal{N}_{k}(0,\Sigma)}&fg=000000$.

Proof: Let $latex {t\in{\mathbb R}^{k}}&fg=000000$ and write $latex {Z_{n}=\sqrt{n}(\overline{Y}_{n}-\mu)}&fg=000000$. By the classic central limit theorem applied to the real random variables $latex {t^{\top}Y_{i}}&fg=000000$, we have $latex {t^{\top}Z_{n}\rightsquigarrow\mathcal{N}(0,t^{\top}\Sigma t)}&fg=000000$. We conclude using the Cramér-Wold theorem. $latex \Box&fg=000000$

Let $latex {X_{1},\ldots,X_{n}}&fg=000000$ be i.i.d. random variables. We define $latex {S_{n}^{2}=\frac{1}{n}\sum_{i=1}^{n}(X_{i}-\overline{X}_{n})^{2}}&fg=000000$. A quick calculation shows that $latex {S_{n}^{2}=\phi(\overline{X_{n}},\overline{X_{n}^{2}})}&fg=000000$, with $latex {\phi(x,y)=y-x^{2}}&fg=000000$. Assume that the first four moments of $latex {X_{1}}&fg=000000$ exist and denote by $latex {\mu_{i}}&fg=000000$ the $latex {i}&fg=000000$-th order moment. Taking $latex {Y_{i}=(X_{i},X_{i}^{2})}&fg=000000$ in the previous theorem, we have

$latex \displaystyle \sqrt{n}\left(\begin{pmatrix}\overline{X_{n}}\\ \overline{X_{n}^{2}} \end{pmatrix}-\begin{pmatrix}\mu_{1}\\ \mu_{2} \end{pmatrix}\right)\rightsquigarrow\mathcal{N}_{2}\left(\begin{pmatrix}0\\ 0 \end{pmatrix},\ \begin{pmatrix}\mu_{2}-\mu_{1}^{2} & \mu_{3}-\mu_{1}\mu_{2}\\ \mu_{3}-\mu_{1}\mu_{2} & \mu_{4}-\mu_{2}^{2} \end{pmatrix}\right). &fg=000000$

The map $latex {\phi}&fg=000000$ is differentiable at every point, with differential $latex {\phi^{\prime}(x,y)(h,k)=-2xh+k}&fg=000000$. We apply the Delta method:

$latex \displaystyle \sqrt{n}\left(S_{n}^{2}-(\mu_{2}-\mu_{1}^{2})\right)\rightsquigarrow\mathcal{N}(0,-4\mu_{1}^{4}-\mu_{2}^{2}+8\mu_{1}^{2}\mu_{2}-4\mu_{1}\mu_{3}+\mu_{4}). &fg=000000$
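This variance is the quadratic form $latex {\nabla\phi^{\top}\Sigma\nabla\phi}&fg=000000$ evaluated at $latex {(\mu_{1},\mu_{2})}&fg=000000$, and since the identity is polynomial in the moments it can be checked numerically at arbitrary points. A quick sanity-check sketch in Python (assuming NumPy is available; the function names are illustrative, not from any library):

```python
import numpy as np

# Gradient of phi(x, y) = y - x^2 at (mu1, mu2) is (-2*mu1, 1); the
# Delta-method variance is grad^T Sigma grad with Sigma the covariance
# matrix of the vector (X, X^2).
def delta_variance(m1, m2, m3, m4):
    Sigma = np.array([[m2 - m1**2, m3 - m1 * m2],
                      [m3 - m1 * m2, m4 - m2**2]])
    grad = np.array([-2.0 * m1, 1.0])
    return grad @ Sigma @ grad

def closed_form(m1, m2, m3, m4):
    # the expression obtained above
    return -4 * m1**4 - m2**2 + 8 * m1**2 * m2 - 4 * m1 * m3 + m4

# the two expressions agree at randomly chosen points
for m in np.random.default_rng(4).uniform(-2, 2, size=(5, 4)):
    assert np.isclose(delta_variance(*m), closed_form(*m))
print("Delta-method variance formula checked")
```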

We can assume without loss of generality that the observations are centered, that is, $latex {\mu_{1}=0}&fg=000000$. Indeed, if we take $latex {Z_{i}=X_{i}-\mu_{1}}&fg=000000$, one checks that $latex {\frac{1}{n}\sum_{i=1}^{n}(Z_{i}-\overline{Z}_{n})^{2}=\frac{1}{n}\sum_{i=1}^{n}(X_{i}-\overline{X}_{n})^{2}}&fg=000000$, so $latex {S_{n}^{2}}&fg=000000$ is unchanged. Thus,

$latex \displaystyle \sqrt{n}\left(S_{n}^{2}-\mu_{2}\right)\rightsquigarrow\mathcal{N}(0,\mu_{4}-\mu_{2}^{2}). &fg=000000$

We can rewrite this in the following manner

$latex \displaystyle \sqrt{n}\left(\frac{S_{n}^{2}}{\mu_{2}}-1\right)\rightsquigarrow\mathcal{N}(0,\kappa+2), &fg=000000$

where $latex {\kappa=\mu_{4}/\mu_{2}^{2}-3}&fg=000000$ is the excess kurtosis of $latex {X}&fg=000000$.

Finally, remark that by Slutsky's theorem we obtain the same result even if we consider the unbiased variance estimator $latex {S_{n-1}^{2}=\frac{1}{n-1}\sum_{i=1}^{n}(X_{i}-\overline{X}_{n})^{2}}&fg=000000$.
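The convergence above can be illustrated by a small Monte Carlo experiment. A minimal sketch in Python (assuming NumPy; the choice of the Exp(1) distribution, with central moments $latex {\mu_{2}=1}&fg=000000$ and $latex {\mu_{4}=9}&fg=000000$, is mine):

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 2000, 5000

# Exp(1) data: central moments mu2 = 1 and mu4 = 9, so the asymptotic
# variance of sqrt(n) * (S_n^2 - mu2) should be mu4 - mu2^2 = 8.
x = rng.exponential(1.0, size=(reps, n))
s2 = x.var(axis=1)                # S_n^2 with the 1/n convention
z = np.sqrt(n) * (s2 - 1.0)       # sqrt(n) * (S_n^2 - mu2)

print(z.mean())                   # close to 0
print(z.var())                    # close to mu4 - mu2^2 = 8
```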

Variance test of a normal law

Before starting, let us briefly review some facts.

1. Let $latex {U_{1},\ldots,U_{k}}&fg=000000$ be i.i.d. $latex {\mathcal{N}(0,1)}&fg=000000$ random variables. Then the random variable $latex {Z=U_{1}^{2}+\ldots+U_{k}^{2}}&fg=000000$ follows a chi-square law with $latex {k}&fg=000000$ degrees of freedom. We denote it $latex {\chi_{k}^{2}}&fg=000000$.
2. If $latex {X_{i}}&fg=000000$ follows a normal law $latex {\mathcal{N}(\mu,\sigma^{2})}&fg=000000$ then $latex {n\frac{S_{n}^{2}}{\sigma^{2}}}&fg=000000$ is a chi-square random variable with $latex {n-1}&fg=000000$ degrees of freedom.
3. The central limit theorem ensures that $latex \displaystyle \frac{\chi_{n-1}^{2}-(n-1)}{\sqrt{2n-2}}\rightsquigarrow\mathcal{N}(0,1). &fg=000000$

Suppose that we want to test the null hypothesis $latex {H_{0}:\mu_{2}\leq1}&fg=000000$ from a sample $latex {X_{1},\ldots,X_{n}}&fg=000000$. If the $latex {X_{i}}&fg=000000$ are Gaussian, we reject $latex {H_{0}}&fg=000000$ if $latex {nS_{n}^{2}}&fg=000000$ exceeds the quantile of order $latex {1-\alpha}&fg=000000$ of a $latex {\chi_{n-1}^{2}}&fg=000000$. We shall denote this number by $latex {\chi_{\alpha}^{2}}&fg=000000$, that is, the real $latex {x}&fg=000000$ such that $latex {\mathbb P(\chi_{n-1}^{2}>x)=\alpha}&fg=000000$. In the Gaussian case, the test's level is exactly $latex {\alpha}&fg=000000$. But what happens if the $latex {X_{i}}&fg=000000$ are not Gaussian?
We have in our hands two convergence-in-distribution results: one coming from the CLT and the other from the Delta method. Denote by $latex {u_{\alpha}}&fg=000000$ the Gaussian counterpart of $latex {\chi_{\alpha}^{2}}&fg=000000$. The CLT implies

$latex \displaystyle \frac{\chi_{\alpha}^{2}-(n-1)}{\sqrt{2n-2}}\rightarrow u_{\alpha}. &fg=000000$

Therefore, the level of the chi-square test verifies

$latex \displaystyle \mathbb P_{\{\mu_{2}=1\}}\left(nS_{n}^{2}>\chi_{\alpha}^{2}\right)=\mathbb P\left(\sqrt{n}\left(\frac{S_{n}^{2}}{\mu_{2}}-1\right)>\frac{\chi_{\alpha}^{2}-n}{\sqrt{n}}\right)\rightarrow1-\Phi\left(\frac{u_{\alpha}\sqrt{2}}{\sqrt{\kappa+2}}\right). &fg=000000$

We conclude that the asymptotic level of the test is $latex {\alpha}&fg=000000$ if and only if $latex {\kappa=0}&fg=000000$.
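This failure of the chi-square test under non-Gaussian data is easy to observe by simulation. A minimal sketch in Python, assuming NumPy; the Exp(1) example (variance $latex {1}&fg=000000$, excess kurtosis $latex {\kappa=6}&fg=000000$) and the normal approximation of the chi-square quantile are my own choices:

```python
import math
import numpy as np

rng = np.random.default_rng(1)
n, reps, alpha = 500, 4000, 0.05
u_alpha = 1.6449                  # standard normal quantile of order 0.95

# chi-square quantile via fact 3: chi2_alpha ~ (n-1) + u_alpha * sqrt(2n-2)
chi2_alpha = (n - 1) + u_alpha * math.sqrt(2 * n - 2)

# Exp(1) data: mu2 = 1 (null boundary) but kappa = 6, not 0
x = rng.exponential(1.0, size=(reps, n))
s2 = x.var(axis=1)
level = np.mean(n * s2 > chi2_alpha)   # empirical rejection frequency

# asymptotic level predicted by the Delta method computation above
kappa = 6.0
Phi = lambda t: 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))
predicted = 1.0 - Phi(u_alpha * math.sqrt(2.0 / (kappa + 2.0)))

print(level, predicted)   # both far above the nominal alpha = 0.05
```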

Variance Stabilization

Let $latex {T_{n}}&fg=000000$ be a sequence of estimators and $latex {\Theta\subset{\mathbb R}}&fg=000000$ such that for any $latex {\theta\in\Theta}&fg=000000$, $latex {\sqrt{n}\left(T_{n}-\theta\right)\rightsquigarrow\mathcal{N}\left(0,\sigma^{2}(\theta)\right).}&fg=000000$ The convergence in distribution takes place under $latex {\mathbb P_{\theta}}&fg=000000$. For fixed $latex {\theta}&fg=000000$, the asymptotic confidence interval of level $latex {1-2\alpha}&fg=000000$ has the form

$latex \displaystyle \left(T_{n}-u_{1-\alpha}\frac{\sigma(\theta)}{\sqrt{n}},T_{n}+u_{1-\alpha}\frac{\sigma(\theta)}{\sqrt{n}}\right), &fg=000000$

where $latex {u_{1-\alpha}}&fg=000000$ is the standard normal quantile of order $latex {1-\alpha}&fg=000000$. The issue with these intervals is that they depend on the unknown quantity $latex {\sigma(\theta)}&fg=000000$. A first solution is to replace this quantity by an estimator. Another solution is to transform the problem into one where the variance no longer depends on $latex {\theta}&fg=000000$.

Let $latex {\phi}&fg=000000$ be a differentiable function. We consider the parameter $latex {\eta=\phi(\theta)}&fg=000000$, which we naturally estimate by $latex {\phi(T_{n})}&fg=000000$. The Delta method says that

$latex \displaystyle \sqrt{n}\left(\phi(T_{n})-\phi(\theta)\right)\rightsquigarrow\mathcal{N}\left(0,\{\phi^{\prime}(\theta)\}^{2}\sigma^{2}(\theta)\right). &fg=000000$

We choose (when it exists) $latex {\phi}&fg=000000$ such that $latex {\{\phi^{\prime}(\theta)\}^{2}\sigma^{2}(\theta)\equiv1}&fg=000000$, whose solution is

$latex \displaystyle \phi(\theta)=\int\frac{1}{\sigma(\theta)}\,d\theta. &fg=000000$

Therefore, we obtain an asymptotic confidence interval of level $latex {1-2\alpha}&fg=000000$ for $latex {\phi(\theta)}&fg=000000$. When it is well defined, the transformation is monotone increasing (because $latex {\phi^{\prime}(\theta)=1/\sigma(\theta)>0}&fg=000000$), so the confidence interval for $latex {\eta}&fg=000000$ can be transformed back into a confidence interval for $latex {\theta}&fg=000000$.
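A classical instance of this recipe, not discussed above but useful to see the stabilization at work: for Poisson($latex {\theta}&fg=000000$) data with $latex {T_{n}=\overline{X}_{n}}&fg=000000$ we have $latex {\sigma^{2}(\theta)=\theta}&fg=000000$, so $latex {\phi(\theta)=\int\theta^{-1/2}d\theta=2\sqrt{\theta}}&fg=000000$. A minimal Monte Carlo sketch in Python (assuming NumPy):

```python
import numpy as np

rng = np.random.default_rng(2)
n, reps = 2000, 4000

# Poisson(theta): sqrt(n)(Xbar_n - theta) ~ N(0, theta), so sigma^2(theta) = theta
# and phi(theta) = 2*sqrt(theta) should stabilize the variance at 1.
stabilized_var = {}
for theta in (0.5, 2.0, 8.0):
    xbar = rng.poisson(theta, size=(reps, n)).mean(axis=1)
    z = np.sqrt(n) * (2.0 * np.sqrt(xbar) - 2.0 * np.sqrt(theta))
    stabilized_var[theta] = z.var()

print(stabilized_var)   # all values close to 1, whatever theta is
```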

Example: Correlation

Let $latex {(X_{1},Y_{1}),\ldots,(X_{n},Y_{n})}&fg=000000$ be a sample from a bivariate normal distribution with correlation coefficient $latex {\rho}&fg=000000$. The sample correlation coefficient is given by:

$latex \displaystyle \hat{\rho}_{n}=\displaystyle\frac{\frac{1}{n}\sum_{i=1}^{n}\left(X_{i}-\overline{X}_{n}\right)\left(Y_{i}-\overline{Y}_{n}\right)}{\sqrt{\left(\frac{1}{n}\sum_{i=1}^{n}\left(X_{i}-\overline{X}_{n}\right)^{2}\right)\left(\frac{1}{n}\sum_{i=1}^{n}\left(Y_{i}-\overline{Y}_{n}\right)^{2}\right)}}. &fg=000000$

In the case of a sample with bivariate normal distribution, one can show that $latex {\sqrt{n}(\hat{\rho}_{n}-\rho)\rightsquigarrow\mathcal{N}(0,(1-\rho^{2})^{2})}&fg=000000$. Using Slutsky's theorem it is possible to deduce an asymptotic confidence interval for $latex {\rho}&fg=000000$, but the calculations are made extremely complicated by the presence of $latex {\hat{\rho}_{n}^{2}}&fg=000000$.

Another solution is to use the Delta method with a transformation that stabilizes the variance. Applying the principle described before, we search for $latex {\phi}&fg=000000$ such that

$latex \displaystyle \begin{array}{rl} \phi(\rho) & =\displaystyle\int\frac{1}{1-\rho^{2}}d\rho\\ & =\displaystyle\int\frac{1}{2}\left[\frac{1}{1-\rho}+\frac{1}{1+\rho}\right]d\rho\\ & =\displaystyle\frac{1}{2}\ln\left(\frac{1+\rho}{1-\rho}\right)\\ & =\displaystyle\mbox{arctanh}\rho \end{array} &fg=000000$

We finally deduce that our interval is given by

$latex \displaystyle \left[\tanh\left(\mbox{arctanh}(\hat{\rho}_{n})-\frac{u_{1-\frac{\alpha}{2}}}{\sqrt{n}}\right),\tanh\left(\mbox{arctanh}(\hat{\rho}_{n})+\frac{u_{1-\frac{\alpha}{2}}}{\sqrt{n}}\right)\right] &fg=000000$

where $latex {u_{1-\frac{\alpha}{2}}}&fg=000000$ is the quantile of order $latex {1-\frac{\alpha}{2}}&fg=000000$ of a law $latex {\mathcal{N}(0,1)}&fg=000000$.
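The interval above is straightforward to compute in practice. A minimal sketch in Python (assuming NumPy; the sample size, true $latex {\rho}&fg=000000$, and seed are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(3)
n, rho = 500, 0.6
u = 1.9600   # standard normal quantile of order 1 - alpha/2, alpha = 0.05

# draw a bivariate normal sample with correlation rho
cov = [[1.0, rho], [rho, 1.0]]
xy = rng.multivariate_normal([0.0, 0.0], cov, size=n)
r = np.corrcoef(xy[:, 0], xy[:, 1])[0, 1]          # rho_hat

# Fisher z interval: arctanh(rho_hat) +/- u/sqrt(n), mapped back by tanh
lo = np.tanh(np.arctanh(r) - u / np.sqrt(n))
hi = np.tanh(np.arctanh(r) + u / np.sqrt(n))
print(lo, r, hi)   # an interval inside (-1, 1) containing rho_hat
```

Note that, unlike a naive interval $latex {\hat{\rho}_{n}\pm u(1-\hat{\rho}_{n}^{2})/\sqrt{n}}&fg=000000$, this one always stays inside $latex {(-1,1)}&fg=000000$.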

Van der Vaart presents some simulations of this example in his book Asymptotic Statistics, where we can see how this kind of transformation improves the correlation analysis.