A first minimax lower bound in the two hypothesis scenario

Photos of Johann Radon and Otto Nikodym. Sources: Apprendre les Mathématiques and Wikipedia.

Consider the simplest case, $latex {M=1}&fg=000000$, with two hypotheses $latex {\{f_{0},f_{1}\}}&fg=000000$ belonging to $latex {\mathcal{F}}&fg=000000$, with associated probability measures $latex {\mathbb P_{0}}&fg=000000$ and $latex {\mathbb P_{1}}&fg=000000$. According to the last post, we only need to find lower bounds for the minimax probability of error $latex {p_{e,1}}&fg=000000$. Today, we will derive a bound using the likelihood ratio.

The acknowledgment of our weakness is the first step in repairing our loss.
— Thomas à Kempis.

Denote by $latex {\mathbb P_{0}^{a}}&fg=000000$ and $latex {\mathbb P_{0}^{s}}&fg=000000$ the absolutely continuous and the singular components of $latex {\mathbb P_{0}}&fg=000000$ with respect to the measure $latex {\mathbb P_{1}}&fg=000000$. By the Lebesgue decomposition theorem we can write $latex {\mathbb P_{0}=\mathbb P_{0}^{a}+\mathbb P_{0}^{s}}&fg=000000$. We will denote by $latex {{\displaystyle \frac{d\mathbb P_{0}^{a}}{d\mathbb P_{1}}}}&fg=000000$ the Radon-Nikodym derivative.

Proposition 1

$latex \displaystyle p_{e,1}\geq\sup_{\tau>0}\left\{ \frac{\tau}{1+\tau}\mathbb P_{1}\left({\displaystyle \frac{d\mathbb P_{0}^{a}}{d\mathbb P_{1}}}\geq\tau\right)\right\} . &fg=000000$

Proof: Fix $latex {\tau>0}&fg=000000$. For any test $latex {h:\mathcal{S}\rightarrow\{0,1\}}&fg=000000$,

$latex \displaystyle \begin{array}{rl} \mathbb P_{0}(h\neq0) & ={\displaystyle \mathbb P_{0}(h=1)\geq\mathbb P_{0}^{a}(h=1)}\\ & ={\displaystyle \int I(h=1){\displaystyle \frac{d\mathbb P_{0}^{a}}{d\mathbb P_{1}}}d\mathbb P_{1}}\\ & \geq{\displaystyle \tau\int I\left(\left\{ h=1\right\} \cap\left\{ {\displaystyle \frac{d\mathbb P_{0}^{a}}{d\mathbb P_{1}}}\geq\tau\right\} \right)d\mathbb P_{1}}\\ & ={\displaystyle \tau\mathbb P_{1}\left(\left\{ h=1\right\} \cap\left\{ {\displaystyle \frac{d\mathbb P_{0}^{a}}{d\mathbb P_{1}}}\geq\tau\right\} \right)}\\ & ={\displaystyle \tau\left[\mathbb P_{1}\left(h=1\right)+\mathbb P_{1}\left({\displaystyle \frac{d\mathbb P_{0}^{a}}{d\mathbb P_{1}}}\geq\tau\right)-\mathbb P_{1}\left(\left\{ h=1\right\} \cup\left\{ {\displaystyle \frac{d\mathbb P_{0}^{a}}{d\mathbb P_{1}}}\geq\tau\right\} \right)\right]}\\ & \geq{\displaystyle \tau\left[\mathbb P_{1}\left(h=1\right)+\mathbb P_{1}\left({\displaystyle \frac{d\mathbb P_{0}^{a}}{d\mathbb P_{1}}}\geq\tau\right)-1\right]\geq\tau(p-\alpha)} \end{array} &fg=000000$

where $latex {p=\mathbb P_{1}\left(h=1\right)}&fg=000000$ and $latex {\alpha=\mathbb P_{1}\left({\displaystyle \frac{d\mathbb P_{0}^{a}}{d\mathbb P_{1}}}<\tau\right)}&fg=000000$. Clearly, $latex {\mathbb P_{1}(h\neq1)=1-p}&fg=000000$. Then

$latex \displaystyle \begin{array}{rl} p_{e,1} & ={\displaystyle \inf_{h}\max_{j=0,1}\mathbb P_{j}(h\neq j)}\\ & \geq{\displaystyle \min_{0\leq p\leq1}\max\{\tau(p-\alpha),1-p\}}\\ & ={\displaystyle \frac{\tau(1-\alpha)}{1+\tau}}. \end{array} &fg=000000$

The last equality holds because the maximum of the lines $latex {\tau(p-\alpha)}&fg=000000$ and $latex {1-p}&fg=000000$ (the former is increasing and the latter is decreasing) is minimized exactly at their intersection, $latex {{\displaystyle p=\frac{1+\tau\alpha}{1+\tau}}}&fg=000000$. $latex \Box&fg=000000$
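As a sanity check, the minimization over $latex {p}&fg=000000$ in the last display can be verified numerically. The sketch below (plain NumPy; the grid size and the particular values of the constants are arbitrary choices) compares a grid search for the minimum of $latex {\max\{\tau(p-\alpha),1-p\}}&fg=000000$ over $latex {p\in[0,1]}&fg=000000$ with the closed form $latex {\tau(1-\alpha)/(1+\tau)}&fg=000000$.

```python
import numpy as np

def minimax_value(tau, alpha, grid=100_001):
    # Evaluate max{tau*(p - alpha), 1 - p} on a fine grid of p in [0, 1]
    # and return its minimum.
    p = np.linspace(0.0, 1.0, grid)
    return np.min(np.maximum(tau * (p - alpha), 1.0 - p))

tau, alpha = 0.7, 0.2                       # arbitrary illustration values
closed_form = tau * (1 - alpha) / (1 + tau)  # tau(1 - alpha)/(1 + tau)
p_star = (1 + tau * alpha) / (1 + tau)       # intersection of the two lines
numeric = minimax_value(tau, alpha)
```

At `p_star` the increasing and decreasing lines take the same value, which is where the grid search finds its minimum.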

Summarizing, it is enough to find constants $latex {\tau>0}&fg=000000$ and $latex {0<\alpha<1}&fg=000000$ independent of $latex {n}&fg=000000$ satisfying

$latex \displaystyle \mathbb P_{1}\left({\displaystyle \frac{d\mathbb P_{0}^{a}}{d\mathbb P_{1}}}\geq\tau\right)\geq1-\alpha. \ \ \ \ \ (1)&fg=000000$

Remark 1 If $latex {\mathbb P_{0}\ll \mathbb P_{1}}&fg=000000$ (then $latex {\mathbb P_{0}^{a}=\mathbb P_{0}}&fg=000000$), the random variable $latex {{\displaystyle \frac{d\mathbb P_{0}^{a}}{d\mathbb P_{1}}}(X)}&fg=000000$ is called the likelihood ratio.
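To make Remark 1 concrete, here is a Monte Carlo sketch of the bound of Proposition 1 in a hypothetical Gaussian setting: $latex {\mathbb P_{0}=N(0,1)}&fg=000000$ and $latex {\mathbb P_{1}=N(\theta,1)}&fg=000000$, so that $latex {\mathbb P_{0}\ll\mathbb P_{1}}&fg=000000$ and the likelihood ratio is $latex {\exp(\theta^{2}/2-\theta x)}&fg=000000$. The value of $latex {\theta}&fg=000000$, the sample size, and the grid of $latex {\tau}&fg=000000$ values are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = 1.0                                  # separation between the two means

# Sample from P_1 = N(theta, 1) and evaluate the likelihood ratio
# dP_0/dP_1(x) = exp(theta^2/2 - theta*x) at those samples.
x = rng.normal(theta, 1.0, size=200_000)
lr = np.exp(theta**2 / 2 - theta * x)

# Proposition 1: p_{e,1} >= sup_tau  tau/(1+tau) * P_1(lr >= tau),
# approximated by maximizing over a grid of tau values.
taus = np.linspace(0.05, 2.0, 400)
best = max(t / (1 + t) * np.mean(lr >= t) for t in taus)
```

With these choices the bound evaluates to roughly 0.19, strictly between 0 and the trivial ceiling 1/2; as $latex {\theta\rightarrow0}&fg=000000$ the two measures merge and the bound approaches 1/2.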

Remark 2 Condition (1) means that the closer $latex {\mathbb P_{0}}&fg=000000$ is to $latex {\mathbb P_{1}}&fg=000000$, the larger the lower bound. In fact, if $latex {\mathbb P_{0}=\mathbb P_{1}}&fg=000000$, condition (1) holds for $latex {\tau=1}&fg=000000$, $latex {\alpha=0}&fg=000000$, and the best lower bound is $latex {p_{e,1}\geq1/2}&fg=000000$. There are situations where this bound is not sharp (see Tsybakov, for example).
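The extreme case in Remark 2 can be spelled out directly: when $latex {\mathbb P_{0}=\mathbb P_{1}}&fg=000000$ the Radon-Nikodym derivative equals 1 almost surely, so taking $latex {\tau=1}&fg=000000$ gives $latex {\alpha=0}&fg=000000$ and the bound 1/2.

```python
# When P_0 = P_1, dP_0/dP_1 = 1 a.s., so P_1(dP_0/dP_1 >= tau) = 1 for tau <= 1.
tau = 1.0
prob_ratio_at_least_tau = 1.0              # i.e. alpha = 0 in the proof's notation
bound = tau / (1 + tau) * prob_ratio_at_least_tau
# bound = 1/2: no test can beat coin flipping between identical measures
```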

Condition (1) is quite general but not always easy to check. Therefore, we need to build other bounds based on other distances or divergences between $latex {\mathbb P_{0}}&fg=000000$ and $latex {\mathbb P_{1}}&fg=000000$. We will examine them next time.

