Remember that we have supposed two hypotheses $latex {\left\{ f_{0},f_{1}\right\} }&fg=000000$, elements of $latex {\mathcal{F}}&fg=000000$. Denote by $latex {P_{0}}&fg=000000$ and $latex {P_{1}}&fg=000000$ the two probability measures on $latex {(\mathcal{X},\mathcal{A})}&fg=000000$ associated with $latex {f_{0}}&fg=000000$ and $latex {f_{1}}&fg=000000$, respectively. If $latex {P_{0}}&fg=000000$ and $latex {P_{1}}&fg=000000$ are very “close”, then it is hard to distinguish $latex {f_{0}}&fg=000000$ from $latex {f_{1}}&fg=000000$, and consequently $latex {p_{e,1}}&fg=000000$ will be large.

Today’s objective is to derive a lower bound for $latex {p_{e,1}}&fg=000000$ using the total variation divergence. We define this divergence between two measures as

$latex \displaystyle {\displaystyle {\displaystyle V(P_{0},P_{1})=\sup_{A\in\mathcal{A}}\vert P_{0}(A)-P_{1}(A)\vert=\sup_{A\in\mathcal{A}}\left|\int_{A}(p_{0}-p_{1})d\nu\right|}} &fg=000000$

where $latex {p_{0}}&fg=000000$ and $latex {p_{1}}&fg=000000$ are the densities of $latex {P_{0}}&fg=000000$ and $latex {P_{1}}&fg=000000$ with respect to a common dominating measure $latex {\nu}&fg=000000$ [1] and $latex {A}&fg=000000$ is any element of $latex {\mathcal{A}}&fg=000000$.
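On a finite sample space the supremum in this definition can be computed by brute force over all subsets. Here is a minimal sketch in Python; the two probability mass functions below are arbitrary illustrative choices, not taken from the text:

```python
from itertools import chain, combinations

# Two illustrative pmfs p0, p1 on the finite space X = {0, 1, 2, 3}
# (arbitrary values chosen only for demonstration)
p0 = [0.4, 0.3, 0.2, 0.1]
p1 = [0.1, 0.2, 0.3, 0.4]
X = range(len(p0))

def powerset(xs):
    """All subsets of xs, playing the role of the sigma-algebra A."""
    xs = list(xs)
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

# V(P0, P1) = sup over A of |P0(A) - P1(A)|
V = max(abs(sum(p0[x] - p1[x] for x in A)) for A in powerset(X))
print(V)
```

The brute-force search is exponential in the size of the space, which is exactly why the closed-form expression of the next lemma is useful.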

Before starting, we need to establish the following lemma.

Lemma 1 (Scheffé’s theorem, 1947)

$latex \displaystyle {\displaystyle V(P_{0},P_{1})=\frac{1}{2}\int\left|p_{0}-p_{1}\right|d\nu=1-\int\min(p_{0},p_{1})d\nu} &fg=000000$

Proof: Take the set $latex {A_{0}=\left\{ x\in\mathcal{X}:p_{0}(x)\geq p_{1}(x)\right\} }&fg=000000$ and $latex {A_{0}^{c}}&fg=000000$ its complement. Then

$latex \displaystyle \begin{array}{rl} & {\displaystyle \int\vert p_{0}-p_{1}\vert d\nu}\\ & ={\displaystyle \int_{A_{0}}\left(p_{0}-p_{1}\right)d\nu+\int_{A_{0}^{c}}\left(p_{1}-p_{0}\right)d\nu}\\ & ={\displaystyle P_{0}(A_{0})-P_{1}(A_{0})+P_{1}(A_{0}^{c})-P_{0}(A_{0}^{c})}\\ & ={\displaystyle P_{0}(A_{0})-P_{1}(A_{0})+(1-P_{1}(A_{0}))-(1-P_{0}(A_{0}))}\\ & ={\displaystyle 2(P_{0}(A_{0})-P_{1}(A_{0}))}\\ & ={\displaystyle 2\int_{A_{0}}\left(p_{0}-p_{1}\right)d\nu.} \end{array} &fg=000000$

Moreover, using the identity $latex {\vert p_{0}-p_{1}\vert=p_{0}+p_{1}-2\min(p_{0},p_{1})}&fg=000000$,

$latex \displaystyle V(P_{0},P_{1})\geq P_{0}(A_{0})-P_{1}(A_{0})=\frac{1}{2}\int\vert p_{0}-p_{1}\vert d\nu=1-\int\min(p_{0},p_{1})d\nu. &fg=000000$

The proof will be complete if we show that, in fact, $latex {V(P_{0},P_{1})=P_{0}(A_{0})-P_{1}(A_{0})}&fg=000000$.

To this end, note that for every $latex {A\in\mathcal{A}}&fg=000000$,

$latex \displaystyle \begin{array}{rl} & {\displaystyle \left|\int_{A}\left(p_{0}-p_{1}\right)d\nu\right|}\\ & ={\displaystyle \left|\int_{A\cap A_{0}}\left(p_{0}-p_{1}\right)d\nu+\int_{A\cap A_{0}^{c}}\left(p_{0}-p_{1}\right)d\nu\right|}\\ & \leq{\displaystyle \max\left\{ \int_{A_{0}}\left(p_{0}-p_{1}\right)d\nu,\int_{A_{0}^{c}}\left(p_{1}-p_{0}\right)d\nu\right\} }\\ & ={\displaystyle \frac{1}{2}\left[\int_{A_{0}}\left(p_{0}-p_{1}\right)d\nu+\int_{A_{0}^{c}}\left(p_{1}-p_{0}\right)d\nu+\left|\int_{A_{0}}\left(p_{0}-p_{1}\right)d\nu-\int_{A_{0}^{c}}\left(p_{1}-p_{0}\right)d\nu\right|\right]}\\ & ={\displaystyle \frac{1}{2}\int\vert p_{0}-p_{1}\vert d\nu}\\ & ={\displaystyle \int_{A_{0}}\left(p_{0}-p_{1}\right)d\nu}. \end{array} &fg=000000$

where the absolute value in the fourth line vanishes because, by the first computation, the integrals over $latex {A_{0}}&fg=000000$ and $latex {A_{0}^{c}}&fg=000000$ both equal $latex {\frac{1}{2}\int\vert p_{0}-p_{1}\vert d\nu}&fg=000000$. Taking the supremum over $latex {A}&fg=000000$, we can conclude that

$latex \displaystyle {\displaystyle \int_{A_{0}}\left(p_{0}-p_{1}\right)d\nu\leq V(P_{0},P_{1})\leq\int_{A_{0}}\left(p_{0}-p_{1}\right)d\nu} &fg=000000$

which implies $latex {V(P_{0},P_{1})=\int_{A_{0}}\left(p_{0}-p_{1}\right)d\nu}&fg=000000$. $latex \Box&fg=000000$

We are now ready to tackle the lower bound on $latex {p_{e,1}}&fg=000000$.
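As a numerical sanity check of Lemma 1, the three expressions for $latex {V(P_{0},P_{1})}&fg=000000$ can be compared on a finite space, where sums play the role of the integrals against $latex {\nu}&fg=000000$ (the counting measure). This is only an illustrative sketch with arbitrarily chosen pmfs:

```python
from itertools import chain, combinations

# Arbitrary illustrative pmfs on a 4-point space
p0 = [0.4, 0.3, 0.2, 0.1]
p1 = [0.1, 0.2, 0.3, 0.4]
X = range(len(p0))

subsets = chain.from_iterable(combinations(X, r) for r in range(len(p0) + 1))

# sup over A of |P0(A) - P1(A)|
sup_form = max(abs(sum(p0[x] - p1[x] for x in A)) for A in subsets)

# (1/2) * integral of |p0 - p1|  (sums = integrals w.r.t. counting measure)
l1_form = 0.5 * sum(abs(a - b) for a, b in zip(p0, p1))

# 1 - integral of min(p0, p1)
min_form = 1 - sum(min(a, b) for a, b in zip(p0, p1))

print(sup_form, l1_form, min_form)  # the three expressions coincide
```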

Theorem 2 Let $latex {P_{0},P_{1}}&fg=000000$ be two probability measures on $latex {(\mathcal{X},\mathcal{A})}&fg=000000$. If $latex {V(P_{0},P_{1})\leq\alpha<1}&fg=000000$, then

$latex \displaystyle p_{e,1}\geq\frac{1-\alpha}{2}. &fg=000000$

Proof: Consider all tests $latex {h:\mathcal{X}\rightarrow\{0,1\}}&fg=000000$. Notice that any such test can be written as $latex {h(X)=1_{A}(X)}&fg=000000$ for some set $latex {A\in\mathcal{A}}&fg=000000$. Then

$latex \displaystyle \begin{array}{rl} p_{e,1} & \geq{\displaystyle \inf_{h}\max_{j=0,1}P_{j}(h\neq j)}\\ & \geq{\displaystyle \frac{1}{2}\inf_{h}(P_{0}(h\neq0)+P_{1}(h\neq1))}\\ & ={\displaystyle \frac{1}{2}\inf_{A}(P_{0}(1_{A}(X)\neq0)+P_{1}(1_{A}(X)\neq1))}\\ & ={\displaystyle \frac{1}{2}\inf_{A}(P_{0}(A)+1-P_{1}(A))}\\ & ={\displaystyle \frac{1}{2}\inf_{A}\left(1-(P_{1}(A)-P_{0}(A))\right)}. \end{array} &fg=000000$

Define $latex {A_{0}=\left\{ x\in\mathcal{X}:p_{0}(x)\geq p_{1}(x)\right\} }&fg=000000$, where $latex {p_{0}}&fg=000000$ and $latex {p_{1}}&fg=000000$ are the densities of $latex {P_{0}}&fg=000000$ and $latex {P_{1}}&fg=000000$ with respect to $latex {\nu}&fg=000000$. Thus, by Lemma 1 we get

$latex \displaystyle \begin{array}{rl} & {\displaystyle \frac{1}{2}\inf_{A}\left(1-(P_{1}(A)-P_{0}(A))\right)}\\ = & {\displaystyle \frac{1}{2}\left(1-\sup_{A}(P_{1}(A)-P_{0}(A))\right)}\\ = & {\displaystyle \frac{1}{2}\left(1-\int_{A_{0}}(p_{0}-p_{1})d\nu\right)}\\ = & {\displaystyle \frac{1}{2}(1-V(P_{0},P_{1}))}\\ \geq & {\displaystyle \frac{1}{2}(1-\alpha)}. \end{array} &fg=000000$

$latex \Box&fg=000000$
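The bound can also be checked numerically: on a finite space every deterministic test is an indicator $latex {1_{A}}&fg=000000$, so we can enumerate all of them and compare the worst-case error with $latex {(1-V)/2}&fg=000000$. Again, this is only a sketch with arbitrary illustrative pmfs:

```python
from itertools import chain, combinations

# Arbitrary illustrative pmfs on a 4-point space
p0 = [0.4, 0.3, 0.2, 0.1]
p1 = [0.1, 0.2, 0.3, 0.4]
X = range(len(p0))

def P(p, A):
    """Probability of the subset A under the pmf p."""
    return sum(p[x] for x in A)

subsets = list(chain.from_iterable(combinations(X, r) for r in range(len(p0) + 1)))

# Total variation via the supremum definition
V = max(abs(P(p0, A) - P(p1, A)) for A in subsets)

# For the test h = 1_A: under P0 an error means X in A, under P1 it means X not in A
minimax_error = min(max(P(p0, A), 1 - P(p1, A)) for A in subsets)

print(minimax_error, (1 - V) / 2)  # minimax_error >= (1 - V) / 2
```

For these particular pmfs the bound is attained with equality by the likelihood-ratio test, i.e. the set where $latex {p_{1}\geq p_{0}}&fg=000000$.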

Summarizing, if $latex {P_{0}}&fg=000000$ is close to $latex {P_{1}}&fg=000000$, then $latex {V(P_{0},P_{1})}&fg=000000$ is small and the probability of error $latex {p_{e,1}}&fg=000000$ is large.

Next week, we will look at a bound involving the famous Kullback-Leibler divergence, which is more convenient to work with than the total variation divergence.

See you next time!

[1] Formally, set $latex {j=0,1}&fg=000000$ and suppose that $latex {\nu}&fg=000000$ is a $latex {\sigma}&fg=000000$-finite measure on $latex {(\mathcal{X},\mathcal{A})}&fg=000000$ such that $latex {P_{j}\ll\nu}&fg=000000$. We define $latex {p_{j}=dP_{j}/d\nu}&fg=000000$. Such a measure $latex {\nu}&fg=000000$ always exists, since we can take, for example, $latex {\nu=P_{0}+P_{1}}&fg=000000$.