
Clarity

🌱

InfoTheory

Let $X$ be an $n$-dimensional continuous random variable with differential entropy $h[X]$. The clarity of $X$ is defined as
$$q[X]=\left(1+\frac{e^{2h[X]}}{(2\pi e)^{n}}\right)^{-1}.$$
The normalizing factor $(2\pi e)^{n}$ is introduced to simplify some of the algebra. ^e46657
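
As a worked example (using the standard formula for the differential entropy of a multivariate Gaussian), take $X\sim\mathcal{N}(\mu,\Sigma)$:
$$h[X]=\tfrac{1}{2}\log\left((2\pi e)^{n}|\Sigma|\right)
\quad\Longrightarrow\quad
q[X]=\left(1+\frac{(2\pi e)^{n}|\Sigma|}{(2\pi e)^{n}}\right)^{-1}=\frac{1}{1+|\Sigma|}.$$
So a tightly concentrated Gaussian ($|\Sigma|\to 0$) has clarity close to $1$, while a widely spread one ($|\Sigma|\to\infty$) has clarity close to $0$.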

For any $n$-dimensional continuous random variable $X$, we have for any $A\in\mathbb{R}^{n\times n}$ and $c\in\mathbb{R}^{n}$ that
$$\begin{align*}
q[X]&\in[0,1] \quad&\text{clarity is bounded}\tag{1}\\
q[X+c]&=q[X] \quad&\text{clarity is shift-invariant}\tag{2}\\
q[AX]&\not=q[X] \quad&\text{clarity is not scale-invariant}\tag{3}
\end{align*}$$

\begin{proof}
$(1)$: Since $h[X]\in[-\infty,\infty]$, we have $q[X]=\frac{1}{1+s}$ with $s=\frac{e^{2h[X]}}{(2\pi e)^{n}}\in[0,\infty]$, i.e., $q[X]\in[0,1]$.
$(2),(3)$: Follow from properties of differential entropy, namely Invariance of Differential Entropy Under Translation and Differential Entropy Under Scaling, $h[X+c]=h[X]$ and $h[AX]=h[X]+\log|A|$; the latter changes $q$ whenever $\log|A|\ne 0$.
\end{proof}
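
As a concrete instance of $(2)$ and $(3)$, take a scalar Gaussian $X\sim\mathcal{N}(\mu,\sigma^{2})$, for which $q[X]=\frac{1}{1+\sigma^{2}}$ by the example above:
$$q[X+c]=\frac{1}{1+\sigma^{2}}=q[X],\qquad q[aX]=\frac{1}{1+a^{2}\sigma^{2}}\ne q[X]\ \text{ unless }\ |a|=1.$$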

For any $n$-dimensional continuous random variable $X$ and any $\hat{X}\in\mathbb{R}^{n}$, the determinant of the expected estimation error is lower-bounded as
$$\left|E\left[(X-\hat{X})(X-\hat{X})^{\top}\right]\right|\ge \frac{1}{q[X]}-1$$
with equality if and only if $X$ is Gaussian and $E[X]=\hat{X}$. ^theorem1

\begin{proof}
Using the same arguments as in Theorem 8.6.6,
$$\left|E\left[(X-\hat{X})(X-\hat{X})^{\top}\right]\right|\ge \frac{e^{2h[X]}}{(2\pi e)^{n}},$$
and the right-hand side equals $\frac{1}{q[X]}-1$ by the definition of clarity.
\end{proof}
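
For the equality case, take $X\sim\mathcal{N}(\mu,\Sigma)$ and $\hat{X}=\mu=E[X]$; then by the Gaussian example above,
$$\left|E\left[(X-\mu)(X-\mu)^{\top}\right]\right|=|\Sigma|=\frac{1}{q[X]}-1,$$
so the bound is attained exactly when $X$ is Gaussian and estimated by its mean.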
