
Optimal Bit Allocation

🌱

Theorem
InfoTheory

# Simplifications

Before we state the theorem, we must state the high-resolution approximations that are made.

1. **High-resolution conditions**: Each $X_{i}$ has pdf $f_{i}$ and
$$W_{i}(b)=\frac{1}{12}\lVert f_{i} \rVert_{\frac{1}{3}}\,2^{-2b}, \quad i=1,\dots,k$$
2. Let $\sigma_{i}^{2}=\text{Var}(X_{i})$ and $\tilde{X}_{i}=\frac{X_{i}}{\sigma_{i}}$. Then $\tilde{X}_{i}$ has unit variance (i.e. $\text{Var}(\tilde{X}_{i})=1$) and its pdf $\tilde{f}_{i}(x)=\sigma_{i}f_{i}(\sigma_{i}x)$ satisfies
$$\lVert f_{i} \rVert_{\frac{1}{3}}=\lVert \tilde{f}_{i} \rVert_{\frac{1}{3}}\,\sigma_{i}^{2}$$
3. Hence
$$W_{i}(b)=\underbrace{ \frac{1}{12}\lVert \tilde{f}_{i} \rVert_{\frac{1}{3}} }_{ h_{i} }\sigma_{i}^{2}2^{-2b}=h_{i}\sigma_{i}^{2}2^{-2b}$$

## Note

$h_{i}$ is invariant to scaling of $X_{i}$, so it depends only on the shape of the distribution; when all the $X_{i}$ share the same shape (e.g. all Gaussian), the $h_{i}$ reduce to one common constant.

# Theorem

For the bit allocation problem the distortion is
$$D(\mathbf{b})=\sum^{k}_{i=1}h_{i}\sigma_{i}^{2}2^{-2b_{i}}$$
and it is minimized subject to the bit constraint $\sum_{i=1}^{k}b_{i}\le B$ if and only if
$$b_{i}=\bar{b}+\frac{1}{2}\log_{2}\frac{\sigma_{i}^{2}}{\rho^{2}}+\frac{1}{2}\log_{2} \frac{h_{i}}{H}, \quad i=1,\dots,k$$
where
$$\bar{b}=\frac{B}{k}, \qquad \rho^{2}=\left( \prod_{i=1}^{k}\sigma_{i}^{2} \right)^{\frac{1}{k}}, \qquad H=\left( \prod_{i=1}^{k}h_{i} \right)^{1/k}$$
i.e. $\rho^{2}$ is the geometric mean of the variances and $H$ is the geometric mean of the shape factors.
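
To make the formula concrete, here is a minimal NumPy sketch, assuming made-up values for the variances $\sigma_{i}^{2}$, the shape factors $h_{i}$, and the budget $B$ (none of these numbers come from this note). It evaluates the allocation from the theorem, checks that the bits sum to $B$, and checks that no perturbed allocation with the same total budget does better. The high-resolution model treats the $b_{i}$ as real numbers, so no integrality or nonnegativity constraint is enforced.

```python
import numpy as np

# Hypothetical inputs (illustrative only): per-component variances sigma_i^2,
# shape factors h_i = (1/12) * ||f~_i||_{1/3}, and a total bit budget B.
sigma2 = np.array([4.0, 1.0, 0.25, 9.0])
h = np.array([0.7, 0.7, 0.9, 0.6])
B = 16.0
k = len(sigma2)

def distortion(b):
    """High-resolution distortion D(b) = sum_i h_i * sigma_i^2 * 2^(-2 b_i)."""
    return np.sum(h * sigma2 * 2.0 ** (-2.0 * b))

# Allocation from the theorem: b_i = b_bar + 0.5*log2(sigma_i^2/rho^2) + 0.5*log2(h_i/H).
b_bar = B / k
rho2 = np.exp(np.log(sigma2).mean())   # geometric mean of the variances
H = np.exp(np.log(h).mean())           # geometric mean of the shape factors
b_opt = b_bar + 0.5 * np.log2(sigma2 / rho2) + 0.5 * np.log2(h / H)

print("b_opt       :", b_opt)
print("sum of bits :", b_opt.sum())        # equals B: the budget is fully used
print("D(b_opt)    :", distortion(b_opt))

# Sanity check: zero-sum perturbations keep the total budget at B,
# and none of them should achieve lower distortion than b_opt.
rng = np.random.default_rng(0)
for _ in range(10_000):
    d = rng.normal(size=k)
    d -= d.mean()
    assert distortion(b_opt + 0.1 * d) >= distortion(b_opt) - 1e-12
print("b_opt beat 10,000 random same-budget allocations")
```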

# Remark

We see that the optimal $\mathbf{b}$ must satisfy $\sum_{i=1}^{k}b_{i}=B$: since each term of $D(\mathbf{b})$ is strictly decreasing in $b_{i}$, leaving part of the budget unused can never be optimal, so the constraint is met with equality.
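
For completeness, here is a sketch of the standard Lagrange-multiplier argument that produces the allocation formula (the derivation is not spelled out above; the constant $c$ is just shorthand for the multiplier term $\frac{\lambda}{2\ln 2}$):

$$\begin{align*}
\mathcal{L}(\mathbf{b},\lambda)&=\sum_{i=1}^{k}h_{i}\sigma_{i}^{2}2^{-2b_{i}}+\lambda\left( \sum_{i=1}^{k}b_{i}-B \right)\\
\frac{\partial \mathcal{L}}{\partial b_{i}}=0&\iff h_{i}\sigma_{i}^{2}2^{-2b_{i}}=\frac{\lambda}{2\ln 2}=c\\
&\implies b_{i}=\frac{1}{2}\log_{2}(h_{i}\sigma_{i}^{2})-\frac{1}{2}\log_{2}c\\
\sum_{i=1}^{k}b_{i}=B&\implies \frac{1}{2}\log_{2}c=\frac{1}{2k}\sum_{j=1}^{k}\log_{2}(h_{j}\sigma_{j}^{2})-\bar{b}\\
&\implies b_{i}=\bar{b}+\frac{1}{2}\log_{2}\frac{\sigma_{i}^{2}}{\rho^{2}}+\frac{1}{2}\log_{2}\frac{h_{i}}{H}
\end{align*}$$

The stationarity condition says that every component contributes the same distortion $c$ at the optimum; substituting the optimal $b_{i}$ back in gives $D_{\min}=kH\rho^{2}2^{-2B/k}$.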
