Predictive Quantization vs Scalar Quantization

🌱

Theorem
InfoTheory

What is the Performance Gain?

Assume $X_{1},\dots,X_{n},\dots$ is WSS. Recall the difference quantization principle:

$$E[(X_{n}-\hat{X}_{n})^{2}]=E[(e_{n}-\hat{e}_{n})^{2}]=E[(e_{n}-Q(e_{n}))^{2}]$$

where $e_{n}$ is the prediction residual and $\hat{e}_{n}=Q(e_{n})$ is its quantized value. Quantization errors therefore do not accumulate, and we can assess the performance gain over our baseline scalar quantization.
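To see the non-accumulation property numerically, here is a minimal closed-loop (DPCM-style) sketch. The AR(1) test signal, the first-order predictor with coefficient $a$, and the uniform mid-rise quantizer are all illustrative assumptions, not choices fixed by this note:

```python
import numpy as np

def quantize(e, step=0.25):
    """Uniform mid-rise quantizer Q(e) (illustrative choice)."""
    return step * (np.floor(e / step) + 0.5)

rng = np.random.default_rng(0)
n, a = 10_000, 0.9
# Illustrative WSS source: AR(1) process X_n = a X_{n-1} + W_n
w = rng.standard_normal(n)
x = np.zeros(n)
for i in range(1, n):
    x[i] = a * x[i - 1] + w[i]

# Closed-loop difference quantization: predict from past reconstructions,
# quantize only the prediction residual e_n
x_hat = np.zeros(n)
q_err = np.zeros(n)
for i in range(1, n):
    pred = a * x_hat[i - 1]      # first-order predictor (m = 1)
    e = x[i] - pred              # residual e_n
    e_hat = quantize(e)          # quantized residual Q(e_n)
    x_hat[i] = pred + e_hat      # reconstruction = prediction + Q(e_n)
    q_err[i] = e - e_hat

# Reconstruction error equals the quantizer error on e_n at every step:
# the error does not accumulate over time.
assert np.allclose(x - x_hat, q_err)
print("E[(X_n - X_hat_n)^2] =", np.mean((x - x_hat) ** 2))
print("E[(e_n - Q(e_n))^2]  =", np.mean(q_err ** 2))
```

Because the encoder predicts from past reconstructions rather than past clean samples, the identity $X_{n}-\hat{X}_{n}=e_{n}-Q(e_{n})$ holds exactly at every step.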

System SNR

$$\text{SNR}_{\text{sys}}=10\log_{10}G_{\text{clp}}+\text{SNR}_{Q} \ \ \ [\text{dB}]$$
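For example, a prediction gain of $G_{\text{clp}}=100$ contributes $10\log_{10}100=20\,\text{dB}$, so a quantizer running at $\text{SNR}_{Q}=30\,\text{dB}$ yields $\text{SNR}_{\text{sys}}=50\,\text{dB}$.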

Approximations

  1. $G_{\text{clp}}\approx G_{\text{olp}}=\frac{E[X_{n}^{2}]}{E\left[ \left( X_{n}-\sum^{m}_{i=1}a_{i}X_{n-i} \right)^{2} \right]}$, i.e. the closed-loop gain is well approximated by the open-loop gain (see the sketch after this list)
  2. $\text{SNR}_{Q}$ is mainly determined by $Q$ and not by $e_{n}$
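As a concrete check on approximation 1, the following sketch estimates $G_{\text{olp}}$ empirically; the AR(1) source and the fixed first-order predictor $a_{1}=0.9$ are assumptions for illustration. For this source the theoretical value is $1/(1-0.9^{2})\approx 5.26$, about $7.2\,\text{dB}$:

```python
import numpy as np

rng = np.random.default_rng(1)
n, a1 = 100_000, 0.9
# Same illustrative AR(1) source as above
w = rng.standard_normal(n)
x = np.zeros(n)
for i in range(1, n):
    x[i] = a1 * x[i - 1] + w[i]

# Open-loop residual: predict from past *clean* samples (m = 1)
e_ol = x[1:] - a1 * x[:-1]
g_olp = np.mean(x ** 2) / np.mean(e_ol ** 2)
print(f"G_olp ~= {g_olp:.2f}  ({10 * np.log10(g_olp):.2f} dB)")
```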

Hence we have

$$\text{SNR}_{\text{sys}}\approx \text{SNR}_{Q}+10\log_{10}G_{\text{olp}}$$

and the overall performance gain over scalar quantization is $10\log_{10}G_{\text{olp}}$.

Maximum Gain

With optimal coefficients $a_{1},\dots,a_{m}$ the maximum gain is

$$G_{\text{clp}}\approx G_{\text{olp}}=\frac{E[X_{n}^{2}]}{E\left[ \left( X_{n}-\sum^{m}_{i=1}a_{i}X_{n-i} \right)^{2} \right]}=\frac{r_{0}}{r_{0}-\sum_{i=1}^{m}a_{i}r_{i}}$$

where $r_{k}=E[X_{n}X_{n-k}]$ denotes the autocorrelation of the process.
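To illustrate the closed form, here is a sketch that computes the optimal coefficients from the normal (Yule-Walker) equations $\mathbf{R}\mathbf{a}=\mathbf{r}$ and then evaluates $r_{0}/(r_{0}-\sum_{i}a_{i}r_{i})$; the AR(1) autocorrelation model with $\rho = 0.9$ is an assumed example:

```python
import numpy as np

# Autocorrelation of an AR(1) source with pole rho: r_k = rho^|k| / (1 - rho^2)
rho, m = 0.9, 3
r = rho ** np.arange(m + 1) / (1 - rho ** 2)   # r_0, ..., r_m

# Optimal coefficients solve the normal equations R a = (r_1, ..., r_m),
# where R is the m x m Toeplitz autocorrelation matrix R[i, j] = r_|i-j|
idx = np.arange(m)
R = r[np.abs(idx[:, None] - idx[None, :])]
a = np.linalg.solve(R, r[1 : m + 1])
print("optimal a:", a)                          # ~ [0.9, 0, 0] for an AR(1) source

# Maximum gain from the closed form r_0 / (r_0 - sum_i a_i r_i)
g_max = r[0] / (r[0] - a @ r[1 : m + 1])
print(f"G_olp = {g_max:.2f}  ({10 * np.log10(g_max):.2f} dB)")
```

For an AR(1) source only the first coefficient is nonzero, so increasing $m$ beyond 1 adds no gain, and the result matches the empirical estimate above.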