Suppose we have a sequence of RVs $X_1, \dots, X_n$ which have the same marginal distribution. If we perform sample-by-sample scalar quantization, this yields the average MSE

$$E\left[\frac{1}{n}\sum_{i=1}^{n}\big(X_i - Q(X_i)\big)^2\right] = E\big[(X_i - Q(X_i))^2\big],$$

where the equality holds because the $X_i$ are identically distributed. We observe that the distortion stays the same whether or not the $X_i$'s have any interdependence. Since we aren't exploiting the statistical dependence between the RVs, we aren't getting as much out of our quantization as we could.

### Idea

To exploit the statistical interdependence, we can form a linear prediction $\tilde{X}_n$ from the previous $m$ samples,

$$\tilde{X}_n = \sum_{i=1}^{m} a_i X_{n-i},$$

and quantize the difference signal

$$e_n = X_n - \tilde{X}_n = X_n - a_1 X_{n-1} - \dots - a_m X_{n-m}.$$

Let $P$ be the FIR filter with transfer function

$$P(z) = \sum_{i=1}^{m} a_i z^{-i}.$$

Without quantization, the $X_n$ can be perfectly reconstructed:
![[Pasted image 20240312114126.png]]
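As a sanity check, here is a minimal sketch of the unquantized loop (the AR(1) source and the one-tap predictor $a_1 = \rho$ are assumptions for illustration, not from the note):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy correlated source: an AR(1) process with unit marginal variance.
n, rho = 10_000, 0.95
X = np.zeros(n)
for t in range(1, n):
    X[t] = rho * X[t - 1] + rng.normal(scale=np.sqrt(1 - rho**2))

# One-tap predictor (m = 1, a_1 = rho): Xtilde_n = rho * X_{n-1}.
a1 = rho
e = X.copy()
e[1:] -= a1 * X[:-1]              # difference signal e_n = X_n - a_1 X_{n-1}
print(np.var(X), np.var(e))       # e_n is far less spread out than X_n

# Decoder: run e through 1 / (1 - P(z)), i.e. X_n = e_n + a_1 X_{n-1}.
X_rec = np.zeros(n)
X_rec[0] = e[0]
for t in range(1, n):
    X_rec[t] = e[t] + a1 * X_rec[t - 1]
assert np.allclose(X_rec, X)      # perfect reconstruction, no quantizer
```

The printed variances (roughly $1$ vs. $1 - \rho^2 \approx 0.1$ here) are the point of the construction.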
We see that since $e_n$ is much less spread out than $X_n$, it is easier to quantize: for a fixed number of quantizer levels, the smaller range of $e_n$ means finer steps and a smaller quantization error. Adding quantization to the mix, we now have our proposed scheme:
![[Pasted image 20240312114226.png]]

We see that if the quantization error is small, then $\hat{e}_n \approx e_n$ and hence $\hat{X}_n \approx X_n$. But in this structure the error accumulates.
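To see why, consider a worked special case (the previous-sample predictor $m = 1$, $a_1 = 1$, i.e. $P(z) = z^{-1}$, chosen here purely for illustration). The encoder sends $\hat{e}_n = Q(e_n)$ with $e_n = X_n - X_{n-1}$, and the receiver reconstructs $\hat{X}_n = \hat{e}_n + \hat{X}_{n-1}$. Subtracting the two recursions (with the convention $X_{-1} = \hat{X}_{-1} = 0$),

$$X_n - \hat{X}_n = (X_{n-1} - \hat{X}_{n-1}) + (e_n - \hat{e}_n) = \sum_{i=0}^{n} \big(e_i - Q(e_i)\big),$$

a running sum of every past quantization error. If these errors are roughly uncorrelated with variance $\sigma_q^2$, the reconstruction-error variance grows like $(n+1)\sigma_q^2$, i.e. without bound.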
More generally, let $h_n$ denote the impulse response of the transfer function $\frac{1}{1 - P(z)}$, so that $\hat{X}_n = \hat{e}_n * h_n$, $X_n = e_n * h_n$, and $X_n - \hat{X}_n = (e_n - \hat{e}_n) * h_n$. Hence,

$$E\big[(X_n - \hat{X}_n)^2\big] = E\Bigg[\big((e_n - \hat{e}_n) * h_n\big)^2\Bigg] = E\Bigg[\Bigg(\sum_{i=0}^{n} \underbrace{(e_{n-i} - \hat{e}_{n-i})}_{e_{n-i} - Q(e_{n-i})}\, h_i\Bigg)^2\Bigg].$$

We see that at time $n$ we depend on all past quantization errors $e_i - Q(e_i)$! This means all our errors accumulate. We need a different structure, which brings us to Difference Quantization.
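A quick numerical check of this accumulation before moving on (the AR(1) source, previous-sample predictor, and uniform quantizer step are all illustrative assumptions, matching the worked case above):

```python
import numpy as np

rng = np.random.default_rng(0)

# Same toy AR(1) source as in the first sketch.
n, rho = 10_000, 0.95
X = np.zeros(n)
for t in range(1, n):
    X[t] = rho * X[t - 1] + rng.normal(scale=np.sqrt(1 - rho**2))

# Previous-sample predictor (m = 1, a_1 = 1) and a uniform quantizer.
step = 0.05
Q = lambda v: step * np.round(v / step)

# Encoder: predict from the TRUE past sample and quantize the difference.
e_hat = np.empty(n)
e_hat[0] = Q(X[0])
e_hat[1:] = Q(X[1:] - X[:-1])

# Decoder: it only has its own reconstructions, so every past quantization
# error e_i - Q(e_i) stays inside the 1/(1 - P(z)) feedback loop.
X_hat = np.zeros(n)
X_hat[0] = e_hat[0]
for t in range(1, n):
    X_hat[t] = e_hat[t] + X_hat[t - 1]

err = np.abs(X - X_hat)
print(err[:100].mean(), err[-100:].mean())  # late errors dwarf early ones
```

On a run like this the late reconstruction errors are much larger than the quantizer step, even though each individual quantization error is at most $\mathrm{step}/2$.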