
Closed-loop Predictive Quantization

🌱

Definition
InfoTheory

Now, to put difference quantization into practice, we still need to define the term $U_{n}$. The idea here is to let $U_{n}$ be the prediction of $X_{n}$ from past reconstructions (as in our linear predictor setup):

$$U_{n}=\sum_{i=1}^{m}a_{i}\hat{X}_{n-i}$$

Pasted image 20240312150852.png

The key feature here is that $U_{n}$ can be generated at both the encoder and the decoder, since it depends only on past reconstructed values.
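As a minimal sketch (the names `a` and `x_hat_past` are my own placeholders, not from the note), the predictor touches only past reconstructions, so the exact same function can run at the encoder and at the decoder:

```python
import numpy as np

def predict(a, x_hat_past):
    """U_n = sum_{i=1}^{m} a_i * X_hat_{n-i}.

    a:          predictor coefficients [a_1, ..., a_m]
    x_hat_past: past reconstructions [X_hat_{n-1}, ..., X_hat_{n-m}]
    """
    return float(np.dot(a, x_hat_past))
```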

Expressed end-to-end

$$
\begin{align*}
\hat{X}_{n}&=\hat{e}_{n}+U_{n}\\
&=Q(e_{n})+U_{n}\\
&=Q(X_{n}-U_{n})+U_{n}\\
&=Q\left( X_{n}-\sum_{i=1}^{m}a_{i}\hat{X}_{n-i} \right)+\sum_{i=1}^{m}a_{i}\hat{X}_{n-i}
\end{align*}
$$

Intuition

This architecture is clever in the way the $U_{n}$ terms cancel between $X_{n}$ and $\hat{X}_{n}$: it isolates the problem so that it concerns the quantizer alone. In the previous iteration, by contrast, the predictor's impulse response kept residues of past predictions in the mix. Another way to make sense of this is to look at what feeds $P$ here: $P$ is driven by $\hat{X}_{n-i},\ i\geq 1$ only, which means $U_{n}$ can be kept consistent on both sides, something that was not the case when $P$ was fed from $X_{n}$ at the encoder and from $\hat{X}_{n}$ at the decoder.
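Spelling out the cancellation (an intermediate step implied by the block above): subtracting the reconstruction from the original sample gives

$$
X_{n}-\hat{X}_{n}=(e_{n}+U_{n})-\left(Q(e_{n})+U_{n}\right)=e_{n}-Q(e_{n}),
$$

so the end-to-end error is exactly the quantizer's error on the residual $e_{n}$, with nothing carried over from past predictions.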

Equivalent structure

Pasted image 20240312153732.png

We see here that the sending side keeps track of what the receiver receives, which allows $U_{n}$ to stay consistent on both sides.
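A rough sketch of that equivalent structure in Python (the uniform quantizer, the function names, and the first-order predictor `a=[0.9]` are placeholder assumptions, not from the note): the encoder runs a copy of the decoder's reconstruction update, so both sides compute the same $U_{n}$ from the transmitted residuals alone.

```python
import numpy as np

def quantize(e, step=0.5):
    """Toy uniform scalar quantizer Q(e), shared by encoder and decoder."""
    return step * np.round(e / step)

def encode(x, a, step=0.5):
    """Encoder with an embedded decoder: emits e_hat_n and tracks X_hat_n."""
    x_hat, codes = [], []
    for n, x_n in enumerate(x):
        past = [x_hat[n - i] if n - i >= 0 else 0.0 for i in range(1, len(a) + 1)]
        u_n = float(np.dot(a, past))       # U_n from past reconstructions only
        e_hat = quantize(x_n - u_n, step)  # e_hat_n = Q(X_n - U_n), transmitted
        codes.append(e_hat)
        x_hat.append(e_hat + u_n)          # exactly what the receiver will compute
    return codes, x_hat

def decode(codes, a):
    """Decoder: rebuilds X_hat_n from the received residuals alone."""
    x_hat = []
    for n, e_hat in enumerate(codes):
        past = [x_hat[n - i] if n - i >= 0 else 0.0 for i in range(1, len(a) + 1)]
        x_hat.append(e_hat + float(np.dot(a, past)))
    return x_hat

# Toy correlated signal; a = [0.9] is an arbitrary first-order predictor.
x = np.cumsum(np.random.default_rng(0).normal(size=50))
codes, encoder_view = encode(x, a=[0.9])
print(np.allclose(encoder_view, decode(codes, a=[0.9])))   # True: U_n matches on both sides
print(np.max(np.abs(x - np.array(encoder_view))) <= 0.25)  # True: error is e_n - Q(e_n), at most step/2
```

The last two checks are the whole point of the closed loop: the encoder's internal reconstructions match the decoder's exactly, and the end-to-end error never exceeds the quantizer's own error on the residual.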
