
Closed-loop Predictor Coefficients

🌱

Definition
InfoTheory

In our Closed-loop Predictive Quantization system one may wonder, "how are the predictor coefficients, $a_{i}$, chosen?"

  1. We may first attempt to minimize the closed-loop MSE prediction error $E[e_{n}^{2}]=E\left[ \left( X_{n}-\sum^{m}_{i=1}a_{i}\hat{X}_{n-i} \right)^{2} \right]$, but this is too computationally complex: for long sequences each $\hat{X}_{n-i}$ depends on the previous $a_{i}$ in a highly nonlinear fashion (see the sketch after this list).
  2. We may instead choose the $a_{i}$ to minimize the open-loop error $E\left[ \left( X_{n}-\sum^{m}_{i=1}a_{i}X_{n-i} \right)^{2} \right]$, so now we state the problem formally…
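
To see why the closed-loop error resists direct minimization, consider the standard closed-loop (DPCM-style) reconstruction, sketched here under the assumption of a quantizer $Q$ applied to the prediction error (the note above does not spell $Q$ out):

$$\hat{X}_{n}=\sum^{m}_{i=1}a_{i}\hat{X}_{n-i}+Q\left( X_{n}-\sum^{m}_{i=1}a_{i}\hat{X}_{n-i} \right)$$

Each $\hat{X}_{n-i}$ is itself a nested composition of the nonlinear map $Q$ with earlier predictions, so $E[e_{n}^{2}]$ is a deeply nested nonlinear function of $a_{1},\dots,a_{m}$ with no closed-form minimizer.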

# Problem

Given RVs $X_{n-m},X_{n-m+1},\dots,X_{n-1},X_{n}$ (all with finite variance), determine their predictor coefficients $a_{i}$ by minimizing the prediction error

$$\min E\left[ \left( X_{n}-\sum^{m}_{i=1}a_{i}X_{n-i} \right)^{2} \right]$$

# Solution

Define

$$g(a_{1},\dots,a_{m})=E\left[ \left( X_{n}-\sum^{m}_{i=1}a_{i}X_{n-i} \right)^{2} \right]$$

We must find

$$\underset{ a_{1},\dots,a_{m} }{ \arg\min } \ g(a_{1},\dots,a_{m})$$

Expanding the square,

$$\begin{align*} E\left[ \left( X_{n}-\sum^{m}_{i=1}a_{i}X_{n-i} \right)^{2} \right] &= E[X_{n}^{2}]+\sum_{i=1}^{m}\sum_{k=1}^{m}a_{i}a_{k}E[X_{n-i}X_{n-k}]-2\sum_{i=1}^{m}a_{i}E[X_{n}X_{n-i}] \end{align*}$$

Now we differentiate with respect to $a_{j}$:

$$\frac{ \partial g }{ \partial a_{j} } =2\sum_{k=1}^{m}a_{k}E[X_{n-j}X_{n-k}]-2E[X_{n}X_{n-j}]$$

Setting $\frac{ \partial g }{ \partial a_{j} }=0$ for all $j$ gives $m$ linear equations (note this works because the distortion is convex in the $a_{i}$).

## System of equations

$$\sum_{k=1}^{m}a_{k}E[X_{n-j}X_{n-k}]=E[X_{n}X_{n-j}], \quad j=1,\dots,m \tag{$*$}$$

Let $r_{ik}=E[X_{n-i}X_{n-k}]$ and

$$\mathbf{R}=\begin{bmatrix}r_{11} & r_{12} & \dots & r_{1m} \\ r_{21} & r_{22} & \dots & r_{2m} \\ \vdots & \vdots & \ddots & \vdots \\ r_{m1} & r_{m2} & \dots & r_{mm} \end{bmatrix}\quad \mathbf{v}=\begin{bmatrix}r_{01} \\ r_{02} \\ \vdots \\ r_{0m}\end{bmatrix}$$

From $(*)$ the optimal $\mathbf{a}=(a_{1},\dots,a_{m})^{T}$ must solve

$$\mathbf{R}\mathbf{a}=\mathbf{v}$$

A solution always exists, but it is unique only if $\mathbf{R}$ is invertible, in which case

$$\mathbf{a}=\mathbf{R}^{-1}\mathbf{v}$$

## Remark

The autocorrelation matrix $\mathbf{R}$ is symmetric (i.e. $r_{ij}=r_{ji}$). It is also positive semidefinite (or definite), i.e.

$$\mathbf{x}^{T}\mathbf{R}\mathbf{x}\ge 0 \quad \forall\, \mathbf{x}=(x_{1},\dots,x_{m})^{T}$$
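
As a quick numerical sanity check, here is a minimal Python sketch that estimates the autocorrelations from a sample path, assembles $\mathbf{R}$ and $\mathbf{v}$, and solves $\mathbf{R}\mathbf{a}=\mathbf{v}$. It assumes wide-sense stationarity (so $r_{ik}$ depends only on the lag $|i-k|$); the function name `predictor_coefficients` and the AR(2) test signal are illustrative choices, not from the note.

```python
import numpy as np

def predictor_coefficients(x: np.ndarray, m: int) -> np.ndarray:
    """Solve the normal equations R a = v for an order-m linear predictor.

    Assumes wide-sense stationarity, so r_{ik} = E[X_{n-i} X_{n-k}]
    depends only on |i - k| and can be estimated from a single path.
    """
    n = len(x)
    # Sample estimates of the lag-k autocorrelations r_0, ..., r_m
    r = np.array([np.dot(x[: n - k], x[k:]) / (n - k) for k in range(m + 1)])
    R = np.array([[r[abs(j - k)] for k in range(m)] for j in range(m)])  # R[j,k] = r_{|j-k|}
    v = r[1 : m + 1]              # v = (r_1, ..., r_m) = (E[X_n X_{n-1}], ..., E[X_n X_{n-m}])
    return np.linalg.solve(R, v)  # a = R^{-1} v

# Illustrative test: AR(2) process X_n = 0.75 X_{n-1} - 0.5 X_{n-2} + W_n,
# whose optimal 2-tap linear predictor is exactly (0.75, -0.5).
rng = np.random.default_rng(0)
w = rng.standard_normal(10_000)
x = np.zeros_like(w)
for t in range(2, len(x)):
    x[t] = 0.75 * x[t - 1] - 0.5 * x[t - 2] + w[t]

print(predictor_coefficients(x, m=2))  # ~ [0.75, -0.5]
```

Since $\mathbf{R}$ is Toeplitz under stationarity, the generic `np.linalg.solve` could be replaced by the Levinson-Durbin recursion, which exploits that structure; the generic solver is kept here for clarity.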
