
Orthogonality Principle

🌱

Theorem
InfoTheory

# Recall

In our closed-loop linear predictor we defined the error as
$$e_{n}=X_{n}-\sum_{i=1}^{m}a_{i}X_{n-i}$$
We then have that
$$\begin{align*} E[e_{n}X_{n-j}]&=E\left[ \left( X_{n}-\sum_{i=1}^{m}a_{i}X_{n-i} \right)X_{n-j} \right]\\ &=E[X_{n}X_{n-j}]-\sum_{k=1}^{m}a_{k}E[X_{n-k}X_{n-j}] \end{align*}$$
We see from (*) that $a_{1},\dots,a_{m}$ are optimal if and only if $E[e_{n}X_{n-j}]=0$ for $j=1,\dots,m$.

# Theorem

The linear predictor
$$\hat{X}_{n}=\sum_{k=1}^{m}a_{k}X_{n-k}$$
is optimal in the MSE sense if and only if the prediction error is orthogonal to all $X_{n-j}$, i.e.
$$\hat{X}_{n}\text{ optimal }\iff(X_{n}-\hat{X}_{n})\perp X_{n-j},\quad j=1,\dots,m$$
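As a quick numerical sanity check, here is a minimal NumPy sketch (the AR(2) test signal, the sample size, and the predictor order $m=4$ are all arbitrary choices for illustration, not part of the theorem): it estimates the optimal coefficients from the sample normal equations and then verifies that the resulting prediction error is approximately orthogonal to each $X_{n-j}$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a zero-mean AR(2) process as a test signal (arbitrary stable coefficients).
N = 100_000
x = np.zeros(N)
w = rng.standard_normal(N)
for n in range(2, N):
    x[n] = 0.6 * x[n - 1] - 0.3 * x[n - 2] + w[n]

m = 4  # predictor order (illustrative choice)

def autocorr(x, lag):
    """Sample estimate of E[X_n X_{n-lag}]."""
    return np.mean(x[lag:] * x[: len(x) - lag])

# Normal equations R a = r, with R[j,k] = E[X_{n-j-1} X_{n-k-1}] and r[j] = E[X_n X_{n-j-1}].
R = np.array([[autocorr(x, abs(j - k)) for k in range(m)] for j in range(m)])
r = np.array([autocorr(x, j + 1) for j in range(m)])
a = np.linalg.solve(R, r)

# Prediction error e_n = X_n - sum_k a_k X_{n-k}, for n = m, ..., N-1.
X_lags = np.column_stack([x[m - k : N - k] for k in range(1, m + 1)])
e = x[m:] - X_lags @ a

# Orthogonality check: the sample mean of e_n X_{n-j} should be close to zero for j = 1, ..., m.
for j in range(1, m + 1):
    print(f"E[e_n X_(n-{j})] ~ {np.mean(e * x[m - j : N - j]):+.2e}")
```

The printed correlations should all be near zero (up to sampling error), which is exactly the orthogonality condition the theorem states for the MSE-optimal coefficients.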