
Asymptotic Equipartition Property

🌱

Theorem
InfoTheory

For a discrete memoryless source (DMS) $\{X_i\}_{i=1}^\infty$ with pmf $p_X$ and alphabet $\mathcal{X}$,

$$-\frac{1}{n}\log_2 p_{X^n}(X^n)\xrightarrow{n\to\infty}H(X)\quad\text{in probability},$$

i.e. the sample average information of $X_1,\dots,X_n$ converges in probability to the entropy of $X$. More generally, for a stationary ergodic source $\{X_i\}_{i=1}^\infty$,

$$-\frac{1}{n}\log_2 p_{X^n}(X^n)\xrightarrow{n\to\infty}H(\mathcal{X})\quad\text{in probability},$$

where $H(\mathcal{X})$ denotes the entropy rate of the source.
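As a quick numerical sketch (my own illustration, not from the text): for a Bernoulli($p$) DMS the sample average information $-\frac{1}{n}\log_2 p_{X^n}(X^n)$ should settle near the binary entropy $H(X)$ as $n$ grows.

```python
import math
import random

def entropy(p):
    """Binary entropy H(X) in bits for a Bernoulli(p) source."""
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def sample_avg_information(p, n, rng):
    """Draw x^n i.i.d. Bernoulli(p) and return -(1/n) log2 p_{X^n}(x^n)."""
    ones = sum(rng.random() < p for _ in range(n))
    # By independence, log2 p_{X^n}(x^n) decomposes into per-symbol terms.
    log_prob = ones * math.log2(p) + (n - ones) * math.log2(1 - p)
    return -log_prob / n

rng = random.Random(0)
p = 0.3
for n in (10, 100, 10_000):
    print(n, sample_avg_information(p, n, rng), entropy(p))
```

For growing $n$ the printed sample averages cluster ever closer to $H(X)\approx0.881$ bits, which is the convergence-in-probability statement in miniature.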

Equivalently, the AEP states that the "atypical" set $B_\epsilon^{(n)}\subseteq\mathcal{X}^n$ given by

$$B_\epsilon^{(n)}=\left\{a^n\in\mathcal{X}^n:\left|-\frac{1}{n}\log_2 p_{X^n}(a^n)-H(X)\right|>\epsilon\right\}$$

satisfies $\lim_{n\to\infty}P(B_\epsilon^{(n)})=0$.

For a DMS $\{X_i\}_{i=1}^\infty$ with pmf $p_X$ on $\mathcal{X}$ and entropy $H(X)$, the typical set $A_\epsilon^{(n)}=\mathcal{X}^n\setminus B_\epsilon^{(n)}$ satisfies:

1. $\lim_{n\to\infty}P(A_\epsilon^{(n)})=1$
2. $|A_\epsilon^{(n)}|\le 2^{n(H(X)+\epsilon)}$
3. $|A_\epsilon^{(n)}|\ge(1-\epsilon)\,2^{n(H(X)-\epsilon)}$ for $n$ sufficiently large
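These three properties can be checked by brute force for a small binary source (a sketch of my own, with arbitrarily chosen $p$, $\epsilon$, $n$): sequences with the same number of ones share the same probability, so grouping by that count lets us sum $P(A_\epsilon^{(n)})$ and count $|A_\epsilon^{(n)}|$ exactly.

```python
import math

p, eps, n = 0.3, 0.2, 12  # arbitrary small example
H = -p * math.log2(p) - (1 - p) * math.log2(1 - p)

typical_prob = 0.0   # P(A_eps^(n))
typical_count = 0    # |A_eps^(n)|
for ones in range(n + 1):  # group sequences by their number of ones
    logp = ones * math.log2(p) + (n - ones) * math.log2(1 - p)
    if abs(-logp / n - H) <= eps:  # the sequence is eps-typical
        count = math.comb(n, ones)
        typical_count += count
        typical_prob += count * 2.0 ** logp  # p_{X^n}(a^n) = 2^logp

print(f"P(A) = {typical_prob:.4f}, |A| = {typical_count}")
print(f"upper bound 2^(n(H+eps)) = {2 ** (n * (H + eps)):.1f}")
```

The upper bound (property 2) holds for every $n$; for this small $n$ the probability is already close to 1, while the lower bound (property 3) only kicks in once $n$ is large enough.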

Reference: Alajaji, F. and Chen, P.-N., *An Introduction to Single-User Information Theory*, Springer Undergraduate Texts in Mathematics and Technology. [PDF](https://momot.rs/d3/y/1741030114/10000/g4/libgenrs_nonfiction/libgenrs_nonfiction/2358000/ee866eac78f896c3757743bda3b8910b~/TvybSwLnmq52KQKhf5taxA/An%20Introduction%20to%20Single-User%20Information%20Theory%20%28Springer%20–%20Alajaji%2C%20Fady%3B%20Chen%2C%20Po-Ning%20–%20Springer%20Undergraduate%20Texts%20in%20Mathematics%20and%20–%209789811080005%20–%20ee866eac78f896c3757743bda3b8910b%20–%20Anna%E2%80%99s%20Archive.pdf)
