Maximum Likelihood Estimator

🌱

Definition

Introduction

Let's say we observe a realization of $X_0, X_1, \ldots, X_N$ from a $\text{Markov}(\lambda, P)$ chain, where $\lambda$ is the initial distribution and $P$ the transition matrix. How do we estimate $P$? First, let's define some things:

### Likelihood

L=P(X0=x0,…,XN=xN)=Ξ»x0px0,x1β‹―pxNβˆ’1,xNL=P(X_{0}=x_{0},\ldots,X_{N}=x_{N})=\lambda_{x_{0}}p_{x_{0},x_{1}}\cdots p_{x_{N-1},x_{N}} ### Log-likelihood log⁑(L)=log⁑(Ξ»X0)+βˆ‘i,j∈S(βˆ‘k=0Nβˆ’11{Xk=i,Xk+1=j})log⁑(pij)\log(L)=\log(\lambda_{X_{0}})+\sum\limits_{i,j\in S}\left(\sum\limits_{k=0}^{N-1}\mathbb{1}_{\{X_{k}=i,X_{k+1}=j\}} \right)\log(p_{ij})

For i,j∈Si,j\in S the maximal likelihood estimator for PP is P^ij=βˆ‘k=0Nβˆ’11{Xk=i,Xk+1=j}βˆ‘k=0Nβˆ’11{Xk=i}\hat P_{ij}=\frac{\sum\limits_{k=0}^{N-1}\mathbb{1}_{\{X_{k}=i,X_{k+1}=j\}}}{\sum\limits_{k=0}^{N-1}\mathbb{1}_{\{X_{k}=i\}}}

Assume $P$ is irreducible and positive recurrent. Then the estimator is strongly consistent: for any $i, j \in S$,

$$P\left( \lim_{N \to \infty} \hat{P}_{ij} = P_{ij} \right) = 1$$