Consider the problem of minimizing
\[
J(t_0, x_0, u) = J(u) = \int_{t_0}^{t_1} L(t, x(t), u(t))\,dt + Q(x(t_1)),
\]
where $u : \mathbb{R} \to U$ ($U \subseteq \mathbb{R}^m$), $L : \mathbb{R} \times \mathbb{R}^n \times \mathbb{R}^m \to \mathbb{R}$ is the running cost (otherwise referred to as the Lagrangian), and $Q : \mathbb{R}^n \to \mathbb{R}$ is the terminal cost, all subject to the following dynamics
\[
\dot{x}(t) = f(x(t), u(t), t), \qquad x(t_0) = x_0, \qquad \forall t \ge t_0,
\]
where $f : \mathbb{R}^n \times \mathbb{R}^m \times \mathbb{R} \to \mathbb{R}^n$.

The goal of dynamic programming is to consider a family of minimization problems
\[
J(t, X, u) = \int_{t}^{t_1} L(\tau, x(\tau), u(\tau))\,d\tau + Q(x(t_1)), \qquad \forall t \in [t_0, t_1),
\]
where $X \in \mathbb{R}^n$ and $x(t) = X$. Our goal is to derive a dynamic relationship among these problems and thereby solve all of them. To do this we introduce the value function
\[
V(t, X) = \inf_{u|_{[t, t_1]}} \{ J(t, X, u) \},
\]
where the control is restricted to future time $[t, t_1]$. We also wish to have the boundary condition
\[
V(t_1, X) = Q(X) \qquad \forall X \in \mathbb{R}^n.
\]
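The family of problems above can be illustrated numerically: discretize time and state, impose the boundary condition $V(t_1, X) = Q(X)$, and sweep backward, at each step minimizing the running cost plus the value at the successor state. The following is a minimal sketch of this idea in Python; the specific dynamics $f(x,u) = u$, running cost $L = x^2 + u^2$, and terminal cost $Q(x) = x^2$ are illustrative assumptions, not taken from the text.

```python
# Hedged numerical sketch: approximating the value function V(t, X) by a
# backward sweep over a time/state grid. The choices f(x,u)=u,
# L(x,u)=x^2+u^2, and Q(x)=x^2 are assumed for illustration only.
import numpy as np

t0, t1, N = 0.0, 1.0, 100          # horizon [t0, t1] and number of time steps
dt = (t1 - t0) / N
xs = np.linspace(-2.0, 2.0, 201)   # grid of states X
us = np.linspace(-3.0, 3.0, 61)    # candidate controls u in U

L = lambda x, u: x**2 + u**2       # running cost (Lagrangian)
f = lambda x, u: u                 # dynamics x'(t) = f(x, u)
Q = lambda x: x**2                 # terminal cost

V = Q(xs)                          # boundary condition V(t1, X) = Q(X)
for _ in range(N):                 # sweep backward from t1 toward t0
    # For each grid state X, minimize over u:
    #   L(X, u) * dt + V(t + dt, X + f(X, u) * dt),
    # interpolating V at the successor state.
    cand = np.array([L(xs, u) * dt + np.interp(xs + f(xs, u) * dt, xs, V)
                     for u in us])
    V = cand.min(axis=0)           # V(t, X) = inf over controls

print(round(float(np.interp(0.0, xs, V)), 4))  # V(t0, 0) is 0: stay at the origin
```

Each backward sweep solves one member of the family $J(t, X, u)$ for every grid state $X$ at once, which is exactly the "solve all of them" structure the value function encodes.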