Definition
The information signal, $y^i$, $i \in N$, is what any agent in a team problem is said to observe.

## Static

For each $i \in N$ we define it in terms of the static information function $\eta^i$, s.t.
$$ y^i = \eta^i(\xi) \equiv \tilde{\eta}^i(\omega), \quad \omega \in \Omega $$
where $\xi \in \Xi$ is the random state of nature (a random variable mapping $\Omega$ into $\Xi$). Since we tend to identify $\Xi \Leftrightarrow \Omega$ in the definition of $\eta^i$, we do similarly here and write $y^i = \tilde{\eta}^i(\omega)$.
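A minimal sketch of a static information structure, with all names (`sample_state`, `eta_1`, `eta_2`, `observe`) hypothetical: each agent $i$ observes $y^i = \eta^i(\xi)$, a function of the realized state of nature alone, with no dependence on any agent's actions.

```python
import random

def sample_state():
    """Draw the state of nature xi (here: a standard normal scalar)."""
    return random.gauss(0.0, 1.0)

def eta_1(xi):
    """Agent 1's static information function: a noisy reading of xi."""
    return xi + random.gauss(0.0, 0.1)

def eta_2(xi):
    """Agent 2's static information function: the magnitude of xi only."""
    return abs(xi)

def observe(xi):
    """Return the information signals [y^1, y^2] for a realized state xi."""
    return [eta_1(xi), eta_2(xi)]

random.seed(0)
xi = sample_state()
signals = observe(xi)  # each signal depends on xi only -- a static structure
```

Note that both signals are deterministic functions of the underlying randomness once it is realized; the agents differ only in *what* they get to see of it.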
A decision problem is said to be dynamic if the measurement of at least one of the agents involves past actions; hence the information function also takes past actions as input:
$$ y^i = \eta^i(\xi; u), \quad i \in N \tag{1} $$
where the dependence on $u$ (the $N$-tuple of actions taken by the agents) is assumed to be strictly causal, meaning that under a given fixed clock the information received by each agent can depend only on actions taken in the past.
Let $\mathcal{T} := \{1, \ldots, T\}$ and $t \in \mathcal{T}$. Let $u_t^i, y_t^i$ denote the action variable and information signal of agent $A^i$ at time $t \in \mathcal{T}$. Furthermore, let
$$ u_t := \{u_t^1, \ldots, u_t^N\}, \qquad y_t := \{y_t^1, \ldots, y_t^N\} $$
and
$$ u_{[t_0, t_1)} \equiv u_{[t_0, t_1 - 1]} := \{u_{t_0}, u_{t_0 + 1}, \ldots, u_{t_1 - 1}\} \equiv \{u_{[t_0, t_1)}^1, \ldots, u_{[t_0, t_1)}^N\} $$
Then, under the strict causality assumption, (1) becomes equivalent to
$$ y_t^i = \eta_t^i(\xi, u_{[1, t)}), \quad t \in \mathcal{T},\ i \in N $$
for some information functions $\eta_t^i$, $t \in \mathcal{T}$, $i \in N$. The variable $y_t^i \in Y_t^i$ is the on-line information available to $A^i$, which we can use for the construction of $u_t^i$ through an appropriate policy $\gamma_t^i : Y_t^i \to U_t^i$:
$$ u_t^i = \gamma_t^i(y_t^i) \equiv \gamma_t^i\big(\eta_t^i[\xi; u_{[1, t)}]\big), \quad t \in \mathcal{T},\ i \in N $$
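The strictly causal loop above can be sketched as follows, with all concrete choices (`eta`, `gamma`, the fixed state `xi`, $N = 2$, $T = 3$) being hypothetical placeholders: at each $t$, agents first receive $y_t^i = \eta_t^i(\xi, u_{[1,t)})$, a function of the state and of actions strictly before $t$, and only then act via $u_t^i = \gamma_t^i(y_t^i)$.

```python
N, T = 2, 3   # number of agents, time horizon (illustrative values)
xi = 1.0      # realized state of nature, fixed for this sketch

def eta(t, i, xi, past):
    """Information function eta_t^i: depends on xi and on u_[1,t) only.

    `past` is the list of action profiles u_1, ..., u_{t-1}; strict
    causality holds because actions at time t never appear here.
    """
    return xi + sum(u[i] for u in past)

def gamma(t, i, y):
    """Policy gamma_t^i: maps the on-line information y_t^i to u_t^i."""
    return 0.5 * y

history = []  # accumulates u_[1,t): the action profiles of all N agents so far
for t in range(1, T + 1):
    y_t = [eta(t, i, xi, history) for i in range(N)]  # observe first ...
    u_t = [gamma(t, i, y_t[i]) for i in range(N)]     # ... then act
    history.append(u_t)
```

The ordering inside the loop body is the whole point: `history` is read before it is extended, so each $y_t^i$ can depend only on actions taken strictly in the past.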