
Lagrange Multipliers Technique


InfoTheory

Method (Constrained Optimization)

We want to minimize a function $f(x^{n})$ over its alphabet $x^{n}=(x_{1},\ldots,x_{n})\in\mathbb{R}^{n}$ subject to the inequality constraints $g_{i}(x^{n})\le 0,\ i=1,\ldots,m$, and the equality constraints $h_{j}(x^{n})=0,\ j=1,\ldots,l$; i.e. we want to determine

$$\min_{x^{n}\in Q} f(x^{n}) \tag{\#}$$

where

$$Q=\{x^{n}\in\mathbb{R}^{n}:\ g_{i}(x^{n})\le 0,\ i=1,\ldots,m,\ \text{and}\ h_{j}(x^{n})=0,\ j=1,\ldots,l\}.$$

Remark

In general $(\#)$ is hard to solve, so instead we consider a dual optimization without constraints, given by

$$L(\lambda^{m},v^{l}):=\min_{x^{n}\in\mathbb{R}^{n}}\left[f(x^{n})+\sum_{i=1}^{m}\lambda_{i}g_{i}(x^{n})+\sum_{j=1}^{l}v_{j}h_{j}(x^{n})\right].$$

If $f$ and $\{g_{i}\}_{i=1}^{m}$ are differentiable convex functions and $\{h_{j}\}_{j=1}^{l}$ are affine, then it can be shown that there exist multipliers $\tilde\lambda^{m}=(\tilde\lambda_{1},\ldots,\tilde\lambda_{m})$ with $\tilde\lambda_{i}\ge 0,\ i=1,\ldots,m$, and $\tilde v^{l}=(\tilde v_{1},\ldots,\tilde v_{l})$ such that

$$L(\tilde\lambda^{m},\tilde v^{l})=\min_{x^{n}\in Q}f(x^{n})=f(x^{*n})$$

(where $x^{*n}$ is the optimizer) iff $x^{*n}=(x_{1}^{*},\ldots,x_{n}^{*})$, $\tilde\lambda^{m}$, and $\tilde v^{l}$ satisfy the following Karush–Kuhn–Tucker (KKT) conditions (i.e. we have solved $(\#)$ via $L(\tilde\lambda^{m},\tilde v^{l})$):

1. Stationarity: $\nabla f(x^{*n})+\sum_{i=1}^{m}\tilde\lambda_{i}\nabla g_{i}(x^{*n})+\sum_{j=1}^{l}\tilde v_{j}\nabla h_{j}(x^{*n})=0$.
2. Primal feasibility: $g_{i}(x^{*n})\le 0$ for all $i$, and $h_{j}(x^{*n})=0$ for all $j$.
3. Dual feasibility: $\tilde\lambda_{i}\ge 0$ for all $i$.
4. Complementary slackness: $\tilde\lambda_{i}\,g_{i}(x^{*n})=0$ for all $i$.
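As a sanity check, here is a minimal numerical sketch in Python/NumPy (an assumption; the note itself contains no code) on a hypothetical toy instance of $(\#)$: the particular $f$, $g$, $h$ below are illustrative, not from the note. It minimizes the Lagrangian over $x$ in closed form to get the dual function $L$, maximizes $L$ over $\lambda\ge 0$ and $v$ by grid search, and then checks all four KKT conditions at the recovered point.

```python
import numpy as np

# Hypothetical toy instance of (#): f convex, g and h affine (so the
# convexity assumptions above hold).
#   minimize f(x) = x1^2 + x2^2
#   s.t.     g(x) = 1 - x1 - x2 <= 0   (one inequality constraint, m = 1)
#            h(x) = x1 - x2     =  0   (one equality constraint,  l = 1)

def f(x):
    return x[0]**2 + x[1]**2

def g(x):
    return 1.0 - x[0] - x[1]

def h(x):
    return x[0] - x[1]

def inner_argmin(lam, v):
    """Unconstrained minimizer of the Lagrangian over x in R^2.

    Setting the gradient of f(x) + lam*g(x) + v*h(x) to zero gives
    2*x1 - lam + v = 0 and 2*x2 - lam - v = 0.
    """
    return np.array([(lam - v) / 2.0, (lam + v) / 2.0])

def L(lam, v):
    """Dual function L(lam, v) = min_x [f(x) + lam*g(x) + v*h(x)]."""
    x = inner_argmin(lam, v)
    return f(x) + lam * g(x) + v * h(x)

# Maximize the dual over lam >= 0 and v by a coarse grid search.
lams = np.linspace(0.0, 3.0, 301)
vs = np.linspace(-1.5, 1.5, 301)
vals = np.array([[L(lam, v) for v in vs] for lam in lams])
i, j = np.unravel_index(np.argmax(vals), vals.shape)
lam_t, v_t = lams[i], vs[j]          # optimal multipliers, ~ (1.0, 0.0)

x_star = inner_argmin(lam_t, v_t)    # ~ (0.5, 0.5), the optimizer of (#)

# Check the four KKT conditions at (x_star, lam_t, v_t).
grad = 2 * x_star + lam_t * np.array([-1.0, -1.0]) + v_t * np.array([1.0, -1.0])
print("stationarity  ||grad|| =", np.linalg.norm(grad))       # ~ 0
print("primal feas.  g, h     =", g(x_star), h(x_star))       # g <= 0, h ~ 0
print("dual feas.    lam      =", lam_t)                      # >= 0
print("compl. slack  lam*g    =", lam_t * g(x_star))          # ~ 0
print("strong duality L = f   =", L(lam_t, v_t), f(x_star))   # both ~ 0.5
```

At the maximizing multipliers the dual value and $f(x^{*n})$ agree (both $\approx 0.5$), which is exactly the identity $L(\tilde\lambda^{m},\tilde v^{l})=f(x^{*n})$ above.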
