
Lagrange Multiplier


## Theorem

Assume $f:\mathbb{R}^{k}\to \mathbb{R}$ and $g:\mathbb{R}^{k}\to \mathbb{R}$ are continuously differentiable functions. We want to solve the following constrained minimization problem (CMP):
$$\begin{align*}
&\text{minimize } f(\mathbf{x})\\
&\text{subject to } g(\mathbf{x})=0
\end{align*}$$
Let $\nabla u=\left( \frac{ \partial }{ \partial x_{1} }u,\dots,\frac{ \partial }{ \partial x_{k} }u \right)$ denote the gradient of a function $u:\mathbb{R}^{k}\to \mathbb{R}$. If $\mathbf{x}^{*}$ is a solution of the CMP and $\nabla g(\mathbf{x}^{*})\neq 0$, then there exists $\lambda\in\mathbb{R}$, called the Lagrange multiplier, such that
$$\nabla f(\mathbf{x}^{*})+\lambda \nabla g(\mathbf{x}^{*})=0.$$

## Remark

The theorem yields the following equations:
$$\begin{align*}
\left.\frac{ \partial }{ \partial x_{j} } \bigl(f(\mathbf{x})+\lambda g(\mathbf{x})\bigr)\right|_{\mathbf{x}=\mathbf{x}^{*}}&=0, \qquad j=1,\dots,k\\
g(\mathbf{x}^{*})&=0
\end{align*}$$
This gives us $k+1$ equations for the $k+1$ unknowns $x_{1}^{*},\dots,x_{k}^{*}$ and $\lambda$.
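
As a quick illustration (a small example added here, not part of the theorem itself), take $k=2$ and minimize $f(x,y)=x^{2}+y^{2}$ subject to $g(x,y)=x+y-1=0$. The equations of the remark become
$$\begin{align*}
2x+\lambda &= 0\\
2y+\lambda &= 0\\
x+y-1 &= 0
\end{align*}$$
so $x=y=\tfrac{1}{2}$ and $\lambda=-1$: three equations, three unknowns, exactly as predicted for $k=2$.

The same system can be solved symbolically; a minimal sketch using `sympy`, with the example functions above (assumed for illustration only):

```python
import sympy as sp

# Hypothetical illustration: minimize f(x, y) = x^2 + y^2
# subject to g(x, y) = x + y - 1 = 0 via the Lagrange condition.
x, y, lam = sp.symbols("x y lambda", real=True)
f = x**2 + y**2
g = x + y - 1

L = f + lam * g  # function whose stationary points we seek

# k + 1 = 3 equations: stationarity in x and y, plus the constraint.
equations = [sp.diff(L, x), sp.diff(L, y), g]
print(sp.solve(equations, [x, y, lam], dict=True))
# expected: [{x: 1/2, y: 1/2, lambda: -1}]
```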