Lagrangian Duality I. Definition

Consider the primal constrained optimization problem

$$
\begin{aligned}
&\min\; f({\bf x}) \\
{\rm s.t.}\;\; c_i({\bf x}) &\leq 0, \; i=1,2,\cdots,n \\
h_j({\bf x}) &= 0, \; j = 1,2,\cdots,m
\end{aligned} \tag{1}
$$

Its Lagrangian form is

$$
\mathcal{L}({\bf x}, \boldsymbol{\alpha}, \boldsymbol{\beta}) = f({\bf x}) + \boldsymbol{\alpha}^\top {\bf c}({\bf x}) + \boldsymbol{\beta}^\top {\bf h}({\bf x})
$$

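For concreteness, here is a minimal SymPy sketch that builds this Lagrangian for a toy one-dimensional instance ($f(x) = x^2$ with a single inequality constraint $c(x) = 1 - x \leq 0$ and no equality constraints; the instance is just an illustrative choice) and minimizes it over $x$ for a fixed multiplier $\alpha$.

```python
# Minimal SymPy sketch: the Lagrangian of the toy problem
#   min  x^2   s.t.  c(x) = 1 - x <= 0   (no equality constraints),
# minimized over x for a fixed multiplier alpha.
import sympy as sp

x, alpha = sp.symbols('x alpha', real=True)

f = x**2          # objective f(x)
c = 1 - x         # inequality constraint c(x) <= 0

L = f + alpha * c  # Lagrangian L(x, alpha) = f(x) + alpha * c(x)

# L is convex in x here, so the minimizer over x solves dL/dx = 0
x_star = sp.solve(sp.diff(L, x), x)[0]           # x* = alpha / 2
print(x_star, sp.simplify(L.subs(x, x_star)))    # alpha/2, alpha - alpha**2/4
```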

The Method of Lagrangian Multipliers and The KKT Conditions

This article briefly introduces strategies for finding the stationary points of an objective function subject to one or more equality or inequality constraints.

Consider a continuous optimization problem in the standard form

$$
\begin{aligned}
&\min\limits_{\bf x}\; f({\bf x}) \\
{\rm s.t.}\;\; g_k({\bf x}) &\leq 0, \;\; k=1,2,\cdots,m \\
h_l({\bf x}) &= 0, \;\; l=1,2,\cdots,p
\end{aligned}
$$

in which

$$
\begin{aligned}
&{\bf x} \in \Re^{n} \\
&f, g_k, h_l : \Re^n \rightarrow \Re \;\;{\rm for}\;\; k=1,2,\cdots,m \;\;{\rm and}\;\; l=1,2,\cdots,p \\
&m, p \geq 0
\end{aligned}
$$

And $f, g_k, h_l$ are all continuously differentiable.

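As a numerical illustration of this standard form, the sketch below (assuming NumPy and SciPy) solves the toy instance $f({\bf x}) = x_0^2 + x_1^2$, $g({\bf x}) = 1 - x_0 - x_1 \leq 0$, $h({\bf x}) = x_0 - x_1 = 0$ with `scipy.optimize.minimize`. The instance itself is just an illustrative choice; note that SciPy's `'ineq'` constraints require `fun(x) >= 0`, so the sign of $g$ is flipped.

```python
# Minimal SciPy sketch of the standard form:
#   min  x0^2 + x1^2   s.t.  g(x) = 1 - x0 - x1 <= 0,  h(x) = x0 - x1 = 0
import numpy as np
from scipy.optimize import minimize

f = lambda x: x[0]**2 + x[1]**2
g = lambda x: 1 - x[0] - x[1]          # g(x) <= 0 in the notation above
h = lambda x: x[0] - x[1]              # h(x) = 0

res = minimize(
    f,
    x0=np.array([0.0, 0.0]),
    method='SLSQP',
    constraints=[
        {'type': 'ineq', 'fun': lambda x: -g(x)},  # SciPy wants fun(x) >= 0
        {'type': 'eq',   'fun': h},
    ],
)
print(res.x)   # approximately [0.5, 0.5]
```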
We divide the problem into two cases: $m = 0$ and $m \neq 0$. For the former (only equality constraints) we introduce the Method of Lagrange Multipliers as the solving strategy, and for the latter we briefly introduce the KKT Conditions, which apply when certain Regularity Conditions are satisfied.

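For the equality-constrained case, the following minimal SymPy sketch illustrates the Method of Lagrange Multipliers on a toy instance (minimize $f(x, y) = x + y$ subject to $x^2 + y^2 = 1$; again just an illustrative choice): form the Lagrangian and solve the stationarity equations together with the constraint.

```python
# Minimal SymPy sketch of the Method of Lagrange Multipliers:
#   min  x + y   s.t.  h(x, y) = x^2 + y^2 - 1 = 0
import sympy as sp

x, y, lam = sp.symbols('x y lambda', real=True)

f = x + y
h = x**2 + y**2 - 1

L = f + lam * h   # Lagrangian

# Stationarity in (x, y) together with the constraint h = 0
eqs = [sp.diff(L, x), sp.diff(L, y), h]
sols = sp.solve(eqs, [x, y, lam], dict=True)
print(sols)   # (x, y) = (+-1/sqrt(2), +-1/sqrt(2)); the minimizer is the negative pair
```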
Notice that since a maximization problem can easily be handled by negating the objective function, we only use the minimization problem as a general example.
