# Lagrangian Duality I. Definition

$$
\begin{aligned}
\min\;\; & f({\bf x}) \\
{\rm s.t.}\;\; & c_i({\bf x}) \leq 0, \;\; i=1,2,\cdots,n \\
& h_j({\bf x}) = 0, \;\; j = 1,2,\cdots,m
\end{aligned} \tag{1}
$$

The Lagrangian of problem (1) is

$$\mathcal{L}({\bf x}, \boldsymbol{\alpha},\boldsymbol{\beta})=f({\bf x})+\boldsymbol{\alpha}^\top {\bf c}({\bf x})+\boldsymbol{\beta}^\top{\bf h}({\bf x}),$$

where ${\bf c}({\bf x})=(c_1({\bf x}),\cdots,c_n({\bf x}))^\top$, ${\bf h}({\bf x})=(h_1({\bf x}),\cdots,h_m({\bf x}))^\top$, and the multipliers for the inequality constraints satisfy $\boldsymbol{\alpha} \succeq {\bf 0}$.
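To make the definition concrete, the sketch below evaluates the dual function $g(\boldsymbol{\alpha}) = \inf_{\bf x} \mathcal{L}({\bf x}, \boldsymbol{\alpha})$ by grid search for a toy problem of our own choosing (the problem, grids, and step sizes are illustrative assumptions, not from the text): minimize $f(x)=x^2$ subject to $c(x) = 1-x \leq 0$, whose primal optimum is $f(1)=1$.

```python
# Illustrative toy problem (an assumption, not from the text):
# minimize f(x) = x^2  subject to  c(x) = 1 - x <= 0  (i.e. x >= 1).
# Primal optimum: x* = 1, f(x*) = 1.

def f(x):
    return x * x

def c(x):
    return 1.0 - x

def lagrangian(x, alpha):
    # L(x, alpha) = f(x) + alpha * c(x), with alpha >= 0
    return f(x) + alpha * c(x)

def dual(alpha, grid):
    # Dual function g(alpha) = inf_x L(x, alpha), approximated on a grid.
    return min(lagrangian(x, alpha) for x in grid)

grid = [i / 200.0 for i in range(-600, 601)]   # x in [-3, 3], step 0.005
alphas = [i / 20.0 for i in range(0, 101)]     # alpha in [0, 5], step 0.05
best_alpha = max(alphas, key=lambda a: dual(a, grid))

# Analytically g(alpha) = alpha - alpha^2/4, maximized at alpha = 2 with
# g(2) = 1, which equals the primal optimum (strong duality holds here).
print(best_alpha, dual(best_alpha, grid))  # -> 2.0 1.0
```

Maximizing the dual over $\boldsymbol{\alpha} \succeq {\bf 0}$ recovers the primal optimal value here; in general the dual optimum only lower-bounds the primal one (weak duality).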

# Differences Among the Four Constrained Optimization Methods

These four constrained optimization methods look similar at first glance:

- Lagrange Multipliers
- Penalty Methods
- Augmented Lagrangian Methods
- Merit Methods

Here is a comprehensive explanation of these four methods written by Brian Borchers.
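To hint at one key contrast, the following minimal sketch compares the quadratic penalty method with the augmented Lagrangian method on a hypothetical toy problem (the problem, parameters, and update rule are our illustrative assumptions, not from Borchers' text): minimize $(x-2)^2$ subject to $x-1=0$, with solution $x^*=1$ and multiplier $\beta^*=2$. Both subproblems have closed-form minimizers, so no numerical solver is needed.

```python
# Hypothetical toy problem (an assumption, not from the text):
# minimize (x - 2)^2  subject to  x - 1 = 0.

def penalty_min(mu):
    # argmin_x (x-2)^2 + (mu/2)(x-1)^2
    # derivative: 2(x-2) + mu(x-1) = 0  ->  x = (4 + mu) / (2 + mu)
    return (4 + mu) / (2 + mu)

def auglag_min(beta, rho):
    # argmin_x (x-2)^2 + beta*(x-1) + (rho/2)(x-1)^2
    # derivative: 2(x-2) + beta + rho(x-1) = 0  ->  x = (4 - beta + rho)/(2 + rho)
    return (4 - beta + rho) / (2 + rho)

# Penalty method: the violation shrinks only as mu -> infinity.
for mu in (1.0, 10.0, 1000.0):
    print(mu, penalty_min(mu))  # approaches 1 only for large mu

# Augmented Lagrangian: fixed, moderate rho; update the multiplier instead.
beta, rho = 0.0, 10.0
for _ in range(20):
    x = auglag_min(beta, rho)
    beta += rho * (x - 1.0)  # multiplier update on the constraint violation
print(round(x, 6), round(beta, 6))  # -> 1.0 2.0
```

The point of the comparison: the penalty method must drive its parameter to infinity (risking ill-conditioning), while the augmented Lagrangian reaches the exact solution with a fixed, moderate penalty by updating the multiplier estimate.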

# The Method of Lagrange Multipliers and the KKT Conditions

This article briefly introduces strategies for finding the stationary points of an objective function subject to one or more equality or inequality constraints.

Consider the standard form of a continuous optimization problem,

$$
\begin{aligned}
\min_{\bf x}\;\; & f({\bf x}) \\
{\rm s.t.}\;\; & g_k({\bf x})\leq 0, \;\; k=1,2,\cdots,m \\
& h_l({\bf x})=0, \;\; l=1,2,\cdots,p
\end{aligned}
$$

in which

$$
\begin{aligned}
& {\bf x} \in \Re^{n}, \\
& f, g_k, h_l: \Re^n \rightarrow \Re \;\; {\rm for} \;\; k=1,2,\cdots,m \;\; {\rm and} \;\; l=1,2,\cdots,p, \\
& m, p \geq 0.
\end{aligned}
$$

In addition, $f, g_k, h_l$ are all continuously differentiable.

We divide the problem into two cases: $m = 0$ (equality constraints only) and $m \neq 0$. For the former we introduce the method of Lagrange multipliers as the solving strategy; for the latter we briefly introduce the KKT conditions, which apply when certain regularity conditions hold.
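As a quick illustration of the equality-constrained case (this worked example is ours, not from the original text), consider minimizing $f(x,y)=x+y$ subject to $h(x,y)=x^2+y^2-1=0$. With the Lagrangian $\mathcal{L} = f + \beta h$, stationarity and feasibility give

$$
\begin{aligned}
\nabla_{x,y}\,\mathcal{L} = 0 \;\Rightarrow\;& 1 + 2\beta x = 0,\;\; 1 + 2\beta y = 0 \;\Rightarrow\; x = y = -\tfrac{1}{2\beta}, \\
h(x,y) = 0 \;\Rightarrow\;& 2x^2 = 1 \;\Rightarrow\; x = y = \pm\tfrac{1}{\sqrt{2}},
\end{aligned}
$$

and comparing the two stationary points, the minimum is $f\left(-\tfrac{1}{\sqrt{2}},-\tfrac{1}{\sqrt{2}}\right) = -\sqrt{2}$, attained with multiplier $\beta = \tfrac{1}{\sqrt{2}}$.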

Note that a maximization problem is easily handled by negating the objective function, so we use the minimization problem as the general form throughout.
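For reference, when $m \neq 0$ the first-order KKT conditions for a local minimum ${\bf x}^*$ of the problem above (under suitable regularity conditions, such as linear independence of the active constraint gradients) can be stated as:

$$
\begin{aligned}
& \nabla f({\bf x}^*) + \sum_{k=1}^{m}\mu_k \nabla g_k({\bf x}^*) + \sum_{l=1}^{p}\lambda_l \nabla h_l({\bf x}^*) = 0 && \text{(stationarity)} \\
& g_k({\bf x}^*) \leq 0, \;\; h_l({\bf x}^*) = 0 && \text{(primal feasibility)} \\
& \mu_k \geq 0 && \text{(dual feasibility)} \\
& \mu_k\, g_k({\bf x}^*) = 0 && \text{(complementary slackness)}
\end{aligned}
$$

When $m = 0$ these reduce to the stationarity and feasibility conditions of the method of Lagrange multipliers.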