Algorithms for constrained optimization

We continue to adhere to our earlier decision to focus on algorithms that use derivatives. But even then, the number of derivative-based algorithms for constrained optimization – and here we consider both equality and inequality constraints – is large. They can be classified in several ways.

One way to classify derivative-based algorithms for constrained optimization is by the dimension of the space in which they work. For an optimization problem with n variables and m constraints, we have the following possibilities: n-m, n, m, and n+m. Primal methods work in the space of dimension n-m, penalty and barrier methods in dimension n, dual methods in dimension m, and primal-dual methods in dimension n+m.

Primal methods

  • With m equality constraints, they work in the space of dimension n-m.
  • Three advantages:
    • Each point generated by the iterative algorithm is feasible – if the algorithm is terminated early, the last iterate is still feasible.
    • If they generate a converging sequence, it typically converges at least to a local constrained minimum.
    • They do not rely on any special structure of the problem; the problem can even be nonconvex.
  • But they need a feasible initial point.
  • They may fail for inequality constraints.

They are particularly useful for linear/affine constraints or for simple nonlinear constraints such as norm balls or ellipsoids, onto which projections are cheap to compute.

Projected gradient method
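
For illustration, here is a minimal sketch of the method for minimizing a smooth function over the unit Euclidean ball – one of the simple sets just mentioned, for which the projection has a closed form. The objective, step size, and iteration budget are our own illustrative choices.

```python
import numpy as np

def project_onto_unit_ball(x):
    """Euclidean projection onto the set {x : ||x|| <= 1}."""
    norm = np.linalg.norm(x)
    return x if norm <= 1.0 else x / norm

def projected_gradient(grad_f, x0, alpha=0.1, max_iter=200, tol=1e-8):
    """Projected gradient method: every iterate stays feasible."""
    x = project_onto_unit_ball(x0)   # start from a feasible point
    for _ in range(max_iter):
        x_new = project_onto_unit_ball(x - alpha * grad_f(x))
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x

# Illustrative example: minimize ||x - c||^2 over the unit ball,
# with c chosen outside the ball so the constraint is active at the optimum.
c = np.array([2.0, 1.0])
x_opt = projected_gradient(lambda x: 2.0 * (x - c), x0=np.zeros(2))
print(x_opt)  # approximately c / ||c||
```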

Active set methods

Sequential quadratic programming (SQP)

The KKT conditions for a nonlinear program with equality constraints are solved by Newton’s method.
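
To make this concrete (a standard derivation; the Lagrangian notation is ours): for $\min_x f(x)$ subject to $h(x) = 0$, with the Lagrangian $L(x,\lambda) = f(x) + \lambda^\top h(x)$, the KKT conditions read

$$
\nabla_x L(x,\lambda) = \nabla f(x) + \nabla h(x)\,\lambda = 0, \qquad h(x) = 0,
$$

and Newton’s method applied to this system of nonlinear equations yields, at the current iterate $(x_k, \lambda_k)$, the linear system

$$
\begin{bmatrix}
\nabla^2_{xx} L(x_k,\lambda_k) & \nabla h(x_k) \\
\nabla h(x_k)^\top & 0
\end{bmatrix}
\begin{bmatrix} \Delta x \\ \Delta \lambda \end{bmatrix}
= -\begin{bmatrix} \nabla_x L(x_k,\lambda_k) \\ h(x_k) \end{bmatrix}.
$$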

Interpretation: at each iteration, we solve a quadratic program (QP) with linear constraints.
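
Here is a minimal numerical sketch of one such Newton-KKT step, on a toy problem of our own choosing: minimize x1² + x2² subject to x1 + x2 = 1. Because the objective is quadratic and the constraint affine, a single step already lands on the solution (1/2, 1/2).

```python
import numpy as np

def newton_kkt_step(x, lam, grad_f, hess_L, h, jac_h):
    """One Newton step on the KKT system of an equality-constrained NLP."""
    H = hess_L(x, lam)                 # Hessian of the Lagrangian, n x n
    A = jac_h(x)                       # constraint Jacobian, m x n
    n, m = H.shape[0], A.shape[0]
    # Assemble and solve the KKT linear system.
    KKT = np.block([[H, A.T], [A, np.zeros((m, m))]])
    rhs = -np.concatenate([grad_f(x) + A.T @ lam, h(x)])
    step = np.linalg.solve(KKT, rhs)
    return x + step[:n], lam + step[n:]

# Toy problem: minimize x1^2 + x2^2  subject to  x1 + x2 - 1 = 0.
grad_f = lambda x: 2.0 * x
hess_L = lambda x, lam: 2.0 * np.eye(2)       # constraint is affine
h      = lambda x: np.array([x[0] + x[1] - 1.0])
jac_h  = lambda x: np.array([[1.0, 1.0]])

x, lam = newton_kkt_step(np.zeros(2), np.zeros(1), grad_f, hess_L, h, jac_h)
print(x, lam)   # [0.5, 0.5], lambda = -1
```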

Penalty and barrier methods
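
As a placeholder until the details are filled in, here is a minimal sketch of the quadratic penalty approach on the same toy problem (the penalty schedule is an arbitrary illustrative choice): the constrained problem is replaced by a sequence of unconstrained subproblems with a growing penalty weight μ.

```python
import numpy as np
from scipy.optimize import minimize

# Same toy problem: minimize x1^2 + x2^2  subject to  x1 + x2 - 1 = 0.
f = lambda x: x[0]**2 + x[1]**2
h = lambda x: x[0] + x[1] - 1.0

x = np.zeros(2)
for mu in [1.0, 10.0, 100.0, 1000.0]:
    # Unconstrained subproblem: f(x) + (mu/2) * h(x)^2.
    penalized = lambda x, mu=mu: f(x) + 0.5 * mu * h(x)**2
    x = minimize(penalized, x, method="BFGS").x   # warm-start from previous x
print(x)  # approaches [0.5, 0.5] as mu grows
```

Note the contrast with primal methods: the penalty iterates are typically infeasible and only approach the constraint set as μ grows.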

Dual methods
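
Again only a minimal sketch (dual gradient ascent, the simplest representative of this family, applied to the same illustrative toy problem): for each value of the multiplier, the Lagrangian is minimized over x in closed form, and a gradient ascent step is taken on the dual function, whose gradient is the constraint residual h(x(λ)).

```python
import numpy as np

# Same toy problem: minimize x1^2 + x2^2  subject to  x1 + x2 - 1 = 0.
# For fixed lam, the Lagrangian L(x, lam) = f(x) + lam * h(x)
# is minimized in closed form: x1 = x2 = -lam / 2.
x_of_lam = lambda lam: np.full(2, -lam / 2.0)
h = lambda x: x[0] + x[1] - 1.0

lam, alpha = 0.0, 0.5
for _ in range(100):
    x = x_of_lam(lam)          # primal minimization of the Lagrangian
    lam = lam + alpha * h(x)   # gradient ascent on the dual function
print(x, lam)  # approximately [0.5, 0.5], lambda = -1
```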

Primal-dual methods
