Learning goals

Knowledge (remember and understand)

Theory

  • Give a rigorous definition of a local minimum (and maximum) and explain how it differs from a global minimum (maximum).
  • Define a convex function, a convex set, and a convex optimization problem, and explain the impact of convexity on the optimization problem.
  • State the Weierstrass (extreme value) theorem on the existence of a minimum (maximum).
  • Explain the concepts of big O() and little o() within the framework of truncated Taylor series.
  • Give first-order necessary conditions of optimality for a scalar function of a scalar argument, and define a critical (or stationary) point. Extend these to the vector argument case. Formulate them using both the Fréchet and Gâteaux derivatives. Specialize the result to quadratic functions (a worked example follows this list).
  • Give second-order sufficient conditions of optimality for a scalar function of a scalar argument. How can we distinguish between a minimum, a maximum, and an inflection point? Extend these to the vector case. Define the Hessian and show how it can be used to classify critical points as a minimum, a maximum, a saddle point, or a singular (degenerate) case. Specialize the results to quadratic functions.
  • Give first-order necessary conditions of optimality for an equality-constrained optimization problem using Lagrange multipliers. Specialize the results to quadratic cost functions and linear constraints (the resulting linear system is sketched after this list).
  • Characterize a regular point (with respect to a given set of equality constraints). Give an example of equality constraints that fail this regularity condition.
  • Give second-order sufficient conditions of optimality for an equality-constrained optimization problem using the concept of a projected Hessian.
  • State and explain the Karush-Kuhn-Tucker (KKT) conditions for inequality-constrained optimization problems (a compact statement is given after this list).
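
As a worked illustration of the quadratic specialization asked for in the first- and second-order optimality items above (the symbols $Q$, $b$, $x^\star$ are illustrative, not notation fixed by the course): for
$$
f(x) = \tfrac{1}{2} x^\top Q x + b^\top x, \qquad Q = Q^\top,
$$
the gradient and Hessian are $\nabla f(x) = Qx + b$ and $\nabla^2 f(x) = Q$, so the first-order necessary condition $\nabla f(x^\star) = 0$ reduces to the linear system $Q x^\star = -b$. The definiteness of $Q$ then classifies the critical point: positive definite gives a strict minimum, negative definite a strict maximum, indefinite a saddle point, and a singular $Q$ leaves the second-order test inconclusive.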
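
Similarly, for the Lagrange-multiplier item specialized to a quadratic cost and linear equality constraints (again with illustrative symbols), the problem
$$
\min_{x} \; \tfrac{1}{2} x^\top Q x + b^\top x \quad \text{subject to} \quad A x = c
$$
has the Lagrangian $L(x,\lambda) = \tfrac{1}{2} x^\top Q x + b^\top x + \lambda^\top (A x - c)$, and setting $\nabla_x L = 0$ and $\nabla_\lambda L = 0$ yields the linear system
$$
\begin{bmatrix} Q & A^\top \\ A & 0 \end{bmatrix}
\begin{bmatrix} x^\star \\ \lambda^\star \end{bmatrix}
=
\begin{bmatrix} -b \\ c \end{bmatrix}.
$$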
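
For the KKT item, one common statement (sign and multiplier conventions differ across textbooks) is: for $\min_x f(x)$ subject to $g_i(x) \le 0$ and $h_j(x) = 0$, a regular local minimum $x^\star$ admits multipliers $\mu_i \ge 0$ and $\lambda_j$ such that
$$
\nabla f(x^\star) + \sum_i \mu_i \nabla g_i(x^\star) + \sum_j \lambda_j \nabla h_j(x^\star) = 0,
$$
together with primal feasibility, dual feasibility $\mu_i \ge 0$, and complementary slackness $\mu_i \, g_i(x^\star) = 0$.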

Algorithms

  • Explain the main principle of descent direction methods for unconstrained optimization. In particular, give the descent direction condition.
  • Give an overview of approaches to line search, that is, one-dimensional optimization.
  • Explain the steepest descent (gradient) method and discuss its shortcomings (a minimal Python sketch follows this list).
  • Explain the conditioning of a matrix and its impact on the convergence of the steepest descent method. Propose a modification of steepest descent that rescales the variables (equivalently, the matrix of a quadratic cost) so that the conditioning is improved.
  • Explain the Newton method for unconstrained minimization, and give its interpretation as a root-finding method applied to the gradient. Discuss its shortcomings.
  • Discuss the issue of solving the set of linear equations (in matrix-vector form) that arises in the Newton method. Which matrix factorization is appropriate? (A Cholesky-based sketch follows this list.)
  • Explain the key idea behind Quasi-Newton methods.
  • Explain the key idea behind trust region methods for unconstrained optimization. What are their advantages over descent direction methods?
  • Write code implementing a Quasi-Newton method for minimization of a provided function (a BFGS sketch follows this list).
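
For the descent-direction, line-search, and steepest-descent items above, a minimal sketch in Python assuming NumPy; the function names (`steepest_descent`, `backtracking`) and all parameter values are illustrative, not reference code from the course:

```python
import numpy as np

def backtracking(f, grad, x, d, alpha=1.0, beta=0.5, c=1e-4):
    """Shrink the step until the Armijo sufficient-decrease condition holds."""
    while f(x + alpha * d) > f(x) + c * alpha * grad(x) @ d:
        alpha *= beta
    return alpha

def steepest_descent(f, grad, x0, tol=1e-6, max_iter=5000):
    """Steepest descent: move along the negative gradient with a line search."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        d = -g                      # descent direction: grad(x) @ d < 0
        x = x + backtracking(f, grad, x, d) * d
    return x

# Example: a mildly ill-conditioned quadratic; increasing 10.0 makes the
# zig-zagging (and slow convergence) worse.
Q = np.diag([1.0, 10.0])
f = lambda x: 0.5 * x @ Q @ x
grad = lambda x: Q @ x
print(steepest_descent(f, grad, [1.0, 1.0]))   # converges near [0, 0]
```

The ill-conditioned quadratic in the example illustrates the zig-zagging that the conditioning item above refers to; rescaling (preconditioning) the variables improves it.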
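
For the Newton-method items, a sketch of a single Newton step in which the Newton system (Hessian times step equals minus gradient) is solved with a Cholesky factorization, which is appropriate when the Hessian is symmetric positive definite; SciPy is assumed, and the quadratic test problem is illustrative:

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def newton_step(grad, hess, x):
    """Solve hess(x) d = -grad(x) via Cholesky (SPD Hessian assumed) and step."""
    c, low = cho_factor(hess(x))
    d = cho_solve((c, low), -grad(x))
    return x + d

# Illustrative strictly convex quadratic: Newton converges in a single step.
Q = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])
grad = lambda x: Q @ x + b
hess = lambda x: Q
print(newton_step(grad, hess, np.array([5.0, 5.0])))  # the minimizer, -solve(Q, b)
```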
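
And for the last item, a compact BFGS quasi-Newton sketch (illustrative only, not reference code): it uses gradients, a backtracking line search, and the standard rank-two update of the inverse-Hessian approximation.

```python
import numpy as np

def bfgs(f, grad, x0, tol=1e-6, max_iter=200):
    """Quasi-Newton (BFGS): maintain an approximation H of the inverse Hessian."""
    x = np.asarray(x0, dtype=float)
    n = x.size
    H = np.eye(n)                          # initial inverse-Hessian approximation
    g = grad(x)
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        d = -H @ g                         # quasi-Newton direction
        alpha = 1.0                        # backtracking (Armijo) line search
        while f(x + alpha * d) > f(x) + 1e-4 * alpha * g @ d:
            alpha *= 0.5
        x_new = x + alpha * d
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        sy = s @ y
        if sy > 1e-12:                     # skip update if curvature condition fails
            rho = 1.0 / sy
            I = np.eye(n)
            H = ((I - rho * np.outer(s, y)) @ H @ (I - rho * np.outer(y, s))
                 + rho * np.outer(s, s))
        x, g = x_new, g_new
    return x

# Example: minimize the Rosenbrock function.
f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
grad = lambda x: np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
                           200 * (x[1] - x[0]**2)])
print(bfgs(f, grad, [-1.2, 1.0]))          # converges near [1, 1]
```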

Skills (use the knowledge to solve a problem)

  • Formulate a provided problem as an instance of mathematical optimization: identify the cost function and the constraints, and decide whether the problem fits into one of the (numerous) families of optimization problems such as linear program, quadratic program (with linear constraints, with quadratic constraints), (general) nonlinear program, …
  • Solve a provided linear and/or quadratic programming problem using a solver of your choice (see the example below).
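
A minimal illustration of the last item using SciPy (assumed available, with the HiGHS linear-programming backend); the numeric data below form an arbitrary toy problem, not one supplied by the course:

```python
import numpy as np
from scipy.optimize import linprog, minimize

# Toy LP: minimize  -x1 - 2*x2  subject to  x1 + x2 <= 4,  x1 <= 3,  x >= 0.
res_lp = linprog(c=[-1, -2], A_ub=[[1, 1], [1, 0]], b_ub=[4, 3],
                 bounds=[(0, None)] * 2, method="highs")
print(res_lp.x, res_lp.fun)

# Toy QP: minimize 0.5*x'Qx + b'x subject to x1 + x2 = 1,
# solved here with a general NLP solver (SLSQP).
Q = np.array([[2.0, 0.5], [0.5, 1.0]])
b = np.array([-1.0, -1.0])
res_qp = minimize(lambda x: 0.5 * x @ Q @ x + b @ x,
                  x0=np.zeros(2),
                  jac=lambda x: Q @ x + b,
                  constraints=[{"type": "eq", "fun": lambda x: x[0] + x[1] - 1}],
                  method="SLSQP")
print(res_qp.x, res_qp.fun)
```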