Learning goals

Knowledge (remember and understand)

  • Explain the main principle of descent direction methods for unconstrained optimization. In particular, give the descent direction condition (restated after this list for reference).
  • Give an overview of approaches to line search, that is, one-dimensional optimization (a backtracking variant is sketched after this list).
  • Explain the steepest descent (aka gradient) method. Discuss its shortcomings (a code sketch follows this list).
  • Explain the conditioning of a matrix and the impact it has on the convergence of the steepest descent algorithm. Propose a modification of the steepest descent method that includes scaling of the original matrix so that its conditioning is improved (a small demonstration follows this list).
  • Explain the Newton method for unconstrained minimization. Give also its interpretation as a method for root finding applied to the gradient. Discuss its shortcomings.
  • Discuss solving the set of linear equations (in matrix-vector form) that arises in the Newton method. Which matrix factorization is appropriate? (A sketch using a Cholesky factorization follows this list.)
  • Explain the key idea behind Quasi-Newton methods.
  • Explain the key idea behind trust region methods for unconstrained optimization. What are their advantages over descent direction methods? (A minimal sketch follows this list.)
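
For reference, one standard statement of the descent direction condition from the first item above: a direction d is a descent direction for f at x if the directional derivative of f along d is negative.

```latex
% Descent direction condition: d is a descent direction for f at x if
\[
  \nabla f(x)^{\top} d < 0 .
\]
% A generic descent method then iterates, with step length alpha_k > 0
% chosen by a line search:
\[
  x_{k+1} = x_k + \alpha_k d_k .
\]
```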
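
A minimal sketch of the steepest descent method with a backtracking (Armijo) line search, illustrating the line search and steepest descent items above; the test function and all parameter values are illustrative assumptions, not prescribed by the course:

```python
import numpy as np

def backtracking(f, grad_f, x, d, alpha0=1.0, rho=0.5, c=1e-4):
    """Shrink alpha until the Armijo sufficient-decrease condition holds."""
    alpha = alpha0
    fx, slope = f(x), grad_f(x) @ d   # slope < 0 for a descent direction
    while f(x + alpha * d) > fx + c * alpha * slope:
        alpha *= rho
    return alpha

def steepest_descent(f, grad_f, x0, tol=1e-6, max_iter=10_000):
    """Iterate x <- x + alpha * d with d = -grad f(x)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad_f(x)
        if np.linalg.norm(g) < tol:
            break
        d = -g                         # steepest descent direction
        alpha = backtracking(f, grad_f, x, d)
        x = x + alpha * d
    return x

# Example: an ill-conditioned quadratic; iterates zig-zag toward the origin.
Q = np.diag([1.0, 100.0])
f = lambda x: 0.5 * x @ Q @ x
grad_f = lambda x: Q @ x
print(steepest_descent(f, grad_f, [1.0, 1.0]))
```

The slow, zig-zagging progress on the ill-conditioned quadratic in the example is the classic shortcoming the goal above asks you to discuss.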
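
To make the conditioning item concrete: for a quadratic f(x) = ½ xᵀQx, the condition number of Q governs how slowly steepest descent converges, and a change of variables x = Sy (a diagonal scaling is used here purely for illustration) can improve it:

```python
import numpy as np

Q = np.diag([1.0, 100.0])
print(np.linalg.cond(Q))            # condition number 100: slow zig-zagging

# Scale the variables as x = S y; the Hessian of the transformed problem
# g(y) = f(S y) is S^T Q S. With S = diag(1/sqrt(diag(Q))), this becomes
# the identity for diagonal Q, i.e., perfectly conditioned.
S = np.diag(1.0 / np.sqrt(np.diag(Q)))
print(np.linalg.cond(S.T @ Q @ S))  # condition number 1
```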
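
A sketch of the Newton method in which the Newton system H d = -g is solved via a Cholesky factorization, the natural choice when the Hessian is symmetric positive definite (as it is near a strict local minimizer); the test problem is an assumed example:

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def newton(grad_f, hess_f, x0, tol=1e-10, max_iter=50):
    """Pure (undamped) Newton iteration: solve H d = -g via Cholesky."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad_f(x)
        if np.linalg.norm(g) < tol:
            break
        H = hess_f(x)
        # Cholesky exploits the symmetry and positive definiteness of H;
        # it raises LinAlgError if H is not positive definite, which also
        # signals that the Newton step may fail to be a descent direction.
        c, low = cho_factor(H)
        d = cho_solve((c, low), -g)
        x = x + d
    return x

# Example: a convex quadratic, for which Newton converges in one step.
Q = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])
print(newton(lambda x: Q @ x - b, lambda x: Q, [0.0, 0.0]))
```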
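
The trust region idea can be illustrated by its simplest variant, which minimizes the local quadratic model only along the steepest descent direction within the region (the Cauchy point); practical implementations use better subproblem solvers such as dogleg or Steihaug-CG, so this is only a sketch under those simplifying assumptions:

```python
import numpy as np

def cauchy_point(g, B, delta):
    """Minimizer of the model m(p) = g@p + 0.5*p@B@p along -g, ||p|| <= delta."""
    gBg = g @ B @ g
    gn = np.linalg.norm(g)
    tau = 1.0 if gBg <= 0 else min(1.0, gn**3 / (delta * gBg))
    return -tau * (delta / gn) * g

def trust_region(f, grad_f, hess_f, x0, delta0=1.0, tol=1e-8, max_iter=500):
    x, delta = np.asarray(x0, dtype=float), delta0
    for _ in range(max_iter):
        g = grad_f(x)
        if np.linalg.norm(g) < tol:
            break
        B = hess_f(x)
        p = cauchy_point(g, B, delta)
        predicted = -(g @ p + 0.5 * p @ B @ p)   # model decrease, > 0 here
        rho = (f(x) - f(x + p)) / predicted      # actual vs. predicted decrease
        if rho > 0.25:
            x = x + p                            # accept the step
        # adapt the region radius to the quality of the model
        if rho < 0.25:
            delta *= 0.5
        elif rho > 0.75:
            delta *= 2.0
    return x

# Example on an assumed quadratic test problem.
Q = np.diag([1.0, 10.0])
f = lambda x: 0.5 * x @ Q @ x
print(trust_region(f, lambda x: Q @ x, lambda x: Q, [2.0, 1.0]))
```

Note the advantage over descent direction methods: the step length and direction are chosen jointly through the model and the region radius, so no separate line search is needed and the step remains sensible even where the model's curvature is indefinite.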

Skills (use the knowledge to solve a problem)

  • Write code implementing a Quasi-Newton method for minimization of a provided function (one possible shape of such code is sketched below).
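
A minimal BFGS quasi-Newton sketch with a backtracking line search, written in Python with NumPy; the course does not prescribe a language, and the Rosenbrock test function is an assumed standard example. The key quasi-Newton idea is visible in the update: the inverse-Hessian approximation H is built from gradient differences only, with no second derivatives evaluated.

```python
import numpy as np

def bfgs(f, grad_f, x0, tol=1e-6, max_iter=500):
    """BFGS: maintain an approximation H of the inverse Hessian, built
    from gradient differences only -- the key quasi-Newton idea."""
    x = np.asarray(x0, dtype=float)
    n = x.size
    H = np.eye(n)                      # initial inverse-Hessian approximation
    g = grad_f(x)
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        d = -H @ g                     # quasi-Newton direction
        # backtracking (Armijo) line search
        alpha, fx, slope = 1.0, f(x), g @ d
        while f(x + alpha * d) > fx + 1e-4 * alpha * slope:
            alpha *= 0.5
        x_new = x + alpha * d
        g_new = grad_f(x_new)
        s, y = x_new - x, g_new - g
        sy = s @ y
        if sy > 1e-12:                 # curvature condition; skip update otherwise
            rho = 1.0 / sy
            V = np.eye(n) - rho * np.outer(s, y)
            H = V @ H @ V.T + rho * np.outer(s, s)
        x, g = x_new, g_new
    return x

# Example: Rosenbrock function (a standard test problem, assumed here).
f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
grad_f = lambda x: np.array([
    -2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
    200 * (x[1] - x[0]**2),
])
print(bfgs(f, grad_f, [-1.2, 1.0]))   # converges near [1, 1]
```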