Prerequisites

The student is expected to have a solid undergraduate-level understanding of linear algebra and multivariate analysis. In addition, familiarity with linear programming and convex analysis is required, sufficient to follow the material and complete the exercises from the following sources:

- Chapters 1 and 2 of Linear Programming: A Concise Introduction by Thomas S. Ferguson (www.math.ucla.edu/~tom/LP.pdf)

- Sections 2.1, 2.2, and 3.1 of Convex Optimization by Stephen Boyd and Lieven Vandenberghe (stanford.edu/~boyd/cvxbook), particularly exercises 2.1, 2.2, 2.12, 3.1, 3.3, 3.5, and 3.7

Aim of the Course

Continuous optimization is the branch of optimization that deals with optimizing a (differentiable) function over continuous (rather than discrete) variables. These variables may be subject to (differentiable) equality and inequality constraints, as well as convex cone constraints. Such optimization problems arise frequently in science and engineering, in machine learning, and as relaxations of discrete optimization problems. The differentiability of the functions involved enables the use of multivariate analysis and linear algebra techniques for studying the problems and designing and analyzing efficient algorithms.
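For concreteness, such a problem can be written in the standard form used in the course literature (Boyd and Vandenberghe):

\begin{aligned}
\text{minimize}\quad   & f_0(x) \\
\text{subject to}\quad & f_i(x) \le 0, \quad i = 1, \dots, m, \\
                       & h_j(x) = 0,   \quad j = 1, \dots, p,
\end{aligned}

where x \in R^n is the optimization variable and f_0, ..., f_m and h_1, ..., h_p are (differentiable) functions.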

In this course, we cover the fundamental theory and several key algorithms of continuous optimization. On the theoretical side, we explore convexity, Lagrange duality, optimality conditions, and conic programming. On the algorithmic side, we examine derivative-free methods, steepest descent methods, automatic differentiation (with application to neural networks), Newton’s method, trust-region methods, interior-point methods, and augmented Lagrangian methods. For some of these algorithms, we also analyze their convergence.
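As a small illustration of the algorithmic side, a minimal sketch of gradient descent with a fixed step size could look as follows. The quadratic objective, step size, and stopping tolerance below are illustrative choices and not part of the course material.

    import numpy as np

    def gradient_descent(grad, x0, step_size=0.1, tol=1e-8, max_iter=1000):
        # Repeatedly step in the direction of steepest descent (the negative
        # gradient) until the gradient is close to zero or the iteration
        # limit is reached.
        x = np.asarray(x0, dtype=float)
        for _ in range(max_iter):
            g = grad(x)
            if np.linalg.norm(g) < tol:
                break
            x = x - step_size * g  # fixed step size; step-size selection is treated in the course
        return x

    # Illustrative example: minimize the convex quadratic f(x) = x^T A x
    # with A positive definite; the unique minimizer is x = 0.
    A = np.array([[2.0, 0.5], [0.5, 1.0]])
    grad_f = lambda x: 2 * A @ x
    print(gradient_descent(grad_f, x0=[1.0, -1.0]))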

Lectures

Lectures take place weekly in Utrecht from 10:00 to 12:45. Each session consists of 90 minutes of lecturing, interspersed with short in-class exercises to help students become familiar with newly introduced concepts. At the end of each lecture, there is an exercise session for additional practice.

To encourage an interactive and engaging learning environment, the lectures are neither streamed nor recorded. Attendance is therefore strongly recommended. Lectures are given on the blackboard, and the corresponding lecture notes will be made available online.

Exercises

There are weekly exercise sheets with problems that mostly reflect the level of difficulty expected on the exam. These exercises are not graded, but solutions will be provided so students can assess their own understanding.

Assessment

The course is assessed through a written, closed-book exam, which accounts for 100% of the final grade. This also applies to the re-exam.

Learning goals

1. Identify convex and non-convex problems, and derive key properties of convex optimization problems.

2. Apply multivariate analysis techniques to establish results in continuous optimization.

3. Formulate Lagrange dual problems and derive fundamental results related to duality.

4. Apply necessary and sufficient optimality conditions to constrained continuous optimization problems.

5. Model problems in the conic programming framework.

6. Apply a range of techniques to solve unconstrained and constrained continuous optimization problems.

7. Perform automatic differentiation and derive its computational properties.

8. Analyze algorithmic properties, such as convergence rates, of various methods for solving continuous optimization problems.

Lecturer

David de Laat (TU Delft)

d.delaat@tudelft.nl 

www.daviddelaat.nl

Literature

We will use the book Convex Optimization by Boyd and Vandenberghe, which is freely available online. The lecturer will provide additional material in the form of lecture notes.

Contents of the lectures

1. Introduction / Convexity / Unconstrained optimality conditions

2. Lagrange duality

3. Minimax theorem and zero-sum games / Sensitivity analysis / Constrained optimality conditions

4. Generalized inequalities and conic programming

5. Semidefinite programming

6. Polynomial optimization / Convergence rates / Univariate derivative-free optimization

7. Gradient descent (including convergence analysis)

8. Reverse-mode automatic differentiation (with application to neural networks)

9. Steepest descent methods / Newton's method

10. Quadratic convergence of Newton's method / Trust-region methods

11. Newton’s method with equality constraints / Primal interior-point methods

12. Analysis of interior-point methods / Primal-dual interior-point methods

13. Augmented Lagrangian methods