The student should have a solid bachelor-level knowledge of linear algebra and multivariate analysis. The student should also have knowledge of linear optimization and convex analysis, to the level of being able to follow the text and do the exercises from the following:

Linear Programming, A Concise Introduction, Thomas S. Ferguson:
Available at
Chapters 1 and 2, along with the accompanying exercises.
Convex Optimization, Stephen Boyd and Lieven Vandenberghe:
Available at
Sections 2.1, 2.2, and 3.1.
Exercises (from the book): 2.1, 2.2, 2.12, 3.1, 3.3, 3.5, and 3.7.

Aim of the course
Continuous optimization is the branch of optimization in which we optimize a (differentiable) function over continuous (as opposed to discrete) variables. The variables can be constrained by (differentiable) equality and inequality constraints, as well as by convex cone constraints. Optimization problems of this kind arise naturally and commonly in science and engineering, and also occur as relaxations of discrete optimization problems. Differentiability of the functions defining a problem allows the use of multivariable calculus and linear algebra, both to study the problem and to design and analyze efficient algorithms.
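As a small illustration of the kind of algorithm the course analyzes, the sketch below (Python with NumPy; the quadratic objective is an illustrative choice, not taken from the course material) applies gradient descent to a smooth convex function whose unique minimizer is known in closed form.

```python
import numpy as np

# Illustrative convex quadratic: f(x) = 0.5 * x^T Q x - b^T x,
# with Q symmetric positive definite, so the unique minimizer is x* = Q^{-1} b.
Q = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([1.0, 1.0])

def grad_f(x):
    # Gradient of the quadratic objective: Q x - b.
    return Q @ x - b

x = np.zeros(2)
step = 1.0 / np.linalg.eigvalsh(Q).max()  # step size 1/L, L = largest eigenvalue of Q

for _ in range(500):
    x = x - step * grad_f(x)

x_star = np.linalg.solve(Q, b)  # closed-form minimizer, for comparison
print(x, x_star)  # the iterates converge to x* = (0.2, 0.4)
```

The step size 1/L, with L the Lipschitz constant of the gradient, is the standard safe choice that guarantees convergence for smooth convex objectives.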

This course aims to provide a concise introduction to the basics of unconstrained, constrained, and conic continuous optimization.

Learning goals
The student will be able to:

  • Prove results on (convex) optimization problems.
  • Solve the KKT conditions for basic constrained optimization problems.
  • Formulate the Lagrange dual, and understand/prove basic results on these dual problems.
  • Give both sufficient and necessary optimality conditions for constrained continuous optimization problems.
  • Use a range of techniques to solve both unconstrained and constrained continuous optimization problems, and prove results on these techniques.
  • Formulate and recognize conic optimization problems, and construct their dual problems.
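As a small taste of the second goal: for an equality-constrained convex quadratic, the KKT conditions reduce to a linear system. The sketch below (Python with NumPy; the example problem is made up for illustration) solves minimize (x - 1)^2 + (y - 2)^2 subject to x + y = 1 in this way.

```python
import numpy as np

# Toy problem (illustrative, not from the course notes):
#   minimize  (x - 1)^2 + (y - 2)^2   subject to  x + y = 1.
# Stationarity of the Lagrangian L(x, y, lam) = f(x, y) + lam * (x + y - 1),
# together with the equality constraint, gives the linear KKT system:
#   2x      + lam = 2
#        2y + lam = 4
#   x  +  y       = 1
K = np.array([[2.0, 0.0, 1.0],
              [0.0, 2.0, 1.0],
              [1.0, 1.0, 0.0]])
rhs = np.array([2.0, 4.0, 1.0])

x, y, lam = np.linalg.solve(K, rhs)
print(x, y, lam)  # optimal point (0, 1) with multiplier lam = 2
```

For problems with inequality constraints the KKT system is no longer a single linear system (complementary slackness introduces case distinctions), which is exactly where the analysis in the course comes in.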