[Fall 2020]
Prerequisites
Students should have a solid knowledge of linear algebra and multivariable calculus, as well as a working knowledge of linear programming (including linear programming duality, a standard formulation of which is sketched after the list below) and convex analysis, to the level of being able to follow the text and do the exercises from:
- Chapters 1 and 2, including all exercises, from 'Linear Programming: A Concise Introduction' by Thomas S. Ferguson (https://www.math.ucla.edu/~tom/LP.pdf)
- Exercises 2.1, 2.2, 2.12, 3.1, 3.3, 3.5, and 3.7 from 'Convex Optimization' by Stephen Boyd and Lieven Vandenberghe (http://stanford.edu/~boyd/cvxbook)
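For concreteness, here is a minimal sketch of the linear programming duality mentioned above, in standard inequality form; this pairing is textbook material and not specific to the course:

```latex
% Primal (P) and dual (D) of a linear program in standard inequality form.
\[
\text{(P)}\quad \max\ c^{\mathsf{T}}x \ \ \text{s.t.}\ Ax \le b,\ x \ge 0
\qquad\qquad
\text{(D)}\quad \min\ b^{\mathsf{T}}y \ \ \text{s.t.}\ A^{\mathsf{T}}y \ge c,\ y \ge 0
\]
```

Weak duality gives \(c^{\mathsf{T}}x \le b^{\mathsf{T}}y\) for every pair of feasible solutions, and strong duality says the two optimal values coincide whenever one of the problems has a finite optimum; this is the level of familiarity the exercises above assume.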
Aim of the course
In continuous optimization the variables take on continuous (as opposed to discrete) values, and the objective and constraints are typically differentiable. This allows the use of multivariable calculus to study the problems and their solutions, and to design and analyze efficient algorithms for finding them. In this course we study the theory, algorithms, and applications of continuous optimization. In the theory part we discuss Lagrangian duality, optimality conditions, convexity, and conic programming. In the algorithmic part we discuss first order optimization methods, neural networks and supervised learning, second order optimization methods, and interior point methods, including some of their convergence analysis. Throughout, we discuss many relevant applications.
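As a small taste of the first order methods treated in the algorithmic part, the sketch below runs plain gradient descent on a smooth convex quadratic. This is an illustrative example, not code from the course; the matrix Q, the vector b, and the step size are made up for the demonstration.

```python
import numpy as np

# Minimize f(x) = (1/2) x^T Q x - b^T x, a smooth convex quadratic.
# Q and b are made-up example data; any positive definite Q works.
Q = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([1.0, -1.0])

def grad(x):
    """Gradient of f at x: Q x - b."""
    return Q @ x - b

x = np.zeros(2)    # starting point
step = 0.2         # fixed step size, chosen below 2 / lambda_max(Q)
for _ in range(200):
    x = x - step * grad(x)

print("gradient descent:", x)
print("exact minimizer: ", np.linalg.solve(Q, b))  # Q^{-1} b for comparison
```

For any fixed step size below 2/L, where L is the largest eigenvalue of Q, the iterates converge to the unique minimizer Q^{-1}b; convergence analysis of this kind is among the topics discussed in the course.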
Lecturers
- Lecturer: David de Laat
- Lecturer: Marc Uetz