Module overview
This module covers the fundamentals of algorithms for continuous optimisation, that is, minimising functions of several real-valued variables, possibly subject to constraints and nondifferentiable regularisations on the values the variables may take. We focus (though not exclusively) on convex optimisation, with the choice of topics motivated by relevance to machine learning and data science.
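Such problems can typically be written in the composite form below; this is a generic template for orientation, with f, g, and C standing in for a smooth loss, a possibly nondifferentiable regulariser, and a constraint set (our notation, not the module's fixed one):

\[
\min_{x \in \mathbb{R}^n} \; f(x) + g(x) \quad \text{subject to} \quad x \in C .
\]

For example, taking \( f(x) = \tfrac{1}{2}\|Ax - b\|_2^2 \) and \( g(x) = \lambda \|x\|_1 \) yields the lasso, a staple of machine learning.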
The module has a two-part syllabus.
Part 1 covers the theoretical foundation of optimisation: convex analysis. Topics include the notion of convexity, subdifferentials, optimality conditions, and properties of various formulations of continuous optimisation problems.
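As a taste of Part 1, the subdifferential generalises the gradient to convex functions that need not be differentiable; a standard definition (stated here for orientation, not quoted from the syllabus) is

\[
\partial f(x) = \{\, g \in \mathbb{R}^n : f(y) \ge f(x) + \langle g, y - x \rangle \ \text{for all } y \in \mathbb{R}^n \,\},
\]

so that x minimises a convex f exactly when \( 0 \in \partial f(x) \), the prototypical optimality condition.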
Part 2 focuses on methods for solving optimisation problems. Topics include various gradient descent methods, higher-order methods, coordinate descent, randomisation, and heuristics.
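As a minimal illustration of the style of method in Part 2, here is plain gradient descent in Python (one of the two languages used in the module) on a made-up least-squares problem; the function and variable names are our own sketch, not material from the module:

```python
import numpy as np

def gradient_descent(grad, x0, step, n_iters=500):
    """Plain gradient descent: x_{k+1} = x_k - step * grad(x_k)."""
    x = x0.copy()
    for _ in range(n_iters):
        x = x - step * grad(x)
    return x

# Illustrative problem: minimise the least-squares objective 0.5 * ||A x - b||^2.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)

grad = lambda x: A.T @ (A @ x - b)   # gradient of the objective
L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient (squared spectral norm)
x_star = gradient_descent(grad, np.zeros(5), step=1.0 / L)
```

With the constant step size 1/L, where L is the Lipschitz constant of the gradient, this iteration converges for smooth convex objectives; Part 2 develops such results and the method's many refinements.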
[Module focus] This module is on structured continuous nonlinear optimisation in real Euclidean space. It is not about linear programming, combinatorial optimisation, or PDE-constrained optimisation.
[Prerequisites] A good knowledge of linear algebra and (differential) calculus is required for this module. Exposure to numerical analysis and vector calculus is helpful but not required; the applications will be kept simple. Students will write scripts in MATLAB/Python, so familiarity with programming is required.
[Who should enrol] This module is expected to benefit anyone who uses, or will use, optimisation in machine learning and related work, in particular people from the following fields: machine learning, signal and image processing, communications, bioinformatics, control, robotics, computer graphics, computer vision, operations research, scientific computing, computational mathematics, and finance.