Multilevel algorithms for the optimization of structured problems
Author(s)
Ho, Chinpang
Type
Thesis or dissertation
Abstract
Although large-scale optimization problems are difficult to solve in general, problems that arise from practical applications often exhibit particular structure. In this thesis we study and improve algorithms that exploit such structure to solve these problems efficiently. Three separate settings are considered.
The first part concerns singularly perturbed Markov decision processes (MDPs). When an MDP is singularly perturbed, one can construct an aggregate model whose solution is asymptotically optimal. We develop an algorithm that takes advantage of these existing results to compute the solution of the original model. The proposed algorithm computes the optimal solution with a reduction in complexity and without any penalty in accuracy.
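As a hedged illustration of the setting (the formulation below is the standard one from the singular perturbation literature and is assumed here rather than taken from the abstract): a singularly perturbed MDP is often modelled with a transition law
\[ P(\epsilon) = \bar{P} + \epsilon Q, \]
where \bar{P} is block-structured (strong interactions within groups of states) and \epsilon Q is a weak coupling between the groups. The aggregate model collapses each block to a single state, and its solution approaches the optimal solution of the original model as \epsilon \to 0.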
In the second part, we study the class of empirical risk minimization (ERM) problems. When a first order method is used, the Lipschitz constant of the empirical risk plays a crucial role in the convergence analysis and the choice of stepsize. We derive probabilistic bounds for such Lipschitz constants using random matrix theory. These results are then used to derive probabilistic complexity bounds and to develop a new stepsize strategy for first order methods. The proposed strategy, the Probabilistic Upper-bound Guided stepsize strategy (PUG), has a strong theoretical guarantee on its performance compared to the standard stepsize strategy.
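To illustrate the role the Lipschitz constant plays here, the sketch below applies the classical 1/L stepsize rule to a least-squares empirical risk. It is a minimal illustration only; PUG itself is not reproduced, and the function names and synthetic data are assumptions made for the example.

import numpy as np

def lipschitz_constant(A):
    # Lipschitz constant of the gradient of f(x) = (1/2n) ||Ax - b||^2,
    # i.e. the largest eigenvalue of (1/n) A^T A (squared spectral norm over n).
    n = A.shape[0]
    return np.linalg.norm(A, 2) ** 2 / n

def gradient_descent(A, b, iters=100):
    # Gradient descent on the least-squares empirical risk with the
    # classical constant stepsize 1/L.
    n, d = A.shape
    L = lipschitz_constant(A)
    x = np.zeros(d)
    for _ in range(iters):
        grad = A.T @ (A @ x - b) / n
        x -= grad / L  # stepsize 1/L guarantees monotone descent
    return x

# Illustrative use with synthetic data.
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 10))
b = rng.standard_normal(200)
x_hat = gradient_descent(A, b)

A tighter upper bound on L translates directly into a larger admissible stepsize, which is why probabilistic bounds of the kind derived in the thesis matter for first order methods.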
In the third part, we extend existing results on multilevel methods for unconstrained convex optimization. We study the special case in which the hierarchy of models is created by approximating the first and second order information of the exact model. This construction is known as the Galerkin approximation, and we name the corresponding algorithm the Galerkin-based Algebraic Multilevel Algorithm (GAMA). Three case studies are conducted to show how the structure of a problem can affect the convergence of GAMA.
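As a hedged sketch of the Galerkin construction (the notation below is assumed for illustration rather than taken from the abstract): given a restriction operator R from the fine space to a coarse space and a prolongation operator P = R^T, the coarse model at a fine iterate x_h uses
\[ g_H = R \, \nabla f_h(x_h), \qquad H_H = R \, \nabla^2 f_h(x_h) \, P, \]
so the first and second order information of the exact model is approximated algebraically, without re-discretizing the underlying problem.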
Version
Open Access
Date Issued
2016-10
Date Awarded
2016-12
Advisor
Parpas, Panos
Sponsor
Engineering and Physical Sciences Research Council; European Commission
Grant Number
EP/K040723/1
PCIG11-GA-2012-321698 SOC-MP-ES
Publisher Department
Computing
Publisher Institution
Imperial College London
Qualification Level
Doctoral
Qualification Name
Doctor of Philosophy (PhD)