Deep Gaussian Processes: Advances in Models and Inference

File: Salimbeni-H-2019-Phd-Thesis.pdf (Thesis, 11.16 MB, Adobe PDF)
Title: Deep Gaussian Processes: Advances in Models and Inference
Authors: Salimbeni, Hugh
Item Type: Thesis or dissertation
Abstract: Hierarchical models are certainly in fashion these days. It seems difficult to navigate the field of machine learning without encountering `deep' models of one sort or another. The popularity of the deep learning revolution has been driven by some striking empirical successes, prompting both intense rapture and intense criticism. The criticisms often centre on the lack of model uncertainty, which leads to sometimes drastically overconfident predictions. Others point to the lack of a mechanism for incorporating prior knowledge, and to the reliance on large datasets. A widely held hope is that a Bayesian approach might overcome these problems. The deep Gaussian process presents a paradigm for building deep models from a Bayesian perspective. A Gaussian process is a prior over functions. A deep Gaussian process uses several Gaussian process functions and combines them hierarchically through composition (that is, the output of one is the input to the next). The deep Gaussian process promises to capture the compositional nature of deep learning while mitigating some of its disadvantages through a Bayesian approach.

The thesis develops deep Gaussian process modelling in a number of ways. The model is first interpreted differently from previous work: not as a `hierarchical prior' but as a factorized prior with a hierarchical likelihood. Mean functions are suggested to avoid issues of degeneracy and to aid initialization. The main contribution is a new method of inference that avoids the burden of representing the function values directly, through an application of sparse variational inference. This method scales to arbitrarily large data and is shown through experiments to work well in practice.

The use of variational inference recasts (approximate) inference as optimization over Gaussian distributions. This optimization has a geometry that can be exploited via the natural gradient. The natural gradient is shown to be advantageous for single-layer non-conjugate models, and for the final layer of a deep Gaussian process model.

Deep Gaussian processes can model both complex associations between variables and complex marginal distributions of single variables. Incorporating noise in the hierarchy leads to complex marginal distributions through the non-linearities of the mappings at each layer. The inference required for noisy variables cannot be handled with sparse methods, which rely on correlations between variables; such correlations are absent for noisy variables. Instead, a more direct approach is developed, using an importance-weighted variational scheme.
Content Version: Open Access
Issue Date: Jul-2019
Date Awarded: Jul-2020
URI: http://hdl.handle.net/10044/1/81669
DOI: https://doi.org/10.25560/81669
Copyright Statement: Creative Commons Attribution NonCommercial Licence
Supervisor: Deisenroth, Marc
Department: Computing
Publisher: Imperial College London
Qualification Level: Doctoral
Qualification Name: Doctor of Philosophy (PhD)
Appears in Collections:Computing PhD theses
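The compositional construction described in the abstract can be written compactly. The following is a minimal sketch only; the layer count L, the symbols f^(l), h^(l) and eps^(l), and the noise placement are illustrative notation chosen here, not necessarily the notation used in the thesis.

% Each layer function carries an independent GP prior (the "factorized prior"),
% while the observation depends on them only through the nested composition,
% which plays the role of a hierarchical likelihood.
\begin{align}
  f^{(\ell)} &\sim \mathcal{GP}\bigl(m_\ell(\cdot),\, k_\ell(\cdot,\cdot)\bigr),
      \qquad \ell = 1, \dots, L, \\
  h^{(1)} &= f^{(1)}(x) + \epsilon^{(1)}, \qquad
  h^{(\ell)} = f^{(\ell)}\bigl(h^{(\ell-1)}\bigr) + \epsilon^{(\ell)}, \\
  y \mid f^{(1)}, \dots, f^{(L)}, x &\sim p\bigl(y \mid h^{(L)}\bigr).
\end{align}

% With the inner noise terms \epsilon^{(\ell)} removed, the hidden layers
% collapse to the plain composition f^{(L)} \circ \cdots \circ f^{(1)}(x);
% keeping them is what the abstract refers to as incorporating noise in the
% hierarchy, which yields complex marginal distributions for y.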

This item is licensed under a Creative Commons licence.