Inverse-free inference and reliable uncertainty quantification with Gaussian processes
| File | Description | Size | Format |
| --- | --- | --- | --- |
| Popescu-S-2023-PhD-Thesis.pdf | Thesis | 42.32 MB | Adobe PDF |
Title: | Inverse-free inference and reliable uncertainty quantification with Gaussian processes |
Authors: | Popescu, Sebastian Gabriel |
Item Type: | Thesis or dissertation |
Abstract: | Standard machine learning research involves running experiments on training and testing data drawn from the same distribution, usually a data-generating pipeline that produces clean data in a clearly defined environment. In real-life scenarios, however, models face unexpected distribution shifts, with the imperative need to discern unknown "unknowns", more specifically to detect outliers. Deep Neural Networks have proven their prowess at correctly classifying objects, yet they lack uncertainty quantification and a framework for incorporating prior knowledge, and they rely on large datasets. Bayesian methods are considered a remedy for these issues, with Gaussian Processes being an example of models that place function-space priors and find usage due to their uncertainty quantification properties. Drawbacks associated with Gaussian Processes range from designing data-specific kernels to the distributional mismatch between the Gaussian predictive distribution and the true data-generating distribution. This thesis posits that deep Gaussian Processes represent the optimal choice for adequate out-of-distribution detection, as their hierarchical nature circumvents the aforementioned issues. Firstly, we investigate the capability of deep Gaussian Processes to properly detect outliers and propose changes to enhance out-of-distribution detection. Subsequently, we introduce a probabilistic layer that acts as a drop-in replacement for layers in convolutional architectures and reliably propagates uncertainty forward. We reframe medical imaging prediction tasks as outlier detection, showing that our probabilistic module is more capable of detecting pathologies in MR scans as outliers given healthy samples in the training set. To ensure the competitiveness of our models, we address the computational drawbacks associated with training Gaussian Processes: we propose an inverse-free variational lower bound for sparse Student-t Processes, showing through various experiments behaviour similar to that of matrix-inversion-dependent models. Lastly, we discuss future research pathways and applications, concluding that safe machine learning deployment is conditioned on probabilistic models with strong uncertainty guarantees. |
Content Version: | Open Access |
Issue Date: | Jun-2023 |
Date Awarded: | Nov-2023 |
URI: | http://hdl.handle.net/10044/1/108227 |
DOI: | https://doi.org/10.25560/108227 |
Copyright Statement: | Creative Commons Attribution Licence |
Supervisor: | Sharp, David; Glocker, Ben; Cole, James |
Department: | Department of Brain Sciences |
Publisher: | Imperial College London |
Qualification Level: | Doctoral |
Qualification Name: | Doctor of Philosophy (PhD) |
Appears in Collections: | Department of Brain Sciences PhD Theses |
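To make concrete the matrix-inversion bottleneck the abstract refers to, the sketch below implements exact Gaussian process regression in the standard Cholesky-based form (Rasmussen & Williams, Algorithm 2.1): factorising the N x N kernel matrix costs O(N^3), which is precisely the cost that sparse and inverse-free variational methods aim to avoid. This is an illustrative sketch only, not the thesis's inverse-free method; the kernel choice, hyperparameters, and toy data are assumptions.

```python
import numpy as np

def rbf_kernel(X1, X2, lengthscale=1.0, variance=1.0):
    # Squared-exponential kernel: k(x, x') = s^2 * exp(-||x - x'||^2 / (2 l^2))
    sqdist = (np.sum(X1**2, 1)[:, None]
              + np.sum(X2**2, 1)[None, :]
              - 2.0 * X1 @ X2.T)
    return variance * np.exp(-0.5 * sqdist / lengthscale**2)

def gp_predict(X_train, y_train, X_test, noise=1e-2):
    # Exact GP posterior mean and variance. The Cholesky factorisation of the
    # N x N matrix (K + noise * I) costs O(N^3) -- the computational drawback
    # that motivates sparse and inverse-free approximations.
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    K_s = rbf_kernel(X_train, X_test)
    K_ss = rbf_kernel(X_test, X_test)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = K_s.T @ alpha
    v = np.linalg.solve(L, K_s)
    var = np.diag(K_ss) - np.sum(v**2, axis=0)
    return mean, var

# Usage on noisy 1-D toy data: the predictive variance grows away from the
# training inputs, which is the property exploited when uncertainty is used
# to flag out-of-distribution inputs as outliers.
X = np.linspace(0, 5, 50)[:, None]
y = np.sin(X).ravel() + 0.1 * np.random.randn(50)
X_new = np.linspace(-2, 7, 100)[:, None]
mu, var = gp_predict(X, y, X_new)
```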