Deep representation learning for fetal screening
File(s): Meng-Q-2020-PhD-Thesis (48.59 MB), Thesis
Author(s): Meng, Qingjie
Type: Thesis or dissertation
Abstract
Deep learning approaches enable automatic interpretation of fetal screening images, yielding useful clinical information for prenatal medical care. However, utilising deep learning for fetal screening analysis is still challenging. Shadow artefacts in fetal ultrasound imaging may conceal anatomical structures, resulting in poor anatomy visualisation and inaccurate image interpretation. The utilisation of learning-based fetal screening analysis algorithms in patient care at scale is hindered by differences among images obtained from different acquisition devices, hospitals and geographic regions. A shortage of clinical workforce and the need for specialist expertise lead to insufficient and coarse prior knowledge for fetal screening analysis. This thesis aims to develop deep learning methods for fetal screening analysis with little or no supervision, focusing on shadow artefacts and on anatomical classification across different datasets.

We first propose learning-based methods to estimate shadow confidence maps for fetal ultrasound images from coarse and weak image annotations. The predicted dense shadow confidence maps give the probability of each pixel belonging to a shadow region and can provide extra information to improve the performance of downstream automatic image analysis algorithms such as fetal ultrasound classification, multi-view image fusion and biometric measurement. We then address the constrained utilisation of fetal ultrasound standard plane classification models across different datasets. A deep learning-based method is proposed to align anatomical features between different datasets, enabling consistent performance of fetal ultrasound standard plane classification on images acquired from different devices. Finally, we investigate the generalisation of task-specific models when images from different clinics do not share the same anatomical categories. We propose learning-based methods to separate anatomical features from all other types of features and thus learn generalisable anatomical features. The proposed methods enable the classification of unseen anatomical categories, which can help clinicians at clinical sites across a wide range of geographic areas to use the same fetal ultrasound standard plane classification model to analyse their own data.
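As a rough illustration of how a dense shadow confidence map might be consumed by a downstream algorithm (the map itself would come from the learned model described above; the attenuation scheme and all names here are hypothetical, not the thesis's method), a minimal NumPy sketch:

```python
import numpy as np

def apply_shadow_confidence(image, confidence, threshold=0.5):
    """Down-weight pixels likely to lie in shadow before further analysis.

    image:      2-D array of pixel intensities
    confidence: 2-D array in [0, 1]; higher = more likely shadow
    Pixels whose confidence exceeds the threshold are attenuated by
    (1 - confidence); all other pixels are left unchanged.
    """
    mask = confidence >= threshold           # pixels flagged as shadow
    attenuated = image * (1.0 - confidence)  # soft attenuation
    return np.where(mask, attenuated, image)

# Toy example: two of four pixels are flagged as shadow and attenuated.
img = np.array([[10.0, 10.0], [10.0, 10.0]])
conf = np.array([[0.9, 0.1], [0.2, 0.8]])
out = apply_shadow_confidence(img, conf)
```

A downstream classifier or fusion step could then operate on `out`, or use `confidence` directly as an extra input channel, which is closer in spirit to how the thesis describes feeding the maps into later algorithms.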
Version: Open Access
Date Issued: 2020-09
Date Awarded: 2021-02
URI: http://hdl.handle.net/10044/1/87390
DOI: https://doi.org/10.25560/87390
Copyright Statement: Creative Commons Attribution NonCommercial Licence
License URL: https://creativecommons.org/licenses/by-nc-nd/4.0/
Advisor: Kainz, Bernhard
Publisher Department: Computing
Publisher Institution: Imperial College London
Qualification Level: Doctoral
Qualification Name: Doctor of Philosophy (PhD)