A generative adversarial model for right ventricle segmentation
File(s)
1810.03969v1.pdf (2.14 MB)
Working paper
Author(s)
Savioli, Nicoló
Vieira, Miguel Silva
Lamata, Pablo
Montana, Giovanni
Type
Working Paper
Abstract
The clinical management of several cardiovascular conditions, such as
pulmonary hypertension, requires the assessment of right ventricular (RV)
function. This work addresses fully automatic and robust access to one of
the key RV biomarkers, its ejection fraction, from the gold-standard imaging
modality, MRI. The problem becomes the accurate segmentation of the RV blood
pool from cine MRI sequences. This work proposes a solution based on Fully
Convolutional Neural Networks (FCNN), where our first contribution is the
optimal combination of three concepts (convolutional Gated Recurrent Units
(GRU), Generative Adversarial Networks (GAN), and the L1 loss function)
that achieves improvements of 0.05 in Dice Index and 3.49 mm in Hausdorff
Distance with respect to the baseline FCNN. This improvement is
then doubled by our second contribution, the ROI-GAN, which sets two GANs to
cooperate at two fields of view of the image: its full resolution and
the region of interest (ROI). Our rationale here is to better guide the FCNN
learning by combining global (full resolution) and local Region Of Interest
(ROI) features. The study is conducted on a large in-house dataset of $\sim$
23,000 segmented MRI slices, and its generality is verified on a publicly
available dataset.
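For context, the loss combination mentioned in the abstract (adversarial training paired with an L1 term) is commonly written as below. This is an illustrative formulation with notation introduced here ($G$, $D$, $x$, $y$, $\lambda$), not necessarily the exact objective used in the paper:

$$
\mathcal{L}(G,D) = \mathbb{E}_{x,y}\!\left[\log D(x,y)\right] + \mathbb{E}_{x}\!\left[\log\!\left(1 - D\!\left(x, G(x)\right)\right)\right] + \lambda\,\mathbb{E}_{x,y}\!\left[\lVert y - G(x)\rVert_{1}\right],
$$

where $G$ is the segmentation generator, $D$ the discriminator, $x$ a cine MRI slice, $y$ its ground-truth RV mask, and $\lambda$ a weight on the L1 term; $G$ minimises the objective while $D$ maximises the adversarial terms.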
Date Issued
2018-09-27
Citation
2018
Publisher
arXiv
Copyright Statement
© 2018 The Author(s)
Identifier
http://arxiv.org/abs/1810.03969v1
Subjects
cs.CV
cs.LG
stat.ML
Notes
9 pages, 8 figures
Publication Status
Published