Variational autoencoded regression: high dimensional regression of visual data on complex manifold

File: cvpr-2017-variational-paper-stamped.pdf, 5.22 MB, Adobe PDF (embargoed until 01 January 10000; request a copy)
Title: Variational autoencoded regression: high dimensional regression of visual data on complex manifold
Author(s): Yoo, YJ; Chang, H; Yun, S; Demiris, Y; Choi, JY
Item Type: Conference Paper
Abstract: This paper proposes a new high dimensional regression method that merges Gaussian process regression into a variational autoencoder framework. In contrast to other regression methods, the proposed method focuses on the case where the output responses lie on a complex high dimensional manifold, such as images. Our contributions are summarized as follows: (i) a new regression method that estimates high dimensional image responses, a task not handled by existing regression algorithms; (ii) a strategy to learn the latent space, together with the encoder and decoder, so that the regressed response in the latent space coincides with the corresponding response in the data space; (iii) the proposed regression is embedded into a generative model, and the whole procedure is developed within the variational autoencoder framework. We demonstrate the robustness and effectiveness of our method through a number of experiments on various visual data regression problems.
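The core idea in the abstract, regressing a low dimensional covariate to a latent code and decoding it back to image space, can be sketched as follows. This is an illustrative NumPy sketch only, not the authors' implementation: the encoder is stood in for by precomputed latent codes, the decoder by a hypothetical linear map `W_dec`, and the GP uses a plain RBF kernel fitted independently per latent dimension.

```python
import numpy as np

def rbf_kernel(a, b, length_scale=0.2):
    """Squared-exponential kernel between 1-D input arrays."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length_scale) ** 2)

def gp_predict(x_train, z_train, x_test, noise=1e-4):
    """GP posterior mean, fitted independently per latent dimension."""
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    K_s = rbf_kernel(x_test, x_train)
    return K_s @ np.linalg.solve(K, z_train)   # shape (n_test, latent_dim)

rng = np.random.default_rng(0)
W_dec = rng.normal(size=(2, 16))               # hypothetical decoder weights

# Scalar covariate (e.g. a pose angle) and latent codes that a trained
# encoder might produce; here a smooth sin/cos embedding for illustration.
x_train = np.linspace(0.0, 1.0, 20)
z_train = np.column_stack([np.sin(2 * np.pi * x_train),
                           np.cos(2 * np.pi * x_train)])

x_test = np.array([0.25, 0.75])
z_pred = gp_predict(x_train, z_train, x_test)  # regress in latent space
y_pred = z_pred @ W_dec                        # "decode" to data space
```

The GP interpolates smoothly between observed latent codes, so novel covariate values map to plausible points on the learned manifold before decoding; in the paper this regression is trained jointly with the encoder and decoder rather than bolted on afterwards.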
Publication Date: 22-Jul-2017
Date of Acceptance: 27-Feb-2017
Publisher: IEEE
Copyright Statement: This paper is embargoed until publication.
Sponsor/Funder: Commission of the European Communities
Funder's Grant Number: 612139
Conference Name: IEEE Conference on Computer Vision and Pattern Recognition
Publication Status: Accepted
Start Date: 2017-07-22
Finish Date: 2017-07-25
Conference Place: Honolulu, Hawaii, USA
Embargo Date: publication subject to indefinite embargo
Appears in Collections: Faculty of Engineering
Electrical and Electronic Engineering

Items in Spiral are protected by copyright, with all rights reserved, unless otherwise indicated.