RT-GENE: Real-time eye gaze estimation in natural environments
File(s)
Fischer_ECCV2018_RT-GENE_stamped3.pdf (4.84 MB)
Accepted version
OA Location
http://openaccess.thecvf.com/content_ECCV_2018/html/Tobias_Fischer_RT-GENE_Real-Time_Eye_ECCV_2018_paper.html
Author(s)
Fischer, Tobias
Chang, Hyung Jin
Demiris, Yiannis
Type
Conference Paper
Abstract
In this work, we consider the problem of robust gaze estimation in natural environments. Large camera-to-subject distances and high variations in head pose and eye gaze angles are common in such environments. This leads to two main shortfalls in state-of-the-art methods for gaze estimation: hindered ground truth gaze annotation and diminished gaze estimation accuracy as image resolution decreases with distance. We first record a novel dataset of varied gaze and head pose images in a natural environment, addressing the issue of ground truth annotation by measuring head pose using a motion capture system and eye gaze using mobile eyetracking glasses. We apply semantic image inpainting to the area covered by the glasses to bridge the gap between training and testing images by removing the obtrusiveness of the glasses. We also present a new real-time algorithm involving appearance-based deep convolutional neural networks with increased capacity to cope with the diverse images in the new dataset. Experiments with this network architecture are conducted on a number of diverse eye-gaze datasets including our own, and in cross dataset evaluations. We demonstrate state-of-the-art performance in terms of estimation accuracy in all experiments, and the architecture performs well even on lower resolution images.
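Gaze estimation accuracy in work like this is conventionally reported as the 3-D angular error between predicted and ground-truth gaze directions. Below is a minimal numpy-only sketch of that evaluation metric, converting (yaw, pitch) angles to unit gaze vectors and measuring the angle between them; the axis convention is an assumption for illustration, and this is not the authors' code.

```python
import numpy as np

def angles_to_vector(yaw, pitch):
    # Convert (yaw, pitch) in radians to a unit 3-D gaze direction.
    # Axis convention (assumed): x right, y down, z forward from the camera.
    return np.array([
        -np.cos(pitch) * np.sin(yaw),
        -np.sin(pitch),
        -np.cos(pitch) * np.cos(yaw),
    ])

def angular_error_deg(pred, gt):
    # Angle in degrees between predicted and ground-truth gaze,
    # each given as a (yaw, pitch) pair in radians.
    v1, v2 = angles_to_vector(*pred), angles_to_vector(*gt)
    cos_sim = np.clip(np.dot(v1, v2), -1.0, 1.0)  # guard against rounding
    return np.degrees(np.arccos(cos_sim))

# Identical angles give zero error; a pure 10-degree yaw offset gives 10 degrees.
print(round(angular_error_deg((0.0, 0.0), (0.0, 0.0)), 6))            # → 0.0
print(round(angular_error_deg((np.radians(10), 0.0), (0.0, 0.0)), 6))  # → 10.0
```

Averaging this per-sample error over a test set yields the mean angular error figures typically compared across gaze datasets.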
Date Issued
2018-10-06
Date Acceptance
2018-07-03
Citation
Lecture Notes in Computer Science, 2018, 11214, pp.339-357
URI
http://hdl.handle.net/10044/1/62579
URL
https://www.springer.com/us/book/9783030012663
DOI
https://doi.org/10.1007/978-3-030-01249-6_21
ISSN
0302-9743
Publisher
Springer Verlag
Start Page
339
End Page
357
Journal / Book Title
Lecture Notes in Computer Science
Volume
11214
Copyright Statement
© Springer Nature Switzerland AG 2018. The final publication is available at Springer via https://link.springer.com/chapter/10.1007/978-3-030-01249-6_21
Sponsor
Commission of the European Communities
Samsung Electronics Co Ltd
Identifier
https://www.springer.com/us/book/9783030012663
Grant Number
643783
N/A
Source
European Conference on Computer Vision
Subjects
Gaze estimation
Gaze dataset
Convolutional Neural Network
Semantic inpainting
Eyetracking glasses
Start Date
2018-09-08
Finish Date
2018-09-14
Coverage Spatial
Munich, Germany