Facial Affect "in-the-wild": A survey and a new database
File(s)
egpaper_final_2.pdf (6.4 MB)
Accepted version
Author(s)
Zafeiriou, S
Papaioannou, A
Kotsia, I
Nicolaou, M
Zhao, G
Type
Conference Paper
Abstract
Well-established databases and benchmarks have been developed in the past 20 years for automatic facial behaviour analysis. Nevertheless, for some important problems regarding analysis of facial behaviour, such as (a) estimation of affect in a continuous dimensional space (e.g., valence and arousal) in videos displaying spontaneous facial behaviour and (b) detection of the activated facial muscles (i.e., facial action unit detection), to the best of our knowledge, well-established in-the-wild databases and benchmarks do not exist. That is, the majority of the publicly available corpora for the above tasks contain samples that have been captured in controlled recording conditions and/or under a very specific milieu. Arguably, in order to make further progress in automatic understanding of facial behaviour, datasets that have been captured in-the-wild and in various milieus have to be developed. In this paper, we survey the recent progress on understanding facial behaviour in-the-wild, the datasets that have been developed so far and the methodologies that have been proposed, paying particular attention to deep learning techniques for the task. Finally, we make a significant step further and propose a new comprehensive benchmark both for training methodologies and for assessing the performance of facial affect/behaviour analysis and understanding in-the-wild. To the best of our knowledge, this is the first time that such a benchmark for valence and arousal "in-the-wild" is presented.
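For context on how dimensional valence/arousal benchmarks of this kind are typically scored, below is a minimal, self-contained sketch of the Concordance Correlation Coefficient (CCC), a metric commonly used to compare per-frame continuous affect predictions against ground-truth annotations. The choice of metric and the example values are illustrative assumptions for this record, not details taken from the paper itself.

```python
import numpy as np

def concordance_correlation_coefficient(y_true, y_pred):
    """Concordance Correlation Coefficient (CCC) between two 1-D series.

    CCC = 2 * cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2)

    Unlike plain Pearson correlation, CCC also penalises differences in
    mean and variance, so predictions must match the annotations in
    location and scale as well as in trend.
    """
    y_true = np.asarray(y_true, dtype=np.float64)
    y_pred = np.asarray(y_pred, dtype=np.float64)
    mean_t, mean_p = y_true.mean(), y_pred.mean()
    var_t, var_p = y_true.var(), y_pred.var()
    cov = np.mean((y_true - mean_t) * (y_pred - mean_p))
    return 2.0 * cov / (var_t + var_p + (mean_t - mean_p) ** 2)

# Hypothetical example: per-frame valence annotations in [-1, 1]
# for one video, against a model's predictions for the same frames.
valence_true = np.array([0.10, 0.25, 0.40, 0.35, -0.10, -0.30])
valence_pred = np.array([0.05, 0.20, 0.50, 0.30, -0.05, -0.25])
print(f"valence CCC: {concordance_correlation_coefficient(valence_true, valence_pred):.3f}")
```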
Date Issued
2016-12-19
Date Acceptance
2016-06-26
Citation
2016 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2016, pp. 1487-1498
URI
http://hdl.handle.net/10044/1/50465
DOI
https://doi.org/10.1109/CVPRW.2016.186
ISSN
2160-7508
Publisher
IEEE
Start Page
1487
End Page
1498
Journal / Book Title
2016 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
Copyright Statement
© 2016 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
Sponsor
Commission of the European Communities
Identifier
http://gateway.webofknowledge.com/gateway/Gateway.cgi?GWVersion=2&SrcApp=PARTNER_APP&SrcAuth=LinksAMR&KeyUT=WOS:000391572100179&DestLinkType=FullRecord&DestApp=ALL_WOS&UsrCustomerID=1ba7043ffcc86c417c072aa74d649202
Grant Number
688520
Source
Computer Vision and Pattern Recognition Workshops (CVPRW)
Subjects
Science & Technology
Technology
Computer Science, Artificial Intelligence
Computer Science
EXPRESSION RECOGNITION
LEARNING ALGORITHM
FACE-RECOGNITION
IMAGE SEQUENCES
NEURAL-NETWORKS
CHALLENGE
MACHINES
EMOTION
Publication Status
Published
Start Date
2016-06-26
Finish Date
2016-07-01
Coverage Spatial
Las Vegas, NV