High-quality facial geometry from sparse heterogeneous cameras under active illumination
File(s)
NeuralGeometry-cvmp24-lowres.pdf (2.35 MB)
Accepted version
Author(s)
Bridgeman, Lewis
Rainer, Gilles
Ghosh, Abhijeet
Type
Conference Paper
Abstract
High-resolution facial geometry is essential for realistic digital avatars. Traditional reconstruction methods, such as multi-view stereo, often struggle with materials like skin, which exhibit complex light reflection, absorption, and scattering properties. Neural reconstruction methods have shown greater robustness to these view-dependent effects. However, positional-encoding-based implementations are typically slow, while faster hash-encoded methods may falter under sparse camera views. We present a geometry reconstruction method tailored for an active-illumination facial capture setup featuring sparse cameras with varying characteristics. Our technique builds upon hash-encoded neural surface reconstruction, which we enhance with additional active-illumination-based supervision and loss functions, allowing us to maintain high reconstruction speed and geometric fidelity even with reduced camera coverage. We validate our approach through qualitative evaluations across diverse subjects, and quantitative evaluation using a synthetic dataset rendered with a virtual reproduction of our capture setup. Our results demonstrate that our method significantly outperforms previous neural reconstruction techniques on datasets with sparse camera configurations.
Date Issued
2024-11-18
Date Acceptance
2024-09-27
Citation
CVMP '24: Proceedings of the 21st ACM SIGGRAPH Conference on Visual Media Production, 2024, pp. 1-10
ISBN
9798400712814
Publisher
ACM
Start Page
1
End Page
10
Journal / Book Title
CVMP '24: Proceedings of the 21st ACM SIGGRAPH Conference on Visual Media Production
Copyright Statement
© 2024 ACM. This is the author’s accepted manuscript made available under a CC-BY licence in accordance with Imperial’s Research Publications Open Access policy (www.imperial.ac.uk/oa-policy)
License URL
Identifier
https://dl.acm.org/doi/10.1145/3697294.3697296
Source
ACM SIGGRAPH Conference on Visual Media Production (CVMP) 2024
Publication Status
Published
Start Date
2024-11-18
Finish Date
2024-11-19
Coverage Spatial
London, UK