S3-Face: SSS-compliant facial reflectance estimation via diffusion priors
File(s)
CVPR2025_XingyuRen.pdf (7.45 MB)
Accepted version
Author(s)
Ren, Xingyu
Deng, Jiankang
Cheng, Yuhao
Zhu, Wenhan
Yan, Yichao
et al.
Type
Conference Paper
Abstract
Recent 3D face reconstruction methods have made remarkable advancements, yet achieving high-quality facial reflectance from monocular input remains challenging. Existing methods rely on light-stage-captured data to learn facial reflectance models. However, the limited subject diversity in these datasets poses challenges for generalization and broad applicability. This motivates us to explore whether the extensive priors captured in recent generative diffusion models (e.g., Stable Diffusion) can enable more generalizable facial reflectance estimation, as these models have been pre-trained on large-scale internet image collections containing rich visual patterns. In this paper, we introduce the use of Stable Diffusion as a prior for facial reflectance estimation, achieving robust results with minimal captured data for fine-tuning. We present S3-Face, a comprehensive framework capable of producing SSS-compliant skin reflectance from in-the-wild images. Our method adopts a two-stage training approach: in the first stage, DSN-Net is trained to predict diffuse albedo, specular albedo, and normal maps from in-the-wild images using a novel joint reflectance attention module. In the second stage, HM-Net is trained to generate hemoglobin and melanin maps based on the diffuse albedo predicted in the first stage, yielding SSS-compliant and detailed reflectance maps. Extensive experiments demonstrate that our method achieves strong generalization and produces high-fidelity, SSS-compliant facial reflectance maps.
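The two-stage pipeline the abstract describes can be sketched as follows. The network names (DSN-Net, HM-Net) and the map types come from the abstract, but the function bodies below are toy placeholders for illustration only; the actual models are fine-tuned diffusion networks whose architecture is not specified here.

```python
import numpy as np

def dsn_net(image: np.ndarray):
    """Stage 1 (DSN-Net): predict diffuse albedo, specular albedo, and
    normal maps from an in-the-wild face image of shape (H, W, 3).
    Placeholder logic only; the paper uses a diffusion-based network."""
    h, w, _ = image.shape
    diffuse = np.clip(image, 0.0, 1.0)              # stand-in prediction
    specular = np.full((h, w, 1), 0.04)             # stand-in specular map
    normals = np.zeros((h, w, 3))
    normals[..., 2] = 1.0                           # stand-in flat normals
    return diffuse, specular, normals

def hm_net(diffuse_albedo: np.ndarray):
    """Stage 2 (HM-Net): derive hemoglobin and melanin maps from the
    stage-1 diffuse albedo, yielding an SSS-compliant decomposition.
    Placeholder logic only."""
    hemoglobin = diffuse_albedo[..., :1]            # stand-in map
    melanin = diffuse_albedo[..., 1:2]              # stand-in map
    return hemoglobin, melanin

def estimate_reflectance(image: np.ndarray) -> dict:
    """Chain the two stages: image -> (diffuse, specular, normal)
    -> (hemoglobin, melanin), as described in the abstract."""
    diffuse, specular, normals = dsn_net(image)
    hemoglobin, melanin = hm_net(diffuse)
    return {"diffuse": diffuse, "specular": specular, "normal": normals,
            "hemoglobin": hemoglobin, "melanin": melanin}

maps = estimate_reflectance(np.random.rand(64, 64, 3))
print(sorted(maps))  # ['diffuse', 'hemoglobin', 'melanin', 'normal', 'specular']
```

The key structural point is the data dependency: HM-Net consumes only the diffuse albedo produced by DSN-Net, so the two stages can be trained sequentially.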
Date Issued
2025-08-13
Date Acceptance
2025-06-01
Citation
2025 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2025, pp.16051-16060
URI
https://hdl.handle.net/10044/1/125479
URL
https://doi.org/10.1109/cvpr52734.2025.01496
DOI
10.1109/cvpr52734.2025.01496
ISSN
1063-6919
Publisher
IEEE
Start Page
16051
End Page
16060
Journal / Book Title
2025 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
Copyright Statement
Copyright © 2025, IEEE. This is the author’s accepted manuscript made available under a CC-BY licence in accordance with Imperial’s Research Publications Open Access policy (www.imperial.ac.uk/oa-policy)
License URL
https://creativecommons.org/licenses/by/4.0/
Source
2025 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
Publication Status
Published
Start Date
2025-06-10
Finish Date
2025-06-17
Coverage Spatial
Nashville, TN, USA