LitNeRF: Intrinsic Radiance Decomposition for High-Quality View Synthesis and Relighting of Faces

Kripasindhu Sarkar1     Marcel C. Bühler2,1     Gengyan Li2,1     Daoye Wang1     Delio Vicini1     Jérémy Riviere1     Yinda Zhang1     Sergio Orts-Escolano1     Paulo Gotardo1     Thabo Beeler1     Abhimitra Meka1    
1Google Inc., 2ETH Zurich
ACM SIGGRAPH Asia Conference Papers, 2023
Teaser

We present a novel and physically-plausible intrinsic decomposition and reconstruction of a radiance field from small datasets with only 15 viewpoints and 15 OLAT lighting conditions, allowing for photorealistic volumetric rendering of human portraits under both novel views and novel lighting, including near point sources and distant, environment illumination.

Abstract

High-fidelity, photorealistic 3D capture of a human face is a long-standing problem in computer graphics -- the complex material of skin, the intricate geometry of hair, and fine-scale textural details make it challenging. Traditional techniques rely on very large and expensive capture rigs to reconstruct explicit mesh geometry and appearance maps, and require complex differentiable path-tracing to achieve photorealistic results. More recent volumetric methods (e.g., NeRFs) have enabled view synthesis and sometimes relighting by learning an implicit representation of the density and reflectance basis, but suffer from artifacts and blurriness due to the inherent ambiguities in volumetric modeling. These problems are further exacerbated when capturing with few cameras and light sources. We present a novel technique for high-quality capture of a human face for 3D view synthesis and relighting using a sparse, compact capture rig consisting of 15 cameras and 15 lights. Our method combines a volumetric representation of the face reflectance with traditional multi-view stereo-based geometry reconstruction. The proxy geometry allows us to anchor the 3D density field to prevent artifacts and to guide the disentanglement of the intrinsic radiance components of the face appearance, such as diffuse and specular reflectance and direct light transport (shadowing) fields. Our hybrid representation significantly improves state-of-the-art quality for arbitrarily dense renders of a face from any desired camera viewpoint, as well as under environmental, directional, and near-field lighting.
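To make the decomposition concrete, here is a minimal, illustrative sketch (not the authors' implementation) of how per-light intrinsic components can be composited along a ray and how per-light (OLAT) renders can be linearly combined into a new lighting condition. All function names, array shapes, and the environment-map weighting are hypothetical assumptions for illustration only.

```python
# Minimal sketch of intrinsic-component compositing and OLAT relighting.
# Hypothetical names and shapes; not the LitNeRF code.
import numpy as np

def composite_ray(density, diffuse, specular, visibility, deltas):
    """Standard emission-absorption volume rendering of one ray.

    density:    (S,)   per-sample volume density (sigma)
    diffuse:    (S, 3) per-sample diffuse radiance under one light
    specular:   (S, 3) per-sample specular radiance under one light
    visibility: (S,)   per-sample direct-light-transport (shadowing) term
    deltas:     (S,)   distances between consecutive samples
    """
    alpha = 1.0 - np.exp(-density * deltas)                        # per-sample opacity
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha]))[:-1]  # transmittance to each sample
    weights = trans * alpha                                        # compositing weights
    radiance = (diffuse + specular) * visibility[:, None]          # shadowed radiance per sample
    return (weights[:, None] * radiance).sum(axis=0)               # (3,) pixel color

def relight_olat(olat_renders, light_weights):
    """Combine one-light-at-a-time renders into a new lighting condition.

    olat_renders:  (L, H, W, 3) one render per light source
    light_weights: (L, 3)       per-light RGB intensity, e.g. sampled from an environment map
    """
    return np.einsum('lhwc,lc->hwc', olat_renders, light_weights)
```

Because light transport is linear, a distant environment illumination can be approximated by such a weighted sum of the OLAT renders, while near-field point lights would instead be rendered directly from the decomposed fields.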

Video

Downloads & resources

Text Reference

Kripasindhu Sarkar, Marcel C. Bühler, Gengyan Li, Daoye Wang, Delio Vicini, Jérémy Riviere, Yinda Zhang, Sergio Orts-Escolano, Paulo Gotardo, Thabo Beeler, Abhimitra Meka. LitNeRF: Intrinsic Radiance Decomposition for High-Quality View Synthesis and Relighting of Faces. ACM SIGGRAPH Asia Conference Papers, December 2023.

BibTeX Reference

@inproceedings{sarkar2023litnerf,
  title     = {LitNeRF: Intrinsic Radiance Decomposition for High-Quality View Synthesis and Relighting of Faces},
  author    = {Kripasindhu Sarkar and Marcel C. Buehler and Gengyan Li and Daoye Wang and Delio Vicini and Jérémy Riviere and Yinda Zhang and Sergio Orts-Escolano and Paulo Gotardo and Thabo Beeler and Abhimitra Meka},
  year      = 2023,
  booktitle = {ACM SIGGRAPH Asia 2023 Conference Papers},
  doi       = {10.1145/3610548.3618210},
  isbn      = {979-8-4007-0315-7/23/12},
  url       = {https://doi.org/10.1145/3610548.3618210}
}