Multi-View Photometric Stereo Revisited


ETH Zurich
Google Research
KU Leuven

IEEE Winter Conference on Applications of Computer Vision (WACV), 2023
Waikoloa - Hawaii

Paper
Supplementary
Video



Abstract

Multi-view photometric stereo (MVPS) is a preferred method for detailed and precise 3D acquisition of an object from images. Although popular methods for MVPS can provide outstanding results, they are often complex to execute and limited to objects with isotropic material. To address these limitations, we present a simple, practical approach to MVPS that works well for isotropic as well as other material types such as anisotropic and glossy. Our approach exploits the benefit of uncertainty modeling in a deep neural network for a reliable fusion of photometric stereo (PS) and multi-view stereo (MVS) network predictions. Yet, in contrast to the recently proposed state of the art, we introduce a neural volume rendering methodology for a trustworthy fusion of MVS and PS measurements. Introducing neural volume rendering helps in reliably modeling objects of diverse material types, where existing MVS methods, PS methods, or both may fail. Furthermore, it allows us to work with neural 3D shape representations, which have recently shown outstanding results on many geometric processing tasks. Our new loss function fits the zero level set of the implicit neural function using the most certain MVS and PS network predictions, coupled with a weighted neural volume rendering cost. The proposed approach shows state-of-the-art results when tested extensively on several benchmark datasets.
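The loss described in the abstract can be sketched in a heavily simplified NumPy form: the signed distance function (SDF) is pulled to zero at points the MVS network is confident about, implicit-surface normals are aligned with confident PS normals, and a weighted rendering cost is added. All function names, thresholds, and weights below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def fused_surface_loss(sdf_vals, ps_normals, pred_normals,
                       ps_conf, mvs_conf, render_loss,
                       conf_thresh=0.5, w_render=1.0):
    """Sketch of an uncertainty-gated MVPS fusion loss.

    sdf_vals:     (N,) predicted SDF values at MVS 3D points
    ps_normals:   (N, 3) unit normals from the PS network
    pred_normals: (N, 3) unit normals of the implicit surface
    ps_conf, mvs_conf: (N,) per-point confidence scores
    render_loss:  scalar neural volume rendering cost
    """
    # Zero-level-set term: SDF should vanish on points MVS deems reliable.
    mvs_mask = mvs_conf > conf_thresh
    loss_mvs = np.mean(np.abs(sdf_vals[mvs_mask])) if mvs_mask.any() else 0.0

    # Normal term: cosine distance to confident PS normals.
    ps_mask = ps_conf > conf_thresh
    cos = np.sum(ps_normals * pred_normals, axis=-1)
    loss_ps = np.mean(1.0 - cos[ps_mask]) if ps_mask.any() else 0.0

    # Total: certain MVS + certain PS terms plus weighted rendering cost.
    return loss_mvs + loss_ps + w_render * render_loss
```

In an actual pipeline these terms would be computed on network tensors with gradients; the NumPy version only illustrates how confidence gating selects which MVS and PS predictions constrain the implicit surface.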



Multi-View Photometric Stereo Setup


Benefit of our Approach



Reconstruction Results Comparison




Video Presentation




Authors



Berk Kaya

Suryansh Kumar

Carlos Oliveira

Vittorio Ferrari

Luc Van Gool


Citation


@inproceedings{kaya2023multi,
  title={Multi-View Photometric Stereo Revisited},
  author={Kaya, Berk and Kumar, Suryansh and Oliveira, Carlos and Ferrari, Vittorio and Van Gool, Luc},
  booktitle={Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision},
  pages={3126--3135},
  year={2023}
}


Acknowledgements



This work was funded by a Focused Research Award from Google (CVL, ETH 2019-HE-318, 2019-HE-323, 2020-FS-351, 2020-HS-411). Suryansh Kumar's project is supported by the "ETH Zurich Foundation and Google" initiative to bring together the best of academic and industrial research.