
Model-based and Learned, Inverse Rendering for 3D Scene Reconstruction and View Synthesis

Recent advances in inverse rendering have shown promising results for 3D representation, novel view synthesis, scene parameter reconstruction, and direct graphical asset generation and editing.
Inverse rendering attempts to recover the scene parameters of interest from a set of camera observations by optimizing the photometric error between the rendering model's output and the true observations, with appropriate regularization.
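The optimization described above can be sketched with a deliberately minimal example: a toy one-parameter "renderer" whose photometric error is minimized by gradient descent. All names (`render`, `albedo`, `lighting`) are illustrative assumptions, not the dissertation's actual pipeline, which operates on full physically-based or neural renderers.

```python
import numpy as np

# Toy "renderer": per-pixel brightness is a linear function of a single
# scene parameter (here, a scalar albedo). Purely illustrative.
def render(albedo, lighting):
    return albedo * lighting

# Ground-truth observation produced by the unknown albedo we want to recover.
true_albedo = 0.7
lighting = np.array([0.2, 0.5, 1.0])      # fixed, known per-pixel illumination
observation = render(true_albedo, lighting)

# Gradient descent on the photometric error (mean squared difference between
# the rendered image and the observation) -- the core loop that
# differentiable-rendering frameworks automate for complex scene parameters.
albedo = 0.1                               # initial guess
lr = 0.5
for _ in range(200):
    residual = render(albedo, lighting) - observation
    grad = 2.0 * np.mean(residual * lighting)   # analytic gradient of the MSE
    albedo -= lr * grad

print(round(albedo, 4))  # converges to the true albedo, 0.7
```

In practice the gradient is not derived by hand: the whole rendering pipeline is kept differentiable so that automatic differentiation can propagate the photometric error back to every scene parameter at once.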

The objective of this dissertation is to study inverse problems from several perspectives: (1) Software Framework: a general differentiable pipeline for solving physically-based or neural rendering problems, (2) Closed-Form: efficient, closed-form solutions under specific conditions in inverse problems, (3) Representation Structure: hybrid 3D scene representations for efficient training and adaptive resource allocation, and (4) Robustness: enhanced robustness and accuracy through controlled lighting.

We aim to solve the following tasks:

1. How can we render and optimize scene parameters such as geometry, texture, and lighting across multiple viewpoints, using physically-based or neural 3D representations? To this end, we present a comprehensive software toolkit supporting diverse ray-based sampling and tracing schemes that enable the optimization of a wide range of target scene parameters. Our approach emphasizes maintaining differentiability throughout the entire pipeline to ensure efficient and effective optimization of the desired parameters.
2. Is there a 3D representation with fixed computational complexity, or a closed-form solution for forward rendering, when the target has specific geometry or the lighting is simplified, so that the computational burden is relaxed or reduced? We consider multi-bounce reflection inside a planar transparent medium, and design a differentiable polarization simulation engine that jointly optimizes the medium's parameters as well as the polarization states of the reflected and transmitted light.
3. How can we use our hybrid, learned 3D scene representation to solve inverse rendering problems for scene reconstruction and novel view synthesis, with particular interest in representations relevant to several scientific fields, including density fields, radiance fields, and signed distance functions?
4. Unknown lighting conditions significantly influence object appearance. To enhance the robustness of inverse rendering, we adopt invisible co-located lighting to control illumination and suppress unknown ambient light by jointly optimizing separate RGB and near-infrared channels, enabling accurate reconstruction of all scene parameters across a wider range of application environments.
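The closed-form setting of task 2 can be illustrated in a strongly simplified form: for incoherent, unpolarized light hitting a lossless planar slab whose two interfaces each reflect a fraction R, summing the infinite series of internal bounces gives the classical closed-form total reflectance 2R/(1+R). This scalar example is an assumption-laden sketch, not the dissertation's polarization engine, which additionally tracks polarization states and optimizes medium parameters.

```python
def slab_reflectance_series(R, n_bounces):
    # Incoherent sum of multi-bounce contributions: the first surface
    # reflection, plus light that enters (transmittance T = 1 - R), makes
    # k internal round trips, and exits back through the front surface.
    T = 1.0 - R
    total = R
    for k in range(n_bounces):
        total += T * T * R * (R * R) ** k
    return total

def slab_reflectance_closed_form(R):
    # Geometric series summed in closed form:
    # R + T^2 R / (1 - R^2) = 2R / (1 + R)
    return 2.0 * R / (1.0 + R)

R = 0.04  # typical normal-incidence Fresnel reflectance of glass
print(slab_reflectance_series(R, 50))
print(slab_reflectance_closed_form(R))
```

Because each extra bounce is attenuated by R^2, the series converges after a handful of terms, which is exactly why such geometries admit fixed-cost or closed-form forward models instead of unbounded path tracing.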

We also demonstrate visually and quantitatively improved results for the aforementioned tasks, and compare against other state-of-the-art methods to show superior performance on representation and reconstruction tasks.

Identifier: oai:union.ndltd.org:kaust.edu.sa/oai:repository.kaust.edu.sa:10754/693530
Date: 24 July 2023
Creators: Li, Rui
Contributors: Heidrich, Wolfgang; Computer, Electrical and Mathematical Science and Engineering (CEMSE) Division; Wonka, Peter; Goldlücke, Bastian; Park, Shinkyu
Source Sets: King Abdullah University of Science and Technology
Language: English
Detected Language: English
Type: Dissertation
Relation: N/A
