1

Model-based and Learned, Inverse Rendering for 3D Scene Reconstruction and View Synthesis

Li, Rui 24 July 2023 (has links)
Recent advancements in inverse rendering have exhibited promising results for 3D representation, novel view synthesis, scene parameter reconstruction, and direct graphical asset generation and editing. Inverse rendering attempts to recover the scene parameters of interest from a set of camera observations by optimizing the photometric error between the rendering model's output and the true observations, subject to appropriate regularization. The objective of this dissertation is to study inverse problems from several perspectives: (1) software framework: a general differentiable pipeline for solving physically based or neural rendering problems; (2) closed form: efficient, closed-form solutions under specific conditions of the inverse problem; (3) representation structure: hybrid 3D scene representations for efficient training and adaptive resource allocation; and (4) robustness: improved robustness and accuracy through controlled lighting. We aim to address the following tasks. 1. How to render and optimize scene parameters such as geometry, texture, and lighting across multiple viewpoints, using physically based or neural 3D representations. To this end, we present a comprehensive software toolkit that supports diverse ray-based sampling and tracing schemes, enabling the optimization of a wide range of target scene parameters. Our approach emphasizes maintaining differentiability throughout the entire pipeline to ensure efficient and effective optimization of the desired parameters. 2. Whether there exists a 3D representation with fixed computational complexity, or a closed-form forward-rendering solution, when the target has specific geometry or simplified lighting, so that the computational burden can be relaxed. We consider multi-bounce reflection inside a planar transparent medium and design a differentiable polarization simulation engine that jointly optimizes the medium's parameters as well as the polarization states of the reflected and transmitted light. 3. How our hybrid, learned 3D scene representation can be used to solve inverse rendering problems for scene reconstruction and novel view synthesis, with particular interest in several scientific quantities, including density, radiance fields, and signed distance functions. 4. How to handle unknown lighting, which significantly influences object appearance: to enhance the robustness of inverse rendering, we adopt invisible co-located lighting to control the illumination and suppress unknown lighting by jointly optimizing separate RGB and near-infrared channels, enabling accurate reconstruction of all scene parameters in a wider range of application environments. We demonstrate visually and quantitatively improved results for the aforementioned tasks and compare against other state-of-the-art methods, showing superior performance on representation and reconstruction tasks.
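To make the photometric-error formulation above concrete, the following is a minimal, illustrative sketch of a differentiable inverse-rendering loop. The `render(params, camera)` function, the parameter dictionary, and the quadratic regularizer are placeholders assumed for the example; this is not the dissertation's toolkit.

```python
import torch

def reconstruct(render, cameras, observations, init_params,
                reg_weight=1e-3, steps=2000, lr=1e-2):
    """Minimal inverse-rendering loop: fit scene parameters to images.

    `render(params, camera)` is assumed to be a differentiable forward model
    returning an image tensor; `observations[i]` is the ground-truth image
    seen from `cameras[i]`. Both are hypothetical placeholders.
    """
    params = {k: v.clone().requires_grad_(True) for k, v in init_params.items()}
    opt = torch.optim.Adam(params.values(), lr=lr)

    for _ in range(steps):
        opt.zero_grad()
        loss = 0.0
        for cam, obs in zip(cameras, observations):
            pred = render(params, cam)                   # differentiable forward rendering
            loss = loss + torch.mean((pred - obs) ** 2)  # photometric (L2) error
        # simple magnitude regularizer standing in for "appropriate regularization"
        loss = loss + reg_weight * sum(p.pow(2).mean() for p in params.values())
        loss.backward()
        opt.step()
    return {k: v.detach() for k, v in params.items()}
```

In practice the loss, regularizers, and sampling scheme depend on the chosen representation (density, radiance field, signed distance function, and so on), but every variant relies on the end-to-end differentiability emphasized above.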
2

Fast Extraction of BRDFs and Material Maps from Images

Jaroszkiewicz, Rafal January 2003 (has links)
The bidirectional reflectance distribution function has a four-dimensional parameter space, and such high dimensionality makes it impractical to use directly in hardware rendering. When a BRDF has no analytical representation, common solutions include expressing it as a sum of basis functions or factorizing it into several functions of smaller dimension. This thesis describes factorization extensions that significantly improve factor computation speed and eliminate drawbacks of previous techniques that overemphasize low sample values. The improved algorithm is used to calculate factorizations and material maps from colored images. The technique presented in this thesis allows interactive definition of arbitrary materials, and although the method is based on physical parameters, it can also be used to achieve a variety of non-photorealistic effects.
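As a rough illustration of the factorization idea only (not the specific algorithm developed in the thesis), a tabulated BRDF sampled over discretized incoming and outgoing directions can be arranged as a matrix and split into low-rank factors, each depending on a lower-dimensional slice of the domain. A truncated SVD gives one such decomposition; the direction parameterization below is an assumption for the example.

```python
import numpy as np

def factorize_brdf(samples, rank=4):
    """Low-rank factorization of a tabulated BRDF.

    `samples[i, j]` holds BRDF values for the i-th discretized incoming
    direction and j-th outgoing direction (an illustrative layout, not the
    thesis's parameterization). Returns factors F, G such that
    BRDF(w_i, w_o) ~= sum_k F[i, k] * G[j, k].
    """
    u, s, vt = np.linalg.svd(samples, full_matrices=False)
    f = u[:, :rank] * np.sqrt(s[:rank])        # factor over incoming directions
    g = vt[:rank, :].T * np.sqrt(s[:rank])     # factor over outgoing directions
    return f, g

# Toy usage: a separable "BRDF" table factorizes exactly at rank 1.
incoming = np.linspace(0.1, 1.0, 32)
outgoing = np.linspace(0.1, 1.0, 32)
table = np.outer(incoming, outgoing)
f, g = factorize_brdf(table, rank=1)
print(np.allclose(f @ g.T, table))  # True up to numerical precision
```

Each factor is a small 2D (or 1D) table that fits in texture memory, which is what makes factored BRDFs attractive for hardware rendering.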
3

Texture Synthesis and Photorealistic Re-rendering of Room Scene Images

Kyle J. Ziga (5930963) 03 January 2019 (has links)
In this thesis, we investigate methods for texture synthesis and texture re-rendering of indoor room scene images. The goal is to create a photorealistic redesign of interior spaces by replacing surface finishes with a new product based on a single room scene image. Specifically, we focus on automating this process to reduce manual input while enabling a high-quality and easy-to-use experience. The most common method of rendering textures into a scene is texture mapping, which maps pixels in a texture sample to vertices in an object model. Typically, a large texture sample is required to perform texture mapping properly. Given a small texture sample, texture synthesis creates a large texture that appears to have been made by the same underlying process. In the first part of this thesis, we present a method of texture synthesis that automatically determines a set of parameters to produce satisfactory results based on the texture type. The next challenge is to create a photorealistic re-rendering of the synthesized texture in the room scene image. 3D scene information such as geometry, lighting, and reflectance is crucial to making the re-rendered image realistic. These properties contribute to the image formation process and must be estimated to create a scene-consistent modification. Knowing these parameters allows effects like highlights, shadows, and inter-object reflections to be maintained during the re-rendering process. We detail methods for estimating these parameters from a single indoor image. Finally, we show a web-based implementation of these methods using the WebGL library ThreeJS.
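For orientation only, the synthesis step can be contrasted with a naive baseline: growing a large texture by copying random patches from the small sample. The sketch below is that baseline (no seam blending, neighborhood matching, or automatic parameter selection), not the method developed in the thesis; the patch size and layout are assumptions for the example.

```python
import numpy as np

def tile_synthesize(sample, out_h, out_w, patch=32, seed=None):
    """Naive texture synthesis: tile the output with random patches cropped
    from the small input sample. Illustrative baseline only."""
    rng = np.random.default_rng(seed)
    h, w = sample.shape[:2]
    out = np.zeros((out_h, out_w) + sample.shape[2:], dtype=sample.dtype)
    for y in range(0, out_h, patch):
        for x in range(0, out_w, patch):
            sy = rng.integers(0, h - patch + 1)
            sx = rng.integers(0, w - patch + 1)
            ph = min(patch, out_h - y)   # clip the last row/column of patches
            pw = min(patch, out_w - x)
            out[y:y + ph, x:x + pw] = sample[sy:sy + ph, sx:sx + pw]
    return out

# Usage: grow a 512x512 texture from a 128x128 RGB sample.
sample = np.random.rand(128, 128, 3).astype(np.float32)
big = tile_synthesize(sample, 512, 512, patch=32, seed=0)
```

Real synthesis methods, including the one described above, choose patches or pixels by matching neighborhoods and tune their parameters to the texture type, which is what removes the visible seams this baseline produces.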
4

Peinture de lumière incidente dans des scènes 3D

Rozon, Frédérik 08 1900 (has links)
Lighting design is usually done manually: artists must adjust the parameters of several light sources to obtain the desired result. This task is difficult because it is not intuitive. Several systems already exist that let a user paint directly on objects in a scene to position or modify light sources. Unfortunately, these systems have limitations, such as considering only local illumination or requiring a fixed camera, which restricts either their accuracy or their versatility. Global illumination is important because it adds a great deal of realism to a scene by capturing all the interreflections of light between surfaces; light sources can therefore influence surfaces that are not directly exposed to them. In this M.Sc. thesis, we address a sub-problem of lighting design: selecting light sources and manipulating their intensity. We present two systems for painting incident-light intentions directly on objects in a 3D scene in order to modify the surface illumination. From these paint strokes, the system automatically finds the light sources that should be modified and changes their intensity to achieve the desired result as closely as possible. The novelty lies in handling global illumination, transparent surfaces, and participating media, and in allowing the camera to move freely. We also present different strategies for selecting and modifying the light sources. The first system uses an environment map as an intermediate representation of the environment around the objects. The second system stores the environment information at every vertex of every object.
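The selection-and-intensity step described above can be illustrated with a small linear formulation (an assumption about the setup, not the thesis's exact algorithm): because outgoing radiance is linear in each light's intensity, precomputed per-light contribution images let the painted targets be matched by a non-negative least-squares solve over the intensities.

```python
import numpy as np
from scipy.optimize import nnls

def solve_light_intensities(contributions, painted, mask):
    """Find non-negative light intensities matching painted targets.

    contributions: (n_lights, n_pixels) array; row k is the rendered
        contribution of light k at unit intensity (assumed precomputed,
        e.g. with a global-illumination renderer).
    painted: (n_pixels,) target values from the user's paint strokes.
    mask: boolean (n_pixels,) selecting only the painted pixels.
    """
    A = contributions[:, mask].T           # pixels x lights
    b = painted[mask]
    intensities, _residual = nnls(A, b)    # least squares with intensities >= 0
    return intensities

# Toy usage with 3 lights and a 10x10 image flattened to 100 pixels.
rng = np.random.default_rng(0)
contrib = rng.random((3, 100))
true_intensities = np.array([0.5, 2.0, 0.0])
image = true_intensities @ contrib
mask = np.zeros(100, dtype=bool)
mask[:40] = True                           # the user painted 40 pixels
print(solve_light_intensities(contrib, image, mask))  # ~ [0.5, 2.0, 0.0]
```

Selecting which lights to modify then amounts to keeping the lights whose solved intensities change significantly, one of the strategies a system like the ones above must choose.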
