
Deep Synthesis of Distortion-free 3D Omnidirectional Imagery from 2D Images

Christopher K. May, 22 April 2024
Omnidirectional images are a way to visualize an environment in all directions. They have a spherical topology and require careful attention when represented by a computer. Namely, mapping the sphere to a plane introduces stretching of the spherical image content and requires at least one seam in the image to unwrap the sphere. Generative neural networks have shown an impressive ability to synthesize images, but generating spherical images remains challenging. Without specific handling of the spherical topology, the generated images often exhibit distorted content and discontinuities across the seams. We describe strategies for mitigating such distortions during image generation, as well as for ensuring that the image remains continuous across all boundaries. Our solutions can be applied to a variety of spherical image representations, including cube-maps and equirectangular projections.

A closely related problem in generative networks is 3D-aware scene generation, in which the task is to create an environment whose viewpoint can be directly controlled. Many NeRF-based solutions have been proposed, but they generally focus on generating single objects or faces. Full 3D environments are more difficult to synthesize and are less studied. We approach this problem by leveraging omnidirectional image synthesis, using the initial features of the network as a transformable foundation upon which to build the scene. By translating within the initial feature space, we correspondingly translate in the output omnidirectional image, preserving the scene's characteristics. We additionally develop a regularizing loss based on epipolar geometry to encourage geometric consistency between viewpoints. We demonstrate the effectiveness of our method with a structure-from-motion-based reconstruction metric, along with comparisons to related works.
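To make the sphere-to-plane stretching and seam issues concrete, here is a minimal sketch (not taken from the thesis) of the standard equirectangular parameterization; the function name `equirect_to_sphere`, the pixel-center convention, and the axis layout are illustrative assumptions.

```python
import numpy as np

def equirect_to_sphere(u, v, width, height):
    """Map pixel (u, v) of a width x height equirectangular image to a
    unit direction on the sphere (pixel-center convention, y axis up)."""
    lon = (u + 0.5) / width * 2.0 * np.pi - np.pi    # longitude in [-pi, pi)
    lat = np.pi / 2.0 - (v + 0.5) / height * np.pi   # latitude in (-pi/2, pi/2)
    return np.array([np.cos(lat) * np.cos(lon),      # x
                     np.sin(lat),                    # y
                     np.cos(lat) * np.sin(lon)])     # z

# Every image row has `width` pixels, but on the sphere a row at latitude
# `lat` spans a circle whose circumference is proportional to cos(lat), so
# content near the poles is stretched horizontally by roughly 1 / cos(lat).
# Columns u = 0 and u = width - 1 are adjacent on the sphere: that is the
# seam across which generated images must remain continuous.
```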
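The abstract names a regularizing loss based on epipolar geometry without giving its form. Below is a hedged sketch of one conventional formulation, the algebraic epipolar residual x2ᵀEx1 with essential matrix E = [t]×R, assuming a known relative pose and matched points in normalized camera coordinates; `epipolar_loss`, `skew`, and all shapes are assumptions, not the thesis's actual loss.

```python
import torch

def skew(t):
    """Skew-symmetric cross-product matrix [t]_x, so skew(t) @ v == t x v."""
    tx, ty, tz = (float(c) for c in t)
    return torch.tensor([[0.0, -tz,  ty],
                         [ tz, 0.0, -tx],
                         [-ty,  tx, 0.0]])

def epipolar_loss(x1, x2, R, t):
    """Mean algebraic epipolar residual |x2^T E x1| over N matches.

    x1, x2: (N, 3) matched points in normalized camera coordinates
            (homogeneous rays) in views 1 and 2.
    R, t:   relative rotation (3, 3) and translation (3,) from view 1 to 2.
    """
    E = skew(t) @ R                              # essential matrix E = [t]_x R
    residual = torch.einsum('ni,ij,nj->n', x2, E, x1)
    return residual.abs().mean()

# Quick check: with identity rotation and any translation, perfectly matched
# rays give x1 . (t x x1) = 0, so the loss vanishes.
x1 = torch.randn(8, 3)
loss = epipolar_loss(x1, x1.clone(), torch.eye(3), torch.tensor([1.0, 0.0, 0.0]))
```

Minimizing such a residual over rays rendered from two viewpoints penalizes geometry that is inconsistent with the relative pose, which matches the stated goal of encouraging geometric consistency between viewpoints.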
