Creating 360-degree 3D content for virtual reality environments has gained traction in recent years. However, producing such content is challenging because it typically requires a multi-camera rig or a collection of images captured from different perspectives. This paper proposes 3D Pano Inpainting, a pipeline that transforms a single equirectangular panoramic RGBD image into a complete 360° 3D virtual reality scene represented as a textured mesh. Our methodology is as follows: we estimate a consistent depth map for the input panorama; we use a pre-built framework to convert the image and its depth map into a textured mesh with inpainted background edges; and we wrap the resulting mesh around the viewer’s perspective for better immersion in VR headsets. Additionally, we evaluate our method’s effectiveness at producing consistent novel views by computing the peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), and learned perceptual image patch similarity (LPIPS) between renderings produced from the ground-truth image and depth map and those produced by our model. Finally, we compare our model’s scores with those of a non-inpainted textured mesh.
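The "wrapping" step corresponds to the standard back-projection of an equirectangular depth map onto a sphere centered on the viewer. Below is a minimal sketch of that mapping, assuming the depth map is an HxW NumPy array of per-pixel distances; the function name and conventions are ours, not taken from the thesis.

```python
import numpy as np

def equirect_to_sphere(depth: np.ndarray) -> np.ndarray:
    """Back-project an HxW equirectangular depth map to 3D points
    around the viewer (a standard spherical mapping; illustrative only)."""
    h, w = depth.shape
    # Longitude spans [-pi, pi) across image columns,
    # latitude spans [pi/2, -pi/2] down image rows.
    lon = (np.arange(w) + 0.5) / w * 2 * np.pi - np.pi
    lat = np.pi / 2 - (np.arange(h) + 0.5) / h * np.pi
    lon, lat = np.meshgrid(lon, lat)
    x = depth * np.cos(lat) * np.sin(lon)
    y = depth * np.sin(lat)
    z = depth * np.cos(lat) * np.cos(lon)
    return np.stack([x, y, z], axis=-1)  # HxWx3 vertex positions
```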
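For the evaluation, the three metrics named in the abstract have common off-the-shelf implementations. Here is a minimal sketch of the comparison, assuming both renders are loaded as HxWx3 uint8 arrays; the helper name `evaluate_views` and the AlexNet LPIPS backbone are our assumptions, not details from the thesis.

```python
import numpy as np
import torch
import lpips  # pip install lpips
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_views(render_gt: np.ndarray, render_ours: np.ndarray) -> dict:
    """Compare a render from the ground-truth RGBD panorama against one
    from the inpainted mesh using PSNR, SSIM, and LPIPS."""
    psnr = peak_signal_noise_ratio(render_gt, render_ours, data_range=255)
    # channel_axis requires scikit-image >= 0.19 (older versions use multichannel=True).
    ssim = structural_similarity(render_gt, render_ours,
                                 channel_axis=-1, data_range=255)

    # LPIPS expects NCHW float tensors scaled to [-1, 1].
    to_tensor = lambda im: (torch.from_numpy(im)
                            .permute(2, 0, 1)[None].float() / 127.5 - 1.0)
    loss_fn = lpips.LPIPS(net='alex')  # AlexNet backbone, a common default
    with torch.no_grad():
        lp = loss_fn(to_tensor(render_gt), to_tensor(render_ours)).item()

    return {"PSNR": psnr, "SSIM": ssim, "LPIPS": lp}
```

Higher PSNR/SSIM and lower LPIPS indicate closer agreement with the ground-truth render; the same function can score the non-inpainted baseline mesh for comparison.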
Identifier | oai:union.ndltd.org:CALPOLY/oai:digitalcommons.calpoly.edu:theses-4438 |
Date | 01 March 2024 |
Creators | Asija, Shivam |
Publisher | DigitalCommons@CalPoly |
Source Sets | California Polytechnic State University |
Detected Language | English |
Type | text |
Format | application/pdf |
Source | Master's Theses |