  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

View Rendering for 3DTV

Muddala, Suryanarayana Murthy January 2013 (has links)
Advancements in three-dimensional (3D) technologies are rapidly increasing. Three-Dimensional Television (3DTV) aims at creating a 3D experience for the home user. Moreover, multiview autostereoscopic displays provide a depth impression without requiring special glasses and can be viewed from multiple positions. One of the key issues in the 3DTV processing chain is content generation from the available input data formats: video plus depth, and multiview video plus depth. These formats make it possible to produce virtual views using depth-image-based rendering. Although depth-image-based rendering is an efficient method, it is known for the appearance of artifacts such as cracks, corona and empty regions in rendered images. While several approaches have tackled the problem, reducing the artifacts in rendered images is still an active field of research.

Two problems are addressed in this thesis in order to achieve a better 3D video quality in the context of view rendering: firstly, how to improve the quality of rendered views using a direct approach (i.e. without applying specific processing steps for each artifact), and secondly, how to fill large missing areas in a visually plausible manner using neighbouring details from around the missing regions. This thesis introduces a new depth-image-based rendering method and a new depth-based texture inpainting method to address these two problems. The first problem is solved by an edge-aided rendering method that relies on the principles of forward warping and one-dimensional interpolation. The second problem is addressed by a depth-included curvature inpainting method that uses texture details from the appropriate depth level around disocclusions.

The proposed edge-aided rendering and depth-included curvature inpainting methods are evaluated and compared with state-of-the-art methods. The results show an increase in objective quality and a visual gain over the reference methods. The quality gain is encouraging, as the edge-aided rendering method omits the specific processing steps for removing rendering artifacts. Moreover, the results show that large disocclusions can be effectively filled using the depth-included curvature inpainting approach. Overall, the proposed approaches improve content generation for 3DTV and, additionally, for free viewpoint television.
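The forward-warping core that such rendering methods build on can be sketched in a few lines. The sketch below assumes a rectified, horizontally shifted camera pair; the parameter names (baseline, focal) and the z-buffer policy are illustrative assumptions, not taken from the thesis. The rounding to integer target columns is precisely what produces the cracks and disocclusions the thesis addresses.

```python
# A minimal sketch of depth-image-based rendering by forward warping,
# assuming a rectified setup with a purely horizontal camera shift.
import numpy as np

def forward_warp(texture, depth, baseline, focal):
    """texture: (H, W, 3) image; depth: (H, W) metric depths.
    Returns the warped virtual view and a mask of unfilled (hole) pixels."""
    h, w = depth.shape
    virtual = np.zeros_like(texture)
    z_buffer = np.full((h, w), np.inf)
    hole_mask = np.ones((h, w), dtype=bool)

    disparity = focal * baseline / np.maximum(depth, 1e-6)
    for y in range(h):
        for x in range(w):
            xv = int(round(x - disparity[y, x]))  # shift along the baseline
            if 0 <= xv < w and depth[y, x] < z_buffer[y, xv]:
                z_buffer[y, xv] = depth[y, x]     # keep the closest surface
                virtual[y, xv] = texture[y, x]
                hole_mask[y, xv] = False
    return virtual, hole_mask
```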
2

Digital Watermarking for Depth-Image-Based Rendering 3D Images and Its Application to Quality Evaluation

Chen, Lei 10 October 2018 (has links)
Due to the rapid development of the 3D display market, protecting and authenticating the intellectual property rights of 3D multimedia has become an essential concern. As a consequence, digital watermarking for 3D images and video is attracting considerable attention. The depth-image-based rendering (DIBR) technique plays a critical role in 3D content representation because of its numerous advantages. A good digital watermarking algorithm should be robust to various possible attacks, including geometric distortions and compression. Unlike ordinary 2D digital watermarking, there are more specific requirements for 3D watermarking, especially for DIBR 3D image watermarking: not only the center view but also the virtual left and right views can be illegally distributed. Therefore, the embedded watermark information should be accurately extractable from each of these three views individually for content authentication, even under attacks.

In this thesis, we focus on digital watermarking and watermarking-based quality evaluation for DIBR 3D images. We first present a 2D image and video watermarking method based on the contourlet transform, which is then extended to a robust contourlet-based watermarking algorithm for DIBR 3D images. The watermark is embedded into the center view by quantizing certain contourlet coefficients. The virtual left and right views are synthesized from the watermarked center view and the corresponding depth map. One advantage of this algorithm is its simplicity and practicality; however, its watermark-extraction performance needs further improvement. As an improvement, a blind watermarking algorithm for DIBR 3D images based on feature regions and the ridgelet transform is proposed. The watermarked view has good perceptual quality under both objective and subjective image quality measures. Compared with other related and state-of-the-art methods, the proposed algorithm shows superior watermark extraction and robustness to various attacks. Furthermore, as one of the most promising techniques for quality evaluation, a watermarking-based quality evaluation scheme is developed for DIBR 3D images. The quality of the watermarked center view and of the synthesized left and right views under distortion can be estimated by examining the degradation of the corresponding extracted watermarks. Simulation results demonstrate that the scheme evaluates the quality of DIBR 3D images well under attacks.
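Coefficient-quantization embedding of the kind the abstract describes is typically a form of quantization index modulation (QIM). The following is a minimal sketch of that idea applied to a generic coefficient array; an actual contourlet decomposition (not included here) would supply the coefficients, and the step size is an illustrative assumption.

```python
# A minimal QIM sketch: snap each coefficient to the even or odd
# quantization lattice depending on the watermark bit.
import numpy as np

def embed_bits(coeffs, bits, step):
    """coeffs: 1D float array of transform coefficients; bits: 0/1 list."""
    out = coeffs.copy()
    for i, bit in enumerate(bits):
        q = np.round(coeffs[i] / step)
        if int(q) % 2 != bit:  # move to the nearer lattice with this parity
            q += 1 if coeffs[i] >= q * step else -1
        out[i] = q * step
    return out

def extract_bits(coeffs, n_bits, step):
    """Blind extraction: the parity of the nearest quantization index."""
    q = np.round(coeffs[:n_bits] / step).astype(int)
    return (q % 2).tolist()
```

Extraction needs no original image, which is what makes such schemes blind: as long as an attack perturbs a coefficient by less than half the step size, the nearest-lattice parity, and hence the bit, survives.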
3

Improving the Perception of Depth of Image-Based Objects in a Virtual Environment

Whang, JooYoung 29 July 2020 (has links)
Thanks to high-performance computing, modern scientific simulations are scaling to millions and even billions of grid points. As we enter the exa-scale era, new strategies are required for visualization and analysis. While image-based rendering (IBR) has emerged as a viable solution to the asymmetry between data size and the available storage and rendering power, it is limited to a 2D image portrayal of 3D spatial objects. This work describes a novel technique to capture, represent, and render depth information in the context of 3D IBR. With this technique, we tested the value of displacement via a displacement map, shading via normals, and the image angle interval. We ran an online user study with 60 participants to evaluate the value of adding depth information back to image-based rendering and found significant benefits. / Master of Science / In scientific research, data visualization is important for better understanding data. Modern experiments and simulations are expanding rapidly in scale, and there will come a day when rendering the entire 3D geometry becomes impossible resource-wise. Cinema was proposed as an image-based solution to this problem, in which the model is represented by an interpolated series of images. However, flat images cannot fully express the 3D characteristics of the data. Therefore, in this work, we improve the depth portrayal of the images by protruding the pixels and applying shading. We show the results of a user study conducted with 60 participants on the effects of pixel protrusion, shading, and varying the number of images representing the object. The results show that this method is useful for 3D scientific visualizations: the resulting object closely resembles the 3D object.
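The pixel-protrusion and normal-shading steps can be sketched as follows, assuming a displacement map normalized to [0, 1]; the scale factor and the gradient-based normals are illustrative assumptions, not the thesis' exact formulation.

```python
# A minimal sketch: protrude each pixel along the view axis by its
# displacement value, and derive per-pixel normals for shading.
import numpy as np

def displace_pixels(displacement, scale=1.0):
    """displacement: (H, W) array in [0, 1]. Returns (H*W, 3) vertices with
    x, y on the image plane and z protruded by the displacement map."""
    h, w = displacement.shape
    ys, xs = np.mgrid[0:h, 0:w]
    zs = displacement * scale                # protrusion along the view axis
    return np.stack([xs.ravel(), ys.ravel(), zs.ravel()], axis=1).astype(float)

def normals_from_displacement(displacement, scale=1.0):
    """Per-pixel unit normals from the displacement gradients, for shading."""
    gy, gx = np.gradient(displacement * scale)
    n = np.stack([-gx, -gy, np.ones_like(gx)], axis=-1)
    return n / np.linalg.norm(n, axis=-1, keepdims=True)
```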
4

A complete and practical system for interactive walkthroughs of arbitrarily complex scenes

Yang, Lining 06 August 2003 (has links)
No description available.
5

Camera positioning for 3D panoramic image rendering

Audu, Abdulkadir Iyyaka January 2015 (has links)
Virtual camera realisation and the proposition of a trapezoidal camera architecture are the two broad contributions of this thesis.

Firstly, multiple cameras and their arrangement constitute a critical component that affects the integrity of visual content acquisition for multi-view video. Currently, linear, convergent, and divergent arrays are the prominent camera topologies adopted. However, the large number of cameras required and their synchronisation are two of the prominent challenges usually encountered. The use of virtual cameras can significantly reduce the number of physical cameras used with respect to any of the known camera structures, thereby also reducing some of the other implementation issues. This thesis explores the use of image-based rendering, with and without geometry, to realise virtual cameras. The virtual camera implementation was carried out from the perspective of a depth map (geometry) and of multiple image samples (no geometry). Prior to the virtual camera realisation, the generation of depth maps was investigated using region match measures widely known for solving the image point correspondence problem. The constructed depth maps were compared with ones generated using a dynamic programming approach. In both the geometry and no-geometry approaches, the virtual cameras enable rendering views from a textured depth map, constructing a 3D panoramic image of a scene by stitching and superposing multiple image samples, and computing a virtual scene from a stereo pair of panoramic images. The quality of the rendered images was assessed through objective or subjective analysis in the Imatest software. Furthermore, metric reconstruction of a scene was performed by re-projecting the pixel points from multiple image samples with a single centre of projection, using the sparse bundle adjustment algorithm. The statistical summary obtained after applying this algorithm gauges the efficiency of the optimisation step. The optimised data was then visualised in the Meshlab software environment, providing the reconstructed scene.

Secondly, with any of the well-established camera arrangements, all cameras are usually constrained to the same horizontal plane. Occlusion therefore becomes an extremely challenging problem, and a robust camera set-up is required to resolve the hidden parts of scene objects. To adequately meet the visibility condition for scene objects, given that occlusion of the same scene objects can occur, a multi-plane camera structure is highly desirable. This thesis therefore also explores a trapezoidal camera structure for image acquisition. The approach is to assess the feasibility and potential of several physical cameras of the same model sparsely arranged on the edges of an efficient trapezoid graph. This is implemented in both Matlab and Maya; the depth maps rendered in Matlab are of better quality.
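As a sketch of the region-match correspondence step mentioned above, the following computes a disparity map by sum-of-absolute-differences block matching over rectified images; the window size and disparity range are illustrative assumptions.

```python
# A minimal SAD block-matching sketch for the point-correspondence problem.
import numpy as np

def sad_disparity(left, right, max_disp=32, half_win=3):
    """left, right: (H, W) rectified grayscale float arrays.
    Returns an integer disparity map from window-based SAD matching."""
    h, w = left.shape
    disp = np.zeros((h, w), dtype=int)
    for y in range(half_win, h - half_win):
        for x in range(half_win + max_disp, w - half_win):
            patch = left[y-half_win:y+half_win+1, x-half_win:x+half_win+1]
            # Cost of shifting the window by each candidate disparity.
            costs = [np.abs(patch - right[y-half_win:y+half_win+1,
                                          x-d-half_win:x-d+half_win+1]).sum()
                     for d in range(max_disp)]
            disp[y, x] = int(np.argmin(costs))
    return disp
```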
6

Multi-View Reconstruction and Camera Recovery using a Real or Virtual Reference Plane

Rother, Carsten January 2003 (has links)
Reconstructing a 3-dimensional scene from a set of 2-dimensional images is a fundamental problem in computer vision. A system capable of performing this task can be used in many applications in robotics, architecture, archaeology, biometrics, human computer interaction and the movie and entertainment industry. Most existing reconstruction approaches exploit one source of information to tackle the problem. This is the motion of the camera; the 2D images are taken from different viewpoints. We exploit an additional information source, the reference plane, which makes it possible to reconstruct difficult scenes where other methods fail. A real scene plane may serve as the reference plane. Furthermore, there are many alternative techniques to obtain virtual reference planes. For instance, orthogonal directions in the scene provide a virtual reference plane, the plane at infinity, as do images taken with a parallel projection camera. A collection of known and novel reference plane scenarios is presented in this thesis.

The main contribution of the thesis is a novel multi-view reconstruction approach using a reference plane. The technique is applicable to three different feature types: points, lines and planes. The novelty of our approach is that all cameras and all features (off the reference plane) are reconstructed simultaneously from a single linear system of image measurements. It is based on the novel observation that cameras and features have a linear relationship if a reference plane is known. In the absence of a reference plane, this relationship is non-linear; thus many previous methods must reconstruct features and cameras sequentially. Another class of methods, popular in the literature, is factorization, but, in contrast to our approach, this has the serious practical drawback that all features are required to be visible in all views. Extensive experiments show that our approach is superior to all previously suggested reference plane and non-reference plane methods for difficult reference plane scenarios.

Furthermore, the thesis studies scenes which do not have a unique reconstruction, so-called critical configurations. It is proven that in the presence of a reference plane the set of critical configurations is small.

Finally, the thesis introduces a complete, automatic multi-view reconstruction system based on the reference plane approach. The input data is a set of images, and the output is a 3D point reconstruction together with the corresponding cameras.
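A minimal sketch of the single-linear-system idea follows. When the camera rotations are known (which a reference plane can provide via its inter-view homographies), each observation x_ij of point X_j in camera i yields the constraint x_ij × (R_i X_j + t_i) = 0, which is linear in all points and translations jointly. The formulation below, using calibrated image points and one SVD, is a simplification of the thesis' method, not a reproduction of it; the recovered solution is only defined up to the usual gauge freedoms (global scale and translation).

```python
# A sketch of simultaneous recovery of all points and camera translations
# from one linear system, assuming known rotations.
import numpy as np

def cross_matrix(v):
    """Skew-symmetric matrix so that cross_matrix(v) @ u == np.cross(v, u)."""
    return np.array([[0, -v[2], v[1]],
                     [v[2], 0, -v[0]],
                     [-v[1], v[0], 0]])

def reconstruct(rotations, observations, n_points):
    """rotations: list of (3, 3) known camera rotations.
    observations: list of (i, j, x) with x a normalized homogeneous point.
    Returns all points X_j and translations t_i from one SVD null vector."""
    n_cams = len(rotations)
    n_unknowns = 3 * n_points + 3 * n_cams
    rows = []
    for i, j, x in observations:
        c = cross_matrix(np.asarray(x, dtype=float))
        row = np.zeros((3, n_unknowns))
        row[:, 3*j:3*j+3] = c @ rotations[i]              # point block
        row[:, 3*n_points + 3*i:3*n_points + 3*i + 3] = c  # translation block
        rows.append(row)
    A = np.vstack(rows)
    _, _, vt = np.linalg.svd(A)
    sol = vt[-1]                       # one vector from the null space
    X = sol[:3*n_points].reshape(n_points, 3)
    t = sol[3*n_points:].reshape(n_cams, 3)
    return X, t
```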
7

Edge-aided virtual view rendering for multiview video plus depth

Muddala, Suryanarayana Murthy, Sjöström, Mårten, Olsson, Roger, Tourancheau, Sylvain January 2013 (has links)
Depth-Image-Based Rendering (DIBR) of virtual views is a fundamental method in three-dimensional (3-D) video applications to produce different perspectives from texture and depth information, in particular from the multiview-plus-depth (MVD) format. Artifacts are still present in virtual views as a consequence of imperfect rendering using existing DIBR methods. In this paper, we propose an alternative DIBR method for MVD. In the proposed method we introduce an edge pixel and interpolate pixel values in the virtual view using the actual projected coordinates from two adjacent views, by which cracks and disocclusions are automatically filled. In particular, we propose a method to merge pixel information from two adjacent views in the virtual view before the interpolation: we apply a weighted averaging of projected pixels within the range of one pixel in the virtual view. We compared virtual view images rendered by the proposed method to the corresponding view images rendered by state-of-the-art methods. Objective metrics demonstrated an advantage of the proposed method for most investigated media contents. Subjective test results showed preference for different methods depending on media content, and the test could not demonstrate a significant difference between the proposed method and state-of-the-art methods.
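The merging-and-interpolation step described above can be sketched in one dimension: pixels from the two adjacent views are forward-projected to real-valued positions on a virtual-view scanline, and each integer pixel averages the projected samples that fall within one pixel of it, weighted by proximity. The linear weight and the names are illustrative assumptions, not the paper's exact scheme.

```python
# A minimal 1D sketch of weighted merging of projected pixels.
import numpy as np

def merge_row(samples, width):
    """samples: list of (x_virtual, color) with real-valued x contributed by
    both adjacent views. Returns one interpolated virtual-view row."""
    row = np.zeros(width)
    weight_sum = np.zeros(width)
    for xv, color in samples:
        for x in (int(np.floor(xv)), int(np.ceil(xv))):
            d = abs(xv - x)
            if 0 <= x < width and d <= 1.0:
                w = 1.0 - d                  # closer projections count more
                row[x] += w * color
                weight_sum[x] += w
    filled = weight_sum > 0
    row[filled] /= weight_sum[filled]        # cracks are filled implicitly
    return row
```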
8

Example Based Processing For Image And Video Synthesis

Haro, Antonio 25 November 2003 (has links)
The example-based processing problem can be expressed as: "Given an example of an image or video before and after processing, apply a similar processing to a new image or video." Our thesis is that there are problems where a single general algorithm can create a variety of outputs solely by presenting examples of what is desired to the algorithm. This is valuable when the algorithm to produce the output is non-obvious, e.g. an algorithm to emulate the style of an example painting. We limit our investigations to example-based processing of images, video, and 3D models, as these data types are easy to acquire and experiment with. We first represent the problem as a texture-synthesis-influenced sampling problem, where the idea is to form feature vectors representative of the data and then sample them coherently to synthesize a plausible output for the new image or video. Grounding the problem in this manner is useful, as both problems involve learning the structure of training data under some assumptions in order to sample it properly. We then reduce the problem to a labeling problem, which allows example-based processing to be performed in a more generalized and principled manner than earlier techniques, estimating the output by approximating the optimal (and possibly unknown) solution through a different approach.
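The sampling formulation described above can be sketched with raw patches as feature vectors and a brute-force nearest-neighbour lookup; the patch size is an illustrative assumption, and a real system would add coherence search and multi-scale features.

```python
# A minimal sketch of example-based processing as feature-vector sampling:
# learn the A -> A_prime mapping from patches, then apply it to B.
import numpy as np

def example_based_filter(A, A_prime, B, half=2):
    """A, A_prime, B: (H, W) grayscale float arrays. Returns B_prime, the
    new image processed 'in the style of' the A -> A_prime example."""
    # Gather training feature vectors and their processed center pixels.
    feats, targets = [], []
    for y in range(half, A.shape[0] - half):
        for x in range(half, A.shape[1] - half):
            feats.append(A[y-half:y+half+1, x-half:x+half+1].ravel())
            targets.append(A_prime[y, x])
    feats = np.array(feats)
    targets = np.array(targets)

    h, w = B.shape
    B_prime = np.zeros_like(B)
    for y in range(half, h - half):
        for x in range(half, w - half):
            f = B[y-half:y+half+1, x-half:x+half+1].ravel()
            idx = np.argmin(((feats - f) ** 2).sum(axis=1))
            B_prime[y, x] = targets[idx]   # copy the best match's output
    return B_prime
```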
9

Implementation of Disparity Estimation Using Stereo Matching

Wang, Ying-Chung 08 August 2011 (has links)
General 3D stereo vision comprises two major phases. In the first phase, an image and its corresponding depth map are generated using stereo matching. In the second phase, depth-image-based rendering (DIBR) is employed to generate images at different view angles. Stereo matching, a computation-intensive operation, generates the depth maps from two images captured at two different view positions. In this thesis, we present hardware designs for three different stereo matching methods: pixel-based, window-based, and dynamic programming (DP)-based. The pixel-based and window-based methods belong to the local optimization class of stereo matching methods, while DP, one of the global optimization methods, consists of three main processing steps: matching cost computation, cost aggregation, and back-tracing. Hardware implementations of DP-based stereo matching usually require a large memory to store intermediate results, leading to a large area cost. In this thesis, we propose a tile-based DP method that partitions the original image into smaller tiles so that processing each tile requires less memory.
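A minimal scanline sketch of the DP pipeline's three named steps (matching cost, cost aggregation, back-tracing) follows; the smoothness penalty is an illustrative choice and occlusion handling is omitted. The aggregated-cost table built here is the kind of intermediate storage whose size motivates the tile-based partitioning.

```python
# A minimal sketch of scanline dynamic-programming stereo.
import numpy as np

def dp_scanline(left_row, right_row, max_disp=16, penalty=4.0):
    """left_row, right_row: 1D float arrays of one rectified scanline.
    Returns the disparity for each pixel of the left row."""
    w = len(left_row)
    # 1. Matching cost: absolute difference per (pixel, disparity).
    cost = np.full((w, max_disp), np.inf)
    for x in range(w):
        for d in range(min(max_disp, x + 1)):
            cost[x, d] = abs(left_row[x] - right_row[x - d])
    # 2. Aggregation: accumulate along the scanline with a smoothness term.
    agg = cost.copy()
    for x in range(1, w):
        for d in range(max_disp):
            jump = np.abs(np.arange(max_disp) - d) * penalty
            agg[x, d] = cost[x, d] + np.min(agg[x - 1] + jump)
    # 3. Back-tracing: follow the cheapest path back from the last pixel.
    disp = np.zeros(w, dtype=int)
    disp[-1] = int(np.argmin(agg[-1]))
    for x in range(w - 2, -1, -1):
        jump = np.abs(np.arange(max_disp) - disp[x + 1]) * penalty
        disp[x] = int(np.argmin(agg[x] + jump))
    return disp
```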
10

Design of a Depth-Image-Based Rendering (DIBR) 3D Stereo View Synthesis Engine

Chang, Wei-Chun 01 September 2011 (has links)
Depth-Image-Based Rendering (DIBR) is a popular method for generating 3D virtual images at different view positions from an image and a depth map. In general, DIBR consists of two major operations: image warping and hole filling. Image warping calculates the disparity from the depth map, given information about the viewer and the display screen. Hole filling calculates the color of pixel locations that do not correspond to any pixel in the original image after image warping. Although many different hole filling methods exist for determining the colors of the blank pixels, undesirable artifacts are still observed in the synthesized virtual image. In this thesis, we present an approach that examines the geometry information near regions of blank pixels in order to reduce the artifacts near the edges of objects. Experimental results show that the proposed design generates more natural shapes around object edges, at the cost of more hardware and computation time.
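A minimal sketch of depth-guided hole filling along one scanline is shown below: each run of blank pixels left by warping is filled from its deeper (background) neighbour, on the assumption that disocclusions expose background rather than foreground. The names are illustrative, and the thesis' geometry analysis is richer than this.

```python
# A minimal sketch of background-biased hole filling after image warping.
import numpy as np

def fill_holes_row(colors, depths, hole):
    """colors, depths: 1D float arrays; hole: boolean mask of blank pixels
    left by warping. Fills each run of holes from its deeper side, in place."""
    w = len(colors)
    x = 0
    while x < w:
        if not hole[x]:
            x += 1
            continue
        start = x
        while x < w and hole[x]:
            x += 1
        left, right = start - 1, x               # pixels bounding the run
        if left < 0 and right >= w:
            break                                # nothing valid to copy from
        if left < 0:
            src = right
        elif right >= w:
            src = left
        else:
            # Disocclusions belong to the background: copy the deeper side.
            src = left if depths[left] >= depths[right] else right
        colors[start:x] = colors[src]
        depths[start:x] = depths[src]
        hole[start:x] = False
    return colors
```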
