• About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations, provided by the Networked Digital Library of Theses and Dissertations (NDLTD). Metadata is collected from universities around the world.
1

Image-Based View Synthesis

Avidan, Shai, Evgeniou, Theodoros, Shashua, Amnon, Poggio, Tomaso 01 January 1997 (has links)
We present a new method for rendering novel images of flexible 3D objects from a small number of example images in correspondence. The strength of the method is the ability to synthesize images whose viewing position is significantly far away from the viewing cone of the example images ("view extrapolation"), yet without ever modeling the 3D structure of the scene. The method relies on synthesizing a chain of "trilinear tensors" that governs the warping function from the example images to the novel image, together with a multi-dimensional interpolation function that synthesizes the non-rigid motions of the viewed object from the virtual camera position. We show that two closely spaced example images alone are sufficient in practice to synthesize a significant viewing cone, thus demonstrating the ability of representing an object by a relatively small number of model images --- for the purpose of cheap and fast viewers that can run on standard hardware.
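The trilinear-tensor chain itself is involved, but the core operation the abstract describes — warping pixels of an example image along a correspondence field toward a novel viewpoint, without any 3D model — can be sketched in a few lines. This is an illustrative simplification with hypothetical names, not the paper's tensor machinery:

```python
import numpy as np

def interpolate_views(img_a, flow_ab, t):
    """Synthesize an intermediate view by forward-warping img_a a fraction t
    of the way along the dense correspondence field flow_ab, where
    flow_ab[y, x] = (dy, dx) maps pixels of img_a onto a second example image."""
    h, w = img_a.shape[:2]
    out = np.zeros_like(img_a)
    ys, xs = np.mgrid[0:h, 0:w]
    # Move each source pixel a fraction t along its correspondence vector.
    ty = np.clip(np.round(ys + t * flow_ab[..., 0]).astype(int), 0, h - 1)
    tx = np.clip(np.round(xs + t * flow_ab[..., 1]).astype(int), 0, w - 1)
    out[ty, tx] = img_a[ys, xs]
    return out
```

With t outside [0, 1] the same warp extrapolates beyond the example pair, which is the "view extrapolation" regime the paper targets.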
2

View Rendering for 3DTV

Muddala, Suryanarayana Murthy January 2013 (has links)
Advancements in three-dimensional (3D) technologies are rapidly increasing. Three-Dimensional Television (3DTV) aims at creating a 3D experience for the home user. Moreover, multiview autostereoscopic displays provide a depth impression without requiring any special glasses and can be viewed from multiple locations. One of the key issues in the 3DTV processing chain is content generation from the available input data formats: video plus depth, and multiview video plus depth. These data allow virtual views to be produced using depth-image-based rendering. Although depth-image-based rendering is an efficient method, it is known for the appearance of artifacts such as cracks, coronas and empty regions in rendered images. While several approaches have tackled these problems, reducing the artifacts in rendered images is still an active field of research.

Two problems are addressed in this thesis in order to achieve better 3D video quality in the context of view rendering: firstly, how to improve the quality of rendered views using a direct approach (i.e. without applying specific processing steps for each artifact), and secondly, how to fill large missing areas in a visually plausible manner using neighbouring details from around the missing regions. This thesis introduces a new depth-image-based rendering method and a depth-based texture inpainting method to address these two problems. The first problem is solved by an edge-aided rendering method that relies on the principles of forward warping and one-dimensional interpolation. The second problem is addressed by a depth-included curvature inpainting method that uses texture details from the appropriate depth level around disocclusions.

The proposed edge-aided rendering and depth-included curvature inpainting methods are evaluated and compared with state-of-the-art methods. The results show an increase in objective quality and a visual gain over the reference methods. The quality gain is encouraging, as the edge-aided rendering method omits the specific processing steps for removing rendering artifacts. Moreover, the results show that large disocclusions can be effectively filled using the depth-included curvature inpainting approach. Overall, the proposed approaches improve content generation for 3DTV and, additionally, for free-viewpoint television.
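The rendering step at the heart of this abstract — projecting texture pixels into a virtual view according to their depth, which is exactly what leaves the cracks and disocclusions that inpainting must fill — can be sketched as a minimal horizontal-shift forward warp. The 1-D camera model and parameter names below are simplifying assumptions, not the thesis's actual pipeline:

```python
import numpy as np

def dibr_forward_warp(texture, depth, baseline, focal):
    """Minimal depth-image-based rendering: shift each pixel horizontally by a
    disparity derived from its depth, keeping the nearest surface (largest
    disparity) where several source pixels land on the same target pixel."""
    h, w = texture.shape[:2]
    virtual = np.zeros_like(texture)
    zbuf = np.full((h, w), -np.inf)  # z-buffer in disparity units
    for y in range(h):
        for x in range(w):
            d = baseline * focal / max(depth[y, x], 1e-6)  # disparity
            xv = int(round(x + d))
            if 0 <= xv < w and d > zbuf[y, xv]:
                zbuf[y, xv] = d
                virtual[y, xv] = texture[y, x]
    return virtual  # remaining zeros mark holes/disocclusions to be filled
```

Pixels that no source pixel maps onto stay empty, which is where a subsequent inpainting stage such as the one proposed here takes over.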
3

Visualisering av brottsplatser

Beck, Jonas, Brorsson Läthén, Klas January 2006 (has links)
This work was carried out in collaboration with the Swedish National Police Board (Rikspolisstyrelsen) to develop a method for how modern media technology can be used to create a "virtual crime scene". The aim is for the work to result in a proposed method suitable for integration into police crime-scene investigations and legal proceedings, taking into account the special requirements involved.

The work consists of two main parts: the first starts from what can be done with equipment and technology already available, and the second examines how this could be developed further. For the first part, a proposed method for exploiting panorama techniques was developed; to this end, existing software packages were evaluated and tested to determine which best fit the purpose. For the second part, a custom solution based on laser-scanning data was developed and implemented in OpenGL/C++. The result of this part is not a finished method ready for immediate use, but rather an example of how panorama techniques can be used for more than simply showing what a place looks like. To tie the project to reality, both parts were applied to several real cases.

One conclusion of the work is that visualisations of this kind are very useful and beneficial to investigators and prosecutors. Much remains to be investigated, but there is no doubt that this type of technology is useful for this purpose.
5

Digital Watermarking for Depth-Image-Based Rendering 3D Images and Its Application to Quality Evaluation

Chen, Lei 10 October 2018 (has links)
Due to the rapid development of the 3D display market, protecting and authenticating the intellectual property rights of 3D multimedia has become an essential concern. As a consequence, digital watermarking for 3D images and video is attracting considerable attention. The depth-image-based rendering (DIBR) technique plays a critical role in 3D content representation because of its numerous advantages. A good digital watermarking algorithm should be robust to various possible attacks, including geometric distortions and compression. Unlike ordinary 2D digital watermarking, 3D watermarking has more specific requirements, especially for DIBR 3D image watermarking: not only the center view but also the virtual left and right views can be illegally distributed. Therefore, the embedded watermark information should be accurately extractable from each of these three views individually for content authentication, even under attacks. In this thesis, we focus on digital watermarking and watermarking-based quality evaluation for DIBR 3D images. We first present a 2D image and video watermarking method based on the contourlet transform, which is then extended to a robust contourlet-based watermarking algorithm for DIBR 3D images. The watermark is embedded into the center view by quantizing certain contourlet coefficients. The virtual left and right views are synthesized from the watermarked center view and the corresponding depth map. One advantage of our algorithm is its simplicity and practicality; however, its watermark-extraction performance needs to be further improved. As an improvement, a blind watermarking algorithm for DIBR 3D images based on feature regions and the ridgelet transform is proposed. The watermarked view has good perceptual quality under both objective and subjective image quality measures. Compared with other related and state-of-the-art methods, the proposed algorithm shows superiority in terms of watermark extraction and robustness to various attacks. Furthermore, as one of the most promising techniques for quality evaluation, a watermarking-based quality evaluation scheme is developed for DIBR 3D images. The quality of the watermarked center view and of the synthesized left and right views under distortion can be estimated by examining the degradation of the corresponding extracted watermarks. Simulation results demonstrate that our scheme performs well in evaluating the quality of DIBR 3D images under attacks.
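The thesis embeds the watermark by quantizing transform coefficients. A generic quantization-index-modulation (QIM) sketch of that idea — operating on a single scalar coefficient, not the contourlet- or ridgelet-domain scheme of the thesis — looks like this:

```python
import numpy as np

def qim_embed(coeff, bit, step):
    """Quantization index modulation: snap a transform coefficient to the
    even lattice (bit 0) or the half-step-offset lattice (bit 1)."""
    offset = 0.0 if bit == 0 else step / 2.0
    return np.round((coeff - offset) / step) * step + offset

def qim_extract(coeff, step):
    """Recover the bit by testing which lattice the coefficient lies closer to."""
    d0 = abs(coeff - qim_embed(coeff, 0, step))
    d1 = abs(coeff - qim_embed(coeff, 1, step))
    return 0 if d0 <= d1 else 1
```

The quantization step trades off imperceptibility against robustness: a larger step survives stronger distortion of the coefficient but perturbs the host image more.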
6

A complete and practical system for interactive walkthroughs of arbitrarily complex scenes

Yang, Lining 06 August 2003 (has links)
No description available.
7

Improving the Perception of Depth of Image-Based Objects in a Virtual Environment

Whang, JooYoung 29 July 2020 (has links)
In appreciation of high-performance computing, modern scientific simulations are scaling to millions and even billions of grid points. As we enter the exa-scale era, new strategies are required for visualization and analysis. While Image-Based Rendering (IBR) has emerged as a viable solution to the asymmetry between data size and the available storage and rendering power, it is limited by its 2D image portrayal of 3D spatial objects. This work describes a novel technique to capture, represent, and render depth information in the context of 3D IBR. Using this technique, we tested the value of displacement via a displacement map, shading via normals, and the image angle interval. We ran an online user study with 60 participants to evaluate the value of adding depth information back into Image-Based Rendering and found significant benefits. / Master of Science / In scientific research, data visualization is important for better understanding data. Modern experiments and simulations are expanding rapidly in scale, and there will come a day when rendering the entire 3D geometry becomes impossible resource-wise. Cinema was proposed as an image-based solution to this problem, in which the model is represented by an interpolated series of images. However, flat images cannot fully express the 3D characteristics of the data. Therefore, in this work, we try to improve the depth portrayal of the images by protruding the pixels and applying shading. We show the results of a user study conducted with 60 participants on the effect of pixel protrusion, shading, and varying the number of images representing the object. The results show that this method is useful for 3D scientific visualizations; the resulting rendering closely resembles the true 3D object.
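Of the factors tested above, "shading by normal" is the most directly codable: a minimal Lambertian shading pass over a per-pixel normal map. This is a standard graphics technique sketched with illustrative names, not the study's actual renderer:

```python
import numpy as np

def shade_by_normal(normals, light_dir):
    """Lambertian shading from an (H, W, 3) normal map: intensity is the
    clamped dot product between each surface normal and the light direction."""
    l = np.asarray(light_dir, dtype=float)
    l = l / np.linalg.norm(l)  # normalize the light direction
    return np.clip(np.einsum('ijk,k->ij', normals, l), 0.0, 1.0)
```

Applied to normals estimated from the IBR object's depth, such shading restores the surface-orientation cues that a flat image sequence loses.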
8

Camera positioning for 3D panoramic image rendering

Audu, Abdulkadir Iyyaka January 2015 (has links)
Virtual camera realisation and the proposal of a trapezoidal camera architecture are the two broad contributions of this thesis. Firstly, multiple cameras and their arrangement constitute a critical component that affects the integrity of visual content acquisition for multi-view video. Currently, linear, convergent, and divergent arrays are the prominent camera topologies adopted. However, the large number of cameras required and their synchronisation are two of the prominent challenges usually encountered. The use of virtual cameras can significantly reduce the number of physical cameras used with respect to any of the known camera structures, hence mitigating some of the other implementation issues. This thesis explores the use of image-based rendering, with and without geometry, in implementations leading to the realisation of virtual cameras. The virtual camera implementation was carried out from the perspective of a depth map (geometry) and of multiple image samples (no geometry). Prior to the virtual camera realisation, the generation of the depth map was investigated using region match measures widely known for solving the image point correspondence problem. The constructed depth maps were compared with those generated using a dynamic programming approach. In both the geometry and no-geometry approaches, the virtual cameras lead to the rendering of views from a textured depth map, the construction of a 3D panoramic image of a scene by stitching multiple image samples and performing superposition on them, and the computation of a virtual scene from a stereo pair of panoramic images. The quality of the rendered images was assessed through objective or subjective analysis in the Imatest software. Furthermore, metric reconstruction of a scene was performed by re-projecting the pixel points from multiple image samples with a single centre of projection, using a sparse bundle adjustment algorithm. The statistical summary obtained after applying this algorithm provides a gauge of the efficiency of the optimisation step. The optimised data was then visualised in the Meshlab software environment, providing the reconstructed scene. Secondly, with any of the well-established camera arrangements, all cameras are usually constrained to the same horizontal plane. Occlusion therefore becomes an extremely challenging problem, and a robust camera set-up is required to resolve the hidden parts of scene objects. To adequately meet the visibility condition for scene objects, given that occlusion of the same scene objects can occur, a multi-plane camera structure is highly desirable. Therefore, this thesis also explores a trapezoidal camera structure for image acquisition. The approach here is to assess the feasibility and potential of several physical cameras of the same model being sparsely arranged on the edges of an efficient trapezoid graph. This is implemented in both Matlab and Maya; the depth maps rendered in Matlab are of better quality.
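The depth-map step described above — region match measures solving the image point correspondence problem — is classically implemented as block matching along the horizontal epipolar line. A minimal sum-of-absolute-differences (SAD) version follows; window size and search range are illustrative choices, not the thesis's parameters:

```python
import numpy as np

def block_match_disparity(left, right, block=3, max_disp=8):
    """Disparity from a rectified stereo pair: for each left-image pixel,
    find the horizontal shift into the right image that minimizes the SAD
    region match measure over a small window."""
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=int)
    for y in range(half, h - half):
        for x in range(half, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1]
            best, best_d = np.inf, 0
            for d in range(0, min(max_disp, x - half) + 1):
                cand = right[y - half:y + half + 1,
                             x - d - half:x - d + half + 1]
                sad = np.abs(patch.astype(int) - cand.astype(int)).sum()
                if sad < best:
                    best, best_d = sad, d
            disp[y, x] = best_d
    return disp
```

Disparity is inversely proportional to depth, so this map is the geometry input that the depth-based virtual camera rendering consumes.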
9

Multi-View Reconstruction and Camera Recovery using a Real or Virtual Reference Plane

Rother, Carsten January 2003 (has links)
Reconstructing a 3-dimensional scene from a set of 2-dimensional images is a fundamental problem in computer vision. A system capable of performing this task can be used in many applications in robotics, architecture, archaeology, biometrics, human-computer interaction and the movie and entertainment industry. Most existing reconstruction approaches exploit one source of information to tackle the problem: the motion of the camera, since the 2D images are taken from different viewpoints. We exploit an additional information source, the reference plane, which makes it possible to reconstruct difficult scenes where other methods fail. A real scene plane may serve as the reference plane. Furthermore, there are many alternative techniques to obtain virtual reference planes. For instance, orthogonal directions in the scene provide a virtual reference plane, the plane at infinity, as do images taken with a parallel projection camera. A collection of known and novel reference plane scenarios is presented in this thesis. The main contribution of the thesis is a novel multi-view reconstruction approach using a reference plane. The technique is applicable to three different feature types: points, lines and planes. The novelty of our approach is that all cameras and all features (off the reference plane) are reconstructed simultaneously from a single linear system of image measurements. It is based on the novel observation that cameras and features have a linear relationship if a reference plane is known. In the absence of a reference plane, this relationship is non-linear, so many previous methods must reconstruct features and cameras sequentially. Another class of methods, popular in the literature, is factorization, but, in contrast to our approach, this has the serious practical drawback that all features are required to be visible in all views. Extensive experiments show that our approach is superior to all previously suggested reference plane and non-reference plane methods for difficult reference plane scenarios. Furthermore, the thesis studies scenes which do not have a unique reconstruction, so-called critical configurations. It is proven that in the presence of a reference plane the set of critical configurations is small. Finally, the thesis introduces a complete, automatic multi-view reconstruction system based on the reference plane approach. The input data is a set of images and the output is a 3D point reconstruction together with the corresponding cameras.
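The central idea — that image measurements become linear constraints on 3D structure once enough geometry is fixed — is easiest to see in the classic direct linear transformation (DLT) triangulation of a single point from two known cameras. This is a textbook illustration, not the thesis's reference-plane system, which solves for all cameras and features jointly:

```python
import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    """Linear (DLT) triangulation: each image measurement x of a point X under
    a 3x4 camera matrix P gives two linear constraints on homogeneous X.
    Stacking the constraints from two views and taking the null space of the
    resulting system recovers X."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # Null vector of A = right singular vector of the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]  # de-homogenize
```

In the thesis's setting, knowing a reference plane makes the relationship between *all* cameras and *all* off-plane features linear in the same way, so one such system, solved once, yields the whole reconstruction.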
10

Edge-aided virtual view rendering for multiview video plus depth

Muddala, Suryanarayana Murthy, Sjöström, Mårten, Olsson, Roger, Tourancheau, Sylvain January 2013 (has links)
Depth-Image-Based Rendering (DIBR) of virtual views is a fundamental method in three-dimensional (3D) video applications to produce different perspectives from texture and depth information, in particular for the multiview-plus-depth (MVD) format. Artifacts are still present in virtual views as a consequence of imperfect rendering using existing DIBR methods. In this paper, we propose an alternative DIBR method for MVD. In the proposed method we introduce edge pixels and interpolate pixel values in the virtual view using the actual projected coordinates from two adjacent views, by which cracks and disocclusions are automatically filled. In particular, we propose a method to merge pixel information from two adjacent views in the virtual view before the interpolation; we apply a weighted averaging of projected pixels within the range of one pixel in the virtual view. We compared virtual view images rendered by the proposed method to the corresponding view images rendered by state-of-the-art methods. Objective metrics demonstrated an advantage of the proposed method for most of the investigated media contents. Subjective test results showed preference for different methods depending on media content, and the test could not demonstrate a significant difference between the proposed method and the state-of-the-art methods.
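The merging step described — weighted averaging of projected pixels within the range of one pixel in the virtual view — can be sketched in 1-D as follows. This is a simplified scalar-valued version with illustrative names, under the assumption of a 1 − distance weighting; it is not the paper's implementation:

```python
import numpy as np

def merge_projections(samples, grid_w):
    """Weighted averaging of projected pixels: each sample is a
    (projected_x, value) pair from forward warping. Every sample contributes
    to the virtual-view pixels within one pixel of its projected coordinate,
    weighted by (1 - distance); accumulated contributions are normalized."""
    acc = np.zeros(grid_w)
    wsum = np.zeros(grid_w)
    for px, val in samples:
        for xi in (int(np.floor(px)), int(np.ceil(px))):
            if 0 <= xi < grid_w:
                w = 1.0 - abs(px - xi)
                acc[xi] += w * val
                wsum[xi] += w
    # Pixels no sample reached stay zero: these are the cracks/disocclusions.
    return np.where(wsum > 0, acc / np.maximum(wsum, 1e-12), 0.0)
```

Because samples at non-integer projected coordinates spread across both neighbouring pixels, sub-pixel cracks are filled during merging rather than in a separate post-processing pass, which is the point of the paper's direct approach.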
