1 |
Exemplar-based image inpainting on the GPU applied to 3D video conversion
Wallace, Ryan, 22 February 2012
My thesis investigates the automation and optimization of occlusion filling, a problem that arises when new viewpoints are generated in the 3D video conversion process. Image inpainting is a popular topic in image processing research. Filling a region of an image in a visually pleasing manner is a difficult and computationally expensive task. Recently, the most successful methods have been exemplar-based, copying patches of the image from a specified source region into the region to be filled. These algorithms are designed to propagate both structure and texture into the fill region. However, they are brute-force algorithms and are generally implemented sequentially on the CPU. In this research, I map the costly portions of an exemplar-based image inpainting algorithm to the GPU. I produce equivalent inpainting results in less time by parallelizing the brute-force patch-search portion of the algorithm. Furthermore, I compare the results with another recent, optimized inpainting algorithm, and apply both algorithms to the real-world problem of occlusion filling in a 3D video conversion pipeline. / Graduate
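To make the costly step concrete, below is a minimal NumPy sketch of the brute-force patch search that dominates the running time of exemplar-based inpainting, the portion the thesis maps to the GPU. The function name, patch size, and the sum-of-squared-differences criterion are illustrative assumptions rather than the thesis's exact formulation; on the GPU, the two nested loops over candidate source locations would be distributed across threads.

```python
import numpy as np

def best_source_patch(image, mask, target_center, patch_radius=4):
    """Brute-force search for the source patch most similar to the patch
    around target_center, comparing only pixels that are already known.

    image: (H, W, 3) float array; mask: (H, W) bool, True where pixels are missing.
    target_center is assumed to lie at least patch_radius pixels from the border.
    Returns the (row, col) center of the best-matching fully-known source patch.
    This O(pixels * patch_area) loop maps naturally onto one GPU thread per
    candidate source location.
    """
    H, W = mask.shape
    r = patch_radius
    ty, tx = target_center
    target = image[ty - r:ty + r + 1, tx - r:tx + r + 1]
    known = ~mask[ty - r:ty + r + 1, tx - r:tx + r + 1]   # valid pixels in target patch

    best_score, best_pos = np.inf, None
    for sy in range(r, H - r):
        for sx in range(r, W - r):
            # candidate source patch must lie entirely outside the fill region
            if mask[sy - r:sy + r + 1, sx - r:sx + r + 1].any():
                continue
            source = image[sy - r:sy + r + 1, sx - r:sx + r + 1]
            diff = (source - target)[known]               # compare known pixels only
            score = np.sum(diff * diff)                   # sum of squared differences
            if score < best_score:
                best_score, best_pos = score, (sy, sx)
    return best_pos
```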
|
2 |
2D to 3D conversion with direct geometrical search and approximation spaces
Borkowski, Maciej, 14 September 2007
This dissertation describes the design and implementation of a system for extracting 3D information from pairs of 2D images. The system input consists of two images taken by an ordinary digital camera; the output is a full 3D model extracted from those images. No assumptions are made about the positions of the cameras at the time the images are taken, but the scene must remain unchanged between the two exposures.
The process of extracting 3D information from 2D images consists of three basic steps. First, point matching is performed; the main contribution of this step is an approach to matching image segments in the context of an approximation space. The second step addresses the estimation of external camera parameters. The proposed solution uses 3D geometry rather than the fundamental matrix widely used in 2D to 3D conversion: in the proposed approach (DirectGS), the distances between the reprojected rays of all image points are minimised (a minimal sketch of this ray-distance criterion appears after the abstract). The contributions of this step are the definition of an optimal search space for the 2D to 3D conversion problem and an efficient algorithm that minimises the reprojection error. The third step deals with dense matching; its contribution is an approach to dense matching of 3D object structures that exploits the presence of points on lines in 3D space.
The theory and experiments developed for this dissertation demonstrate the usefulness of the proposed system for digitizing 3D information. The main advantages of the proposed approach are its low cost, its ease of use for untrained users, and the high precision of the reconstructed objects. / October 2007
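As an illustration of the DirectGS criterion mentioned above, the sketch below computes the shortest distance between the two viewing rays that back-project a matched point pair; summing this quantity over all matches gives an objective of the kind the search minimises. This is one interpretation of "the distances between reprojected rays", assuming camera centres and ray directions are given in a common world frame; it is not taken from the dissertation itself.

```python
import numpy as np

def ray_ray_distance(c1, d1, c2, d2):
    """Shortest distance between two 3D rays c1 + t*d1 and c2 + s*d2.

    For an exact match and exact camera parameters the two viewing rays
    intersect and the distance is zero; a DirectGS-style search would
    minimise the sum of these distances over all matched image points.
    """
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    n = np.cross(d1, d2)
    if np.linalg.norm(n) < 1e-12:                    # (near-)parallel rays
        return np.linalg.norm(np.cross(c2 - c1, d1))
    return abs(np.dot(c2 - c1, n)) / np.linalg.norm(n)

# Hypothetical usage: two cameras one unit apart, rays that almost meet.
c1, d1 = np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0])
c2, d2 = np.array([1.0, 0.0, 0.0]), np.array([-1.0, 0.01, 1.0])
gap = ray_ray_distance(c1, d1, c2, d2)               # small but non-zero
```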
|
3 |
3D Modeling using Multi-View Images
January 2010
There is growing interest in the creation of three-dimensional (3D) images and videos, driven by increasing demand for 3D visual media in commercial markets. One way to produce 3D media is to convert existing 2D images and videos to 3D. 2D to 3D conversion methods that estimate a depth map from 2D scenes for 3D reconstruction offer an efficient way to reduce the cost of coding, transmitting and storing 3D visual media in practical applications. Various 2D to 3D conversion methods based on depth maps have been developed using existing image and video processing techniques; the depth map can be estimated either from a single 2D view or from multiple 2D views. This thesis presents a MATLAB-based 2D to 3D conversion system that computes a sparse depth map from multiple views. The system handles views obtained from uncalibrated hand-held cameras, without prior knowledge of the camera parameters or scene geometry. The implemented system consists of techniques for image feature detection and registration, two-view geometry estimation, projective 3D scene reconstruction, and a metric upgrade that reconstructs the 3D structure by means of a metric transformation. The system is tested on several multi-view image sets; the reconstructed sparse depth maps of feature points provide relative depth information for the objects in the scene. Sample ground-truth depth points are used to compute a scale factor, and the true depth is estimated by scaling the relative depths by this factor. The reconstructed depth map was found to be consistent with the ground-truth depth data. / Dissertation/Thesis / M.S. Electrical Engineering 2010
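A minimal sketch of the final scaling step described above: a single scale factor is fit to the available ground-truth depth samples and then applied to the relative depths of all reconstructed feature points. The least-squares estimator used here is an assumption; the thesis does not specify how the scale factor is computed.

```python
import numpy as np

def estimate_scale(relative_depths, ground_truth_depths):
    """Least-squares scale s minimising || s * relative - ground_truth ||^2
    over the sample points where ground-truth depth is available."""
    rel = np.asarray(relative_depths, dtype=float)
    gt = np.asarray(ground_truth_depths, dtype=float)
    return float(np.dot(rel, gt) / np.dot(rel, rel))

# Hypothetical usage: scale a sparse relative depth map to metric depth.
rel_sample = np.array([0.8, 1.1, 1.9, 2.4])          # relative depths at sample points
gt_sample = np.array([1.6, 2.3, 3.7, 4.9])           # measured depths at the same points
s = estimate_scale(rel_sample, gt_sample)
metric_depths = s * np.array([0.8, 1.1, 1.5, 1.9, 2.4])  # full sparse depth map, scaled
```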
|
4 |
Shell-based geometric image and video inpainting
Hocking, Laird Robert, January 2018
The subject of this thesis is a class of fast inpainting methods (image or video) based on the idea of filling the inpainting domain in successive shells from its boundary inwards. Image pixels (or video voxels) are filled by assigning them a color equal to a weighted average of either their already filled neighbors (the "direct" form of the method) or those neighbors plus additional neighbors within the current shell (the "semi-implicit" form). In the direct form, pixels (voxels) in the current shell may be filled independently, but in the semi-implicit form they are filled simultaneously by solving a linear system. We focus in this thesis mainly on the image inpainting case, where the literature contains several methods corresponding to the direct form of the method; the semi-implicit form is introduced for the first time here. These methods effectively differ only in the order in which pixels (voxels) are filled, the weights used for averaging, and the neighborhood that is averaged over. All of them are very fast, but all of them also leave undesirable artifacts such as "kinking" (bending) or blurring of extrapolated isophotes.
This thesis has two main goals. First, we introduce new algorithms within this class, aimed at reducing or eliminating these artifacts and targeting a specific application: the 3D conversion of images and film. The first part of the thesis introduces 3D conversion as well as Guidefill, a method in the above class adapted to the inpainting problems arising in 3D conversion. The second and more significant goal of the thesis is to study these algorithms as a class. In particular, we develop a mathematical theory aimed at understanding the origins of the artifacts mentioned above. Through this, we seek to understand which artifacts can be eliminated (and how), and which are inevitable (and why). Most of the thesis is occupied with this second goal. Our theory is based on two separate limits: the first is a continuum limit, in which the pixel width h → 0 and the algorithm converges to a partial differential equation; the second is an asymptotic limit in which h is very small but non-zero. The latter limit, which is based on a connection to random walks, relates the inpainted solution to a type of discrete convolution. The former is useful for studying kinking artifacts, while the latter is useful for studying blur.
Although all the theoretical work has been done in the context of image inpainting, experimental evidence is presented suggesting a simple generalization to video. Finally, in the last part of the thesis we explore shell-based video inpainting. In particular, we introduce spacetime transport, a natural generalization of the ideas of Guidefill and its predecessor, coherence transport, to three dimensions (two spatial dimensions plus one time dimension). Spacetime transport is shown to have much in common with shell-based image inpainting methods. In particular, kinking and blur artifacts persist, and the former may be alleviated in exactly the same way as in two dimensions. At the same time, spacetime transport is shown to be related to optical-flow-based video inpainting. In particular, a connection is derived between spacetime transport and a generalized Lucas-Kanade optical flow that does not distinguish between time and space.
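For readers unfamiliar with this class of methods, here is a minimal sketch of the direct form of a shell-based fill: unknown pixels on the current boundary shell are assigned a weighted average of their already-known neighbors, and the process repeats inwards. Uniform weights over the 8-neighborhood are assumed for simplicity; the methods studied in the thesis (coherence transport, Guidefill) use carefully designed weights and fill orders precisely to control the kinking and blur artifacts discussed above.

```python
import numpy as np

def fill_shells_direct(image, mask):
    """Fill the masked region in successive shells from the boundary inwards.

    image: (H, W) float array; mask: (H, W) bool, True where pixels are unknown.
    Each shell pixel receives the unweighted mean of its already-known 8-neighbors.
    """
    image = image.copy()
    mask = mask.copy()
    H, W = mask.shape
    while mask.any():
        # current shell: unknown pixels with at least one known neighbor
        shell = []
        for y in range(H):
            for x in range(W):
                if not mask[y, x]:
                    continue
                neigh = [(y + dy, x + dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                         if (dy or dx) and 0 <= y + dy < H and 0 <= x + dx < W
                         and not mask[y + dy, x + dx]]
                if neigh:
                    shell.append((y, x, neigh))
        if not shell:
            break  # remaining unknown pixels have no known neighbors at all
        # direct form: pixels in one shell are filled independently of each other,
        # using only values that were already known before this pass
        for y, x, neigh in shell:
            image[y, x] = np.mean([image[ny, nx] for ny, nx in neigh])
        for y, x, _ in shell:
            mask[y, x] = False
    return image
```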
|
5 |
Rekonstrukce krevního řečiště prstu ve 3D z videosekvence / Reconstruction of the Bloodstream of the Finger in 3D from a Video Sequence
Záleský, Jiří, January 2020
The goal of this master's thesis is the design and construction of a device for capturing video sequences of the cardiovascular system of a human finger, and the subsequent design and implementation of a data-extraction method for reconstructing it as a 3D model.
|