1. Ray Traversal for Incremental Voxel Colouring. Batchelor, Oliver William. January 2006.
Image-based scene reconstruction from multiple views is an interesting challenge, with many ambiguities and sources of noise. One approach to scene reconstruction is Voxel Colouring (Seitz and Dyer [26]), which uses colour information in images and handles the problem of occlusion. Culbertson and Malzbender [11] introduced Generalised Voxel Colouring (GVC), which uses projection and rasterization to establish global scene visibility. Our work has involved investigating the use of ray traversal as an efficient alternative. We have developed two main approaches along this line, Ray Images and Ray Buckets. Comparisons between implementations of our algorithms and variations of GVC are presented, as well as applications to optimisation-based colour consistency and level of detail. Ray traversal seems a promising approach to scene visibility, but requires more work to be of practical use. Our methods show some advantages over existing approaches in running time. However, we have not been as successful as anticipated in reconstruction quality, as shown by our implementation of optimisation-based colour consistency.
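Voxel Colouring and GVC both hinge on a photo-consistency test: a voxel is kept only if the pixel colours it projects to, in the views where it is currently unoccluded, agree closely enough. The snippet below is a minimal sketch of such a test, assuming the visible colour samples have already been gathered; the per-channel standard-deviation criterion and the threshold value are illustrative assumptions rather than the thesis's exact consistency measure.

```python
import numpy as np

def colour_consistent(samples, threshold=18.0):
    """Decide whether a voxel is photo-consistent.

    samples:   (N, 3) array of RGB values gathered from the N images in
               which the voxel is currently unoccluded.
    threshold: maximum allowed per-channel standard deviation (an assumed
               tuning parameter, not a value from the thesis).
    """
    samples = np.asarray(samples, dtype=np.float64)
    if len(samples) < 2:        # seen in fewer than two views: keep the voxel
        return True
    return bool(np.all(samples.std(axis=0) <= threshold))
```

In a voxel-colouring sweep, voxels failing this test are carved away, which in turn changes the visibility of the voxels behind them; establishing that visibility efficiently is exactly what the ray-traversal structures above target.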

2. Generating 3D Scenes From Single RGB Images in Real-Time Using Neural Networks. Grundberg, Måns; Altintas, Viktor. January 2021.
The ability to reconstruct 3D scenes of environments is of great interest in a number of fields such as autonomous driving, surveillance, and virtual reality. However, traditional methods often rely on multiple cameras or sensor-based depth measurements to accurately reconstruct 3D scenes. In this thesis we propose an alternative, deep-learning-based approach to 3D scene reconstruction for objects of interest, using nothing but single RGB images. We evaluate our approach using the Deep Object Pose Estimation (DOPE) neural network for object detection and pose estimation, and the NVIDIA Deep Learning Dataset Synthesizer for synthetic data generation. Using two unique objects, our results indicate that it is possible to reconstruct 3D scenes from single RGB images with an error margin of a few centimeters.
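Once a detector such as DOPE has produced a pose (rotation and translation) for each object of interest, reconstructing the scene amounts to placing a known model of each object into a common frame with that pose. The sketch below illustrates only that placement step; the function and variable names are assumptions for illustration and do not come from the thesis.

```python
import numpy as np

def place_object(model_points, rotation, translation):
    """Place a known object model into the scene frame given a pose estimate.

    model_points: (N, 3) vertices of the object model in its own frame
    rotation:     (3, 3) rotation matrix, object frame -> camera/scene frame
    translation:  (3,)   position of the object origin in the scene frame
    """
    transform = np.eye(4)                    # build a 4x4 rigid transform
    transform[:3, :3] = rotation
    transform[:3, 3] = translation
    homogeneous = np.hstack([np.asarray(model_points),
                             np.ones((len(model_points), 1))])
    return (transform @ homogeneous.T).T[:, :3]   # vertices in the scene frame
```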

3. Broadband World Modeling and Scene Reconstruction. Goldman, Benjamin Joseph. 24 May 2013.
Perception is a key feature in how any creature or autonomous system relates to its environment. While there are many types of perception, this thesis focuses on improving visual perception systems for robotics. By implementing a broadband passive sensing system in conjunction with current perception algorithms, this thesis explores scene reconstruction and world modeling.
The process involves two main steps. The first is stereo correspondence using block matching algorithms, with filtering to improve the quality of the matching process. The disparity maps are then transformed into 3D point clouds. These point clouds are filtered again before registration. The registration uses the SAC-IA (Sample Consensus Initial Alignment) matching technique to align the point clouds with minimum error. The registered final cloud is then filtered again to smooth and downsample the large amount of data. This process was implemented through a software architecture that utilizes Qt, OpenCV, and the Point Cloud Library. It was tested using a variety of experiments on each of the components of the process. It shows promise for being able to replace or augment existing UGV perception systems in the future. / Master of Science
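As a rough illustration of the first step, the block-matching stereo stage described above can be prototyped with OpenCV as sketched below; the file names, disparity range, block size, and speckle-filter settings are placeholder assumptions, and the subsequent SAC-IA registration (a Point Cloud Library routine) is not shown.

```python
import cv2
import numpy as np

# Rectified stereo pair and disparity-to-depth matrix Q (from stereo
# calibration, e.g. cv2.stereoRectify); the file names are placeholders.
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)
Q = np.load("Q.npy")

# Block-matching stereo correspondence with speckle filtering.
matcher = cv2.StereoBM_create(numDisparities=96, blockSize=15)
matcher.setSpeckleWindowSize(100)
matcher.setSpeckleRange(2)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0

# Transform the filtered disparity map into a 3D point cloud.
points = cv2.reprojectImageTo3D(disparity, Q)
cloud = points[disparity > 0]               # keep only valid disparities
```

The resulting cloud would then be filtered, registered with SAC-IA, and downsampled as described above.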

4. Problem-oriented approach to criminal investigation: implementation issues and challenges. Ozeren, Suleyman. 08 1900.
As a proactive, information-based policing approach, problem-oriented policing emphasizes the use of crime analysis techniques in the analysis of the underlying causes of the problems that police deal with. In particular, analysis applications such as crime reconstruction, profiling, IAFIS, VICAP, and CODIS can be powerful tools for criminal investigation. The SARA model represents a problem-solving strategy of problem-oriented policing. It aims to address the underlying causes of problems and create substantial solutions. However, implementing problem-oriented policing requires a significant change in both the philosophy and structure of police agencies. Not only American police agencies but also the Turkish National Police should consider problem-oriented policing as an alternative approach to addressing criminal activity.

5. From shape-based object recognition and discovery to 3D scene interpretation. Payet, Nadia. 12 May 2011.
This dissertation addresses a number of inter-related and fundamental problems in computer vision. Specifically, we address object discovery, recognition, segmentation, and 3D pose estimation in images, as well as 3D scene reconstruction and scene interpretation. The key ideas behind our approaches include using shape as a basic object feature, and using structured prediction modeling paradigms for representing objects and scenes.
In this work, we make a number of new contributions both in computer vision and machine learning. We address the vision problems of shape matching, shape-based mining of objects in arbitrary image collections, context-aware object recognition, monocular estimation of 3D object poses, and monocular 3D scene reconstruction using shape from texture. Our work on shape-based object discovery is the first to show that meaningful objects can be extracted from a collection of arbitrary images, without any human supervision, by shape matching. We also show that a spatial repetition of objects in images (e.g., windows on a building facade, or cars lined up along a street) can be used for 3D scene reconstruction from a single image. The aforementioned topics have never been addressed in the literature.
The dissertation also presents new algorithms and object representations for the aforementioned vision problems. We fuse two traditionally different modeling paradigms, Conditional Random Fields (CRF) and Random Forests (RF), into a unified framework, referred to as (RF)^2. We also derive theoretical error bounds on estimating distribution ratios by a two-class RF, which are then used to derive the theoretical performance bounds of a two-class (RF)^2.
Thorough experimental evaluation of individual aspects of all our approaches is presented. In general, the experiments demonstrate that we outperform the state of the art on the benchmark datasets, without increasing complexity and supervision in training. / Graduation date: 2011 / Access restricted to the OSU Community at author's request from May 12, 2011 - May 12, 2012

6. Robust Extraction Of Sparse 3D Points From Image Sequences. Vural, Elif. 01 September 2008.
In this thesis, the extraction of sparse 3D points from calibrated image sequences is studied. The presented method for sparse 3D reconstruction is examined in two steps, where the first part addresses the problem of two-view reconstruction, and the second part is the extension of the two-view reconstruction algorithm to multiple views. The examined two-view reconstruction method consists of some basic building blocks, such as feature detection and matching, epipolar geometry estimation, and the reconstruction of cameras and scene structure. Feature detection and matching are achieved by the Scale Invariant Feature Transform (SIFT) method. For the estimation of epipolar geometry, the 7-point and 8-point algorithms are examined for Fundamental matrix (F-matrix) computation, while RANSAC and PROSAC are utilized for robust and accurate model estimation. In the final stage of two-view reconstruction, the camera projection matrices are computed from the F-matrix, and the locations of the 3D scene points are estimated by triangulation; hence, the scene structure and cameras are determined up to a projective transformation. The extension of the two-view reconstruction to multiple views is achieved by estimating the camera projection matrix of each additional view from the already reconstructed matches, and then adding new points to the scene structure by triangulating the unreconstructed matches. Finally, the reconstruction is upgraded from projective to metric by a rectifying homography computed from the camera calibration information. In order to obtain a refined reconstruction, two different methods are suggested for the removal of erroneous points from the scene structure. In addition to examining the solution to the reconstruction problem, experiments have been conducted that compare the performance of competing algorithms used in various stages of the reconstruction. In connection with sparse reconstruction, a rate-distortion efficient piecewise planar scene representation algorithm that generates mesh models of scenes from reconstructed point clouds is examined, and its performance is evaluated through experiments.
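For a concrete picture of the two-view stage described above (SIFT matching, RANSAC-based F-matrix estimation, canonical projective cameras, and triangulation), a minimal OpenCV sketch follows. The ratio-test and RANSAC thresholds are assumed values, and the canonical pair P = [I | 0], P' = [[e']x F | e'] yields a reconstruction only up to a projective transformation, as in the thesis before the metric upgrade.

```python
import cv2
import numpy as np

def two_view_reconstruction(img1, img2):
    """Projective two-view reconstruction: SIFT matching, robust F-matrix
    estimation, canonical camera pair, and triangulation (a sketch of the
    pipeline described above; thresholds are assumptions)."""
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(img1, None)
    k2, d2 = sift.detectAndCompute(img2, None)

    # Ratio-test matching of SIFT descriptors.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = [m for m, n in matcher.knnMatch(d1, d2, k=2)
               if m.distance < 0.75 * n.distance]
    x1 = np.float32([k1[m.queryIdx].pt for m in matches])
    x2 = np.float32([k2[m.trainIdx].pt for m in matches])

    # Robust fundamental-matrix estimation (8-point model inside RANSAC).
    F, inliers = cv2.findFundamentalMat(x1, x2, cv2.FM_RANSAC, 1.0, 0.999)
    x1, x2 = x1[inliers.ravel() == 1], x2[inliers.ravel() == 1]

    # Canonical projective cameras: P = [I | 0], P' = [[e']x F | e'],
    # where e' is the epipole in the second view (null vector of F^T).
    e2 = np.linalg.svd(F.T)[2][-1]
    e2x = np.array([[0, -e2[2], e2[1]],
                    [e2[2], 0, -e2[0]],
                    [-e2[1], e2[0], 0]])
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = np.hstack([e2x @ F, e2.reshape(3, 1)])

    # Triangulate the inlier matches and de-homogenise the scene points.
    X = cv2.triangulatePoints(P1, P2, x1.T, x2.T)
    return (X[:3] / X[3]).T, P1, P2
```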

7. Multiview 3D Reconstruction Of A Scene Containing Independently Moving Objects. Tola, Engin. 01 August 2005.
In this thesis, the structure-from-motion problem for calibrated scenes containing independently moving objects (IMOs) has been studied. For this purpose, the overall reconstruction process is partitioned into various stages. The first stage deals with the fundamental problem of estimating structure and motion by using only two views. This process starts with finding salient features using a sub-pixel version of the Harris corner detector. The features are matched with the help of a similarity- and neighborhood-based matcher. In order to reject the outliers and estimate the fundamental matrix of the two images, a robust estimation is performed via the RANSAC and normalized 8-point algorithms. Two-view reconstruction is finalized by decomposing the fundamental matrix and estimating the 3D point locations by triangulation. The second stage of the reconstruction is the generalization of the two-view algorithm to the N-view case. This goal is accomplished by first reconstructing an initial framework from the first stage and then relating the additional views by finding correspondences between the new view and the already reconstructed views. In this way, 3D-2D projection pairs are determined and the projection matrix of this new view is estimated using a robust procedure. The final section deals with scenes containing IMOs. In order to reject the correspondences due to moving objects, the parallax-based rigidity constraint is used. In utilizing this constraint, an automatic background pixel selection algorithm is developed and an IMO rejection algorithm is also proposed. The results of the proposed algorithm are compared against those of a robust outlier rejection algorithm and found to be quite promising in terms of execution time vs. reconstruction quality.
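The N-view step above, registering an additional calibrated view through its 3D-2D correspondences with the already reconstructed structure, is commonly implemented as RANSAC-based camera resectioning. The sketch below uses OpenCV's PnP solver for that purpose; the reprojection threshold and the choice of solvePnPRansac are illustrative assumptions, not necessarily the robust procedure used in the thesis.

```python
import cv2
import numpy as np

def register_new_view(points_3d, points_2d, K):
    """Estimate the projection matrix of an additional calibrated view from
    3D-2D correspondences between already reconstructed scene points and
    their matched pixel positions in the new image (a sketch).

    points_3d: (N, 3) reconstructed scene points
    points_2d: (N, 2) matched pixel positions in the new view
    K:         (3, 3) intrinsic calibration matrix of the new camera
    """
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        points_3d.astype(np.float32), points_2d.astype(np.float32), K, None,
        reprojectionError=2.0, flags=cv2.SOLVEPNP_ITERATIVE)
    if not ok:
        raise RuntimeError("resectioning failed")
    R, _ = cv2.Rodrigues(rvec)               # rotation vector -> 3x3 matrix
    P = K @ np.hstack([R, tvec])             # 3x4 projection matrix
    return P, inliers
```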

8. Navigation and tools in a virtual crime scene. Komulainen, Oscar; Lögdlund, Måns. January 2018.
Revisiting a crime scene is a vital part of investigating a crime. When physically visiting a crime scene there is, however, always a risk of contaminating the scene, and when working on a cold case, chances are that the physical crime scene has been altered. This thesis aims to explore what tools a criminal investigator would need to investigate a crime in a virtual environment, and whether a virtual reconstruction of a crime scene can be used to aid investigators when solving crimes. To explore these questions, an application has been developed in Unreal Engine that uses virtual reality (VR) to investigate a scene reconstructed from data obtained through laser scanning. The result is an application in which the user is located in the court of Stockholm city, which was scanned with a laser scanner by NFC (the Swedish National Forensic Centre) in conjunction with the terror attack on Drottninggatan in April 2017. The user can choose between a set of tools, e.g. a measuring tool and the ability to place certain objects in the scene, in order to draw conclusions about what has happened. User tests with criminal investigators show that this type of application might be of use in some way for the Swedish police. It is, however, not clear how or when this would be possible, which is to be expected since this is a new type of application that has not been used by the police before.

9. Rekonstrukce trestného činu / The Crime Scene Reconstruction. Hesová, Veronika. January 2021.
This diploma thesis deals with the issue of crime scene reconstruction both from the point of view of criminal law and from the point of view of criminalistic science and practice. With the help of reconstruction as a means of evidence, which the Criminal Procedure Code classifies as a special means of proof, the authorities involved in criminal proceedings seek to establish the facts of the case so that no reasonable doubt remains about them. Through reconstruction as a method of criminalistic practice, the factual circumstances under which the investigated crime was committed are recreated. The main goal of this thesis is a detailed analysis of crime scene reconstruction from a criminal-law and forensic point of view, and the result of this analysis is a chapter devoted to considerations de lege ferenda. The secondary goal of the thesis is to compare crime scene reconstruction with selected investigative acts with the help of a comparative method. The thesis is divided into three parts. The first part deals with the comparison of crime scene reconstruction with investigative acts with which it is frequently confused in criminal practice. The first part also outlines the legal regulation of reconstruction in...

10. Sensor Fused Scene Reconstruction and Surface Inspection. Moodie, Daniel Thien-An. 17 April 2014.
Optical three-dimensional (3D) mapping routines are used in inspection robots to detect faults by creating 3D reconstructions of environments. To detect surface faults, sub-millimeter depth resolution is required to determine minute differences caused by coating loss and pitting. Sensors that can detect these small depth differences cannot quickly create contextual maps of large environments.
To solve the 3D mapping problem, a sensor-fused approach is proposed that can gather contextual information about large environments with one depth sensor and a SLAM routine, while local surface defects are measured with an actuated optical profilometer. The depth sensor uses a modified Kinect Fusion to create a contextual map of the environment. A custom actuated optical profilometer is created and then calibrated. The two systems are then registered to each other to place local surface scans from the profilometer into the scene context created by Kinect Fusion.
The resulting system can create a contextual map of large-scale features (0.4 m) with less than 10% error, while the optical profilometer can create surface reconstructions with sub-millimeter resolution. The combination of the two allows for the detection and quantification of surface faults, with the profilometer scans placed in a contextual reconstruction. / Master of Science
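Placing the profilometer scans into the Kinect Fusion scene ultimately requires a rigid transform between the two sensor coordinate frames. As a generic illustration of such a registration step (not the thesis's exact calibration procedure), the SVD-based Kabsch method below estimates that transform from corresponding 3D points, assuming such correspondences are available.

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) with dst ~ R @ src + t, estimated
    from paired 3D points via the SVD-based Kabsch method.

    src, dst: (N, 3) arrays of corresponding points in the two frames.
    """
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                        # guard against reflections
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t
```

Any profilometer point p can then be mapped into the scene context as R @ p + t.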