51 |
3D Modeling of Indoor Environments
Dahlin, Johan January 2013 (has links)
With the aid of modern sensors it is possible to create models of buildings. These sensors typically generate 3D point clouds, and in order to increase interpretability and usability, these point clouds are often translated into 3D models.

In this thesis a way of translating a 3D point cloud into a 3D model is presented. The basic functionality is implemented using Matlab. The geometric model consists of floors, walls and ceilings. In addition, doors and windows are automatically identified and integrated into the model. The resulting model also has an explicit representation of the topology between entities of the model. The topology is represented as a graph, and to do this GraphML is used. The graph is opened in a graph editing program called yEd.

The result is a 3D model that can be plotted in Matlab and a graph describing the connectivity between entities. The GraphML file is automatically generated in Matlab. An interface between Matlab and yEd allows the user to choose which rooms should be plotted.
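The topology export described in this abstract can be sketched in a few lines. The room names and door ids below are invented for illustration, and only Python's standard library is used (the thesis itself generates the GraphML from Matlab):

```python
import xml.etree.ElementTree as ET

def rooms_to_graphml(rooms, doors):
    """Serialize room connectivity as GraphML: rooms become nodes,
    doors (pairs of room ids) become undirected edges."""
    root = ET.Element("graphml", xmlns="http://graphml.graphdrawing.org/xmlns")
    graph = ET.SubElement(root, "graph", id="topology", edgedefault="undirected")
    for room in rooms:
        ET.SubElement(graph, "node", id=room)
    for i, (a, b) in enumerate(doors):
        ET.SubElement(graph, "edge", id=f"d{i}", source=a, target=b)
    return ET.tostring(root, encoding="unicode")

# Hypothetical room adjacency extracted from a building model
xml_text = rooms_to_graphml(["kitchen", "hall", "office"],
                            [("kitchen", "hall"), ("hall", "office")])
```

The resulting file is plain XML, so any GraphML-aware editor such as yEd can open it directly.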
|
52 |
Fast Feature Extraction From 3D Point Cloud
Tarcin, Serkan 01 February 2013 (has links) (PDF)
To teleoperate an unmanned vehicle, a rich set of information should be gathered from the surroundings. These systems use sensors that send high amounts of data, and processing the data in CPUs can be time consuming. Similarly, the algorithms that use the data may work slowly because of the amount of the data. The solution is to preprocess the data taken from the sensors on the vehicle and to transmit only the necessary parts or the results of the preprocessing. In this thesis, a 180-degree laser scanner at the front end of an unmanned ground vehicle (UGV) is tilted up and down on a horizontal axis and point clouds are constructed from the surroundings. Instead of transmitting this data directly to the path planning or obstacle avoidance algorithms, a preprocessing stage has been run. In this preprocessing, first the points belonging to the ground plane are detected and a simplified version of the ground is constructed; then the obstacles are detected. At last, a simplified ground plane as ground and simple primitive geometric shapes as obstacles are sent to the path planning algorithms instead of the whole point cloud.
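The ground-detection step this abstract describes is commonly realised as a RANSAC plane fit. The sketch below, on synthetic data rather than the thesis's scanner data, shows one minimal variant:

```python
import numpy as np

def ransac_ground_plane(points, iters=200, thresh=0.05, rng=None):
    """Find the dominant (ground) plane by RANSAC: repeatedly fit a plane
    to 3 random points and keep the hypothesis with the most inliers."""
    rng = np.random.default_rng(0) if rng is None else rng
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-9:                    # degenerate (collinear) sample
            continue
        n = n / norm
        dist = np.abs((points - p0) @ n)   # point-to-plane distances
        inliers = dist < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers

# Synthetic scene: a flat ground patch plus a box-shaped obstacle above it
rng = np.random.default_rng(1)
ground = np.column_stack([rng.uniform(-5, 5, 400), rng.uniform(-5, 5, 400),
                          rng.normal(0, 0.01, 400)])
box = np.column_stack([rng.uniform(1, 2, 60), rng.uniform(1, 2, 60),
                       rng.uniform(0.5, 1.5, 60)])
pts = np.vstack([ground, box])
mask = ransac_ground_plane(pts)            # True for ground points
```

The non-ground points can then be summarised as primitive shapes (e.g. bounding boxes) before transmission, as the abstract outlines.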
|
53 |
Scan Registration Using the Normal Distributions Transform and Point Cloud Clustering Techniques
Das, Arun January 2013 (has links)
As the capabilities of autonomous vehicles increase, their use in situations that are dangerous or dull for humans is becoming more popular. Autonomous systems are currently being used in several military and civilian domains, including search and rescue operations, disaster relief coordination, infrastructure inspection and surveillance missions. In order to perform high level mission autonomy tasks, a method is required for the vehicle to localize itself, as well as generate a map of the environment. Algorithms which allow the vehicle to concurrently localize and create a map of its surroundings are known as solutions to the Simultaneous Localization and Mapping (SLAM) problem. Certain high level tasks, such as drivability analysis and obstacle avoidance, benefit from the use of a dense map of the environment, which is typically generated with the use of point cloud data. The point cloud data is incorporated into SLAM algorithms with scan registration techniques, which determine the relative transformation between two sufficiently overlapping point clouds. The Normal Distributions Transform (NDT) algorithm is a promising method for scan registration; however, many issues with the NDT approach exist, including a poor convergence basin, discontinuities in the NDT cost function, and unreliable pose estimation in sparse, outdoor environments.
This thesis presents methods to overcome the shortcomings of the NDT algorithm, in both 2D and 3D scenarios. To improve the convergence basin of NDT for 2D scan registration, the Multi-Scale k-Means NDT (MSKM-NDT) algorithm is presented, which divides a 2D point cloud using k-means clustering and performs the scan registration optimization over multiple scales of clustering. The k-means clustering approach generates fewer Gaussian distributions when compared to the standard NDT algorithm, allowing for evaluation of the cost function across all Gaussian clusters. Cost evaluation across all the clusters guarantees that the optimization will converge, as it resolves the issue of discontinuities in the cost function found in the standard NDT algorithm. Experiments demonstrate that the MSKM-NDT approach can be used to register partially overlapping scans with large initial transformation error, and that the convergence basin of MSKM-NDT is superior to NDT for the same test data.
As k-means clustering does not scale well to 3D, the Segmented Greedy Cluster NDT (SGC-NDT) method is proposed as an alternative approach to improve and guarantee convergence using 3D point clouds that contain points corresponding to the ground of the environment. The SGC-NDT algorithm segments the ground points using a Gaussian Process (GP) regression model and performs clustering of the non-ground points using a greedy method. The greedy clustering extracts natural features in the environment and generates Gaussian clusters to be used within the NDT framework for scan registration. Segmentation of the ground plane and generation of the Gaussian distributions using natural features results in fewer Gaussian distributions when compared to the standard NDT algorithm. Similar to MSKM-NDT, the cost function can be evaluated across all the clusters in the scan, resulting in a smooth and continuous cost function that guarantees convergence of the optimization. Experiments demonstrate that the SGC-NDT algorithm results in scan registrations with higher accuracy and better convergence properties than other state-of-the-art methods for both urban and forested environments.
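The core NDT idea this abstract builds on (summarising a reference scan as Gaussian clusters and scoring every point of a new scan against all of them, which keeps the cost smooth) can be sketched as follows. This is a generic illustration, not the thesis's MSKM-NDT or SGC-NDT implementation:

```python
import numpy as np

def clusters_to_gaussians(points, labels):
    """Summarize each cluster of points as a Gaussian (mean, inverse covariance)."""
    gaussians = []
    for k in np.unique(labels):
        cluster = points[labels == k]
        mu = cluster.mean(axis=0)
        cov = np.cov(cluster.T) + 1e-6 * np.eye(points.shape[1])  # regularize
        gaussians.append((mu, np.linalg.inv(cov)))
    return gaussians

def ndt_score(points, gaussians):
    """NDT-style fitness: each point contributes exp(-0.5 d^T Sigma^-1 d)
    summed over every Gaussian cluster, so the cost has no discontinuities."""
    total = 0.0
    for x in points:
        for mu, cov_inv in gaussians:
            d = x - mu
            total += np.exp(-0.5 * d @ cov_inv @ d)
    return total

# Two well-separated 2D clusters as the reference scan
rng = np.random.default_rng(0)
ref = np.vstack([rng.normal([0, 0], 0.2, (50, 2)),
                 rng.normal([5, 0], 0.2, (50, 2))])
labels = np.repeat([0, 1], 50)
gs = clusters_to_gaussians(ref, labels)
aligned = ndt_score(ref, gs)               # score of a perfectly aligned scan
shifted = ndt_score(ref + [1.0, 0.0], gs)  # score after a 1 m offset
```

A registration optimizer would search for the transformation maximising this score; the aligned pose scores strictly higher than the shifted one.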
|
54 |
Estimating the Intrinsic Dimension of High-Dimensional Data Sets: A Multiscale, Geometric Approach
Little, Anna Victoria January 2011 (has links)
This work deals with the problem of estimating the intrinsic dimension of noisy, high-dimensional point clouds. A general class of sets which are locally well-approximated by k-dimensional planes but which are embedded in a D >> k dimensional Euclidean space is considered. Assuming one has samples from such a set, possibly corrupted by high-dimensional noise, if the data is linear the dimension can be recovered using PCA. However, when the data is non-linear, PCA fails, overestimating the intrinsic dimension. A multiscale version of PCA is thus introduced which is robust to small sample size, noise, and non-linearities in the data. / Dissertation
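A minimal multiscale-PCA dimension estimate in the spirit of this abstract (a generic sketch, not the author's estimator): run PCA inside balls of growing radius and count how many components are needed to explain most of the variance. At small radii a noisy curve looks one-dimensional; at radii comparable to its curvature, plain PCA reports two dimensions.

```python
import numpy as np

def local_dimension(data, center, radii, energy=0.9):
    """Estimate intrinsic dimension at `center` at several scales: for each
    radius, PCA the neighborhood and count components needed to keep
    `energy` of the variance."""
    estimates = []
    for r in radii:
        nbrs = data[np.linalg.norm(data - center, axis=1) < r]
        if len(nbrs) < 10:
            continue
        _, s, _ = np.linalg.svd(nbrs - nbrs.mean(axis=0), full_matrices=False)
        frac = s**2 / (s**2).sum()
        estimates.append(int(np.searchsorted(np.cumsum(frac), energy)) + 1)
    return estimates

# A noisy circle (a 1-D manifold) embedded in D = 10 dimensions
rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 2000)
circle = np.zeros((2000, 10))
circle[:, 0], circle[:, 1] = np.cos(theta), np.sin(theta)
data = circle + rng.normal(0, 0.005, circle.shape)
center = np.eye(10)[0]                       # the point (1, 0, ..., 0)
dims = local_dimension(data, center, radii=[0.2, 2.5])  # small vs large scale
```

At the small scale the estimate is 1 (the true intrinsic dimension); at the large scale curvature inflates it to 2, which is exactly the failure mode multiscale analysis is designed to expose.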
|
55 |
Inverse geometry : from the raw point cloud to the 3D surface : theory and algorithms
Digne, Julie 23 November 2010 (has links) (PDF)
Many laser devices directly acquire 3D objects and reconstruct their surface. Nevertheless, the final reconstructed surface is usually smoothed out as a result of the scanner's internal de-noising process and the offsets between different scans. This thesis, working on results from high-precision scans, adopts the somewhat extreme conservative position not to lose or alter any raw sample throughout the whole processing pipeline, and to attempt to visualize them. Indeed, it is the only way to discover all surface imperfections (holes, offsets). Furthermore, since high-precision data can capture the slightest surface variation, any smoothing and any sub-sampling can incur the loss of textural detail.

The thesis attempts to prove that one can triangulate the raw point cloud with almost no sample loss. It solves the exact visualization problem on large data sets of up to 35 million points made of 300 different scan sweeps and more. Two major problems are addressed. The first one is the orientation of the complete raw point set and the building of a high-precision mesh. The second one is the correction of the tiny scan misalignments which can cause strong high-frequency aliasing and completely hamper a direct visualization.

The second development of the thesis is a general low-high frequency decomposition algorithm for any point cloud. Thus classic image analysis tools, the level-set tree and the MSER representations, are extended to meshes, yielding an intrinsic mesh segmentation method.

The underlying mathematical development focuses on an analysis of a half dozen discrete differential operators acting on raw point clouds which have been proposed in the literature. By considering the asymptotic behavior of these operators on a smooth surface, a classification by their underlying curvature operators is obtained. This analysis leads to the development of a discrete operator consistent with the mean curvature motion (the intrinsic heat equation), defining a remarkably simple and robust numerical scale space. With this scale space, all of the above-mentioned problems (point set orientation, raw point set triangulation, scan merging, segmentation), usually addressed by separate techniques, are solved in a unified framework.
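One step of a mean-curvature-motion-style scale space can be illustrated by projecting each point onto the least-squares plane of its neighbours, which smooths high-frequency noise first. This is a simplified 2-D sketch with invented parameters, not the thesis's operator:

```python
import numpy as np

def mcm_step(points, radius):
    """One smoothing step: project every point onto the least-squares plane
    (here, in 2-D, a line) fitted to its neighbourhood."""
    out = points.copy()
    for i, p in enumerate(points):
        nbrs = points[np.linalg.norm(points - p, axis=1) < radius]
        if len(nbrs) < 3:
            continue
        bary = nbrs.mean(axis=0)
        # normal = least significant principal direction of the neighbourhood
        _, _, vt = np.linalg.svd(nbrs - bary, full_matrices=False)
        n = vt[-1]
        out[i] = p - ((p - bary) @ n) * n   # project onto the local plane
    return out

# Noisy samples of a unit circle: radial noise should shrink after one step
rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 500)
noisy = np.column_stack([np.cos(theta), np.sin(theta)])
noisy *= (1 + rng.normal(0, 0.05, 500))[:, None]
smoothed = mcm_step(noisy, radius=0.3)
```

Iterating this step yields a scale space in which the smoothed position minus the original position approximates the curvature normal, which is the quantity the thesis's unified framework exploits.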
|
56 |
Reconstruction of 3D object's surface image using linear beam / Erdvinio objekto paviršiaus atvaizdo rekonstravimas apšviečiant linijiniu šviesos pluoštu
Matiukas, Vilius 15 February 2012 (has links)
This dissertation investigates issues relevant to the virtualization of a real 3D object – that is, producing a model of the object from its image data and then visualizing this model as an image seen on the computer screen. The object of investigation is methods and algorithms for reconstructing, in electronic form, a complex geometric shape from a number of unorganised point sets. The unorganised point sets are obtained by scanning the 3D object from different points of view. A complex object is chosen so that it cannot be described by a simple mathematical expression. To attain the aim, the following tasks were put forward: developing a model for a source of a linear beam (scanner) and generating an unorganised point set that approximates the object scanned; filtering the unorganised point set; and aggregating the unorganised point sets to build an entire image of the object and reconstruct its surface. The aim of this work was to reconstruct the surface image of a 3D object using a linear beam. This aim was sought by modifying existing methods or proposing new ones, and by evaluating the accuracy of the reconstruction using statistical techniques. The work consists of the general characteristic, four chapters, conclusions, a list of literature and a list of publications. The first section reviews the human visual system, computer vision and three-dimensional imaging technologies. The second section addresses the problem of extracting the linear beam's centreline in 2D... [to full text] / This dissertation examines the virtualization of a real three-dimensional object, i.e., the creation of a model of the object's surface from a set of scanned images, followed by the visualization of this model as an image on the computer monitor screen. The object of research is methods and algorithms for reconstructing, in electronic form, the surface of a spatial object of complex geometric shape from several unstructured point sets obtained by scanning the object from different viewing directions. A complex object shape was chosen so that it could not be conveyed by a simple mathematical expression. The virtualization process of a spatial object can be divided into four stages: forming the unstructured point sets, filtering, merging and reconstruction. In the first stage, the object is scanned from different viewing directions using optical sensors and a contact reading method, yielding several unstructured point sets of the object's surface. In the second stage, points caused by optical distortions, reflections and shadowed regions are removed. In the third stage, the individual point sets are merged into a whole. The final stage produces a mesh approximating the surface of the spatial object. The main aim of this work was to reconstruct the surface image of a spatial object by illuminating the object with a linear light beam. This aim was pursued by improving existing methods or creating new ones, as well as by evaluating the reconstruction accuracy with statistical methods. The dissertation... [see full text]
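The filtering stage described above (removing points caused by distortions, reflections and shadowed regions) is often realised as a statistical outlier filter on nearest-neighbour distances. The following is a generic sketch with invented thresholds, not the dissertation's method:

```python
import numpy as np

def knn_outlier_filter(points, k=8, std_ratio=2.0):
    """Drop points whose mean distance to their k nearest neighbours is more
    than `std_ratio` standard deviations above the cloud-wide average."""
    # Full pairwise distance matrix (fine for small clouds; use a k-d tree
    # for large ones)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    d.sort(axis=1)
    mean_knn = d[:, 1:k + 1].mean(axis=1)   # skip d[:, 0], the self-distance
    keep = mean_knn < mean_knn.mean() + std_ratio * mean_knn.std()
    return points[keep], keep

# Dense "surface" cluster plus a few isolated artifact points far away
rng = np.random.default_rng(0)
surface = rng.normal(0, 0.1, (200, 3))
spikes = rng.normal(0, 0.1, (5, 3)) + 5.0
cloud = np.vstack([surface, spikes])
filtered, keep = knn_outlier_filter(cloud)
```

The merged, filtered point sets are then passed to the surface reconstruction stage.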
|
58 |
Feature Based Learning for Point Cloud Labeling and Grasp Point Detection
Olsson, Fredrik January 2018 (has links)
Robotic bin picking is the problem of emptying a bin of randomly distributed objects through a robotic interface. This thesis examines an SVM approach to extract grasping points for a vacuum-type gripper. The SVM is trained on synthetic data and used to classify the points of a non-synthetic 3D-scanned point cloud as either graspable or non-graspable. The classified points are then clustered into graspable regions from which the grasping points are extracted.

The SVM models and the algorithm as a whole are trained and evaluated against cubic and cylindrical objects. Separate SVM models are trained for each type of object, in addition to one model being trained on a dataset containing both types of objects. It is shown that the performance of the SVM in terms of accuracy is dependent on the objects and their geometrical properties. Further, it is shown that the algorithm is reasonably robust in terms of successfully picking objects, regardless of the scale of the objects.
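A linear SVM of the kind this abstract describes can be trained with Pegasos-style sub-gradient descent. The two "grasp features" (surface flatness, distance to the nearest edge) and the labels below are invented for illustration and do not come from the thesis:

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=200, rng=None):
    """Minimal linear SVM via Pegasos-style sub-gradient descent.
    y must be in {-1, +1}; returns weights w and bias b."""
    rng = np.random.default_rng(0) if rng is None else rng
    w, b, t = np.zeros(X.shape[1]), 0.0, 0
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            t += 1
            eta = 1.0 / (lam * t)
            if y[i] * (X[i] @ w + b) < 1:        # hinge-loss violation
                w = (1 - eta * lam) * w + eta * y[i] * X[i]
                b += eta * y[i]
            else:
                w = (1 - eta * lam) * w          # shrink only
    return w, b

# Synthetic features: flat points far from edges are graspable (+1)
# for a vacuum gripper; curved points near edges are not (-1).
rng = np.random.default_rng(1)
graspable = rng.normal([0.9, 0.8], 0.05, (100, 2))
ungraspable = rng.normal([0.3, 0.2], 0.05, (100, 2))
X = np.vstack([graspable, ungraspable])
y = np.r_[np.ones(100), -np.ones(100)]
w, b = train_linear_svm(X, y)
pred = np.sign(X @ w + b)
```

Classified points would then be clustered into graspable regions, as the abstract outlines; the thesis itself likely uses a kernel SVM rather than this linear sketch.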
|
59 |
Automatické spojování mračen bodů / Automatic Point Clouds Merging
Hörner, Jiří January 2018 (links)
Multi-robot systems are an established research area with a growing number of applications. Efficient coordination in such systems usually requires knowledge of robot positions and the global map. This work presents a novel map-merging algorithm for merging 3D point cloud maps in multi-robot systems, which produces the global map and estimates robot positions. The algorithm is based on feature-matching transformation estimation with a novel descriptor matching scheme and works solely on point cloud maps without any additional auxiliary information. The algorithm can work with different SLAM approaches and sensor types and it is applicable in heterogeneous multi-robot systems. The map-merging algorithm has been evaluated on real-world datasets captured by both aerial and ground-based robots with a variety of stereo rig cameras and active RGB-D cameras. It has been evaluated in both indoor and outdoor environments. The proposed algorithm was implemented as a ROS package and it is currently distributed in the ROS distribution. To the best of my knowledge, it is the first ROS package for map-merging of 3D maps.
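Feature-matching transformation estimation, as mentioned in this abstract, typically ends with a least-squares rigid transform computed from matched point pairs. The SVD-based Kabsch solution sketched below is a standard building block of such pipelines, not necessarily the package's exact implementation:

```python
import numpy as np

def estimate_rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) with dst ≈ src @ R.T + t,
    computed from matched point pairs via the Kabsch/Umeyama SVD solution."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)        # cross-covariance of the pairs
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1, 1, d]) @ U.T
    t = mu_d - R @ mu_s
    return R, t

# Matched "feature" pairs related by a known rotation about z plus translation
rng = np.random.default_rng(0)
src = rng.normal(0, 1, (30, 3))
angle = 0.4
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1.0]])
t_true = np.array([2.0, -1.0, 0.5])
dst = src @ R_true.T + t_true
R, t = estimate_rigid_transform(src, dst)    # recovers R_true, t_true
```

In a real merging pipeline the pairs come from descriptor matching and the estimate is wrapped in an outlier-rejection loop such as RANSAC.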
|
60 |
Least-Squares Fit For Points Measured Along Line-Profiles Formed From Line And Arc Segments
January 2013 (has links)
abstract: Tolerances on line profiles are used to control cross-sectional shapes of parts, such as turbine blades. A full life cycle for many mechanical devices depends (i) on a wise assignment of tolerances during design and (ii) on careful quality control of the manufacturing process to ensure adherence to the specified tolerances. This thesis describes a new method for quality control of a manufacturing process by improving the method used to convert measured points on a part to a geometric entity that can be compared directly with tolerance specifications. The focus of this thesis is the development of a new computational method for obtaining the least-squares fit of a set of points that have been measured with a coordinate measurement machine along a line-profile. The pseudo-inverse of a rectangular matrix is used to convert the measured points to the least-squares fit of the profile. Numerical examples are included for convex and concave line-profiles that are formed from line- and circular arc-segments. / Dissertation/Thesis / M.S. Mechanical Engineering 2013
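The pseudo-inverse least-squares fit described in this abstract can be illustrated on the arc-segment case: an algebraic circle fit is linear in its parameters, so the pseudo-inverse of a rectangular design matrix gives the solution directly. The data below are synthetic, not the thesis's measured points:

```python
import numpy as np

def fit_circle_pinv(pts):
    """Least-squares circle through measured 2-D points via the pseudo-inverse.
    Writing x^2 + y^2 + D x + E y + F = 0 makes the fit linear in (D, E, F)."""
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([x, y, np.ones(len(pts))])   # rectangular design matrix
    b = -(x**2 + y**2)
    D, E, F = np.linalg.pinv(A) @ b                  # least-squares solution
    cx, cy = -D / 2, -E / 2
    r = np.sqrt(cx**2 + cy**2 - F)
    return (cx, cy), r

# Noisy points measured along an arc of a circle centred at (2, 1), radius 3
rng = np.random.default_rng(0)
theta = rng.uniform(0, np.pi, 50)
pts = np.column_stack([2 + 3 * np.cos(theta), 1 + 3 * np.sin(theta)])
pts += rng.normal(0, 0.01, pts.shape)
(cx, cy), r = fit_circle_pinv(pts)
```

Line segments fit the same mould with the design matrix [x, 1] (or [1, y]), so a composite line-and-arc profile reduces to a sequence of such pseudo-inverse solves.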
|