21 |
Automatic Registration of Point Clouds Acquired by a Sweeping Single-Pixel TCSPC Lidar System. Mejerfalk, Mattias, January 2017 (has links)
This project investigates an image registration process based on a method known as K-4PCS. The process was applied to a set of 16 long-range lidar scans acquired at different positions by a single-pixel TCSPC (Time-Correlated Single-Photon Counting) lidar system. By merging these lidar scans after transforming them with proper scan alignments, one can obtain clear information about obscured surfaces. Using all available data, the investigated method provided adequate alignments for all lidar scans. The data in each lidar scan was subsampled, and a subsampling ratio of 50% proved sufficient to construct sparse, representative point clouds that, when subjected to the registration process, yielded adequate alignments. This corresponds to approximately 9 million collected photon detections per scan position. Lower subsampling ratios failed to generate representative point clouds usable in the image registration process, and large alignment errors followed, especially in the horizontal and elevation angles. The computation time for matching one scan pair was, on average, approximately 120 s at a subsampling ratio of 100% and 95 s at a ratio of 50%. To summarise, the investigated method can be used to register lidar scans acquired by a lidar system using TCSPC principles, and with proper equipment and code implementation one could potentially acquire 3D images of a measurement area every second, albeit at a delay depending on the efficiency of the lidar data processing.
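The abstract above mentions two building blocks that are easy to sketch: random subsampling at a fixed ratio, and recovering a rigid alignment between scans. The sketch below is not the K-4PCS algorithm itself (K-4PCS matches keypoint quadruples to *find* correspondences); it only shows the standard Kabsch/SVD solution for the rigid transform once correspondences are known, which is the final step any such registration relies on. All names are illustrative.

```python
import numpy as np

def subsample(points, ratio, rng):
    """Randomly keep a fraction of the points (cf. the 50% ratio above)."""
    n = max(1, int(len(points) * ratio))
    idx = rng.choice(len(points), size=n, replace=False)
    return points[idx]

def kabsch_align(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst.

    Assumes src[i] corresponds to dst[i]; K-4PCS itself establishes such
    correspondences from keypoint quadruples, which is omitted here.
    """
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t
```

Given a known rotation and translation, `kabsch_align` recovers them to numerical precision, which is the sense in which an "adequate alignment" can be verified on synthetic data.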
|
22 |
Motion Segmentation for Autonomous Robots Using 3D Point Cloud Data. Kulkarni, Amey S., 13 May 2020 (has links)
Achieving robot autonomy is an extremely challenging task, and it starts with developing algorithms that help the robot understand how humans perceive the environment around them. Once the robot understands how to make sense of its environment, it is easy to make efficient decisions about safe movement. It is hard for robots to perform tasks that come naturally to humans, like understanding signboards, classifying traffic lights, and planning paths around dynamic obstacles. In this work, we take up one such challenge: motion segmentation using Light Detection and Ranging (LiDAR) point clouds. Motion segmentation is the task of classifying a point as either moving or static. As the ego-vehicle moves along the road, it needs to detect moving cars with very high certainty, as they are the areas of interest that provide cues for the ego-vehicle to plan its motion. Motion segmentation algorithms segregate moving cars from static cars to give more importance to dynamic obstacles. In contrast to the usual LiDAR scan representations, such as range images and regular grids, this work uses a modern representation of LiDAR scans based on permutohedral lattices. This representation makes it easy to organize unstructured LiDAR points in an efficient lattice structure. We propose a machine learning approach to perform motion segmentation. The network architecture takes in two sequential point clouds and performs convolutions on them to estimate whether 3D points from the first point cloud are moving or static. Using two temporal point clouds helps the network learn what features constitute motion. We have trained and tested our learning algorithm on the FlyingThings3D dataset and a modified KITTI dataset with simulated motion.
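The thesis above learns motion labels with a lattice network, but the task itself, labelling each point of scan t as moving or static given scan t+1, can be illustrated with a crude non-learned baseline: a point counts as "moving" if its nearest neighbour in the next scan is farther than a displacement threshold. The `threshold` value here is a hypothetical tuning parameter, not a number from the thesis.

```python
import numpy as np
from scipy.spatial import cKDTree

def motion_labels(cloud_t0, cloud_t1, threshold=0.2):
    """Label each point of cloud_t0 as moving (True) or static (False).

    A non-learned stand-in for the network described above: a point is
    'moving' if its nearest neighbour in the next scan lies farther away
    than `threshold` (an assumed parameter). Real scans need ego-motion
    compensation first, which is omitted here.
    """
    tree = cKDTree(cloud_t1)
    dist, _ = tree.query(cloud_t0)
    return dist > threshold
```

A learned model replaces this hand-set threshold with features extracted from both clouds jointly, which is precisely what the two-scan network input enables.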
|
23 |
Využití laserového skenování v informačním modelování budov / Laser scanning in building information modelling. Magda, Jakub, January 2020 (has links)
This thesis deals with creating a BIM model using laser scanning. It includes information about laser scanning, BIM, and the process of modelling. The result of the thesis is an information model created in the software Revit.
|
24 |
Segmentace 2D Point-cloudu pro proložení křivkami / 2D Point-cloud segmentation for curve fitting. Šooš, Marek, January 2021 (has links)
The presented diploma thesis deals with the division of points into homogeneous groups. The work provides a broad overview of the current state of this topic and a brief explanation of the principles of the main segmentation methods. Five algorithms are selected from the analysed articles and implemented. The work defines the principles of the selected algorithms and explains their mathematical models; a description of the code design is also given for each algorithm. The thesis further contains a cross comparison of the segmentation capabilities of the individual algorithms on both synthetic and measured data. The results of the curve extraction are compared graphically and numerically. The work concludes with a graph of computation time versus the number of points and a table comparing the algorithms across specific areas.
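The abstract does not name the five algorithms it compares, so no specific one is reproduced here. As a representative of the family, a minimal RANSAC line extractor is sketched below: repeatedly finding the dominant line and removing its inliers is one common way to split a 2D point cloud into groups before curve fitting. All parameter values are illustrative.

```python
import numpy as np

def ransac_line(points, n_iters=200, tol=0.05, rng=None):
    """Return the inlier mask of the dominant line in a 2D point set.

    Classic RANSAC: sample two points, form the line through them, count
    points within `tol` of it, and keep the best hypothesis. `n_iters`
    and `tol` are assumed tuning parameters.
    """
    if rng is None:
        rng = np.random.default_rng()
    best_mask, best_count = None, -1
    for _ in range(n_iters):
        i, j = rng.choice(len(points), size=2, replace=False)
        p, q = points[i], points[j]
        d = q - p
        norm = np.linalg.norm(d)
        if norm < 1e-12:
            continue                                # degenerate sample
        n = np.array([-d[1], d[0]]) / norm          # unit normal of the line
        dist = np.abs((points - p) @ n)             # point-to-line distances
        mask = dist < tol
        if mask.sum() > best_count:
            best_count, best_mask = mask.sum(), mask
    return best_mask
```

Extracting one curve segment per pass, then re-running on the leftover points, yields the kind of point-group division the thesis evaluates.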
|
25 |
Segmentace a klasifikace LIDAR dat / Segmentation and classification of LIDAR data. Dušek, Dominik, January 2020 (has links)
The goal of this work was to design fast and simple methods for processing point cloud data of urban areas for virtual reality applications. To visualize the methods, we developed a simple renderer written in C++ and HLSL, based on DirectX 11. For point cloud processing, we designed a method based on height histograms for filtering ground points out of the point cloud. We also proposed a parallel method for point cloud segmentation based on the region-growing algorithm. The individual segments are then tested against simple rules to check whether they correspond to a predefined object.
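The height-histogram idea above can be sketched in a few lines. This is a simplified global version under an assumption the thesis does not state explicitly: that the densest height bin is the ground level. Real urban scans typically need per-cell histograms to cope with sloped terrain; the `bin_size` value is illustrative.

```python
import numpy as np

def filter_ground(points, bin_size=0.2):
    """Drop ground points using a height histogram (simplified sketch).

    The most populated bin of the z-histogram is assumed to be the
    ground plane; every point inside or below that bin is removed.
    """
    z = points[:, 2]
    edges = np.arange(z.min(), z.max() + bin_size, bin_size)
    hist, _ = np.histogram(z, bins=edges)
    ground_top = edges[np.argmax(hist) + 1]   # upper edge of densest bin
    return points[z > ground_top]
```

On flat scenes this keeps only above-ground structure, which is exactly the input the subsequent region-growing segmentation wants.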
|
26 |
Compositional and Low-shot Understanding of 3D Objects. Li, Yuchen, 12 April 2022 (has links)
Despite the significant progress in 3D vision in recent years, collecting large amounts of high-quality 3D data remains a challenge. Hence, developing solutions to extract 3D object information efficiently is a significant problem. We aim for an effective shape classification algorithm to facilitate accurate recognition and efficient search of sizeable 3D model databases. This thesis makes two contributions in this space: a) a novel meta-learning approach for 3D object recognition, and b) a new compositional 3D recognition task and dataset. For 3D recognition, we propose a few-shot semi-supervised meta-learning model based on a PointNet++ representation with a prototypical random walk loss. In particular, we develop a random walk semi-supervised loss that enables fast learning from a few labeled examples by enforcing global consistency over the data manifold and magnetizing unlabeled points around their class prototypes. On the compositional recognition front, we create a large-scale, richly annotated stylized dataset called 3D CoMPaT. This large dataset primarily focuses on stylizing 3D shapes at part level with compatible materials. We introduce Grounded CoMPaT Recognition as the task of collectively recognizing and grounding compositions of materials on parts of 3D objects.
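The "class prototype" notion at the core of the loss above is simple to illustrate. The sketch below shows only the nearest-prototype classification step common to prototypical networks, not the thesis's random walk term (which additionally propagates label information through unlabeled points), and all names are illustrative.

```python
import numpy as np

def prototypes(features, labels):
    """Mean embedding per class: the 'prototype' each class is pulled toward."""
    classes = np.unique(labels)
    protos = np.stack([features[labels == c].mean(axis=0) for c in classes])
    return classes, protos

def classify(features, classes, protos):
    """Assign each embedding to the class of its nearest prototype."""
    d = np.linalg.norm(features[:, None, :] - protos[None, :, :], axis=-1)
    return classes[d.argmin(axis=1)]
```

In the few-shot setting, `features` would come from the PointNet++ encoder and only a handful of labeled shapes per class define each prototype; the semi-supervised loss then sharpens the embedding so that unlabeled shapes cluster around the right prototype.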
|
27 |
Leveraging Graph Convolutional Networks for Point Cloud Upsampling. Qian, Guocheng, 16 November 2020 (links)
Due to hardware limitations, 3D sensors like LiDAR often produce sparse and noisy point clouds. Point cloud upsampling is the task of converting such point clouds into dense and clean ones. This thesis tackles the problem of point cloud upsampling using deep neural networks. The effectiveness of a point cloud upsampling neural network heavily relies on the upsampling module and the feature extractor used therein. In this thesis, I propose a novel point upsampling module, called NodeShuffle. NodeShuffle leverages Graph Convolutional Networks (GCNs) to better encode local point information from point neighborhoods. NodeShuffle is versatile and can be incorporated into any point cloud upsampling pipeline. Extensive experiments show how NodeShuffle consistently improves the performance of previous upsampling methods. I also propose a new GCN-based multi-scale feature extractor, called Inception DenseGCN. By aggregating features at multiple scales, Inception DenseGCN learns a hierarchical feature representation and enables further performance gains. I combine Inception DenseGCN with NodeShuffle into the proposed point cloud upsampling network, called PU-GCN. PU-GCN sets new state-of-the-art performance with far fewer parameters and more efficient inference.
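To make the upsampling task concrete, here is a deliberately naive non-learned baseline, not NodeShuffle: it doubles a cloud by inserting the midpoint between each point and its nearest neighbour. NodeShuffle instead expands *feature* channels with graph convolutions and reshuffles them into new points, so the offsets are learned rather than purely geometric.

```python
import numpy as np
from scipy.spatial import cKDTree

def upsample_midpoints(points):
    """Double a point cloud via nearest-neighbour midpoint insertion.

    A geometric baseline for intuition only; learned modules like
    NodeShuffle produce offsets from local features instead.
    """
    tree = cKDTree(points)
    _, idx = tree.query(points, k=2)          # column 0 is the point itself
    mids = (points + points[idx[:, 1]]) / 2.0
    return np.vstack([points, mids])
```

Comparing a learned upsampler against this kind of baseline makes it easy to see what the network contributes beyond simple interpolation.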
|
28 |
Segmentation on point cloud data through a difference of normals approach combined with a statistical filter. Fahlstedt, Elof, January 2022 (has links)
This study investigates how a statistical filter affects the quality of point cloud segmentation using a Difference of Normals (DoN) multiscale segmentation approach. A system combining DoN segmentation with a statistical filter was implemented with the help of the open-source Point Cloud Library (PCL) and evaluated on a publicly available dataset containing large point clouds with labeled ground truth objects. The results show that filtering a small number of points improves segmentation quality, whereas filtering a large number of points decreases it. In conclusion, the statistical filter can be combined with DoN segmentation to achieve segmentations of high quality; however, carelessly selected thresholds for the statistical filter decrease segmentation quality drastically.
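The statistical filter referred to above is, in PCL, the StatisticalOutlierRemoval filter: drop points whose mean distance to their k nearest neighbours exceeds the global mean by some number of standard deviations. A numpy sketch of that idea follows; the `(k, std_ratio)` values are exactly the kind of thresholds the abstract warns must be chosen carefully, and the ones used here are illustrative, not the thesis's.

```python
import numpy as np
from scipy.spatial import cKDTree

def statistical_filter(points, k=8, std_ratio=1.0):
    """Statistical outlier removal in the spirit of PCL's
    StatisticalOutlierRemoval (sketch, assumed parameters).

    A point survives if its mean distance to its k nearest neighbours
    is within std_ratio standard deviations of the global mean.
    """
    tree = cKDTree(points)
    dists, _ = tree.query(points, k=k + 1)    # column 0 is the point itself
    mean_d = dists[:, 1:].mean(axis=1)
    cutoff = mean_d.mean() + std_ratio * mean_d.std()
    return points[mean_d <= cutoff]
```

Run before DoN segmentation, this removes isolated noise points; set `std_ratio` too low, though, and it starts eating into genuine surfaces, which matches the degradation the study reports for aggressive filtering.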
|
29 |
Towards Scalable Deep 3D Perception and Generation. Qian, Guocheng, 11 October 2023 (has links)
Scaling up 3D deep learning systems emerges as a paramount issue, comprising two primary facets. (1) Model scalability: designing a 3D network that is scale-friendly, i.e., one whose performance improves with increasing parameters and that can run efficiently. Unlike 2D convolutional networks, 3D networks have to accommodate the irregularities of 3D data, such as respecting permutation invariance in point clouds. (2) Data scalability: high-quality 3D data is conspicuously scarce. 3D data acquisition and annotation are both complex and costly, hampering the development of scalable 3D deep learning.
This dissertation delves into 3D deep learning including both perception and generation, addressing the scalability challenges. To address model scalability in 3D perception, I introduce ASSANet which outlines an approach for efficient 3D point cloud representation learning, allowing the model to scale up with a low cost of computation, and notably achieving substantial accuracy gains. I further introduce the PointNeXt framework, focusing on data augmentation and scalability of the architecture, that outperforms state-of-the-art 3D point cloud perception networks. To address data scalability, I present Pix4Point which explores the utilization of abundant 2D images to enhance 3D understanding. For scalable 3D generation, I propose Magic123 which leverages a joint 2D and 3D diffusion prior for zero-shot image-to-3D content generation without the necessity of 3D supervision. These collective efforts provide pivotal solutions to model and data scalability in 3D deep learning.
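One concrete irregularity mentioned above is that point clouds are unordered sets, so PointNet++-style encoders (the family ASSANet and PointNeXt build on) downsample with order-insensitive farthest point sampling rather than strided convolution. A minimal sketch of that sampling step, under the standard formulation rather than any dissertation-specific variant:

```python
import numpy as np

def farthest_point_sampling(points, m, start=0):
    """Greedily pick m points that maximize mutual spread.

    Each step adds the point farthest from everything chosen so far, the
    standard downsampling step in PointNet++-style encoders. `start` is
    the (arbitrary) seed index.
    """
    idx = [start]
    d = np.linalg.norm(points - points[start], axis=1)
    for _ in range(m - 1):
        nxt = int(d.argmax())                 # farthest from the chosen set
        idx.append(nxt)
        d = np.minimum(d, np.linalg.norm(points - points[nxt], axis=1))
    return np.array(idx)
```

The result covers the shape evenly regardless of how the input points are ordered, which is the permutation-robust behaviour a scale-friendly 3D encoder needs at every downsampling stage.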
|
30 |
Generative adversarial network for point cloud upsampling. Widell Delgado, Edison, January 2024 (has links)
Point clouds are a widely used system for the collection and application of 3D data. But most times the data gathered is too scarce to be used reliably in any application. Therefore this thesis presents a GAN-based upsampling method within a patch-based approach, together with a GCN-based feature extractor, in an attempt to enhance the density and reliability of point cloud data. Our approach is rigorously compared with existing methods to assess its performance. The thesis also draws correlations between input sizes and how the quality of the inputs affects the upsampled result. The GAN is also applied to real-world data to assess the viability of its current state, and to test how it is affected by the interference that occurs in an unsupervised scenario.
|