1

Automatic processing of LiDAR point cloud data captured by drones / Automatisk bearbetning av punktmolnsdata från LiDAR infångat av drönare

Li Persson, Leon January 2023 (has links)
As automation is on the rise in the world at large, the ability to automatically differentiate objects in datasets via machine learning is of growing interest. This report details an experimental evaluation of supervised learning on point cloud data using random forest classifiers with varying setups. Acquired via airborne LiDAR mounted on drones, the data holds a 3D representation of a landscape area containing power line corridors. Segmentation was performed with the goal of isolating the data points belonging to power line objects from the rest of the surroundings. Pre-processing extended the machine learning features with geometry-based features that are not inherent to the LiDAR data itself. Because of the scale of the data, the labels were generated by the customer, Airpelago, and supervised learning was applied using this data. With their labels as the benchmark, F1 scores of over 90% were achieved for both of the classes pertaining to power line objects. The best results were obtained when the classes were balanced and both the relevant intrinsic and the extended features were used to train the classification models.
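The abstract does not include code, but the kind of pipeline it describes, extending intrinsic LiDAR attributes with geometry-based neighborhood features and training a class-balanced random forest, can be sketched roughly as follows. The variable names, feature choices, and hyperparameters are illustrative assumptions, not the thesis's actual setup.

```python
# Minimal sketch: eigenvalue-based geometric features + class-balanced
# random forest. Data below is synthetic placeholder data.
import numpy as np
from scipy.spatial import cKDTree
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score

def geometric_features(points, k=20):
    """Linearity/planarity/scattering per point from neighborhood covariance."""
    idx = cKDTree(points).query(points, k=k)[1]
    feats = np.empty((len(points), 3))
    for i, nbrs in enumerate(idx):
        ev = np.sort(np.linalg.eigvalsh(np.cov(points[nbrs].T)))[::-1] + 1e-12
        l1, l2, l3 = ev
        feats[i] = [(l1 - l2) / l1,  # linearity: high for wire-like points
                    (l2 - l3) / l1,  # planarity: high for ground/roofs
                    l3 / l1]         # scattering: high for vegetation
    return feats

# Placeholder data standing in for a real drone LiDAR tile.
rng = np.random.default_rng(0)
points = rng.uniform(0, 50, size=(1000, 3))
intensity = rng.uniform(0, 1, size=1000)
labels = rng.integers(0, 3, size=1000)    # e.g. ground / vegetation / wire
train_mask = rng.random(1000) < 0.8
test_mask = ~train_mask

X = np.hstack([intensity[:, None],            # intrinsic LiDAR feature
               points[:, 2:3],                # height
               geometric_features(points)])   # extended geometric features
clf = RandomForestClassifier(n_estimators=200, class_weight="balanced")
clf.fit(X[train_mask], labels[train_mask])
print(f1_score(labels[test_mask], clf.predict(X[test_mask]), average=None))
```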
2

Improving 3D Point Cloud Segmentation Using Multimodal Fusion of Projected 2D Imagery Data

He, Linbo January 2019 (has links)
Semantic segmentation is a key approach to comprehensive image data analysis. It can be applied to 2D images, videos, and even point clouds that contain 3D data points. On the first two problems, CNNs have achieved remarkable progress, but on point cloud segmentation the results are less satisfactory due to challenges such as limited memory resources and the difficulty of annotating 3D points. One of the research studies carried out by the Computer Vision Lab at Linköping University aimed to ease the semantic segmentation of 3D point clouds. The idea is that by first projecting the 3D points to 2D space and then focusing only on the analysis of 2D images, we can reduce the overall workload of the segmentation process while exploiting existing, well-developed 2D semantic segmentation techniques. To improve the performance of CNNs for 2D semantic segmentation, that study used input data derived from different modalities. However, how different modalities can be optimally fused is still an open question. Building on that study, this thesis aims to improve the multistream framework architecture. More concretely, we investigate how different singlestream architectures impact the multistream framework with a given fusion method, and how different fusion methods contribute to the overall performance of a given multistream framework. Our proposed fusion architecture outperformed all the investigated traditional fusion methods. Together with the best singlestream candidate and a few additional training techniques, our final multistream framework obtained a relative gain of 7.3% mIoU over the baseline on the Semantic3D point cloud test set, raising its ranking from 12th to 5th on the benchmark leaderboard.
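As a rough illustration of the fusion question studied here, a minimal two-stream segmentation network might fuse per-modality feature maps with a learned 1x1 convolution rather than fixed concatenation or summation. This sketch is not the thesis architecture; the modalities, channel widths, and fusion operator are assumptions.

```python
import torch
import torch.nn as nn

class TwoStreamFusion(nn.Module):
    """Toy two-stream segmentation net: one encoder per modality, feature
    maps fused by a learned 1x1 convolution, then a per-pixel classifier."""
    def __init__(self, n_classes, c=64):
        super().__init__()
        def encoder():  # tiny stand-in for a real singlestream backbone
            return nn.Sequential(
                nn.Conv2d(3, c, 3, padding=1), nn.ReLU(),
                nn.Conv2d(c, c, 3, padding=1), nn.ReLU())
        self.stream_a = encoder()   # e.g. color projection of the cloud
        self.stream_b = encoder()   # e.g. depth/normals projection
        self.fuse = nn.Conv2d(2 * c, c, 1)  # learned fusion of both streams
        self.head = nn.Conv2d(c, n_classes, 1)

    def forward(self, a, b):
        f = torch.cat([self.stream_a(a), self.stream_b(b)], dim=1)
        return self.head(torch.relu(self.fuse(f)))
```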
3

A Perception Payload for Small-UAS Navigation in Structured Environments

Bharadwaj, Akshay S. 26 September 2018 (has links)
No description available.
4

Deep learning on attributed graphs / L'apprentissage profond sur graphes attribués

Simonovsky, Martin 14 December 2018 (has links)
Graphs are a powerful concept for representing relations between pairs of entities. Data with an underlying graph structure can be found across many disciplines, describing chemical compounds, surfaces of three-dimensional models, social interactions, or knowledge bases, to name only a few. There is a natural desire to understand such data better. Deep learning (DL) has achieved significant breakthroughs in a variety of machine learning tasks in recent years, especially where data is structured on a grid, such as in text, speech, or image understanding. However, surprisingly little has been done to explore the applicability of DL directly on graph-structured data. The goal of this thesis is to investigate architectures for DL on graphs and to study how to transfer, adapt, or generalize to this domain concepts that work well on sequential and image data. We concentrate on two important primitives: embedding graphs or their nodes into a continuous vector-space representation (encoding) and, conversely, generating graphs back from such vectors (decoding). To that end, we make the following contributions. First, we introduce Edge-Conditioned Convolutions (ECC), a convolution-like operation on graphs performed in the spatial domain, where filters are dynamically generated based on edge attributes. The method is used to encode graphs with arbitrary and varying structure. Second, we propose SuperPoint Graph, an intermediate point cloud representation with rich edge attributes encoding the contextual relationships between object parts. Based on this representation, ECC is employed to segment large-scale point clouds without major sacrifice in fine detail. Third, we present GraphVAE, a graph generator that can decode graphs with a variable but upper-bounded number of nodes, using approximate graph matching to align the predictions of an autoencoder with its inputs. The method is applied to the task of molecule generation.
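The core ECC idea, filters generated dynamically from edge attributes, can be sketched in a few lines. This is a simplified reading under assumed tensor shapes, not the reference implementation.

```python
import torch
import torch.nn as nn

class EdgeConditionedConv(nn.Module):
    """Simplified ECC layer: a small MLP (the 'filter network') maps each
    edge's attributes to a weight matrix that transforms the source node's
    features; messages are averaged at the destination node."""
    def __init__(self, in_dim, out_dim, edge_dim, hidden=32):
        super().__init__()
        self.in_dim, self.out_dim = in_dim, out_dim
        self.filter_net = nn.Sequential(
            nn.Linear(edge_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, in_dim * out_dim))
        self.bias = nn.Parameter(torch.zeros(out_dim))

    def forward(self, x, edge_index, edge_attr):
        # x: (N, in_dim), edge_index: (2, E) long, edge_attr: (E, edge_dim)
        src, dst = edge_index
        W = self.filter_net(edge_attr).view(-1, self.out_dim, self.in_dim)
        msg = torch.bmm(W, x[src].unsqueeze(-1)).squeeze(-1)  # (E, out_dim)
        out = torch.zeros(x.size(0), self.out_dim, device=x.device)
        out.index_add_(0, dst, msg)  # sum incoming messages per node
        deg = torch.bincount(dst, minlength=x.size(0)).clamp(min=1)
        return out / deg.unsqueeze(-1) + self.bias  # mean aggregation
```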
5

Confidence Calibrated Point Cloud Segmentation with Limited Data

Borgstrand, Adam January 2024 (has links)
This thesis investigates the use of sampled CAD models for training and calibrating a semantic segmentation model, RandLA-Net, with the ultimate goal of localizing modules for digital twinning (the process of creating digital twins). A significant contribution is the development of the Random Placement of Component Generator (RPCG), a synthetic dataset generator that randomly places CAD models within scenes while preserving contextual information such as typical height above ground. Training and testing on datasets generated by the RPCG demonstrated the model's ability to recognize module classes in various randomly generated scenes. Various hyperparameters related to the loss function and the pre-processing steps were explored to improve RandLA-Net's generalization to different contextual settings. Notably, using a class-weighted α in the focal loss showed promise in correctly classifying infrequent classes and in reducing network overconfidence under domain shifts with similar prior probability distributions. The semantic segmentation results were promising on the RPCG test set, achieving a mean True Positive Rate (mTPR) of 98% and a mean Intersection over Union (mIoU) of 93.6%. However, performance on a sampled version of a CAD model representing an installation named Undercentral was comparatively lower, with an mTPR of 41.1% and an mIoU of 33.4%, indicating the need for further adaptation to varied contextual environments. Proposed improvements include enhancing the RPCG with an occupancy grid to better simulate compact scenes and evaluating different subsampling rates in RandLA-Net's random sampling layers. For confidence calibration, the thesis finds that averaging multiple Monte Carlo (MC) dropout evaluations effectively reduces network overconfidence and improves model reliability. Although this work addresses only a portion of the overall digital twinning process, it highlights the potential of synthetic data generation for enhancing semantic segmentation models and contributes towards the localization of modules in digital twin creation.
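Two of the techniques the abstract names, a focal loss with class-weighted α and confidence calibration by averaging MC dropout passes, might look roughly like this. The tensor shapes, the γ value, and the sample count are assumptions.

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, target, alpha, gamma=2.0):
    """logits (N, C), target (N,), alpha (C,) e.g. inverse class frequencies
    so that infrequent classes contribute more to the loss."""
    logp = F.log_softmax(logits, dim=1)
    logp_t = logp.gather(1, target.unsqueeze(1)).squeeze(1)  # log p of true class
    p_t = logp_t.exp()
    return (-alpha[target] * (1.0 - p_t) ** gamma * logp_t).mean()

def mc_dropout_predict(model, x, n_samples=10):
    """Average class probabilities over stochastic forward passes with
    dropout kept active; the averaged output is typically less overconfident."""
    model.train()  # keeps dropout layers sampling at inference time
    with torch.no_grad():
        probs = torch.stack([F.softmax(model(x), dim=1) for _ in range(n_samples)])
    return probs.mean(dim=0)
```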
6

Automatic segmentation and reconstruction of traffic accident scenarios from mobile laser scanning data

Vock, Dominik 08 May 2014 (has links) (PDF)
Virtual reconstruction of historic sites, planning of restorations and additions of new building parts, and forest inventory are a few examples of fields that benefit from 3D surveying data. Compared with the original workflow of 2D photo-based documentation and manual distance measurements, the 3D information obtained from multi-camera and laser scanning systems provides a noticeable improvement in surveying times and in the amount of 3D information generated. The 3D data allows detailed post-processing and better visualization of all relevant spatial information. Yet extracting the required information from the raw scan data and generating usable visual output still requires time-consuming, complex, user-driven processing with the commercially available 3D software tools. In this context, automatic object recognition from 3D point cloud and depth data has been discussed in many works. The tools and methods developed so far, however, usually focus only on a certain kind of object or on the detection of learned invariant surface shapes. Although the resulting methods are applicable to certain data segmentation tasks, they are not necessarily suitable for arbitrary tasks due to the varying requirements of different fields of research. This thesis presents a more broadly applicable solution for automatic scene reconstruction from 3D point clouds, targeting street scenarios, specifically traffic accident scene analysis and documentation. The data, obtained by sampling the scene with a mobile scanning system, is evaluated, segmented, and finally used to generate detailed 3D information of the scanned environment. To realize this aim, the work adapts and validates various existing approaches to laser scan segmentation for accident-relevant scene information, including road surfaces and markings, vehicles, walls, trees, and other salient objects. The approaches are evaluated regarding their suitability and limitations for the given tasks, as well as possibilities for combined application with other procedures. The knowledge obtained is used to develop new algorithms and procedures that allow a satisfactory segmentation and reconstruction of the scene, corresponding to the available sampling densities and precisions. Besides the segmentation of the point cloud data, the thesis presents different visualization and reconstruction methods that widen the range of possible applications of the developed system for data export and use in third-party software tools.
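The thesis's own algorithms are not reproduced here, but one classical building block for isolating road surfaces in mobile laser scans is RANSAC plane fitting. The following generic sketch illustrates that idea only and is not Vock's method; thresholds and iteration counts are assumptions.

```python
import numpy as np

def ransac_ground_plane(points, n_iter=200, threshold=0.05,
                        rng=np.random.default_rng(0)):
    """Return a boolean inlier mask for the dominant plane in (N, 3) points."""
    best_mask, best_count = None, -1
    for _ in range(n_iter):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-9:                    # degenerate (collinear) sample
            continue
        n /= norm
        dist = np.abs((points - p0) @ n)   # point-to-plane distances
        mask = dist < threshold            # e.g. 5 cm tolerance
        if mask.sum() > best_count:
            best_mask, best_count = mask, mask.sum()
    return best_mask
```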
7

Deep Learning Semantic Segmentation of 3D Point Cloud Data from a Photon Counting LiDAR / Djupinlärning för semantisk segmentering av 3D punktmoln från en fotonräknande LiDAR

Süsskind, Caspian January 2022 (has links)
Deep learning has proven successful at semantic segmentation of three-dimensional (3D) point clouds, which has many interesting use cases in areas such as autonomous driving and defense. A common type of sensor for collecting 3D point cloud data is the Light Detection and Ranging (LiDAR) sensor. This thesis uses a time-correlated single-photon counting (TCSPC) LiDAR, which produces very accurate measurements over long distances of up to several kilometers. The dataset collected by the TCSPC LiDAR contains two classes, person and other, and it poses several challenges: it is limited in size and variation, and it is extremely class-imbalanced. The thesis aims to identify, analyze, and evaluate state-of-the-art deep learning models for semantic segmentation of the point clouds produced by the TCSPC sensor. This is achieved by investigating different loss functions, data variations, and data augmentation techniques for a selected state-of-the-art deep learning architecture. The results showed that loss functions tailored to extremely imbalanced datasets performed best with respect to mean intersection over union (mIoU). Furthermore, an improvement in mIoU was observed when certain combinations of data augmentation techniques were employed. In general, the performance of the models varied heavily, with some achieving promising results and others performing much worse.
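The abstract does not list the exact augmentations evaluated; rotation about the vertical axis, per-point jitter, and random scaling are common choices for LiDAR point clouds and are shown here purely for illustration, with placeholder magnitudes.

```python
import numpy as np

def augment_cloud(points, rng=None):
    """Rotate an (N, 3) cloud about z, add Gaussian jitter, and rescale.
    All magnitudes are illustrative placeholders."""
    rng = rng or np.random.default_rng()
    theta = rng.uniform(0.0, 2.0 * np.pi)
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    pts = points @ R.T                                  # rotate around z
    pts = pts + rng.normal(scale=0.01, size=pts.shape)  # per-point jitter
    return pts * rng.uniform(0.95, 1.05)                # random global scale
```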
8

Deep Learning for Semantic Segmentation of 3D Point Clouds from an Airborne LiDAR / Semantisk segmentering av 3D punktmoln från en luftburen LiDAR med djupinlärning

Serra, Sabina January 2020 (has links)
Light Detection and Ranging (LiDAR) sensors have many different application areas, from revealing archaeological structures to aiding the navigation of vehicles. However, it is challenging to interpret and fully use the vast amount of unstructured data that LiDARs collect. Automatic classification of LiDAR data would ease its utilization, whether for examining structures or aiding vehicles. In recent years there have been many advances in deep learning for semantic segmentation of automotive LiDAR data, but there is less research on aerial LiDAR data. This thesis investigates current state-of-the-art deep learning architectures and how well they perform on LiDAR data acquired by an Unmanned Aerial Vehicle (UAV). It also investigates different training techniques for class-imbalanced and limited datasets, which are common challenges for semantic segmentation networks. Lastly, it investigates whether pre-training can improve the performance of the models. The LiDAR scans were first projected to range images, and then a fully convolutional semantic segmentation network was applied. Three training techniques were evaluated: weighted sampling, data augmentation, and grouping of classes. No improvement was observed from weighted sampling, nor did grouping of classes have a substantial effect on performance. Pre-training on the large public dataset SemanticKITTI resulted in a small performance improvement, but data augmentation seemed to have the largest positive impact. The mIoU of the best model, which was trained with data augmentation, was 63.7%, and it performed very well on the classes Ground, Vegetation, and Vehicle. The other classes in the UAV dataset, Person and Structure, had very little data and were challenging for most models to classify correctly. In general, the models trained on UAV data performed similarly to the state-of-the-art models trained on automotive data.
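The projection step mentioned here, turning a LiDAR scan into a range image for a fully convolutional network, is commonly implemented as a spherical projection. A minimal sketch, with placeholder resolution and field-of-view values:

```python
import numpy as np

def to_range_image(points, H=64, W=1024, fov_up=15.0, fov_down=-15.0):
    """Spherical projection of an (N, 3) scan into an H x W range image.
    H, W and the vertical field of view are placeholder values."""
    x, y, z = points.T
    r = np.linalg.norm(points, axis=1)
    yaw = np.arctan2(y, x)                          # azimuth in [-pi, pi]
    pitch = np.arcsin(z / np.maximum(r, 1e-8))      # elevation angle
    fu, fd = np.radians(fov_up), np.radians(fov_down)
    u = (0.5 * (1.0 - yaw / np.pi) * W).astype(int) % W            # column
    v = ((fu - pitch) / (fu - fd) * H).clip(0, H - 1).astype(int)  # row
    img = np.zeros((H, W), dtype=np.float32)
    order = np.argsort(-r)              # write far points first ...
    img[v[order], u[order]] = r[order]  # ... so nearer points win collisions
    return img
```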
