221

Enhanced Surveillance and Conflict Prediction for Airport Apron Operation using LiDAR Sensing

Braßel, Hannes 11 September 2024 (has links)
This dissertation is situated at the intersection of aviation safety, sensor technology, and computational modeling, and aims to increase airport apron safety by developing and testing optical sensing methods for automated apron surveillance. Central to this research is the utilization of Light Detection and Ranging (LiDAR) technology combined with computer vision algorithms for automatic scene understanding, complemented by tracking, motion prediction, and accident prediction functionalities for dynamic entities. Serving as the impetus for this research, an in-depth empirical analysis of 1220 aviation ground accident reports from 2008 to 2017 shows that 76 % of these occurrences could have been visually observed. Notably, the data reveals that 44 % of events indicate human failure resulting from deficiencies in situational awareness among the involved parties. These findings highlight the opportunity for increasing airport safety by integrating automated surveillance methodologies. However, the ambitious endeavor of transitioning airport surveillance tasks to an automated system presents three main challenges. First, algorithms for automatic scene understanding rely on training datasets with ground truth annotations, which refer to semantic information representing real-world conditions. Such datasets do not exist for airport apron environments. Creating a training dataset for such environments involves scanning and manually annotating every aircraft type, ground vehicle, or object from multiple perspectives in every conceivable pose, velocity, and weather condition at multiple airports. Second, developing accurate tracking algorithms for aircraft based on LiDAR point clouds requires time-synchronized true states for validation, which are not available. 
Third, recognizing visual features for accident prediction requires corresponding sensor data, which cannot be acquired in sufficient quantities given aviation's high safety standards and security-related access limitations to airport airside. Thus, this dissertation addresses these challenges by developing a simulation environment that provides training data and a testing framework to develop recognition models and tracking algorithms for real-world applications, utilizing Dresden International Airport as the test field. This simulation environment includes 3D models of the test field, kinematic descriptions of aircraft ground movements, and a sensor model replicating LiDAR sensor behavior under different weather conditions. The simulation environment obviates real-world data acquisition and manual annotation by generating synthetic LiDAR scans, automatically annotated using context knowledge inherent to the simulation framework. Consequently, it enables training recognition models on synthetic data applicable to real-world data. The simulation environment can be adapted to any airport by modifying the static background elements, thus addressing the first challenge. Sensor positioning within the simulation is fully customizable. The developed motion models are formulated in a general manner, ensuring their functionality across any movement network. For validation purposes, a real LiDAR dataset was collected at the test airport and manually annotated. Two competing recognition models were trained: one employing real-world training data and the other leveraging synthetic training data. These models were tested on a real test dataset not seen during training. The results show that the synthetic data-trained model achieves recognition performance comparable to, or even superior to, the real-data-trained model. Specifically, it demonstrates improved recognition of aircraft and weather-induced noise within the real test dataset. 
This enhanced performance is attributed to an overrepresentation of aircraft and weather effects in the synthetic training data. The semantic segmentation model assigns semantic labels to each point of the point cloud. Tracking algorithms leverage this information to estimate the pose of objects. These estimations are crucial for verifying compliance with operational rules and for predicting aircraft movement. Object positioning and orientation data inherent to the simulation enable the development and evaluation of tracking algorithms, addressing the second challenge. This research introduces an adaptive point sampling method for aircraft tracking that considers the velocity and spatial relationships of the tracked object, enhancing localization accuracy compared to conventional naïve sampling strategies in a simulated test dataset. Finally, addressing the third challenge, the empirical study of accidents and incidents informs the generation of accident scenarios within the simulation environment. A kinematic motion prediction model, coupled with a deep learning architecture, is instrumental in establishing classifiers that distinguish between normal conditions and accident patterns. Evaluations conducted on a simulated test dataset have demonstrated considerable promise for accident and incident prediction while maintaining a minimal rate of false positives. The classifier has delivered lead times of up to 12 s before the precipitating event, facilitating adequate warnings for emergency braking in 80 % of the ground collision cases and 97 % of the scenarios involving infringements of holding restrictions within a test dataset. This result demonstrates a transformative potential for real-world applications, setting a new benchmark for preemptive measures in airport ground surveillance.
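The abstract does not detail the kinematic motion prediction model. As an illustrative sketch only (not the dissertation's method), a constant-velocity extrapolation of two tracked objects can already yield a conflict lead time of the kind reported above; the function names, step size, and conflict radius here are assumptions:

```python
import numpy as np

def predict_positions(pos, vel, horizon_s, dt=0.5):
    """Extrapolate a tracked object's 2D position under a
    constant-velocity assumption over a prediction horizon."""
    steps = np.arange(dt, horizon_s + dt, dt)
    return pos + np.outer(steps, vel)          # shape (n_steps, 2)

def time_to_conflict(pos_a, vel_a, pos_b, vel_b, radius_m,
                     horizon_s=12.0, dt=0.5):
    """Earliest predicted time (s) at which two objects come within
    radius_m of each other, or None if no conflict is predicted
    within the horizon."""
    traj_a = predict_positions(np.asarray(pos_a, float),
                               np.asarray(vel_a, float), horizon_s, dt)
    traj_b = predict_positions(np.asarray(pos_b, float),
                               np.asarray(vel_b, float), horizon_s, dt)
    dists = np.linalg.norm(traj_a - traj_b, axis=1)
    hits = np.nonzero(dists < radius_m)[0]
    return None if hits.size == 0 else (hits[0] + 1) * dt
```

A real system would replace the constant-velocity assumption with the learned motion model and feed the predicted conflict time into the warning logic.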
222

Transformer-Based Point Cloud Registration with a Photon-Counting LiDAR Sensor

Johansson, Josef January 2024 (has links)
Point cloud registration is an extensively studied field in computer vision, featuring a variety of existing methods, all aimed at achieving the common objective of determining a transformation that aligns two point clouds. Methods like the Iterative Closest Point (ICP) and Fast Global Registration (FGR) have been shown to work well for many years, but recent work has explored different learning-based approaches, showing promising results. This work compares the performance of two learning-based methods, GeoTransformer and RegFormer, against three baseline methods: ICP point-to-point, ICP point-to-plane, and FGR. The comparison was conducted on data provided by the Swedish Defence Research Agency (FOI), where the data was captured with a photon-counting LiDAR sensor. Findings suggest that while ICP point-to-point and ICP point-to-plane exhibit solid performance, the GeoTransformer demonstrates the potential for superior outcomes. Additionally, the RegFormer and FGR perform worse than the ICP variants and the GeoTransformer.
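The inner step of point-to-point ICP is the closed-form least-squares rigid alignment of corresponding points; a full ICP loop alternates this with nearest-neighbour correspondence search. A minimal SVD-based sketch, assuming correspondences are already given:

```python
import numpy as np

def best_fit_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst for
    (N, 3) arrays of corresponding points, via the SVD-based
    Kabsch/Umeyama solution used inside point-to-point ICP."""
    src_c = src - src.mean(axis=0)        # center both clouds
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c                   # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:              # guard against a reflection
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t
```

Point-to-plane ICP replaces this objective with the distance along the destination normals, which typically converges in fewer iterations on smooth surfaces.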
223

Acquisition et rendu 3D réaliste à partir de périphériques "grand public" / Capture and Realistic 3D rendering from consumer grade devices

Chakib, Reda 14 December 2018 (has links)
Digital imaging, from image synthesis to computer vision, is undergoing a strong evolution, driven among other factors by the democratization and commercial success of 3D cameras. In the same context, consumer 3D printing, which is experiencing a rapid rise, contributes to the strong demand for this type of camera for 3D scanning needs. The objective of this thesis is to acquire and master know-how in the field of capturing 3D models, in particular with regard to realistic rendering. Building a 3D scanner from an RGB-D camera is part of this objective. During the acquisition phase, especially with a handheld device, two main problems arise: relating the reference frame of each capture to the others, and the final rendering of the reconstructed object.
224

Optimierter Einsatz eines 3D-Laserscanners zur Point-Cloud-basierten Kartierung und Lokalisierung im In- und Outdoorbereich / Optimized use of a 3D laser scanner for point-cloud-based mapping and localization in indoor and outdoor areas

Schubert, Stefan 05 March 2015 (has links) (PDF)
Mapping and localization of a mobile robot in its environment are important prerequisites for its autonomy. This thesis investigates the use of a 3D laser scanner for these tasks. Through an optimized arrangement of a rotating 2D laser scanner, regions of higher scan resolution are specified. In addition, mapping and localization at standstill are performed with the help of ICP. In examining ways to improve motion estimation, a method for localization during motion with 3D scans is also presented. The presented algorithms are evaluated in experiments with real hardware.
225

Point Cloud Registration in Augmented Reality using the Microsoft HoloLens

Kjellén, Kevin January 2018 (has links)
When a Time-of-Flight (ToF) depth camera is used to monitor a region of interest, it has to be mounted correctly and have information regarding its position. Manual configuration currently requires managing captured 3D ToF data in a 2D environment, which limits the user and might give rise to errors due to misinterpretation of the data. This thesis investigates whether a real-time 3D reconstruction mesh from a Microsoft HoloLens can be used as a target for point cloud registration using the ToF data, thus configuring the camera autonomously. Three registration algorithms, Fast Global Registration (FGR), Joint Registration of Multiple Point Clouds (JR-MPC), and Prerejective RANSAC, were evaluated for this purpose. It was concluded that despite using different sensors it is possible to perform accurate registration. Also, it was shown that the registration can be done accurately within a reasonable time, compared with the inherent time to perform 3D reconstruction on the HoloLens. All algorithms could solve the problem, but it was concluded that FGR provided the most satisfying results, though requiring several constraints on the data.
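Registration accuracy of the kind evaluated here is commonly scored by the residual rotation and translation between the estimated and ground-truth rigid transforms. A small sketch of these standard error metrics (not code from the thesis):

```python
import numpy as np

def registration_errors(T_est, T_gt):
    """Rotation error (degrees) and translation error (in the unit of
    the point clouds) between estimated and ground-truth 4x4 rigid
    transforms -- a common way to score registration results."""
    dR = T_est[:3, :3].T @ T_gt[:3, :3]       # residual rotation
    # angle of the residual rotation, clipped for numerical safety
    cos_a = np.clip((np.trace(dR) - 1.0) / 2.0, -1.0, 1.0)
    rot_err_deg = np.degrees(np.arccos(cos_a))
    trans_err = np.linalg.norm(T_est[:3, 3] - T_gt[:3, 3])
    return rot_err_deg, trans_err
```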
226

Detekce a sledování polohy hlavy v obraze / Head Pose Estimation and Tracking

Pospíšil, Aleš January 2011 (has links)
This master's thesis focuses on head pose estimation and tracking in images as one way to improve the possibilities of human-computer interaction. The main contribution of the thesis is the use of innovative hardware and software technologies such as the Microsoft Kinect, the Point Cloud Library, and the CImg Library. First, a summary of previous work on similar topics is presented, followed by a description of the database created for the purposes of the thesis. The developed system for head pose estimation and tracking is based on the acquisition of 3D image data and the Iterative Closest Point registration algorithm. The thesis concludes with an evaluation of the resulting system and proposals for its future improvement.
227

Deep Learning Semantic Segmentation of 3D Point Cloud Data from a Photon Counting LiDAR / Djupinlärning för semantisk segmentering av 3D punktmoln från en fotonräknande LiDAR

Süsskind, Caspian January 2022 (has links)
Deep learning has been shown to be successful on the task of semantic segmentation of three-dimensional (3D) point clouds, which has many interesting use cases in areas such as autonomous driving and defense applications. A common type of sensor used for collecting 3D point cloud data is the Light Detection and Ranging (LiDAR) sensor. In this thesis, a time-correlated single-photon counting (TCSPC) LiDAR is used, which produces very accurate measurements over long distances of up to several kilometers. The dataset collected by the TCSPC LiDAR used in the thesis contains two classes, person and other, and it comes with several challenges due to it being limited in terms of size and variation, as well as being extremely class imbalanced. The thesis aims to identify, analyze, and evaluate state-of-the-art deep learning models for semantic segmentation of point clouds produced by the TCSPC sensor. This is achieved by investigating different loss functions, data variations, and data augmentation techniques for a selected state-of-the-art deep learning architecture. The results showed that loss functions tailored for extremely imbalanced datasets performed the best with regard to the metric mean intersection over union (mIoU). Furthermore, an improvement in mIoU could be observed when some combinations of data augmentation techniques were employed. In general, the performance of the models varied heavily, with some achieving promising results and others achieving much worse results.
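The abstract does not name the imbalance-aware loss functions used. One common baseline is cross-entropy weighted by inverse class frequency, so that the rare class (here, person) contributes more per point; a NumPy sketch of that idea, with all names illustrative:

```python
import numpy as np

def inverse_frequency_weights(labels, n_classes):
    """Per-class weights proportional to inverse class frequency, a
    simple remedy when one class is rare in the training data."""
    counts = np.bincount(labels, minlength=n_classes).astype(float)
    counts[counts == 0] = 1.0             # avoid division by zero
    return counts.sum() / (n_classes * counts)

def weighted_cross_entropy(probs, labels, weights):
    """Mean cross-entropy over points, each term scaled by the weight
    of its true class, given (N, C) predicted probabilities."""
    eps = 1e-12
    per_point = -np.log(probs[np.arange(len(labels)), labels] + eps)
    return float(np.mean(weights[labels] * per_point))
```

Losses designed specifically for extreme imbalance (e.g. focal or Lovász-style losses) follow the same pattern of reweighting hard or rare examples.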
228

Deep Learning for Semantic Segmentation of 3D Point Clouds from an Airborne LiDAR / Semantisk segmentering av 3D punktmoln från en luftburen LiDAR med djupinlärning

Serra, Sabina January 2020 (has links)
Light Detection and Ranging (LiDAR) sensors have many different application areas, from revealing archaeological structures to aiding navigation of vehicles. However, it is challenging to interpret and fully use the vast amount of unstructured data that LiDARs collect. Automatic classification of LiDAR data would ease the utilization, whether it is for examining structures or aiding vehicles. In recent years, there have been many advances in deep learning for semantic segmentation of automotive LiDAR data, but there is less research on aerial LiDAR data. This thesis investigates the current state-of-the-art deep learning architectures, and how well they perform on LiDAR data acquired by an Unmanned Aerial Vehicle (UAV). It also investigates different training techniques for class-imbalanced and limited datasets, which are common challenges for semantic segmentation networks. Lastly, this thesis investigates whether pre-training can improve the performance of the models. The LiDAR scans were first projected to range images, and then a fully convolutional semantic segmentation network was used. Three different training techniques were evaluated: weighted sampling, data augmentation, and grouping of classes. No improvement was observed from weighted sampling, nor did grouping of classes have a substantial effect on the performance. Pre-training on the large public dataset SemanticKITTI resulted in a small performance improvement, but the data augmentation seemed to have the largest positive impact. The mIoU of the best model, which was trained with data augmentation, was 63.7% and it performed very well on the classes Ground, Vegetation, and Vehicle. The other classes in the UAV dataset, Person and Structure, had very little data and were challenging for most models to classify correctly. In general, the models trained on UAV data performed similarly to the state-of-the-art models trained on automotive data.
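The projection of LiDAR scans to range images is typically a spherical projection: azimuth maps to columns, elevation to rows, and each pixel stores the range of the nearest point. A sketch of that standard projection, with illustrative image size and field-of-view limits (not necessarily those of the thesis sensor):

```python
import numpy as np

def project_to_range_image(points, h=64, w=1024,
                           fov_up_deg=15.0, fov_down_deg=-25.0):
    """Spherical projection of an (N, 3) LiDAR cloud onto an h x w
    range image; empty pixels are marked with -1."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)
    yaw = np.arctan2(y, x)                              # [-pi, pi]
    pitch = np.arcsin(np.clip(z / np.maximum(r, 1e-9), -1.0, 1.0))
    fov_up, fov_down = np.radians(fov_up_deg), np.radians(fov_down_deg)
    u = ((yaw + np.pi) / (2 * np.pi) * w).astype(int) % w    # column
    v = np.clip((fov_up - pitch) / (fov_up - fov_down) * h,
                0, h - 1).astype(int)                        # row
    img = np.full((h, w), -1.0)
    order = np.argsort(-r)       # write nearest points last, so they win
    img[v[order], u[order]] = r[order]
    return img
```

The resulting 2D image can then be fed to an ordinary fully convolutional segmentation network, and predicted labels mapped back to the 3D points.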
229

Optimierter Einsatz eines 3D-Laserscanners zur Point-Cloud-basierten Kartierung und Lokalisierung im In- und Outdoorbereich

Schubert, Stefan 30 September 2014 (has links)
Mapping and localization of a mobile robot in its environment are important prerequisites for its autonomy. This thesis investigates the use of a 3D laser scanner for these tasks. Through an optimized arrangement of a rotating 2D laser scanner, regions of higher scan resolution are specified. In addition, mapping and localization at standstill are performed with the help of ICP. In examining ways to improve motion estimation, a method for localization during motion with 3D scans is also presented. The presented algorithms are evaluated in experiments with real hardware.
230

Application development of 3D LiDAR sensor for display computers

Ekstrand, Oskar January 2023 (has links)
"Light Detection and Ranging" (LiDAR) technology provides highly accurate distance measurements that can be used to create high-resolution 3D maps of the environment. This degree project aims to investigate the implementation of 3D LiDAR sensors into off-highway vehicle display computers, called CCpilots. This involves a study of available low-cost 3D LiDAR sensors on the market and the development of an application for visualizing real-time data graphically, with room for optimization algorithms. The selected LiDAR sensor is the "Livox Mid-360", which uses hybrid-solid-state technology and has a field of view of 360° horizontally and 59° vertically. The LiDAR application was developed using Livox SDK2 combined with a C++ back-end, with Qt QML as the Graphical User Interface design tool for visualizing the data. A filter from the Point Cloud Library (PCL), called a voxel grid filter, was utilized for optimization purposes. Real-time 3D LiDAR sensor data was graphically visualized on the display computer CCpilot X900. The voxel grid filter had a few visual advantages, although it consumed more processor power than when no filter was used. Whether a filter was used or not, all points generated by the LiDAR sensor could be processed and visualized by the developed application without any latency.
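The voxel grid filter mentioned above downsamples a cloud by replacing all points in each cubic voxel with their centroid. A NumPy sketch of that idea (not the PCL C++ API the project actually used):

```python
import numpy as np

def voxel_grid_filter(points, voxel_size):
    """Downsample an (N, 3) cloud by averaging all points that fall
    into the same cubic voxel of edge length voxel_size -- the idea
    behind PCL's VoxelGrid filter."""
    idx = np.floor(points / voxel_size).astype(np.int64)
    # group points by voxel index
    _, inverse, counts = np.unique(idx, axis=0,
                                   return_inverse=True,
                                   return_counts=True)
    inverse = np.asarray(inverse).reshape(-1)
    sums = np.zeros((counts.size, 3))
    np.add.at(sums, inverse, points)          # sum points per voxel
    return sums / counts[:, None]             # one centroid per voxel
```

The voxel size trades point density against processing cost, which matches the observation above that filtering changed both the visual result and the processor load.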
