31

Image and RADAR fusion for autonomous vehicles

de Gibert Duart, Xavier January 2023 (has links)
Robust detection, localization, and tracking of objects are essential for autonomous driving. Computer vision based on camera sensors has driven much of the development in recent years, but 3D localization from images alone remains challenging. Range sensors such as LiDAR and RADAR are therefore used to measure depth, each with its own advantages and drawbacks. The main idea of the project is to combine camera images with RADAR detections in order to estimate the depth of objects appearing in the images. Fusion strategies can give a more detailed description of the environment by exploiting both the 3D localization capability of range sensors and the higher spatial resolution of image data. The approach is to project the 3D RADAR detections onto the image plane, which requires tight synchronization between the sensors and an accurate projection of the RADAR data onto the corresponding image.
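As a concrete illustration of the projection step this abstract describes, the sketch below maps 3D RADAR detections into a camera image with a standard pinhole model. The intrinsic matrix `K` and the radar-to-camera extrinsics `R`, `t` are assumed to come from an offline calibration; this is a minimal sketch of the general technique, not the pipeline from the thesis.

```python
import numpy as np

def project_radar_to_image(points_radar, K, R, t):
    """Project RADAR detections (N x 3, radar frame) onto the image plane.

    K: 3x3 camera intrinsics; R (3x3), t (3,): radar-to-camera extrinsics.
    Returns pixel coordinates (M x 2) and the depth of each kept point.
    """
    # Transform the detections from the radar frame into the camera frame.
    points_cam = points_radar @ R.T + t

    # Discard points behind the camera (non-positive depth).
    depth = points_cam[:, 2]
    keep = depth > 0
    points_cam, depth = points_cam[keep], depth[keep]

    # Pinhole projection: [u, v, 1]^T ~ K [X, Y, Z]^T.
    pixels_h = points_cam @ K.T
    pixels = pixels_h[:, :2] / pixels_h[:, 2:3]
    return pixels, depth
```

In a real system the RADAR frame closest in time to the image exposure would be selected (or the detections motion-compensated) before projecting, which is the synchronization requirement the abstract points to.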
32

Deep Learning Semantic Segmentation of 3D Point Cloud Data from a Photon Counting LiDAR

Süsskind, Caspian January 2022 (has links)
Deep learning has proven successful at semantic segmentation of three-dimensional (3D) point clouds, a task with many interesting use cases in areas such as autonomous driving and defense. A common type of sensor for collecting 3D point cloud data is the Light Detection and Ranging (LiDAR) sensor. In this thesis, a time-correlated single-photon counting (TCSPC) LiDAR is used, which produces highly accurate measurements at ranges of up to several kilometers. The dataset collected with the TCSPC LiDAR contains two classes, person and other, and poses several challenges: it is limited in size and variation, and it is extremely class-imbalanced. The thesis aims to identify, analyze, and evaluate state-of-the-art deep learning models for semantic segmentation of point clouds produced by the TCSPC sensor. This is achieved by investigating different loss functions, data variations, and data augmentation techniques for a selected state-of-the-art deep learning architecture. The results showed that loss functions tailored for extremely imbalanced datasets performed best with respect to mean intersection over union (mIoU). Furthermore, an improvement in mIoU was observed when certain combinations of data augmentation techniques were employed. In general, model performance varied heavily, with some models achieving promising results and others performing considerably worse.
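The abstract does not name the imbalance-tailored loss functions it evaluated; the focal loss is one widely used choice for extreme two-class imbalance of the person-vs-other kind, and a sketch of it is given below (PyTorch is an assumed framework, not stated in the thesis).

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, target, gamma=2.0, alpha=None):
    """Multi-class focal loss for semantic segmentation.

    logits: (N, C, ...) raw network outputs; target: (N, ...) class indices.
    gamma down-weights well-classified points; alpha is an optional
    per-class weight tensor of shape (C,).
    """
    log_p = F.log_softmax(logits, dim=1)
    ce = F.nll_loss(log_p, target, reduction="none")  # unweighted cross-entropy
    p_t = torch.exp(-ce)                              # probability of the true class
    loss = (1.0 - p_t) ** gamma * ce
    if alpha is not None:                             # optional per-class weighting
        loss = loss * alpha.to(logits.device)[target]
    return loss.mean()
```

The (1 - p_t)^gamma factor suppresses the loss from the abundant, easily classified "other" points, so the gradient is dominated by the rare "person" class.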
33

Deep Learning for Semantic Segmentation of 3D Point Clouds from an Airborne LiDAR

Serra, Sabina January 2020 (has links)
Light Detection and Ranging (LiDAR) sensors have many application areas, from revealing archaeological structures to aiding the navigation of vehicles. However, it is challenging to interpret and fully use the vast amount of unstructured data that LiDARs collect. Automatic classification of LiDAR data would ease its utilization, whether for examining structures or aiding vehicles. In recent years there have been many advances in deep learning for semantic segmentation of automotive LiDAR data, but there is less research on aerial LiDAR data. This thesis investigates current state-of-the-art deep learning architectures and how well they perform on LiDAR data acquired by an Unmanned Aerial Vehicle (UAV). It also investigates training techniques for class-imbalanced and limited datasets, which are common challenges for semantic segmentation networks. Lastly, it investigates whether pre-training can improve the performance of the models. The LiDAR scans were first projected to range images, and a fully convolutional semantic segmentation network was then applied. Three training techniques were evaluated: weighted sampling, data augmentation, and grouping of classes. No improvement was observed from weighted sampling, nor did grouping of classes have a substantial effect on performance. Pre-training on the large public dataset SemanticKITTI gave a small performance improvement, but data augmentation seemed to have the largest positive impact. The mIoU of the best model, trained with data augmentation, was 63.7%, and it performed very well on the classes Ground, Vegetation, and Vehicle. The remaining classes in the UAV dataset, Person and Structure, had very little data and were challenging for most models to classify correctly. In general, the models trained on UAV data performed similarly to the state-of-the-art models trained on automotive data.
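The range-image projection mentioned in this abstract is the standard spherical projection used by range-image segmentation networks trained on SemanticKITTI; the sketch below shows the idea. The image size and vertical field of view are assumed parameters for illustration, not values from the thesis.

```python
import numpy as np

def pointcloud_to_range_image(points, h=64, w=1024,
                              fov_up_deg=15.0, fov_down_deg=-15.0):
    """Spherically project an (N, 3) point cloud to an (h, w) range image.

    Each point is mapped to a pixel by its azimuth (column) and elevation
    (row); the pixel value is the range. fov_up/fov_down must match the
    scanner's vertical field of view.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)

    yaw = np.arctan2(y, x)                      # azimuth in [-pi, pi]
    pitch = np.arcsin(z / np.maximum(r, 1e-8))  # elevation

    fov_up = np.radians(fov_up_deg)
    fov = fov_up - np.radians(fov_down_deg)

    # Normalize angles to pixel coordinates.
    u = ((1.0 - (yaw + np.pi) / (2.0 * np.pi)) * w).astype(int) % w
    v = ((fov_up - pitch) / fov * h).clip(0, h - 1).astype(int)

    image = np.full((h, w), -1.0, dtype=np.float32)  # -1 marks empty pixels
    # Write far points first so closer returns overwrite farther ones.
    order = np.argsort(-r)
    image[v[order], u[order]] = r[order]
    return image
```

Once the scan is in this 2D form, any fully convolutional image-segmentation network can be applied, and the per-pixel labels can be mapped back to the original 3D points.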
34

Probabilistic Multi-Modal Data Fusion and Precision Coordination for Autonomous Mobile Systems Navigation : A Predictive and Collaborative Approach to Visual-Inertial Odometry in Distributed Sensor Networks using Edge Nodes

Luppi, Isabella January 2023 (has links)
This research proposes a novel approach for improving the navigation of autonomous mobile systems in dynamic and potentially occluded environments. It introduces a tracking framework that combines data from stationary sensing units and on-board sensors, addressing challenges of computational efficiency, reliability, and scalability. The work integrates spatially distributed LiDAR and RGB-D camera sensors, with the optional inclusion of on-board IMU-based dead reckoning, forming a robust and efficient coordination framework for autonomous systems. Two key developments are achieved. First, a point cloud object detection technique, "Generalized L-Shape Fitting", is advanced, improving bounding-box fitting on point cloud data. Second, a new estimation framework, the Distributed Edge Node Switching Filter (DENS-F), is established. The DENS-F optimizes resource utilization and coordination while minimizing reliance on on-board computation. It also incorporates a short-term predictive capability through an Adaptive-Constant Acceleration motion model that uses behaviour-based control inputs. The findings indicate that the DENS-F substantially improves accuracy and computational efficiency compared to the Kalman Consensus Filter (KCF), particularly when additional inertial data is provided by the vehicle. The type of sensor deployed and the consistency of the vehicle's path are also found to significantly influence system performance. The research opens new perspectives for autonomous vehicle tracking, highlighting opportunities for future work on prediction models, sensor selection, and precision coordination.
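The abstract does not detail the "Generalized L-Shape Fitting" technique; below is a sketch of the basic search-based L-shape fit it builds on, which sweeps candidate headings and keeps the rotation whose axis-aligned box best encloses the cluster. The minimum-area criterion used here is one common choice and is an assumption, not the thesis's generalized variant.

```python
import numpy as np

def l_shape_fit(points_xy, angle_step_deg=1.0):
    """Fit an oriented bounding box to a 2D object cluster by L-shape search.

    points_xy: (N, 2) points of one segmented cluster, e.g. a vehicle.
    Sweeps headings over [0, 90) degrees; for each heading the points are
    rotated into the candidate box frame and enclosed by an axis-aligned
    box, and the heading giving the minimum box area is kept.
    Returns the box center, its (length, width), and the heading in radians.
    """
    best_area, best_fit = np.inf, None
    for theta in np.radians(np.arange(0.0, 90.0, angle_step_deg)):
        c, s = np.cos(theta), np.sin(theta)
        rot = np.array([[c, s], [-s, c]])      # world -> box frame
        proj = points_xy @ rot.T
        mins, maxs = proj.min(axis=0), proj.max(axis=0)
        area = np.prod(maxs - mins)
        if area < best_area:
            best_area, best_fit = area, (theta, mins, maxs)

    theta, mins, maxs = best_fit
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, s], [-s, c]])
    center = rot.T @ ((mins + maxs) / 2.0)     # box center back in world frame
    return center, maxs - mins, theta
```

Fitted boxes like these provide the position measurements that a tracking filter such as the DENS-F or the KCF would then fuse over time.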
