251

Terrain Mapping for Autonomous Vehicles / Terrängkartläggning för autonoma fordon

Pedreira Carabel, Carlos Javier January 2015 (has links)
Autonomous vehicles are now at the forefront of the automotive industry in its pursuit of safer and more efficient transportation systems. One of the main issues for any autonomous vehicle is being aware of its position and of obstacles along its path. This project addresses the pose estimation and terrain mapping problem by integrating a visual odometry method with a mapping technique. An RGB-D camera, the Microsoft Kinect v2, was chosen as the sensor for capturing information from the environment. It was connected to an Intel mini-PC for real-time processing, and both pieces of hardware were mounted on board a four-wheeled research concept vehicle (RCV) to test the feasibility of the solution at outdoor locations. The Robot Operating System (ROS) was used as the development environment, with C++ as the programming language. The visual odometry strategy is a frame registration algorithm called Adaptive Iterative Closest Keypoint (AICK), based on Iterative Closest Point (ICP) and using Oriented FAST and Rotated BRIEF (ORB) for image keypoint extraction. A grid-based local costmap of the rolling-window type was implemented to obtain a two-dimensional representation of the obstacles close to the vehicle within a predefined area, enabling further path-planning applications. Experiments were performed both offline and in real time to test the system in indoor and outdoor scenarios. The results confirmed the viability of the designed framework for tracking the pose of the camera and detecting objects in indoor environments. Outdoor environments, however, exposed the limitations of the RGB-D sensor, making the current system configuration unfeasible for outdoor use.
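
A minimal sketch of the kind of keypoint-based frame registration described above, assuming OpenCV and NumPy: ORB keypoints are matched between two Kinect frames, lifted to 3D with the depth images, and aligned with one closed-form rigid solve. The function names are illustrative, and the single non-adaptive alignment step is a simplification of AICK, which iterates this process.

```python
import cv2
import numpy as np

def rigid_transform_3d(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst (Kabsch/SVD)."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # correct an improper rotation (reflection)
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, c_dst - R @ c_src

def register_frames(gray_a, depth_a, gray_b, depth_b, K):
    """Match ORB keypoints between two 8-bit grayscale frames, lift them to
    3D with the depth images, and solve one rigid alignment."""
    orb = cv2.ORB_create(nfeatures=1000)
    kp_a, des_a = orb.detectAndCompute(gray_a, None)
    kp_b, des_b = orb.detectAndCompute(gray_b, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des_a, des_b)
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]

    def lift(kp, depth):          # back-project a pixel using its depth (m)
        u, v = int(kp.pt[0]), int(kp.pt[1])
        z = depth[v, u]
        return np.array([(u - cx) * z / fx, (v - cy) * z / fy, z])

    src = np.array([lift(kp_a[m.queryIdx], depth_a) for m in matches])
    dst = np.array([lift(kp_b[m.trainIdx], depth_b) for m in matches])
    valid = (src[:, 2] > 0) & (dst[:, 2] > 0)   # discard missing depth
    return rigid_transform_3d(src[valid], dst[valid])
```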
252

Automatic map generation from nation-wide data sources using deep learning

Lundberg, Gustav January 2020 (has links)
The last decade has seen great advances in the field of artificial intelligence. One of the most noteworthy areas is deep learning, which is nowadays used in everything from self-driving cars to automated cancer screening. During the same period, the amount of spatial data encompassing not only two but three dimensions has also grown, and whole cities and countries are being scanned. Combining these two technological advances enables the creation of detailed maps with a multitude of applications, civilian as well as military. This thesis aims at combining two data sources covering most of Sweden, laser data from LiDAR scans and surface models from aerial images, with deep learning to create maps of the terrain. The target is to learn a simplified version of orienteering maps, as these are created with high precision by experienced map makers and represent how easy or hard it would be to traverse a given area on foot. Performance on different types of terrain is measured, and it is found that open land and larger bodies of water are identified at a high rate, while trails are hard to recognize. It is further investigated how the different densities found in the source data affect the performance of the models: some terrain types, trails for instance, benefit from higher-density data, while other features of the terrain, like roads and buildings, are predicted with higher accuracy from lower-density data. Finally, the certainty of the predictions is discussed and visualised by measuring the average entropy of the predictions in an area. These visualisations highlight that, although the predictions are far from perfect, the models are more certain about their predictions when they are correct than when they are not.
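
The average-entropy certainty measure described above can be sketched in a few lines; this assumes per-pixel softmax class probabilities and is an illustration of the idea rather than the thesis code.

```python
import numpy as np

def mean_prediction_entropy(probs):
    """Average Shannon entropy of per-pixel class probabilities.

    probs: (H, W, C) array of softmax outputs for one map area.
    Low values mean confident predictions; high values flag uncertainty.
    """
    eps = 1e-12                                              # avoid log(0)
    entropy = -np.sum(probs * np.log(probs + eps), axis=-1)  # (H, W)
    return float(entropy.mean())
```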
253

Multi-view point cloud fusion for LiDAR based cooperative environment detection

Jähn, Benjamin, Lindner, Philipp, Wanielik, Gerd 11 November 2015 (has links)
A key component for automated driving is 360° environment detection. The recognition capabilities of modern sensors are always limited to their direct field of view, and in urban areas many objects occlude important areas of interest. Information captured by another sensor from another perspective could resolve such occluded situations. Furthermore, the capability to detect and classify various objects in the surroundings can be improved by taking multiple views into account. In order to combine the data of two sensors into one coordinate system, a rigid transformation matrix has to be derived. The accuracy of modern, e.g. satellite-based, relative pose estimation systems is not sufficient to guarantee a suitable alignment. Therefore, a registration-based approach is used in this work, which aligns the environment data captured by two sensors from different positions. Thus their relative pose estimate obtained by traditional methods is improved and the data can be fused. To support this, we present an approach which utilizes the uncertainty information of modern tracking systems to determine the possible field of view of the other sensor. Furthermore, it is estimated which parts of the captured data are directly visible to both, taking occlusion and shadowing effects into account. Afterwards, a registration method based on the iterative closest point (ICP) algorithm is applied to that data in order to obtain an accurate alignment. The contribution of the presented approach to the achievable accuracy is shown with the help of ground truth data from a LiDAR simulation within a 3-D crossroad model. Results show that a two-dimensional position and heading estimate is sufficient to initialize a successful 3-D registration process. Furthermore, it is shown which initial spatial alignment is necessary to obtain suitable registration results.
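
A hedged sketch of the initialization result reported above, using Open3D's generic point-to-point ICP as a stand-in for the paper's registration method; the point is that a 2-D position and heading estimate suffices as the initial guess for a full 3-D alignment.

```python
import numpy as np
import open3d as o3d

def init_from_2d(x, y, yaw):
    """Build a 4x4 homogeneous transform from a 2-D position and heading."""
    T = np.eye(4)
    c, s = np.cos(yaw), np.sin(yaw)
    T[:2, :2] = [[c, -s], [s, c]]
    T[0, 3], T[1, 3] = x, y
    return T

def align_scans(source_pts, target_pts, x, y, yaw, max_corr_dist=1.0):
    """Refine a coarse 2-D pose estimate with point-to-point ICP."""
    src = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(source_pts))
    tgt = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(target_pts))
    result = o3d.pipelines.registration.registration_icp(
        src, tgt, max_corr_dist, init_from_2d(x, y, yaw),
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return result.transformation   # full 6-DoF alignment
```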
254

Data-Driven Process Optimization of Additive Manufacturing Systems

Aboutaleb, Amirmassoud 04 May 2018 (has links)
The goal of the present dissertation is to develop and apply novel, systematic data-driven optimization approaches that can efficiently optimize Additive Manufacturing (AM) systems with respect to targeted properties of the final parts. The proposed approaches find sets of process parameters that reach a satisfactory level of part quality in an accelerated manner. First, an Accelerated Process Optimization (APO) methodology is developed to optimize an individual scalar property of parts. The APO leverages data from similar, but non-identical, prior studies to accelerate sequential experimentation for optimizing the AM system in the current study. Using Bayesian updating, the APO characterizes and updates the difference between the prior and current experimental studies, accounting for differences in experimental conditions while utilizing prior data to facilitate the optimization procedure. The efficiency and robustness of the APO are tested in extensive simulation studies and in a real-world case study optimizing the relative density of stainless steel parts fabricated by a Selective Laser Melting (SLM) system. We then extend the idea behind the APO to handle multi-objective process optimization problems in which some of the characteristics of the AM-fabricated parts are uncorrelated. The proposed Multi-objective Process Optimization (m-APO) breaks the master multi-objective optimization problem down into a series of convex combinations of single-objective sub-problems. The m-APO maps and scales experimental data from previous sub-problems to guide the remaining sub-problems, improving the solutions while reducing the number of experiments required. The robustness and efficiency of the m-APO are verified in a series of challenging simulation studies and a real-world case study minimizing the geometric inaccuracy of parts fabricated by a Fused Filament Fabrication (FFF) system. Finally, we apply the proposed m-APO to maximize mechanical properties of AM-fabricated parts that show conflicting behavior in the optimal window, namely relative density and elongation-to-failure. Numerical studies show that the m-APO can achieve the best trade-off among conflicting mechanical properties while significantly reducing the number of experimental runs compared with existing methods.
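
The decomposition at the heart of the m-APO, replacing the master multi-objective problem with convex combinations of single-objective sub-problems, can be illustrated with a generic scalarization sketch; the weight schedule and objective values below are hypothetical, and the mapping-and-scaling of prior sub-problem data is omitted.

```python
import numpy as np

def scalarize(objectives, weights):
    """Convex combination of normalized objectives -> one scalar to minimize."""
    w = np.asarray(weights, dtype=float)
    assert np.all(w >= 0) and np.isclose(w.sum(), 1.0), "weights must be convex"
    return float(w @ np.asarray(objectives, dtype=float))

def two_objective_weights(n_subproblems):
    """Evenly spaced convex weight vectors sweeping between two objectives."""
    return [np.array([a, 1.0 - a]) for a in np.linspace(0.0, 1.0, n_subproblems)]

# Each weight vector defines one single-objective sub-problem that a
# sequential (e.g., Bayesian) optimizer can tackle with few experiments.
for w in two_objective_weights(5):
    print(w, scalarize([0.8, 0.3], w))   # hypothetical objective values
```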
255

Submap Correspondences for Bathymetric SLAM Using Deep Neural Networks / Underkarta Korrespondenser för Batymetrisk SLAM med Hjälp av Djupa Neurala Nätverk

Tan, Jiarui January 2022 (has links)
Underwater navigation is a key technology for exploring the oceans and exploiting their resources. For autonomous underwater vehicles (AUVs) to explore the marine environment efficiently and safely, underwater simultaneous localization and mapping (SLAM) systems are often indispensable, since the Global Positioning System (GPS) is unavailable underwater. In an underwater SLAM system, an AUV maps its surroundings and estimates its own pose at the same time. The pose of the AUV can be predicted by dead reckoning, but navigation errors accumulate over time, so sensors are needed to correct the estimated state of the AUV. Among the various sensors, the multibeam echosounder (MBES) is one of the most popular for underwater SLAM, since it acquires bathymetric point clouds carrying depth information about the surroundings. However, data association is difficult over seabeds without distinct landmarks. Previous studies have focused on traditional computer vision methods, which have limited performance on bathymetric data. In this thesis, a novel method based on deep learning is proposed to facilitate underwater perception. We conduct two experiments, on place recognition and point cloud registration, using data collected during a survey. The results show that, compared with the traditional methods, the proposed neural network is able to detect loop closures and register point clouds more efficiently. This work provides a better data association solution for designing underwater SLAM systems.
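
A minimal sketch of descriptor-based loop closure detection of the kind evaluated here, assuming each submap has already been encoded into an L2-normalized embedding by a neural network; the brute-force search and the thresholds are illustrative choices.

```python
import numpy as np

def detect_loop_closures(descriptors, min_separation=10, max_dist=0.3):
    """Flag submap pairs whose learned descriptors are close in embedding space.

    descriptors: (N, D) array, one L2-normalized embedding per submap, in
    survey order. min_separation avoids matching trivially adjacent submaps.
    """
    closures = []
    for i in range(len(descriptors)):
        for j in range(i + min_separation, len(descriptors)):
            if np.linalg.norm(descriptors[i] - descriptors[j]) < max_dist:
                closures.append((i, j))   # candidate loop closure
    return closures
```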
256

[pt] DESENVOLVIMENTO E VALIDAÇÃO DE SENSOR LIDAR VIRTUAL / [en] DEVELOPMENT AND VALIDATION OF A LIDAR VIRTUAL SENSOR

GUILHERME FERREIRA GUSMAO 25 June 2020 (has links)
Three-dimensional (3D) imaging technologies have been increasingly used in academia and in the industrial sector, especially in the form of point clouds, a mathematical representation of the geometry and surface of an object or area. However, obtaining this data can still be expensive and time-consuming, reducing the efficiency of many procedures that depend on a large set of point clouds, such as the generation of datasets for machine learning training, forest canopy calculation, and subsea surveying. A trending solution is the development of computer simulators for imaging systems that perform a virtual scan of a scenario built from 3D object files, yielding synthetic point clouds. This work presents the development of a LiDAR (light detection and ranging) system simulator based on parallel ray-tracing algorithms (GPU ray tracing), with its virtual sensor modeled by metrological parameters. A method for calibrating the sensor is presented, comparing it against the measurements of a real LiDAR sensor, along with error models that increase the realism of the virtual scan. A flexible scenario creator was also implemented to facilitate interaction with the user. The combination of these tools in the simulator resulted in robust generation of synthetic point clouds in different scenarios, enabling the creation of datasets for use in concept tests, the combination of real and virtual data, and other applications.
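
The core of such a simulator can be sketched as a ray-casting loop; this toy version intersects rays with a flat ground plane rather than ray-tracing 3D object files on the GPU, and its Gaussian range noise stands in for the metrological error model.

```python
import numpy as np

def simulate_scan(origin, n_azimuth=360, elevations_deg=(-15, -10, -5, 0),
                  range_noise_std=0.02, max_range=100.0):
    """Cast rays on an azimuth/elevation grid against the ground plane z = 0.

    Returns an (N, 3) synthetic point cloud. Gaussian range noise plays the
    role of the simulator's metrological error model.
    """
    rng = np.random.default_rng()
    points = []
    for el in np.radians(elevations_deg):
        for az in np.linspace(0.0, 2 * np.pi, n_azimuth, endpoint=False):
            d = np.array([np.cos(el) * np.cos(az),
                          np.cos(el) * np.sin(az),
                          np.sin(el)])        # unit ray direction
            if d[2] >= 0:                     # ray never reaches the ground
                continue
            t = -origin[2] / d[2]             # distance to the plane hit
            if 0.0 < t <= max_range:
                t += rng.normal(0.0, range_noise_std)
                points.append(origin + t * d)
    return np.array(points)

cloud = simulate_scan(np.array([0.0, 0.0, 1.8]))   # sensor 1.8 m above ground
```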
257

3D YOLO: End-to-End 3D Object Detection Using Point Clouds / 3D YOLO: Objektdetektering i 3D med LiDAR-data

Al Hakim, Ezeddin January 2018 (has links)
For safe and reliable driving, it is essential that an autonomous vehicle accurately perceives its surrounding environment. Modern sensor technologies used for perception, such as LiDAR and RADAR, deliver a large set of 3D measurement points known as a point cloud, and there is a huge need to interpret this data to detect other road users, such as vehicles and pedestrians. Many research studies have proposed image-based models for 2D object detection. This thesis takes it a step further and aims to develop a LiDAR-based 3D object detection model that operates in real time, with emphasis on autonomous driving scenarios. We propose 3D YOLO, an extension of YOLO (You Only Look Once), one of the fastest state-of-the-art 2D object detectors for images. The proposed model takes point cloud data as input and outputs 3D bounding boxes with class scores in real time. Most existing 3D object detectors use hand-crafted features, while our model follows the end-to-end learning fashion, which removes manual feature engineering. The 3D YOLO pipeline consists of two networks: (a) the Feature Learning Network, an artificial neural network that transforms the input point cloud into a new feature space; (b) 3DNet, a novel convolutional neural network architecture based on YOLO that learns the shape description of the objects. Our experiments on the KITTI dataset show that 3D YOLO has high accuracy and outperforms the state-of-the-art LiDAR-based models in efficiency, making it a suitable candidate for deployment in autonomous vehicles.
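
A common first stage for point-cloud networks of this kind is grouping points into voxels before learning per-voxel features; the sketch below is a generic illustration of that step under assumed voxel sizes, not the 3D YOLO Feature Learning Network itself.

```python
import numpy as np
from collections import defaultdict

def voxelize(points, voxel_size=(0.2, 0.2, 0.4), max_points=35):
    """Group an (N, 3+) point cloud into voxels keyed by integer grid indices.

    Each voxel's points would then be encoded by a feature-learning network;
    capping points per voxel keeps the network input size bounded.
    """
    voxels = defaultdict(list)
    size = np.asarray(voxel_size)
    for p in points:
        key = tuple(np.floor(p[:3] / size).astype(int))
        if len(voxels[key]) < max_points:
            voxels[key].append(p)
    return {k: np.array(v) for k, v in voxels.items()}
```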
258

Crime scenes in Virtual Reality : A user centered study / Brottsplatser i Virtuell Verklighet : En användarcentrerad studie

Dath, Catrin January 2017 (has links)
A crime scene is a vital part of an investigation. There are, however, depending on the situation and crime, issues connected to physically being at the scene: risk of contamination, destruction of evidence, or other problems can prevent the criminal investigators from staying at, visiting, or revisiting the scene. It is therefore important to visually capture the crime scene and any possible evidence in order to aid the investigation. This thesis aims, with an initial research question, to map out the main visual documentation needs, wishes, and challenges that criminal investigators face during an investigation. With a second research question, it aims to address these in a Virtual Reality (VR) design and, with a third research question, to explore how other professions in the investigation process could benefit from it. The work was conducted through a literature review, interviews, workshops, and iterations following the Double Diamond Model of Design. The results from the interviews were thematically analyzed and ultimately summarized into five key themes. These, together with various design criteria and principles, acted as design guidelines when creating a high-fidelity VR design. The first two research questions were answered through the key themes and the VR design. The results of the third research question indicated that, besides criminal investigators, both prosecutors and crime scene investigators may benefit from a VR design, although in different ways. In conclusion, a VR design can address the needs, wishes, and challenges of criminal investigators if developed as a compiled visualization and collaboration tool.
259

Point Cloud Data Augmentation for 4D Panoptic Segmentation / Punktmolndataförstärkning för 4D-panoptisk Segmentering

Jin, Wangkang January 2022 (has links)
4D panoptic segmentation is an emerging topic in the field of autonomous driving that jointly tackles 3D semantic segmentation, 3D instance segmentation, and 3D multi-object tracking based on point cloud data. However, the difficulty of data collection limits the size of existing point cloud datasets, so data augmentation is employed to expand the amount of existing data for better generalization and prediction ability. In this thesis, we built a new point cloud dataset, the VCE dataset, from scratch. We also adopted a neural network model for the 4D panoptic segmentation task and proposed a simple geometric augmentation method based on the translation operation. Compared to the baseline model, better results were obtained after augmentation, with an increase of 2.15% in LSTQ.
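
The proposed translation-based augmentation is simple enough to sketch directly; the parameter names and shift range below are illustrative assumptions, not the thesis implementation.

```python
import numpy as np

def translate_augment(points, labels, max_shift=2.0, rng=None):
    """Clone a labelled point cloud under a random global translation.

    points: (N, 3+) array with xyz in the first three columns. Labels ride
    along unchanged: a rigid translation preserves semantics, instance
    identity, and track identity alike.
    """
    rng = rng or np.random.default_rng()
    shift = rng.uniform(-max_shift, max_shift, size=3)
    augmented = points.copy()
    augmented[:, :3] += shift
    return augmented, labels.copy()
```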
260

Methods for 3D Structured Light Sensor Calibration and GPU Accelerated Colormap

Kurella, Venu January 2018 (has links)
In manufacturing, metrological inspection is a time-consuming process: the higher the required precision, the longer the inspection time. This is due both to slow devices that collect measurement data and to slow computational methods that process the data. The goal of this work is to propose methods to speed up some of these processes. Conventional measurement devices like Coordinate Measuring Machines (CMMs) have high precision but low measurement speed, while new digitizer technologies have high speed but low precision. Using these devices in synergy gives a significant improvement in measurement speed without loss of precision, and the method of synergistically integrating an advanced digitizer with a CMM is discussed. Computational aspects of the inspection process are addressed next. Once a part is measured, the measurement data is compared against the part's model to check tolerances. This comparison is time-consuming on conventional CPUs, so we developed and benchmarked GPU accelerations for it. Finally, naive data-fitting methods can produce misleading results on non-uniform data; weighted total least-squares methods can compensate for the non-uniformity. We show how they can be accelerated with GPUs, using plane fitting as an example. / Thesis / Doctor of Philosophy (PhD)
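
The weighted total least-squares plane fit mentioned above can be sketched with the standard eigen-decomposition formulation, shown here on the CPU with NumPy rather than the thesis's GPU implementation.

```python
import numpy as np

def weighted_tls_plane(points, weights):
    """Weighted total least-squares plane fit.

    Minimizes the weighted sum of squared orthogonal distances: the plane
    passes through the weighted centroid, and its normal is the eigenvector
    of the weighted covariance with the smallest eigenvalue. On a GPU the
    same steps map to batched matrix products plus a small 3x3 eigensolve.
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    centroid = w @ points                    # weighted centroid, shape (3,)
    d = points - centroid
    cov = (d * w[:, None]).T @ d             # 3x3 weighted covariance
    eigvals, eigvecs = np.linalg.eigh(cov)   # ascending eigenvalues
    return eigvecs[:, 0], centroid           # (normal, point on plane)
```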
