About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.

Terrain Mapping for Autonomous Vehicles / Terrängkartläggning för autonoma fordon

Pedreira Carabel, Carlos Javier January 2015 (has links)
Autonomous vehicles are at the forefront of the automotive industry today, promising safer and more efficient transportation systems. One of the main challenges for any autonomous vehicle is being aware of its own position and of the obstacles along its path. This project addresses the pose estimation and terrain mapping problem by integrating a visual odometry method with a mapping technique. An RGB-D camera, the Microsoft Kinect v2, was chosen as the sensor for capturing information from the environment and was connected to an Intel mini-PC for real-time processing. Both pieces of hardware were mounted on board a four-wheeled research concept vehicle (RCV) to test the feasibility of the solution at outdoor locations. The Robot Operating System (ROS) was used as the development environment, with C++ as the programming language. The visual odometry strategy consisted of a frame registration algorithm called Adaptive Iterative Closest Keypoint (AICK), based on Iterative Closest Point (ICP) and using Oriented FAST and Rotated BRIEF (ORB) as the image keypoint extractor. A grid-based, rolling-window local costmap was implemented to obtain a two-dimensional representation of the obstacles close to the vehicle within a predefined area, enabling further path planning applications. Experiments were performed both offline and in real time to test the system in indoor and outdoor scenarios. The results confirmed the viability of the designed framework for tracking the camera pose and detecting objects in indoor environments. Outdoor environments, however, exposed the limitations of the RGB-D sensor, making the current system configuration unfeasible for outdoor use.
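The core of the frame registration the abstract describes — ICP-style alignment of point sets — can be sketched in simplified 2D form. This is not the thesis's AICK implementation: plain brute-force nearest-neighbour matching stands in for adaptive ORB keypoint matching, and the closed-form Kabsch solution is used for each alignment step.

```python
import numpy as np

def rigid_align(src, dst):
    """One Kabsch step: best-fit rotation R and translation t mapping src onto dst."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)          # cross-covariance of centred points
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                   # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return R, t

def icp(src, dst, iters=20):
    """Iterative Closest Point: alternate nearest-neighbour matching and rigid alignment."""
    cur = src.copy()
    R_total, t_total = np.eye(2), np.zeros(2)
    for _ in range(iters):
        # brute-force nearest neighbours (keypoint matching stands in here)
        d = np.linalg.norm(cur[:, None, :] - dst[None, :, :], axis=2)
        matched = dst[d.argmin(axis=1)]
        R, t = rigid_align(cur, matched)
        cur = cur @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t   # compose incremental motions
    return R_total, t_total

# Demo: recover a small known camera motion from two synthetic "frames".
theta = 0.03
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
t_true = np.array([0.1, -0.1])
src = np.array([[x, y] for x in range(6) for y in range(6)], dtype=float)
dst = src @ R_true.T + t_true
R_est, t_est = icp(src, dst)
```

In a visual odometry pipeline, this per-frame transform would be chained over time to track the camera pose; AICK's contribution, per the abstract, is matching in keypoint space rather than over raw depth points.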

Consequences of terrestrial invaders for aquatic-riparian linkages

Diesburg, Kristen M. January 2021 (has links)
No description available.

Through the Blur with Deep Learning : A Comparative Study Assessing Robustness in Visual Odometry Techniques

Berglund, Alexander January 2023 (has links)
In this thesis, the robustness of deep learning techniques in the field of visual odometry is investigated, with a specific focus on the impact of motion blur. A comparative study is conducted, evaluating the performance of state-of-the-art deep convolutional neural network methods, namely DF-VO and DytanVO, against ORB-SLAM3, a well-established non-deep-learning technique for visual simultaneous localization and mapping. The objective is to quantitatively assess the performance of these models as a function of motion blur. The evaluation is carried out on a custom synthetic dataset, which simulates a camera navigating through a forest environment. The dataset includes trajectories with varying degrees of motion blur, caused by camera translation and, optionally, pitch and yaw rotational noise. The results demonstrate that the deep learning-based methods maintained robust performance despite the challenging conditions presented in the test data, while excessive blur led to tracking failures in the geometric model. This suggests that the ability of deep neural network architectures to automatically learn hierarchical feature representations and capture complex, abstract features may enhance the robustness of deep learning-based visual odometry techniques in challenging conditions, compared to their geometric counterparts.
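The failure mode the abstract attributes to the geometric model can be illustrated with a toy experiment (not the thesis's dataset or metric): horizontal motion blur is a box-filter convolution along the direction of motion, and it suppresses exactly the image gradients that corner-based feature detectors such as ORB rely on.

```python
import numpy as np

def motion_blur_1d(row, length):
    """Convolve one scanline with a box kernel of `length` pixels (horizontal motion blur)."""
    k = np.ones(length) / length
    return np.convolve(row, k, mode="same")

def gradient_energy(img):
    """Sum of squared horizontal differences: a crude sharpness / 'trackability' score."""
    return float(np.sum(np.diff(img, axis=-1) ** 2))

rng = np.random.default_rng(0)
scene = rng.random((32, 64))                               # synthetic textured scene
blurred = np.apply_along_axis(motion_blur_1d, 1, scene, 9) # 9-pixel motion streak
sharp_e, blur_e = gradient_energy(scene), gradient_energy(blurred)
```

Under this toy measure, a 9-pixel blur removes the bulk of the gradient energy, starving a geometric front end of matchable features, whereas a learned front end is not tied to any one hand-crafted gradient response.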
