1

Experiments with Visual Odometry for Hydrobatic Autonomous Underwater Vehicles

Balaji Suresh Kumar, Somnath January 2023 (has links)
Hydrobatic Autonomous Underwater Vehicles (AUVs) are underactuated robots that can perform agile maneuvers in challenging underwater environments with high efficiency in speed and range. Localizing and navigating these AUVs, particularly for manipulation tasks, is challenging because common sensors such as GPS become very unreliable underwater. Visual Odometry (VO) addresses this challenge by estimating a robot's position and orientation from the motion observed in images taken by one or more onboard cameras, making it a promising solution for underwater localization since it provides egomotion information from the visual cues available to the robot. This research explores the applicability of VO algorithms to hydrobatic AUVs using a simulated underwater dataset obtained in Stonefish, an advanced open-source simulation tool developed specifically for marine robotics. Because very little research is available on learning-based VO frameworks in underwater environments, this work focuses on the feasibility of employing two state-of-the-art feature-based VO frameworks, ORB-SLAM2 and VISO2. The assessment is performed on a baseline underwater dataset captured by the cameras of a hydrobatic AUV in a simulated algae farm, one of the target applications of hydrobatic AUVs. A novel software architecture is also proposed for hydrobatic AUVs, in which VO can be integrated with other components as a node stack to ensure robust localization. The study further suggests enhancements, including camera calibration and timestamp synchronization, as future steps to optimize VO accuracy and functionality. ORB-SLAM2 performs well in the baseline scenario but exhibits slight drift when turbidity is introduced into the simulated underwater environment. VISO2 is recommended for such high-turbidity scenarios, but it fails to estimate camera motion accurately on this dataset because it is highly sensitive to accurate camera calibration and synchronized timestamps, both of which suffer from the hardware-synchronization issues present in the data. Despite these limitations, the results show strong potential for both ORB-SLAM2 and VISO2 as feature-based VO methods for future deployment on hydrobatic AUVs: ORB-SLAM2 is preferred for overall localization and mapping in low-turbidity environments, where it is less prone to drift, while VISO2 is preferred for high-turbidity environments given highly accurate camera calibration and synchronization.
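The core idea behind the feature-based VO frameworks evaluated above can be sketched in a few lines. This is a deliberately simplified, hypothetical illustration (not ORB-SLAM2 or VISO2 themselves, which estimate full 6-DoF pose from calibrated cameras): matched feature points are tracked between two consecutive frames, and the camera's apparent image-plane motion is estimated from their average displacement.

```python
# Toy sketch of the feature-tracking step of visual odometry.
# All point coordinates are hypothetical pixel positions.

def estimate_translation(pts_prev, pts_curr):
    """Estimate average 2D displacement of matched feature points
    between two consecutive camera frames."""
    n = len(pts_prev)
    dx = sum(c[0] - p[0] for p, c in zip(pts_prev, pts_curr)) / n
    dy = sum(c[1] - p[1] for p, c in zip(pts_prev, pts_curr)) / n
    return dx, dy

# Matched features from two consecutive frames (hypothetical).
prev_pts = [(100.0, 50.0), (200.0, 80.0), (150.0, 120.0)]
curr_pts = [(103.0, 49.0), (203.0, 79.0), (153.0, 119.0)]

dx, dy = estimate_translation(prev_pts, curr_pts)
print(dx, dy)  # average feature shift of (+3, -1) pixels
```

A real VO pipeline chains such frame-to-frame estimates into a trajectory, which is why the calibration and timestamp-synchronization issues noted above compound into drift.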
2

The Interconnectivity Between SLAM and Autonomous Exploration: Investigation Through Integration

Ívarsson, Elliði January 2023 (has links)
Two crucial functionalities of a fully autonomous robotic agent are localization and navigation. Enabling an agent to localize itself in an unknown environment is an extensive and widely studied problem, with Simultaneous Localization and Mapping (SLAM) as one of its main areas. Many advancements in this field over the years have produced robust and accurate localization systems. Navigation has likewise improved substantially, yielding efficient path-planning algorithms and effective exploration strategies. Although an abundance of research exists on each of these two topics individually, far less exists on their combination and their effect on each other. The aim of this thesis was therefore to integrate two state-of-the-art components, one from each respective area of research, into a functioning system. This was done with the aim of studying the interconnectivity between these components while also documenting the integration process and identifying important considerations for similar future endeavours. Evaluations of the system showed that it performed with surprisingly good accuracy, although it was severely lacking in robustness. The integration efforts showed good promise; however, it is clear that the two fields are heavily linked and need to be considered in a mutual context when building a complete integrated system.
3

Visual SLAM using sparse maps based on feature points

Brunnegård, Oliver, Wikestad, Daniel January 2017 (has links)
Visual Simultaneous Localisation and Mapping is a useful tool for creating 3D environments from feature points. Such visual systems could be very valuable in autonomous vehicles for improving localisation, since cameras are fairly cheap sensors capable of gathering large amounts of data, but more efficient algorithms are still needed to better interpret the most valuable information. This paper analyses how much a feature-based map can be reduced without losing significant accuracy during localisation. Semantic segmentation produced by a deep neural network is used to classify the features used to create the map, and the map is reduced by removing certain classes. The results show that feature-based maps can be significantly reduced without losing accuracy. The use of classes gave promising results: large numbers of features were removed while the system could still localise accurately, and removing some classes gave the same or even better results in certain weather conditions compared to localisation with the full-scale map.
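The class-based map-reduction idea described above can be sketched as follows. This is a hypothetical illustration, not the thesis's implementation: each map feature carries a semantic class label assigned by a segmentation network, and features belonging to classes assumed unstable for localisation are dropped; the class names are invented for the example.

```python
# Hypothetical map reduction: drop features whose semantic class is
# considered unreliable for long-term localisation (assumed class names).
UNSTABLE_CLASSES = {"vegetation", "sky", "dynamic"}

def reduce_map(features):
    """Keep only features whose class is considered stable."""
    return [f for f in features if f["cls"] not in UNSTABLE_CLASSES]

full_map = [
    {"xyz": (1.0, 0.2, 5.0), "cls": "building"},
    {"xyz": (0.5, 1.1, 3.0), "cls": "vegetation"},
    {"xyz": (2.0, 0.0, 7.5), "cls": "road"},
    {"xyz": (1.5, 2.0, 4.0), "cls": "sky"},
]

sparse_map = reduce_map(full_map)
print(len(full_map), "->", len(sparse_map))  # 4 -> 2
```

The paper's finding is that localisation against such a reduced map can match or, in some weather conditions, exceed the accuracy of the full map.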
4

OCTREE 3D VISUALIZATION MAPPING BASED ON CAMERA INFORMATION

Benhao Wang (8803199) 07 May 2020 (has links)
Today, computer science and robotics are highly developed. Simultaneous Localization and Mapping (SLAM) is widely used in mobile-robot navigation, game design, and autonomous vehicles, and it can be said that in the future most scenarios involving mobile robots will require localization and mapping. Among these capabilities, the construction of three-dimensional (3D) maps is particularly important for environment visualization, which is the focus of this research.

In this project, the data used for visualization was collected with a vision sensor and processed by ORB-SLAM2 to generate 3D point-cloud maps of the environment. Because the map point cloud contains a great deal of noise, filters are applied to remove it: a pass-through filter cuts off points outside a specified range, and statistical filters then remove sparse outlier noise. Thereafter, to improve computational efficiency while retaining the necessary terrain detail, a voxel filter is used for downsampling; to improve the mapping result, the sampling amount is increased appropriately to increase surface smoothness. Finally, the processed map points are visualized using Octomap. The implementation utilizes the services provided by the Robot Operating System (ROS): the processed map points are published as point-cloud data in ROS and visualized with Octomap using the powerful Rviz software on the ROS platform.

Simulation results confirm that Octomap shows terrain detail well in 3D visualizations of the environment. Following the simulations, visualization experiments in two environments of different complexity were performed. The experimental results show that the approach mitigates the influence of noise on the visualization results to a certain extent, and that Octomap provides a good visualization of static high-precision point clouds. The simulation and experimental results demonstrate the applicability of the approach to visualizing 3D map points for the purpose of autonomous navigation.
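The three-stage filtering pipeline described above can be sketched in a self-contained way. The thesis uses PCL-style filters; the versions below are simplified stand-ins (in particular, the outlier stage here uses a crude nearest-neighbour distance test rather than true statistical outlier removal), and all point data is invented for illustration.

```python
# Simplified sketch of the point-cloud filtering pipeline:
# 1) pass-through: drop points outside a z range,
# 2) outlier removal: drop points with no neighbour within max_nn_dist
#    (a crude stand-in for statistical outlier removal),
# 3) voxel downsampling: keep one point per occupied grid cell.

def pass_through(points, z_min, z_max):
    return [p for p in points if z_min <= p[2] <= z_max]

def remove_outliers(points, max_nn_dist):
    def nn_dist(p):
        return min(
            ((p[0] - q[0])**2 + (p[1] - q[1])**2 + (p[2] - q[2])**2) ** 0.5
            for q in points if q is not p
        )
    return [p for p in points if nn_dist(p) <= max_nn_dist]

def voxel_downsample(points, voxel):
    cells = {}
    for p in points:
        key = (int(p[0] // voxel), int(p[1] // voxel), int(p[2] // voxel))
        cells.setdefault(key, p)  # keep the first point per voxel
    return list(cells.values())

# Hypothetical raw cloud: a small cluster, one out-of-range point,
# and one isolated outlier.
raw = [(0.0, 0.0, 0.5), (0.1, 0.0, 0.5), (0.5, 0.0, 1.0),
       (0.0, 0.0, 5.0), (9.0, 9.0, 0.5)]

s1 = pass_through(raw, 0.0, 2.0)
s2 = remove_outliers(s1, 1.0)
s3 = voxel_downsample(s2, 0.5)
print(len(raw), "->", len(s1), "->", len(s2), "->", len(s3))  # 5 -> 4 -> 3 -> 2
```

In the thesis the surviving points are then published in ROS and inserted into an Octomap for visualization.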
5

System for autonomous racetrack mapping

Soboňa, Tomáš January 2021 (has links)
The focus of this thesis is to theoretically design, describe, implement and verify the functionality of the selected concept for race-track mapping. The theoretical part of the thesis describes the ORB-SLAM2 algorithm for vehicle localization. It then further describes the map format, an occupancy grid, and the method of its creation; such a map should be in a format suitable for use by other trajectory-planning systems. Several cameras, as well as computer units, are described in this part, and the most suitable ones are selected based on parameters and tests. The thesis also proposes the architecture of the mapping system: it describes the individual units that make up the system, what is exchanged between the units, and in what format the system output is sent. The individual parts of the system are first tested separately, and subsequently the system is tested as a whole. Finally, the achieved results are evaluated, as well as the possibilities for further expansion.
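The occupancy-grid construction step described above can be illustrated with a minimal sketch. This is hypothetical code, not the thesis implementation: tracked map points are projected into a 2D grid, each marking its cell as occupied, following the common ROS convention of -1 for unknown and 100 for occupied cells.

```python
# Illustrative sketch: project 2D map points into an occupancy grid.
# Cell values follow the ROS nav_msgs/OccupancyGrid convention:
# -1 = unknown, 100 = occupied.

def build_occupancy_grid(points_xy, width, height, resolution):
    """Build a row-major occupancy grid from points given in metres."""
    grid = [[-1] * width for _ in range(height)]
    for x, y in points_xy:
        col = int(x / resolution)
        row = int(y / resolution)
        if 0 <= col < width and 0 <= row < height:
            grid[row][col] = 100  # mark the cell containing this point
    return grid

# Hypothetical track-boundary points in metres.
points = [(0.2, 0.2), (1.1, 0.3), (2.6, 2.7)]
grid = build_occupancy_grid(points, width=6, height=6, resolution=0.5)
print(sum(cell == 100 for row in grid for cell in row))  # 3 occupied cells
```

A trajectory-planning system can then treat occupied cells as obstacles and unknown cells as regions still to be mapped.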
