Deep Reinforcement Learning for Mapless Mobile Robot Navigation

Hamza, Ameer January 2022 (has links)
Navigation is the fundamental capability of mobile robots that allows them to move from one point to another without any human interference. The autonomous operation of these robots depends on a reliable, robust, and intelligent navigation system. With recent technological progress, autonomous mobile robots are being deployed and used in different areas and scenarios. Conventional navigation approaches depend on predefined, accurate obstacle maps and costly high-end precision laser sensors. These maps are difficult and expensive to acquire and degrade due to changes in the environment, which limits the overall use of mobile robots in dynamic settings. In this research, we investigate an end-to-end learning-based approach to mobile robot navigation in indoor environments, using vision and ranging sensors together with Deep Reinforcement Learning (DRL).

Different state-of-the-art DRL algorithms were trained and compared in 3D simulation in terms of sample efficiency and cumulative reward. Next, extensive experiments were carried out using 10-dimensional sparse distance data from the vision and ranging sensors. The trained models were evaluated in different environments of varying complexity to analyze the strength and generalizability of the learnt policies.

Our results showed that the ranging-sensor approach learned a robust navigation policy that generalized to unseen virtual environments without any additional training and with a high success rate, whereas the vision-based approach performed poorly due to insufficient information and hardware constraints. Moreover, all experiments were carried out only in simulation. However, they should be directly transferable to an actual robot, since an abstract observation space was used.
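To make the described setup concrete, the following is a minimal, hypothetical sketch (not the thesis code) of how a mapless navigation task with a 10-dimensional sparse distance observation can be framed as a reinforcement learning environment and trained with an off-the-shelf DRL algorithm. The environment dynamics, reward shaping, action space, and the choice of Gymnasium with PPO from stable-baselines3 are all assumptions made for illustration; the thesis may have used different tools, algorithms, and reward definitions.

# Hypothetical sketch: a toy mapless navigation environment with a
# 10-dimensional sparse range observation, trained with PPO as one
# example of a DRL algorithm. Not the thesis implementation.
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import PPO


class SparseRangeNavEnv(gym.Env):
    """Toy stand-in for the task described in the abstract: the agent
    observes 10 sparse distance readings and must reach a goal."""

    def __init__(self, max_steps=200):
        super().__init__()
        # 10 normalized distance readings, as in the abstract.
        self.observation_space = spaces.Box(low=0.0, high=1.0, shape=(10,), dtype=np.float32)
        # Continuous linear and angular velocity commands (assumed).
        self.action_space = spaces.Box(low=-1.0, high=1.0, shape=(2,), dtype=np.float32)
        self.max_steps = max_steps

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.steps = 0
        self.goal_dist = float(self.np_random.uniform(0.5, 1.0))
        return self._obs(), {}

    def step(self, action):
        self.steps += 1
        # Crude placeholder dynamics: forward velocity shrinks the goal distance.
        self.goal_dist = max(0.0, self.goal_dist - 0.01 * float(action[0]))
        terminated = self.goal_dist <= 0.05           # goal reached
        truncated = self.steps >= self.max_steps
        reward = 10.0 if terminated else -self.goal_dist  # assumed reward shaping
        return self._obs(), reward, terminated, truncated, {}

    def _obs(self):
        # Placeholder for the 10 sparse range readings a real sensor would provide.
        return self.np_random.uniform(0.0, 1.0, size=10).astype(np.float32)


if __name__ == "__main__":
    env = SparseRangeNavEnv()
    model = PPO("MlpPolicy", env, verbose=0)
    model.learn(total_timesteps=10_000)  # short run for illustration only

Because the policy only ever sees the abstract 10-dimensional distance vector rather than raw simulator state or images, the same interface could in principle be fed by a physical ranging sensor, which is the sim-to-real argument made in the abstract.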
