
Vision Approach for Position Estimation Using Moiré Patterns and Convolutional Neural Networks

In order for a robot to operate autonomously in an environment, it must be able to locate itself within that environment. A robot's position and orientation cannot be measured directly by onboard physical sensors, so estimating them is a non-trivial problem. Some sensing systems do provide this information, such as the Global Navigation Satellite System (GNSS) and motion capture (mo-cap). Nevertheless, these systems are expensive to set up or are not usable in the environments where autonomous vehicles are often deployed.
Our proposal explores a new approach to sensing for relative motion and position estimation. It consists of a single vision sensor and a marker that exploits the moiré phenomenon: a Convolutional Neural Network (CNN) is trained to estimate the position of the vision sensor from the pattern displayed on the marker. We describe the data collection and network training process, and present the hyperparameter search method used to optimize the network structure. We evaluate the trained network in an experimental setup to assess its position-estimation accuracy. The system achieved an average absolute error of 1 cm, demonstrating a method that could overcome current limitations of vision-based approaches to pose estimation.
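The abstract describes a CNN that regresses the camera's position directly from an image of the moiré marker. The thesis does not specify its framework, input resolution, or layer sizes, so the following is only a minimal sketch of such a regressor, assuming PyTorch, 128x128 grayscale marker crops, and a 3-DoF (x, y, z) target trained with a mean-squared-error loss.

```python
# Minimal sketch of a CNN position regressor (assumed architecture; the thesis
# does not publish its exact network or training code).
import torch
import torch.nn as nn

class MoirePoseCNN(nn.Module):
    """Small CNN that regresses camera position (x, y, z) from a marker image."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.regressor = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 128), nn.ReLU(),
            nn.Linear(128, 3),  # predicted (x, y, z)
        )

    def forward(self, x):
        return self.regressor(self.features(x))

# One training step on placeholder data, to show the regression setup.
model = MoirePoseCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

images = torch.randn(8, 1, 128, 128)   # placeholder marker images
targets = torch.randn(8, 3)            # placeholder ground-truth positions
loss = loss_fn(model(images), targets)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

In practice the convolution depths, fully connected width, and learning rate here stand in for the quantities the thesis tunes via its hyperparameter search; they are illustrative values, not the thesis's reported configuration.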

Identifier: oai:union.ndltd.org:kaust.edu.sa/oai:repository.kaust.edu.sa:10754/679806
Date: 05 1900
Creators: Alotaibi, Nawaf
Contributors: Feron, Eric, Physical Science and Engineering (PSE) Division, Magnotti, Gaetano, Park, Shinkyu
Source Sets: King Abdullah University of Science and Technology
Language: English
Detected Language: English
Type: Thesis
Rights: 2023-07-24, At the time of archiving, the student author of this thesis opted to temporarily restrict access to it. The full text of this thesis will become available to the public after the expiration of the embargo on 2023-07-24.
