
Resilient visual perception for multiagent systems

There has been increasing interest in visual sensors and vision-based solutions for single- and multi-robot systems. Vision-based sensors, e.g., traditional RGB cameras, provide rich semantic information and accurate directional measurements at relatively low cost; however, such sensors have two major drawbacks: they do not generally provide reliable depth estimates, and they typically have a limited field of view. These limitations considerably increase the complexity of controlling multiagent systems. This thesis studies some of the underlying problems in vision-based multiagent control and mapping.

The first contribution of this thesis is a method for restoring bearing rigidity in non-rigid networks of robots. We introduce means to determine which bearing measurements can improve bearing rigidity in non-rigid graphs and provide a greedy algorithm that restores rigidity in 2D with a minimum number of added edges.
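As a rough illustration of this kind of procedure (not the exact algorithm from the thesis), the sketch below greedily adds candidate bearing edges until the 2D bearing rigidity matrix reaches rank 2n - 3; the positions, candidate edge set, and rank test are assumptions made for the example.

    import numpy as np
    from itertools import combinations

    def bearing_rigidity_matrix(points, edges):
        # One 2 x (2n) block row per edge, built from the orthogonal
        # projector P = I - g g^T of the unit bearing g between endpoints.
        n = len(points)
        rows = []
        for (i, j) in edges:
            d = points[j] - points[i]
            g = d / np.linalg.norm(d)
            P = np.eye(2) - np.outer(g, g)
            block = np.zeros((2, 2 * n))
            block[:, 2 * i:2 * i + 2] = -P
            block[:, 2 * j:2 * j + 2] = P
            rows.append(block)
        return np.vstack(rows)

    def greedy_restore_rigidity(points, edges):
        # Add one rank-increasing candidate edge at a time until the
        # framework is infinitesimally bearing rigid (rank 2n - 3 in 2D).
        n = len(points)
        target = 2 * n - 3
        edges, added = list(edges), []
        while np.linalg.matrix_rank(bearing_rigidity_matrix(points, edges)) < target:
            rank = np.linalg.matrix_rank(bearing_rigidity_matrix(points, edges))
            candidate = next(
                (e for e in combinations(range(n), 2)
                 if e not in edges and e[::-1] not in edges
                 and np.linalg.matrix_rank(
                     bearing_rigidity_matrix(points, edges + [e])) > rank),
                None)
            if candidate is None:
                break
            edges.append(candidate)
            added.append(candidate)
        return added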

The focus of the second part is the formation control problem using only bearing measurements. We address the consensus and formation control problems through non-smooth Lyapunov functions and differential inclusions. We provide a stability analysis for undirected graphs and investigate the derived controllers on directed graphs. We also introduce a new notion of bearing persistence for purely bearing-based control in directed graphs.
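For context, a widely studied bearing-only formation law (used here only as an illustrative stand-in for the controllers analyzed in the thesis) steers each agent along the projection of its desired bearings onto the subspace orthogonal to the measured ones. The neighbor graph, desired bearings, and step size below are assumptions for the sketch.

    import numpy as np

    def bearing_formation_step(positions, neighbors, desired_bearings, dt=0.01):
        # One Euler step of u_i = -sum_j P(g_ij) g*_ij, where g_ij is the
        # measured unit bearing toward neighbor j, g*_ij the desired bearing,
        # and P(g) = I - g g^T projects orthogonally to g.
        updated = {i: p.copy() for i, p in positions.items()}
        for i, nbrs in neighbors.items():
            u = np.zeros(2)
            for j in nbrs:
                d = positions[j] - positions[i]
                g = d / np.linalg.norm(d)
                P = np.eye(2) - np.outer(g, g)
                u -= P @ desired_bearings[(i, j)]
            updated[i] = positions[i] + dt * u
        return updated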

The third part is concerned with the bearing-only visual homing problem with a limited-field-of-view sensor. In essence, this problem is a special case of the formation control problem in which a single moving agent has fixed neighbors. We introduce a navigational vector field, composed of two orthogonal vector fields, that converges to the goal position without violating the field-of-view constraints. Our method does not require the landmarks' locations and is robust to loss of landmark tracking.
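The toy construction below composes two orthogonal planar fields with a made-up blending rule, only to illustrate the idea of trading off progress toward the goal against keeping a landmark inside a limited field of view; the actual vector fields and guarantees in the thesis differ.

    import numpy as np

    def homing_field(p, goal, landmark, heading, half_fov=np.deg2rad(35.0)):
        # Radial component pulls straight toward the goal; the orthogonal
        # (tangential) component takes over as the landmark bearing nears
        # the edge of the field of view.
        to_goal = goal - p
        v_rad = to_goal / (np.linalg.norm(to_goal) + 1e-9)
        v_tan = np.array([-v_rad[1], v_rad[0]])

        rel = landmark - p
        bearing = np.arctan2(rel[1], rel[0]) - heading
        bearing = np.arctan2(np.sin(bearing), np.cos(bearing))  # wrap to (-pi, pi]

        # Illustrative weights: mostly radial motion when the landmark is well
        # inside the FOV, increasingly tangential motion near the FOV edge.
        margin = np.clip(1.0 - abs(bearing) / half_fov, 0.0, 1.0)
        return margin * v_rad + (1.0 - margin) * np.sign(bearing) * v_tan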

The last part of this dissertation considers outlier detection in pose graphs for Structure from Motion (SfM) and Simultaneous Localization and Mapping (SLAM) problems. We propose a method for detecting incorrect orientation measurements before pose graph optimization by checking their geometric consistency over cycles. We use Expectation-Maximization to fine-tune the noise distribution parameters and propose a new approximate graph inference procedure, specifically designed to take advantage of evidence on cycles, that outperforms standard approaches.
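The core geometric-consistency check can be illustrated as follows: relative rotations composed around a cycle should return to the identity, so a large residual angle implicates at least one measurement on that cycle. The cycle list, rotation dictionary, and threshold below are assumptions for the sketch; the EM refinement and cycle-based inference procedure are not shown.

    import numpy as np

    def rotation_angle(R):
        # Geodesic distance of R from the identity, in radians.
        return np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))

    def inconsistent_cycles(cycles, rel_rotations, threshold=np.deg2rad(10.0)):
        # Compose the measured relative rotations along each cycle; flag the
        # cycle if the composition deviates from the identity by more than
        # the threshold, which suggests an outlier edge somewhere on it.
        flagged = []
        for cycle in cycles:
            R = np.eye(3)
            for (i, j) in cycle:
                R_ij = rel_rotations[(i, j)] if (i, j) in rel_rotations \
                    else rel_rotations[(j, i)].T
                R = R @ R_ij
            if rotation_angle(R) > threshold:
                flagged.append(cycle)
        return flagged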

These contributions will help multi-robot systems overcome the limitations of visual sensors in collaborative tasks such as navigation and mapping.

Identifier: oai:union.ndltd.org:bu.edu/oai:open.bu.edu:2144/42591
Date: 15 May 2021
Creators: Karimian, Arman
Contributors: Tron, Roberto
Source Sets: Boston University
Language: en_US
Detected Language: English
Type: Thesis/Dissertation
