
Applying inter-layer conflict resolution to hybrid robot control architectures

In this dissertation, we propose and examine the novel use of a learning mechanism between the reactive and deliberative layers of a hybrid robot control architecture. To balance the need to achieve complex goals against real-time constraints, many modern mobile robot navigation systems employ a hybrid deliberative-reactive architecture. In this paradigm, a high-level deliberative layer plans routes or actions toward a known goal based on accumulated world knowledge, while a low-level reactive layer selects motor commands based on current sensor data and the deliberative layer's plan. The intended system-level effect of this architecture is that the robot combines complex reasoning toward global objectives with quick reaction to local constraints.
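As a rough illustration of this division of responsibilities (a minimal sketch, not the specific architecture developed in this work), the control loop of such a system might be organized as below. All class and method names, including the robot interface, are hypothetical.

    # Hypothetical sketch of a hybrid deliberative-reactive control loop.
    # Names are illustrative, not taken from the dissertation.

    class DeliberativeLayer:
        """Plans a route toward a known goal using accumulated world knowledge."""
        def update_map(self, sensor_data):
            ...  # fuse new observations into the world model

        def plan(self, goal):
            ...  # e.g., graph search over the world model
            return []  # list of waypoints

    class ReactiveLayer:
        """Selects motor commands from current sensor data and the active plan."""
        def command(self, sensor_data, waypoint):
            ...  # e.g., obstacle-avoiding steering toward the next waypoint
            return (0.0, 0.0)  # (linear velocity, angular velocity)

    def control_loop(robot, goal):
        deliberative = DeliberativeLayer()
        reactive = ReactiveLayer()
        plan = []
        while not robot.at(goal):
            sensors = robot.read_sensors()
            deliberative.update_map(sensors)
            if not plan or robot.replan_needed():
                plan = deliberative.plan(goal)  # slow, global reasoning
            waypoint = plan[0] if plan else goal
            robot.actuate(reactive.command(sensors, waypoint))  # fast, local reaction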
Implicit in this type of architecture is the assumption that both layers use the same model of the robot's capabilities and constraints. Differences in how the two layers represent the robot's kinematic constraints, for example, may lead the deliberative layer to create a plan that the reactive layer cannot follow. This sort of conflict may degrade system-level performance, or even produce complete navigational deadlock. Traditionally, it has been the robot designer's task to ensure that the layers operate in a compatible manner; however, this is a complex, empirical task.
To improve system-level performance and navigational robustness, we propose introducing a learning mechanism between the reactive and deliberative layers, allowing the deliberative layer to learn a model of how the reactive layer executes its plans. First, we focus on detecting this inter-layer conflict and acting based on a corrected model. This is demonstrated on a physical robotic platform in an unstructured outdoor environment. Next, we focus on learning a model to predict instances of inter-layer conflict, and planning to act with respect to this model. This is demonstrated using supervised learning in a physics-based simulation environment. Algorithms and results are presented for both approaches.
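A minimal sketch of the second idea, predicting inter-layer conflict with supervised learning and folding the prediction back into planning, might look like the following. The feature set, the toy training data, and the use of scikit-learn's logistic regression are assumptions for illustration, not the method of the dissertation.

    # Hypothetical sketch: learn a classifier that predicts whether the reactive
    # layer will fail to follow a candidate plan segment (inter-layer conflict).
    # Features, data, and model choice are illustrative assumptions.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Each row: features of a planned segment, e.g. (turn radius, clearance,
    # slope) recorded during execution; label 1 = the reactive layer deviated
    # from or stalled on the segment, 0 = it followed the plan.
    X = np.array([[0.5, 1.2, 0.05],
                  [0.2, 0.4, 0.20],
                  [1.5, 2.0, 0.02],
                  [0.1, 0.3, 0.15]])
    y = np.array([0, 1, 0, 1])

    conflict_model = LogisticRegression().fit(X, y)

    def plan_cost(segment_features, base_cost, penalty=10.0):
        """Inflate a segment's planning cost by its predicted conflict risk, so
        the deliberative layer avoids plans the reactive layer is unlikely to
        execute."""
        p_conflict = conflict_model.predict_proba([segment_features])[0, 1]
        return base_cost + penalty * p_conflict

    print(plan_cost([0.3, 0.5, 0.18], base_cost=1.0))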

Identifier: oai:union.ndltd.org:GATECH/oai:smartech.gatech.edu:1853/33979
Date: 20 January 2010
Creators: Powers, Matthew D.
Publisher: Georgia Institute of Technology
Source Sets: Georgia Tech Electronic Thesis and Dissertation Archive
Detected Language: English
Type: Dissertation
