In robotics, we need robots that can understand what a human wants and respond accordingly. Conversely, the robot should be able to provide information that is easy for a human to understand. We developed the robot Self Agent to provide these abilities. The main purpose of the Self Agent is to serve as the center of the robot, providing every basic capability a robot should have: translating high-level commands into basic commands that the robot can understand and execute, self-monitoring of the robot's performance, and intelligent action selection that makes good decisions.
Since we needed a good decision mechanism, we created a new approach consisting of two algorithms. The first is the Spreading Activation Network (SAN), which provides the basis for selecting appropriate behaviors (action selection) to complete a given task. To perform well, the parameters of the Spreading Activation Network must be manually tuned. The second is a Reinforcement Learning (RL) technique that enables the robot to learn multiple policies automatically. This research shows how we trained an ATRV-Jr robot, called Scooter, to learn policies and adapt automatically to unexpected conditions.
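The abstract does not give the SAN's exact formulation, so the sketch below follows one common style of spreading-activation action selection (Maes-style networks): goals inject activation energy into behaviors that achieve them, energy spreads backward to behaviors that can satisfy unmet preconditions, and the most active executable behavior above a threshold fires. The behaviors and the parameters `phi`, `gamma`, and `theta` are invented examples of the kind of hand-tuned values the abstract says RL could learn instead.

```python
# Illustrative Maes-style spreading-activation sketch; behavior names and
# parameter values are assumptions, not taken from the thesis.
class Behavior:
    def __init__(self, name, preconditions, effects):
        self.name = name
        self.preconditions = set(preconditions)
        self.effects = set(effects)
        self.activation = 0.0

class SpreadingActivationNet:
    def __init__(self, behaviors, goals, phi=0.2, gamma=0.7, theta=1.0):
        self.behaviors = behaviors
        self.goals = set(goals)
        self.phi = phi      # energy injected per matching goal each step
        self.gamma = gamma  # fraction of activation spread backward
        self.theta = theta  # selection threshold (decays if nothing fires)

    def step(self, world_state):
        state = set(world_state)
        # 1. Goals inject energy into behaviors whose effects achieve them.
        for b in self.behaviors:
            b.activation += self.phi * len(b.effects & self.goals)
        # 2. Spread energy synchronously: each behavior sends energy to
        #    behaviors whose effects satisfy its currently unmet preconditions.
        deltas = {b: 0.0 for b in self.behaviors}
        for b in self.behaviors:
            unmet = b.preconditions - state
            for pred in self.behaviors:
                if pred is not b and pred.effects & unmet:
                    deltas[pred] += self.gamma * b.activation / len(self.behaviors)
        for b, d in deltas.items():
            b.activation += d
        # 3. Fire the most active behavior whose preconditions are all met
        #    and whose activation exceeds the threshold.
        executable = [b for b in self.behaviors
                      if b.preconditions <= state and b.activation >= self.theta]
        if not executable:
            self.theta *= 0.9  # lower the bar so the net cannot deadlock
            return None
        chosen = max(executable, key=lambda b: b.activation)
        chosen.activation = 0.0  # reset after firing
        return chosen
```

With a two-behavior chain such as `goto_can` (achieves `at_can`) feeding `grab_can` (needs `at_can`, achieves the goal `holding_can`), energy injected at the goal flows backward so the robot first moves to the can and then grasps it. How well this works depends directly on `phi`, `gamma`, and `theta`, which is exactly the hand-tuning burden the RL component of this work is meant to remove.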
Identifier | oai:union.ndltd.org:VANDERBILT/oai:VANDERBILTETD:etd-0329102-161914 |
Date | 16 April 2003 |
Creators | Kusumalnukool, Kanok |
Contributors | D. Mitch Wilkes, Richard Alan Peters II |
Publisher | VANDERBILT |
Source Sets | Vanderbilt University Theses |
Language | English |
Detected Language | English |
Type | text |
Format | application/pdf |
Source | http://etd.library.vanderbilt.edu/available/etd-0329102-161914/ |
Rights | unrestricted |