
Vision-Based Reinforcement Learning Using A Consolidated Actor-Critic Model

Vision-based machine learning agents are tasked with making decisions based on high-dimensional, noisy input, placing a heavy load on available resources. Moreover, observations typically provide only partial information about the environment state, necessitating robust state inference by the agent. Reinforcement learning provides a framework for decision making with the goal of maximizing long-term reward. This thesis introduces a novel approach to vision-based reinforcement learning through the use of a consolidated actor-critic model (CACM). The approach takes advantage of artificial neural networks as non-linear function approximators and the reduced computational requirements of the CACM scheme to yield a scalable vision-based control system. In this thesis, a comparison between the actor-critic and CACM is made. Additionally, the effect that observation prediction and correlated exploration have on the agent's performance is investigated.
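The abstract does not specify the network architecture, but the consolidation it describes is commonly realized by having the actor and critic share one set of parameters rather than maintaining two separate networks. The sketch below illustrates that idea under assumed details: a PyTorch module (a hypothetical `ConsolidatedActorCritic` class with invented layer sizes) whose shared trunk feeds both a policy head and a value head, so one forward pass serves both roles.

```python
import torch
import torch.nn as nn

class ConsolidatedActorCritic(nn.Module):
    """Illustrative consolidated actor-critic: a single network with a
    shared feature trunk and two output heads (actor and critic).
    Architecture and sizes are assumptions, not the thesis's design."""

    def __init__(self, obs_dim: int, n_actions: int, hidden: int = 128):
        super().__init__()
        # Shared trunk: one set of weights serves both the actor and the
        # critic, which is the source of the reduced computational cost
        # relative to two separate networks.
        self.trunk = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.policy_head = nn.Linear(hidden, n_actions)  # actor: action logits
        self.value_head = nn.Linear(hidden, 1)           # critic: state value

    def forward(self, obs: torch.Tensor):
        features = self.trunk(obs)
        return self.policy_head(features), self.value_head(features)

# Usage: one forward pass yields both an action distribution and a value
# estimate for a flattened (e.g. visual) observation.
net = ConsolidatedActorCritic(obs_dim=64, n_actions=4)
obs = torch.randn(1, 64)
logits, value = net(obs)
action = torch.distributions.Categorical(logits=logits).sample()
```

Because the two heads share gradients through the trunk, the critic's value-estimation signal and the actor's policy signal jointly shape the same feature representation, one plausible reading of the scalability benefit the abstract claims.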

Identifier: oai:union.ndltd.org:UTENN/oai:trace.tennessee.edu:utk_gradthes-1582
Date: 01 December 2009
Creators: Niedzwiedz, Christopher Allen
Publisher: Trace: Tennessee Research and Creative Exchange
Source Sets: University of Tennessee Libraries
Detected Language: English
Type: text
Format: application/pdf
Source: Masters Theses