Vision-based machine learning agents must make decisions from high-dimensional, noisy input, placing a heavy load on available resources. Moreover, observations typically provide only partial information about the environment state, requiring the agent to perform robust state inference. Reinforcement learning provides a framework for decision making aimed at maximizing long-term reward. This thesis introduces a novel approach to vision-based reinforcement learning through the use of a consolidated actor-critic model (CACM). The approach takes advantage of artificial neural networks as non-linear function approximators and the reduced computational requirements of the CACM scheme to yield a scalable vision-based control system. In this thesis, a comparison between the actor-critic and CACM architectures is made. Additionally, the effect that observation prediction and correlated exploration have on the agent's performance is investigated.
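The abstract describes consolidating the actor and critic into a single model to reduce computational cost. As a rough illustration only (the function names, layer sizes, and single-hidden-layer structure below are assumptions for the sketch, not taken from the thesis), a consolidated actor-critic can be pictured as one network whose shared features feed both a policy head and a value head:

```python
# A minimal sketch, assuming a shared-body network with two heads;
# this is NOT the thesis's actual CACM architecture.
import numpy as np

rng = np.random.default_rng(0)

def init_cacm(obs_dim, hidden_dim, n_actions):
    """Randomly initialize weights for a shared-body actor-critic network."""
    return {
        "W_shared": rng.normal(scale=0.1, size=(obs_dim, hidden_dim)),
        "b_shared": np.zeros(hidden_dim),
        "W_policy": rng.normal(scale=0.1, size=(hidden_dim, n_actions)),
        "b_policy": np.zeros(n_actions),
        "W_value": rng.normal(scale=0.1, size=(hidden_dim, 1)),
        "b_value": np.zeros(1),
    }

def cacm_forward(params, obs):
    """One forward pass: shared features feed both the actor and critic heads."""
    h = np.tanh(obs @ params["W_shared"] + params["b_shared"])  # shared features
    logits = h @ params["W_policy"] + params["b_policy"]        # actor head
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                                        # action probabilities
    value = (h @ params["W_value"] + params["b_value"])[0]      # critic head (state value)
    return probs, value

# Example: a flattened 8x8 grayscale observation and 4 discrete actions (illustrative values).
params = init_cacm(obs_dim=64, hidden_dim=32, n_actions=4)
observation = rng.normal(size=64)
action_probs, state_value = cacm_forward(params, observation)
print(action_probs, state_value)
```

Sharing the feature layers between the two heads is one way the actor and critic can be computed in a single forward pass rather than in two separate networks, which is the kind of computational saving the abstract attributes to the CACM scheme.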
Identifier | oai:union.ndltd.org:UTENN/oai:trace.tennessee.edu:utk_gradthes-1582
Date | 01 December 2009
Creators | Niedzwiedz, Christopher Allen |
Publisher | Trace: Tennessee Research and Creative Exchange |
Source Sets | University of Tennessee Libraries |
Detected Language | English |
Type | text |
Format | application/pdf |
Source | Masters Theses |