
Importance Sampling for Reinforcement Learning with Multiple Objectives

This thesis considers three complications that arise from applying reinforcement learning to a real-world application. In the process of using reinforcement learning to build an adaptive electronic market-maker, we find that the sparsity of data, the partial observability of the domain, and the multiple objectives of the agent cause serious problems for existing reinforcement learning algorithms. We employ importance sampling (likelihood ratios) to achieve good performance in partially observable Markov decision processes with little data. Our importance sampling estimator requires no knowledge about the environment and places few restrictions on the method of collecting data. It can be used efficiently with reactive controllers, finite-state controllers, or policies with function approximation. We present theoretical analyses of the estimator and incorporate it into a reinforcement learning algorithm. Additionally, this method provides a complete return surface which can be used to balance multiple objectives dynamically. We demonstrate the need for multiple goals in a variety of applications and present natural solutions based on our sampling method. The thesis concludes with example results from applying our algorithm to the domain of automated electronic market-making.
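To illustrate the core idea behind likelihood-ratio estimation of policy returns, here is a minimal sketch of the basic per-trajectory importance-sampling estimator. This is a generic textbook form, not the thesis's specific estimator; the function names and the `(state, action, reward)` episode format are assumptions made for the example.

```python
def is_estimate(trajectories, target_prob, behavior_prob):
    """Ordinary importance-sampling estimate of a target policy's
    expected return, using episodes collected under a behavior policy.

    trajectories:   list of episodes, each a list of (state, action, reward)
    target_prob:    target_prob(state, action) -> probability under the
                    policy being evaluated
    behavior_prob:  behavior_prob(state, action) -> probability under the
                    policy that generated the data (must be > 0 wherever
                    target_prob is > 0)
    """
    total = 0.0
    for episode in trajectories:
        weight = 1.0  # likelihood ratio for the whole trajectory
        ret = 0.0     # undiscounted return of this episode
        for state, action, reward in episode:
            weight *= target_prob(state, action) / behavior_prob(state, action)
            ret += reward
        total += weight * ret
    return total / len(trajectories)
```

Because the weight is a product of per-step likelihood ratios, the estimator needs only the two policies' action probabilities, not a model of the environment's dynamics, which echoes the abstract's claim that the method requires no knowledge about the environment.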

Identifier: oai:union.ndltd.org:MIT/oai:dspace.mit.edu:1721.1/5568
Date: 01 August 2001
Creators: Shelton, Christian Robert
Source Sets: M.I.T. Theses and Dissertations
Language: en_US
Detected Language: English
Format: 108 p., 10551422 bytes, 1268632 bytes, application/postscript, application/pdf
Relation: AITR-2001-003, CBCL-204
