
Reinforcement learning in commercial computer games

The goal of this thesis is to explore the use of reinforcement learning (RL) in commercial computer games. Although RL has been applied successfully to many board games and non-game simulated environments, there has been little work applying RL to the most popular genres of games: first-person shooters, role-playing games, and real-time strategy games. In this thesis we use a first-person shooter environment to create computer players, or bots, that learn to play the game using reinforcement learning techniques.

We have created three experimental bots: ChaserBot, ItemBot and HybridBot. The first two bots each focus on a different aspect of the first-person shooter genre and learn using basic RL. ChaserBot learns to chase down and shoot an enemy player. ItemBot, on the other hand, learns to pick up the items (weapons, ammunition, armor) scattered on the ground that players collect to improve their arsenal. Both of these bots become reasonably proficient at their assigned task. Our goal for the third bot, HybridBot, was to create a bot that both chases and shoots an enemy player and goes after the items in the environment. Unlike the two previous bots, which only have primitive actions available (strafing right or left, moving forward or backward, etc.), HybridBot uses options. At any state, it may choose either the player-chasing option or the item-gathering option. These options' internal policies are determined by the policies learned by ChaserBot and ItemBot. HybridBot uses reinforcement learning to learn which option to pick in a given state.

Each bot learns to perform its given tasks. We compare the three bots' ability to gather items, and ChaserBot's and HybridBot's ability to chase their opponent. HybridBot's results are of particular interest, as it outperforms ItemBot at picking up items by a large margin. However, none of our experiments yielded bots that are competitive with human players. We discuss the reasons for this and suggest improvements for future work that could lead to competitive reinforcement learning bots.
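The abstract describes a hierarchical setup in which HybridBot learns, by reinforcement, which of two pre-trained options (player chasing or item gathering) to invoke in each state. The abstract does not specify the exact learning rule, so the sketch below is only illustrative: it assumes tabular, epsilon-greedy, SMDP-style Q-learning over two options, and the environment interface (env.reset, env.step), the option names, and the option_policies mapping are hypothetical names introduced for this example, not taken from the thesis.

import random
from collections import defaultdict

# Illustrative sketch only: option-level Q-learning in the spirit of HybridBot.
# The agent picks between two high-level options, each running its own
# pre-learned internal policy until it terminates, then updates the option's
# value with an SMDP Q-learning rule. Names and interfaces are assumptions.

ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1
OPTIONS = ["chase_player", "gather_items"]   # illustrative option names

q = defaultdict(float)                        # Q[(state, option)], states assumed hashable

def choose_option(state):
    """Epsilon-greedy selection over the two high-level options."""
    if random.random() < EPSILON:
        return random.choice(OPTIONS)
    return max(OPTIONS, key=lambda o: q[(state, o)])

def run_episode(env, option_policies):
    """One episode of option-level learning.

    Assumed interfaces: env.reset() returns a state, env.step(action) returns
    (next_state, reward, done), and option_policies[o] maps a state to a
    primitive action plus a termination flag for option o.
    """
    state = env.reset()
    done = False
    while not done:
        option = choose_option(state)
        cum_reward, discount, s = 0.0, 1.0, state
        terminated = False
        # Follow the option's internal (pre-learned) policy until it terminates.
        while not (terminated or done):
            action, terminated = option_policies[option](s)
            s, reward, done = env.step(action)
            cum_reward += discount * reward
            discount *= GAMMA
        # SMDP Q-learning update: bootstrap is discounted by the option's duration.
        best_next = max(q[(s, o)] for o in OPTIONS)
        target = cum_reward + discount * (0.0 if done else best_next)
        q[(state, option)] += ALPHA * (target - q[(state, option)])
        state = s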

Identifier oai:union.ndltd.org:LACETR/oai:collectionscanada.gc.ca:QMM.112391
Date January 2008
Creators Coggan, Melanie.
Publisher McGill University
Source Sets Library and Archives Canada ETDs Repository / Centre d'archives des thèses électroniques de Bibliothèque et Archives Canada
Language English
Detected Language English
Type Electronic Thesis or Dissertation
Format application/pdf
Coverage Master of Science (School of Computer Science)
Rights All items in eScholarship@McGill are protected by copyright with all rights reserved unless otherwise indicated.
Relation alephsysno: 002769832, proquestno: AAIMR51081, Theses scanned by UMI/ProQuest.