
Hybrid modular reinforcement learning for game agent control

Intelligent virtual characters play an important role in creating an engaging player experience in modern computer games. Imbuing these characters with learning capabilities curtails the need to define every nuance of their behaviour during development. For this reason, there is growing interest in introducing machine learning techniques into game agent control architectures. However, computer game environments tend to be highly complex and dynamic, which necessitates the use of large state spaces to define effective agent behaviour. Traditional learning strategies are not suited to operating under these circumstances due to the "Curse of Dimensionality". Therefore, in order for their learning to be effective, agents require architectures that can handle complex, dynamic state spaces. Within this thesis it is demonstrated that modular reinforcement learning and reactive/deliberative hybridisation techniques present a powerful combination for the implementation of effective game agent architectures. A novel approach to modular reinforcement learning is presented that utilises a fine granularity of modules. This new approach necessitated the development and evaluation of new methods for action selection, reward distribution and exploration. Furthermore, a new method of hybridising modular architectures with deliberative mechanisms is proposed and evaluated. Results demonstrate that agents implemented with a hybrid reactive/deliberative architecture can outperform purely reactive and purely deliberative agents, particularly in resource-constrained applications.
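
For readers unfamiliar with the general idea, the sketch below illustrates one common form of modular reinforcement learning: each module learns tabular Q-values over its own sub-state and local reward, a combined policy sums module preferences ("greatest mass" action selection), and a simple arbitration function lets a deliberative plan override the reactive choice. This is not the architecture developed in the thesis; all names (QModule, ModularAgent, hybrid_action) and the specific selection scheme are assumptions made purely for illustration.

```python
import random
from collections import defaultdict


class QModule:
    """One module: tabular Q-learning over its own sub-state and local reward.
    Hypothetical class, not the thesis's implementation."""

    def __init__(self, actions, alpha=0.1, gamma=0.95):
        self.q = defaultdict(float)   # (sub_state, action) -> estimated value
        self.actions = actions
        self.alpha = alpha            # learning rate
        self.gamma = gamma            # discount factor

    def value(self, sub_state, action):
        return self.q[(sub_state, action)]

    def update(self, sub_state, action, reward, next_sub_state):
        # Standard one-step Q-learning backup on this module's local reward.
        best_next = max(self.q[(next_sub_state, a)] for a in self.actions)
        td_target = reward + self.gamma * best_next
        self.q[(sub_state, action)] += self.alpha * (td_target - self.q[(sub_state, action)])


class ModularAgent:
    """Combines many small modules; each sees only part of the game state."""

    def __init__(self, modules, actions, epsilon=0.1):
        self.modules = modules        # dict: module name -> QModule
        self.actions = actions
        self.epsilon = epsilon        # exploration rate

    def select_action(self, sub_states):
        # Epsilon-greedy over the summed ("greatest mass") module preferences.
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        scores = {
            a: sum(m.value(sub_states[name], a) for name, m in self.modules.items())
            for a in self.actions
        }
        return max(scores, key=scores.get)

    def learn(self, sub_states, action, rewards, next_sub_states):
        # Each module is trained only on its own reward signal and sub-state.
        for name, module in self.modules.items():
            module.update(sub_states[name], action, rewards[name], next_sub_states[name])


def hybrid_action(agent, sub_states, plan_action=None):
    """Toy reactive/deliberative arbitration: follow a deliberative plan step
    if one is available, otherwise fall back to the reactive modular policy."""
    return plan_action if plan_action is not None else agent.select_action(sub_states)
```

The appeal of this decomposition, and the reason it suits large game state spaces, is that each module's table grows only with its own sub-state, while the combined policy still reflects every module's preferences at decision time.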

Identifier: oai:union.ndltd.org:bl.uk/oai:ethos.bl.uk:553862
Date: January 2011
Creators: Hanna, Christopher J.
Publisher: University of Ulster
Source Sets: Ethos UK
Detected Language: English
Type: Electronic Thesis or Dissertation
