41 |
Modelling motivation for experience-based attention focus in reinforcement learning. Merrick, Kathryn. January 2007 (has links)
Doctor of Philosophy / Computational models of motivation are software reasoning processes designed to direct, activate or organise the behaviour of artificial agents. Models of motivation inspired by psychological motivation theories permit the design of agents with a key reasoning characteristic of natural systems: experience-based attention focus. The ability to focus attention is critical for agent behaviour in complex or dynamic environments where only a small amount of the available information is relevant at a particular time. Furthermore, experience-based attention focus enables adaptive behaviour that focuses on different tasks at different times in response to an agent’s experiences in its environment. This thesis is concerned with the synthesis of motivation and reinforcement learning in artificial agents. This extends reinforcement learning to adaptive, multi-task learning in complex, dynamic environments. Reinforcement learning algorithms are computational approaches to learning characterised by the use of reward or punishment to direct learning. The focus of much existing reinforcement learning research has been on the design of the learning component. In contrast, the focus of this thesis is on the design of computational models of motivation as approaches to the reinforcement component that generates reward or punishment. The primary aim of this thesis is to develop computational models of motivation that extend reinforcement learning with three key aspects of attention focus: rhythmic behavioural cycles, adaptive behaviour and multi-task learning in complex, dynamic environments. This is achieved by representing such environments using context-free grammars, modelling maintenance tasks as observations of these environments and modelling achievement tasks as events in these environments.
Motivation is modelled by processes for task selection, the computation of experience-based reward signals for different tasks and arbitration between reward signals to produce a motivation signal. Two specific models of motivation based on the experience-oriented psychological concepts of interest and competence are designed within this framework. The first models motivation as a function of environmental experiences while the second models motivation as an introspective process. This thesis synthesises motivation and reinforcement learning as motivated reinforcement learning agents. Three models of motivated reinforcement learning are presented to explore the combination of motivation with three existing reinforcement learning components. The first model combines motivation with flat reinforcement learning for highly adaptive learning of behaviours for performing multiple tasks. The second model facilitates the recall of learned behaviours by combining motivation with multi-option reinforcement learning. In the third model, motivation is combined with a hierarchical reinforcement learning component to allow both the recall of learned behaviours and the reuse of these behaviours as abstract actions for future learning. Because motivated reinforcement learning agents have capabilities beyond those of existing reinforcement learning approaches, new techniques are required to measure their performance. The secondary aim of this thesis is to develop metrics for measuring the performance of different computational models of motivation with respect to the adaptive, multi-task learning they motivate. This is achieved by analysing the behaviour of motivated reinforcement learning agents incorporating different motivation functions with different learning components. Two new metrics are introduced that evaluate the behaviour learned by motivated reinforcement learning agents in terms of the variety of tasks learned and the complexity of those tasks.
Persistent, multi-player computer game worlds are used as the primary example of complex, dynamic environments in this thesis. Motivated reinforcement learning agents are applied to control the non-player characters in games. Simulated game environments are used for evaluating and comparing motivated reinforcement learning agents using different motivation and learning components. The performance and scalability of these agents are analysed in a series of empirical studies in dynamic environments and environments of progressively increasing complexity. Game environments simulating two types of complexity increase are studied: environments with increasing numbers of potential learning tasks and environments with learning tasks that require behavioural cycles comprising more actions. A number of key conclusions can be drawn from the empirical studies, concerning both different computational models of motivation and their combination with different reinforcement learning components. Experimental results confirm that rhythmic behavioural cycles, adaptive behaviour and multi-task learning can be achieved using computational models of motivation as an experience-based reward signal for reinforcement learning. In dynamic environments, motivated reinforcement learning agents incorporating introspective competence motivation adapt more rapidly to change than agents motivated by interest alone. Agents incorporating competence motivation also scale to environments of greater complexity than agents motivated by interest alone. Motivated reinforcement learning agents combining motivation with flat reinforcement learning are the most adaptive in dynamic environments and exhibit scalable behavioural variety and complexity as the number of potential learning tasks is increased. However, when tasks require behavioural cycles comprising more actions, motivated reinforcement learning agents using a multi-option learning component exhibit greater scalability. 
Motivated multi-option reinforcement learning also provides a more scalable approach to recall than motivated hierarchical reinforcement learning. In summary, this thesis makes contributions in two key areas. Computational models of motivation and motivated reinforcement learning extend reinforcement learning to adaptive, multi-task learning in complex, dynamic environments. Motivated reinforcement learning agents allow the design of non-player characters for computer games that can progressively adapt their behaviour in response to changes in their environment.
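The central idea above, replacing a hand-crafted task reward with an experience-based motivation signal that drives an otherwise standard learner, can be sketched as follows. This is a minimal illustration, not the thesis's actual models: the interest function (novelty of observed events) and the flat Q-learner are simplified stand-ins for the interest/competence models and learning components described in the abstract.

```python
import random
from collections import defaultdict

def interest_reward(event_counts, event, total):
    """Experience-based 'interest' signal: rarer (more novel) events yield
    higher reward. A hypothetical stand-in for the thesis's interest model."""
    freq = event_counts[event] / max(total, 1)
    return 1.0 - freq  # a never-seen event is maximally interesting

class MotivatedQLearner:
    """Flat Q-learning driven by a motivation signal in place of a
    hand-crafted task reward (a sketch of the first combined model)."""
    def __init__(self, actions, alpha=0.5, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)   # (state, action) -> value
        self.actions = actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def choose(self, state):
        # Epsilon-greedy action selection over the motivated value function.
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def update(self, s, a, r, s2):
        # Standard one-step Q-learning backup; r is the motivation signal.
        best = max(self.q[(s2, a2)] for a2 in self.actions)
        self.q[(s, a)] += self.alpha * (r + self.gamma * best - self.q[(s, a)])
```

In use, the agent would observe an event in the game world, score it with `interest_reward`, and pass that score to `update`, so attention drifts toward whatever is currently novel.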
|
42 |
Case-injected genetic algorithms in computer strategy games. Miles, Christopher Eoin. January 2006 (has links)
Thesis (M.S.)--University of Nevada, Reno, 2006. / "May, 2006." Includes bibliographical references (leaves 70-72). Online version available on the World Wide Web.
|
43 |
Through the Looking Glass into the World of Computer Games. Hedin, Ellen. January 2009 (has links)
A qualitative study about the culture evolving around computer games and its users.
|
45 |
How Male Gamers Perceive Games with Non-sexualized Female Protagonists : Swedish Males Aged 18 and Above. Liepa, Marcis. January 2013 (has links)
Some game publishers and developers do not think that games with non-sexualized female protagonists are worth making because they would not sell. With close to half of gamers being female (47%), it is puzzling that they are not catered to. This may be because publishers believe that male gamers, whom they perceive as their main demographic, would not buy games with non-sexualized female protagonists. A study was conducted to test this assumption. A survey of 91 Swedish male gamers, aged 18 and above, found that in games where one can choose the character’s sex, 46% of the subjects play as a female character at least half of the time, and in total 90% of the 91 subjects have at some point chosen to play as a female character. None of the subjects expressed negative views about there being more games with non-sexualized female protagonists who are heroic in their own right rather than a trophy: when asked, 47% said it would be “Very good”, 24% said “Good”, and 29% had “No opinion” on the matter.
|
46 |
Game on: the impact of game features in computer-based training. DeRouin-Jessen, Renée E. January 2008 (has links)
Thesis (Ph.D.)--University of Central Florida, 2008. / Adviser: Barbara A. Fritzsche. Includes bibliographical references (p. 136-150).
|
47 |
Real benefits from virtual experiences: how four avid video gamers used gaming as a resource in their literate activity. Abrams, Sandra Schamroth. January 2009 (has links)
Thesis (Ph. D.)--Rutgers University, 2009. / "Graduate Program in Education." Includes bibliographical references (p. 209-216).
|
48 |
The use of massive multiplayer online games to evaluate C4I systems. Juve, Kambra. January 2004 (has links) (PDF)
Thesis (M.S. in Systems Technology)--Naval Postgraduate School, March 2004. / Thesis advisor(s): William Kemple. Includes bibliographical references (p. 55-57). Also available online.
|
49 |
Dynamic Strategy Generation in Computer Games using Artificial Immune Systems. Slocket, John. 23 January 2012 (has links)
This thesis investigates the use of an Artificial Immune System as a method for dynamically creating computer game strategies in a non-deterministic environment.
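The Artificial Immune System approach typically rests on clonal selection: strategies act as antibodies, their "affinity" is scored against the current game situation, and high-affinity strategies are cloned with hypermutation. The sketch below is a generic clonal-selection step under those assumptions; the `affinity` and `mutate` operators are illustrative placeholders, not the thesis's actual design.

```python
import random

def clonal_selection_step(population, affinity, clone_factor=3, mutate=None):
    """One generation of a clonal-selection sketch: keep the top half of the
    strategy population, clone each survivor, and hypermutate clones with a
    rate that grows as affinity rank falls (weaker strategies mutate more).
    `affinity` scores a strategy; `mutate(strategy, rate)` perturbs a clone.
    """
    scored = sorted(population, key=affinity, reverse=True)
    survivors = scored[:len(population) // 2]
    clones = []
    for rank, s in enumerate(survivors):
        for _ in range(clone_factor):
            # Higher rank index (lower affinity) -> larger mutation rate.
            clones.append(mutate(s, rate=(rank + 1) / len(survivors)))
    pool = survivors + clones
    # Re-select down to the original population size.
    return sorted(pool, key=affinity, reverse=True)[:len(population)]
```

Repeatedly applying this step while re-scoring affinity against the live game state is one plausible way to keep strategies adapting in a non-deterministic environment.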
|
50 |
AN ENHANCED SOLVER FOR THE GAME OF AMAZONS. Song, Jiaxing. Unknown Date
No description available.
|