We create pure-strategy versions of Robert Axelrod's well-known norms and metanorms games. To analyze the evolutionary behaviour of these games, we use replicator dynamics complemented with agent-based model simulations. Our findings show that the only evolutionarily stable strategy in the norms game is one in which a player defects and is lenient. The metanorms game, however, has two evolutionarily stable strategies. The first is carried over from the norms game: a player defects and is always lenient. In the other, a player follows the norm and punishes both those who are lenient and those who defect.
We also introduce the concept of providing an incentive for players to adopt a certain strategy in our controlled norms game. This game has two evolutionarily stable strategies: in the first, a player follows the norm; in the second, a player does not. Our aim is to transition the population from a state in which the majority of players initially do not follow the norm to one in which the majority do, while minimizing the total use of the incentive during the transition. We also use agent-based model simulations to explore the effect of imposing simple network connections and heterogeneity on a population of agents playing these games.
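The replicator dynamics mentioned above can be illustrated with a minimal sketch. The payoff matrix below is purely illustrative (a generic coordination-style game with two pure equilibria), not the thesis's actual norms-game payoffs, and the Euler integration scheme is an assumption for the sake of a short example:

```python
import numpy as np

# Illustrative payoff matrix for a symmetric 2-strategy game.
# This is NOT the norms-game payoff structure, just a stand-in
# with two pure-strategy equilibria.
A = np.array([[3.0, 0.0],
              [1.0, 2.0]])

def replicator_step(x, dt=0.01):
    """One Euler step of the replicator equation
    dx_i/dt = x_i * ((A x)_i - x . A x)."""
    f = A @ x        # fitness of each strategy
    avg = x @ f      # population-average fitness
    return x + dt * x * (f - avg)

x = np.array([0.6, 0.4])   # initial population shares
for _ in range(5000):
    x = replicator_step(x)

print(np.round(x, 3))      # population converges to the pure strategy 1 ESS
```

Starting above the interior unstable equilibrium (here x = 0.5), the share of strategy 1 grows monotonically toward 1; starting below it, the population instead converges to strategy 2, mirroring how initial conditions select between the two evolutionarily stable strategies.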
Identifier | oai:union.ndltd.org:LACETR/oai:collectionscanada.gc.ca:OGU.10214/5251 |
Date | 08 January 2013 |
Creators | Andrews, Michael |
Contributors | Cojocaru, Monica; Thommes, Edward |
Source Sets | Library and Archives Canada ETDs Repository / Centre d'archives des thèses électroniques de Bibliothèque et Archives Canada |
Language | English |
Detected Language | English |
Type | Thesis |
Rights | http://creativecommons.org/licenses/by/2.5/ca/ |