Ultimatum Game with Robots
Hsieh, Ju Tsun, 08 August 2006
Experimental implementations of the Ultimatum Game are among the most thoroughly studied economic experiments of the last twenty years. There are two popular explanations for why Proposers offer substantially more than the smallest positive amount of the pie: either Proposers have other-regarding preferences, or Proposers are selfish but fear rejection by Responders who reject low offers.

Most experiments that attempt to discriminate between these two explanations contrast behavior in the Ultimatum Game with behavior in the Dictator Game. The Dictator Game removes strategic concerns from the Ultimatum Game without substantially changing the predicted behavior of a selfish Proposer. Researchers therefore believe that subtracting Dictator Game offers from Ultimatum Game offers isolates the fraction of average Ultimatum Game offers motivated by other-regarding preferences. In most Dictator Game experiments, Proposers offer less than they do in Ultimatum Games but still offer non-trivial positive amounts, a result that has led analysts to posit that Proposer behavior in the Ultimatum Game is motivated in part by other-regarding preferences.

There are, however, potential problems in drawing inferences about Proposer behavior in the Ultimatum Game from observations of Proposer behavior in the Dictator Game. First, it is well known that objectively irrelevant contextual details in experiments can affect subject behavior in systematic ways. Second, altruistic motivations are less costly to satisfy per monetary unit in the Ultimatum Game, because each monetary unit offered to the Responder also reduces the probability of rejection. Strategic motivations may therefore be sufficient to explain behavior in the Ultimatum Game: a Proposer with altruistic preferences may offer the same amount of money as an otherwise identical Proposer who lacks such preferences.

In contrast to previous approaches, which remove the strategic incentives from the Ultimatum Game, we remove the incentives for expressing other-regarding preferences. We do so through a treatment in which humans are paired with robots that, for each choice in the Proposer's decision space, reject with the same frequency as humans did in previous experiments. Proposers are aware that they are playing against automata programmed to reject and accept as humans have done in previous implementations of the experiment. Under the mild assumption that humans do not express other-regarding preferences toward fictional automata, this treatment presents an Ultimatum Game in which only strategic motives are operative. Note also that, unlike previous attempts that use a different game to draw inferences about behavior in the Ultimatum Game, we are able to measure the effects of strategic and other-regarding motives without changing the fundamental structure of the game.

Moreover, previous analyses do not formally include decision error as an important motivation for offers above the subgame-perfect Nash equilibrium (SPNE) prediction. To test for misunderstanding of the strategic environment, we develop a second treatment in which subjects play the Ultimatum Game against a robot Responder that rejects or accepts every offer with equal probability. If Proposers are truly thinking about Responder rejection rates when formulating their offers, they should offer $0 in this treatment.
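To make the strategic logic of the two robot treatments concrete, the sketch below computes a selfish Proposer's expected-payoff-maximizing offer against each kind of robot. The rejection schedule, pie size, and offer grid are illustrative assumptions, not the calibration used in the study.

import numpy as np

# Offer grid: the Responder's share of the pie, from 0% to 50%.
offers = np.arange(0.0, 0.55, 0.05)

# Treatment 1 robot: rejection probability falls as the offer rises,
# loosely mimicking human rejection rates from earlier experiments.
# These numbers are assumed for illustration only.
reject_prob_human_like = np.clip(0.9 - 2.5 * offers, 0.0, 1.0)

# Treatment 2 robot: rejects every offer with probability 0.5,
# independent of the offer's size.
reject_prob_coin_flip = np.full_like(offers, 0.5)

def expected_payoff(offer, p_reject, pie=10.0):
    """Selfish Proposer's expected payoff: the kept share if accepted, else 0."""
    return (1.0 - p_reject) * (1.0 - offer) * pie

best_vs_human_like = offers[np.argmax(expected_payoff(offers, reject_prob_human_like))]
best_vs_coin_flip = offers[np.argmax(expected_payoff(offers, reject_prob_coin_flip))]

print(f"optimal offer vs. human-calibrated robot: {best_vs_human_like:.2f}")  # well above 0
print(f"optimal offer vs. coin-flip robot:        {best_vs_coin_flip:.2f}")  # 0.00

Against the coin-flip robot the rejection probability is constant, so expected payoff is strictly decreasing in the offer and the maximizer is $0, matching the prediction stated above. Against the human-calibrated robot, strategic motives alone push the optimal offer well above the minimum positive amount.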