This dissertation presents three independent essays in microeconomic theory.

Chapter 1 proposes an alternative to the common prior assumption in which agents form beliefs by learning from data, possibly interpreting the data in different ways. In the limit as agents observe increasing quantities of data, the model returns strict solutions of a limiting complete-information game, but predictions may diverge substantially for small quantities of data.

Chapter 2 (with Jon Kleinberg and Sendhil Mullainathan) proposes the use of machine learning algorithms to construct benchmarks for “achievable” predictive accuracy. The paper illustrates this approach for the problem of predicting human-generated random sequences; we find that leading models explain approximately 10-15% of the predictable variation in this problem.

Chapter 3 considers how to interpret inconsistent choice data when observed departures from the standard model (perfect maximization of a single preference) may arise either from context-dependencies in preferences or from stochastic choice error. I show that if preferences are “simple,” in the sense that they involve only a small number of context-dependencies, then the analyst can use a proposed optimization problem to recover the true number of underlying context-dependent preferences.
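Below is a minimal sketch of the kind of benchmarking exercise described for Chapter 2, not the chapter's actual procedure or data: a flexible machine learning model (here scikit-learn's GradientBoostingClassifier on a short history window) estimates “achievable” out-of-sample accuracy on simulated human-like binary sequences, and a simple candidate rule is scored by the share of the gap between a naive baseline and that benchmark it closes. The data-generating process, window length, and candidate rule are all illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def simulate_sequence(length=4000):
    """Simulate a 'human-generated' binary sequence that over-alternates after
    a streak and mildly favors repetition otherwise (illustrative only)."""
    bits = [int(rng.integers(0, 2)), int(rng.integers(0, 2))]
    for _ in range(length - 2):
        p_flip = 0.75 if bits[-1] == bits[-2] else 0.40
        bits.append(1 - bits[-1] if rng.random() < p_flip else bits[-1])
    return np.array(bits)

seq = simulate_sequence()
window = 5  # number of past bits used as features to predict the next bit
X = np.array([seq[i - window:i] for i in range(window, len(seq))])
y = seq[window:]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

# Naive benchmark: always guess the majority bit from the training data.
majority_bit = int(y_train.mean() > 0.5)
naive_acc = (y_test == majority_bit).mean()

# Machine-learning benchmark: a flexible model as an estimate of achievable accuracy.
ml = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
ml_acc = ml.score(X_test, y_test)

# Hypothetical candidate model: always predict that the last bit alternates.
candidate_acc = ((1 - X_test[:, -1]) == y_test).mean()

# Share of the predictable variation (the gap between the naive baseline and the
# ML benchmark) captured by the candidate model -- the kind of ratio behind
# statements such as "leading models explain roughly 10-15% of predictable variation".
captured = (candidate_acc - naive_acc) / (ml_acc - naive_acc)
print(f"naive={naive_acc:.3f}  candidate={candidate_acc:.3f}  "
      f"ml={ml_acc:.3f}  share captured={captured:.2f}")
```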
Subject | Economics |
Identifier | oai:union.ndltd.org:harvard.edu/oai:dash.harvard.edu:1/33493561 |
Date | 25 July 2017 |
Creators | Liang, Annie |
Publisher | Harvard University |
Source Sets | Harvard University |
Language | English |
Detected Language | English |
Type | Thesis or Dissertation, text |
Format | application/pdf |
Rights | open |