This dissertation studies three models of sequential social learning, each of which has implications for the impact of the internet and social media on political discourse. I take three features of online political discussion and consider the ways in which they interfere with or assist learning.

In Chapter 1, I consider agents who engage in motivated reasoning, a belief-formation procedure in which agents trade off a desire to form accurate beliefs against a desire to hold ideologically congenial beliefs. Taking a model of motivated reasoning in which agents can reject social signals that provide overly strong evidence against their preferred state, I analyse the conditions under which we can expect asymptotic consensus, where all agents choose the same action, and learning, in which Bayesian agents choose the correct action with probability 1. I find that learning requires much more connected observation networks than is the case with Bayesian agents. Furthermore, I find that increasing the precision of agents’ private signals can actually break consensus, providing an explanation for the rise of factual polarisation despite the greater access to information that the internet provides.
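To fix ideas, the rejection mechanism can be illustrated with a small simulation. The sketch below is not the chapter’s model, only a minimal hypothetical version of it: a binary state, a sequence of conditionally independent binary signals, and an agent who updates as a Bayesian unless a signal’s likelihood ratio against her preferred state exceeds a tolerance, in which case she discards it. The rejection threshold, parameter values, and names are all illustrative assumptions.

```python
import random

def bayes_update(prior, signal, precision):
    # Posterior probability of state 1, given a binary signal that
    # matches the true state with probability `precision`.
    like_1 = precision if signal == 1 else 1 - precision
    like_0 = 1 - precision if signal == 1 else precision
    return prior * like_1 / (prior * like_1 + (1 - prior) * like_0)

def motivated_update(prior, signal, precision, preferred=1, max_lr=3.0):
    # Hypothetical motivated-reasoning rule (an assumption, not the
    # dissertation's exact specification): reject any signal whose
    # likelihood ratio against the preferred state exceeds `max_lr`;
    # otherwise update as a Bayesian would.
    like_1 = precision if signal == 1 else 1 - precision
    like_0 = 1 - precision if signal == 1 else precision
    like_for, like_against = (like_1, like_0) if preferred == 1 else (like_0, like_1)
    if like_against / like_for > max_lr:
        return prior                      # evidence too damning: signal discarded
    return bayes_update(prior, signal, precision)

if __name__ == "__main__":
    random.seed(0)
    true_state = 0                        # the agent's preferred state is 1
    for precision in (0.7, 0.8):          # likelihood ratio per signal: 2.33 vs 4.0
        belief = 0.5
        for _ in range(50):
            signal = true_state if random.random() < precision else 1 - true_state
            belief = motivated_update(belief, signal, precision)
        print(f"precision={precision}: belief in state 1 after 50 signals = {belief:.3f}")
```

In this toy parameterisation the less precise signal is always accepted and belief converges on the true state, while the more precise signal is rejected whenever it points away from the preferred state, so belief drifts towards the preferred (incorrect) state, echoing the idea that sharper information need not improve outcomes for motivated reasoners.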
In Chapter 2, I evaluate the importance of timidity. When agents prefer not to be caught in error publicly, and can choose to keep their views to themselves, insufficiently confident individuals may opt out of online debate. Studying social learning in this setting, I discover an unravelling mechanism by which non-partisan agents drop out of online political discourse. This leads to an exaggerated online presence for partisans, which can cause still more Bayesian agents to drop out. I consider how introducing partially anonymous commenting could prevent such unravelling, and what restrictions on such commenting would be desirable.
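A rough sense of the unravelling dynamic can be given with a toy simulation. The sketch below is an illustrative assumption, not the chapter’s model: partisans always post; a non-partisan posts only if her confidence clears a reputational threshold that rises with the partisan share of the currently visible posters; iterating this rule shows how moderates can drop out round by round. The functional form, parameter values, and names are hypothetical.

```python
import numpy as np

def unravel(confidences, n_partisans, base_cost=0.55, sensitivity=1.0, rounds=20):
    # Hypothetical participation rule (an assumption, not the dissertation's
    # model): a non-partisan posts only if her confidence exceeds a threshold
    # that increases with the partisan share of the visible posters.
    posting = np.ones(confidences.size, dtype=bool)     # everyone posts at first
    shares = []
    for _ in range(rounds):
        partisan_share = n_partisans / (posting.sum() + n_partisans)
        shares.append(round(partisan_share, 3))
        threshold = base_cost + sensitivity * partisan_share
        new_posting = confidences >= threshold
        if np.array_equal(new_posting, posting):        # fixed point reached
            break
        posting = new_posting
    return posting, shares

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    confidences = rng.uniform(0.5, 1.0, size=1000)      # non-partisans' confidence
    posting, shares = unravel(confidences, n_partisans=200)
    print("partisan share of posters, round by round:", shares)
    print(f"non-partisans still posting at the fixed point: {posting.sum()}")
```

Under these assumed parameters the visible pool collapses to partisans alone; with a lower sensitivity of the threshold to the partisan share, the process instead settles at an interior mix, which is the intuition the anonymity discussion above trades on.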
In Chapter 3, I turn to rational inattention, and how it interacts with the glut of information the internet has produced. I set out a model that incorporates the costly observation of private and social information, and derive conditions under which we should expect learning to obtain despite these costs. I find that expanding access to cheap information can actually damage learning: giving all agents Blackwell-preferred signals or cheaper observations of all their neighbours can reduce the asymptotic probability with which they match the state. Furthermore, the highly connected networks social media produces can generate a public good problem in investigative journalism, damaging the ‘information ecosystem’ further still.
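The information-acquisition trade-off behind the public good problem can be sketched numerically. The calculation below is not the chapter’s model; it is an illustrative exercise under assumed parameters: a binary state with a uniform prior, k conditionally independent neighbour reports each correct with probability p, and a private signal of precision q purchasable at cost c. As the network becomes more connected, the marginal value of the private signal falls below its cost and the agent free-rides on social information.

```python
from math import comb, log

def accuracy(k, p, q=None):
    # Probability that a Bayesian guess matches the binary state, given k
    # conditionally independent neighbour reports (each correct with
    # probability p) and, optionally, a private signal of precision q.
    # Uniform prior; exact ties in the log-likelihood ratio split evenly.
    llr_n = log(p / (1 - p))
    total = 0.0
    for m in range(k + 1):                          # m = correct neighbour reports
        pm = comb(k, m) * p**m * (1 - p)**(k - m)
        net = (2 * m - k) * llr_n                   # neighbours' net evidence for the truth
        outcomes = [(1.0, net)] if q is None else [(q, net + log(q / (1 - q))),
                                                   (1 - q, net - log(q / (1 - q)))]
        for weight, llr in outcomes:
            total += pm * weight * (1.0 if llr > 0 else 0.5 if llr == 0 else 0.0)
    return total

if __name__ == "__main__":
    q, cost = 0.75, 0.05          # assumed private-signal precision and price
    for k in (1, 3, 25):          # progressively better-connected networks
        gain = accuracy(k, 0.6, q) - accuracy(k, 0.6)
        print(f"k={k:>2}: value of own signal = {gain:.3f}, acquire it: {gain > cost}")
```

If every agent makes the same calculation, nobody pays for original information once the network is dense enough, so the neighbour reports themselves become less informative; this crowding-out is one way to read the investigative-journalism problem mentioned above.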
Identifier | oai:union.ndltd.org:columbia.edu/oai:academiccommons.columbia.edu:10.7916/pe6k-0v53
Date | January 2024
Creators | Cremin, John Walter Edward
Source Sets | Columbia University
Language | English
Detected Language | English
Type | Theses