
Algorithmic Fidelity and the use of Large Language Models in Social Science Research

In this dissertation, we argue that large language models (LLMs) exhibit a considerable degree of "algorithmic fidelity": the property of having modeled the ideas, behaviors, and attitudes of the population that generated their training data. This has important implications for social science, as such fidelity would in principle allow LLMs to serve as effective proxies for human beings in experiments and research. We demonstrate this empirically across social science domains (political partisanship, demographic surveying, voting behavior, hot-button policy issues, news media, populism, congressional summaries), across applications (replicating social science survey findings, assisting in the coding of text datasets, inferring demographics, automating interventions to improve conversations about divisive topics), and at multiple levels of granularity (from findings about the entire U.S. population, to specific demographic groups, to individuals). It is intrinsically interesting that LLMs could learn such behaviors from the unsupervised objective on which they are trained. It is also strategically useful to establish where, and to what extent, they have done so, because this enables studying human populations in cheaper and formerly impossible ways. This work serves as a preliminary study of these phenomena and an early, demonstrative methodology for drawing out the algorithmic fidelity of large language models.

Identifier: oai:union.ndltd.org:BGMYU2/oai:scholarsarchive.byu.edu:etd-11400
Date: 23 May 2023
Creators: Rytting, Christopher Michael
Publisher: BYU ScholarsArchive
Source Sets: Brigham Young University
Detected Language: English
Type: text
Format: application/pdf
Source: Theses and Dissertations
Rights: https://lib.byu.edu/about/copyright/
