Belief, Values, Bias, and Agency: Development of and Entanglement with "Artificial Intelligence"

Contemporary research into the values, biases, and prejudices within "Artificial Intelligence" tends to operate at a crux of scholarship in computer science and engineering, sociology, philosophy, and science and technology studies (STS). Even so, getting the STEM fields to recognize and accept the importance of certain kinds of knowledge, namely social and experiential knowledge, remains an ongoing struggle. Similarly, religious scholarship is still very often missing from these conversations, because many in the STEM fields and the general public feel that religion and technoscientific investigation are and should be separate fields of inquiry. Here I demonstrate that experiential knowledge and religious, even occult, beliefs are always already embedded within and crucial to understanding the sociotechnical imaginaries animating many technologies, particularly in the area of "AI." In fact, it is precisely the unwillingness of many to confront these facts which allows for both the problems of prejudice embedded in algorithmic systems and the hype-laden marketing of the corporations and agencies developing them. This same hype then intentionally obfuscates the actions of both the systems and the people who create them, while confounding and oppressing those most often made subject to them. Further, I highlight a crucial continuity between bigotry and systemic social projects (eugenics, transhumanism, and "supercrip" narratives), revealing their foundation in white supremacist colonialist myths about whose lives, and which kinds of lives, count as "truly human." We will examine how these myths become embedded in the religious practices, technologies, and social frameworks in and out of which "AI" and algorithms are developed, employing a composite theoretical lens built from tools such as intersectionality, ritual theory, intersubjectivity, daemonology, postphenomenology, standpoint epistemology, and more.
This theoretical apparatus recontextualizes our understanding of how mythologies and rituals of professionalization, disciplinarity, and dominant epistemological hierarchies animate concepts such as knowledge formation, expertise, and even what counts as knowledge. This recontextualization is then deployed to suggest remedies for research, public policy, and general paths forward in "AI." By engaging with both the magico-religious valences of these systems and the lived experiential expertise of marginalized people, the systems can be better understood, and their harms anticipated and curtailed.

Doctor of Philosophy

The twenty-first century has been increasingly full of conversations about how human values, biases, and prejudices make their way into what is usually referred to as "Artificial Intelligence." These conversations have increasingly involved experts from not just science, technology, engineering, and math (STEM), but also sociology, philosophy, and science and technology studies (STS). Even so, it is still often difficult to get the STEM fields to accept the importance of certain kinds of experience and knowledge, especially those of marginalized people. Additionally, religious scholarship is often excluded from these conversations, because many in the STEM fields (and the general public) feel that religion and science and technology should be separate fields of study. Here, I demonstrate that knowledge developed from lived experience and religious, even occult, beliefs have always already been part of how we think about and understand many technologies, especially "AI." In addition, I show how people's unwillingness to accept the importance of our experience and beliefs is what leads to the prejudice embedded in algorithmic systems and to the hype-laden marketing of the people developing them.

This same hype obscures the mechanisms of action of both the systems themselves and the people who create them, and that obscurity makes it harder for the people most often oppressed by the systems to do anything about it. I highlight a line of connection between bigotry and large-scale social programs like eugenics, transhumanism, and the idea of the "supercrip," to reveal how they all stem from white supremacist colonialist myths about which kinds of lives count as "really human." These myths became part of the religious practice, scientific education, and social fabric from which "AI" and algorithms are developed. I combine tools from multiple fields to help show how mythologies and rituals of education, notions of what it means to "be a professional," and dominant cultural beliefs about knowledge all animate concepts such as learning, expertise, and even what counts as knowledge. By considering both the magical/religious elements and the lived experiences of marginalized people, we can chart new paths for research and public policy, toward making more ethical and just "AI."

Identifier: oai:union.ndltd.org:VTETD/oai:vtechworks.lib.vt.edu:10919/111528
Date: 15 August 2022
Creators: Williams, Damien Patrick
Contributors: Science and Technology Studies, Heflin, Ashley Shew, Johnson, Sylvester A., Labuski, Christine, Hester, Rebecca, Collier, James H.
Publisher: Virginia Tech
Source Sets: Virginia Tech Theses and Dissertations
Language: English
Detected Language: English
Type: Dissertation
Format: ETD, application/pdf
Rights: Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International, http://creativecommons.org/licenses/by-nc-sa/4.0/