This research addresses the growing applications and impacts of AI-generated digital human avatars produced by software suites such as HeyGen. By exploring the role of trust in mediating user interaction with this technology, the study establishes a basic hierarchical model that supports some foundational theories of human-computer interaction while calling into question more recent theories and models previously used to evaluate avatar technology. By modeling user behavior and preference through the lens of trust, the study demonstrates how this emerging technology resembles its predecessors and the theories that govern them, while also establishing it as something distinctly new and largely untested. As an exploratory study, it draws on notions of social presence, anthropomorphic design, social trust, technological trust, and human source-bias to separate this generation of AI avatar technology from its predecessors and to determine which theories and models govern its use. The findings and their implications for use-cases are then applied, speculating on prosocial as well as potentially unethical uses of the technology. Finally, the study problematizes the loss of “primary trust” that this technology may afford, highlighting the importance not only of continued research but also of rapid oversight in its deployment.
Identifier | oai:union.ndltd.org:UPSALLA1/oai:DiVA.org:uu-531841 |
Date | January 2024 |
Creators | McTaggart, Christopher |
Publisher | Uppsala universitet, Medier och kommunikation |
Source Sets | DiVA Archive at Uppsala University |
Language | English |
Detected Language | English |
Type | Student thesis, info:eu-repo/semantics/bachelorThesis, text |
Format | application/pdf |
Rights | info:eu-repo/semantics/openAccess |