
Evaluating the Impact of Hallucinations on User Trust and Satisfaction in LLM-based Systems

Hallucinations in LLMs refer to instances where the model generates outputs that are incorrect, misleading, or unrelated to the input provided. This thesis investigates the impact of hallucinations in large language model (LLM)-based systems on user trust and satisfaction, a critical issue as AI becomes increasingly integrated into everyday applications. Such errors pose significant challenges to user reliance and overall system effectiveness, and given the expanding role of AI in sectors that demand high levels of trust, such as healthcare and finance, understanding and mitigating them is paramount.

To address this issue, a controlled experiment was designed to systematically assess how hallucinations affect user trust and satisfaction. Participants interacted with an AI system designed to exhibit varying levels of hallucinatory behavior. Quantitative measures of trust and satisfaction were collected through standardized questionnaires administered before and after the interaction, and statistical analyses were used to evaluate changes in user perception.

The results demonstrate that hallucinations significantly diminish user trust and satisfaction, confirming the hypothesis that the accuracy of AI outputs is crucial for user reliance. These findings contribute to the academic discourse on human-AI interaction and have practical implications for AI developers and policymakers focused on creating and regulating reliable AI technologies.

The study bridges a crucial knowledge gap and provides a foundation for future research aimed at developing more robust and trustworthy AI systems. Readers engaged in AI development, implementation, and policymaking will find the insights particularly relevant, encouraging further exploration of strategies that could enhance user trust in AI technologies.
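The abstract does not specify which statistical tests were applied. As a minimal illustrative sketch only (the scores, scale, and choice of a paired-samples t-test below are assumptions, not details taken from the thesis), a pre/post comparison of questionnaire-based trust ratings could be analyzed along these lines in Python:

    import numpy as np
    from scipy import stats

    # Hypothetical pre- and post-interaction trust ratings (e.g., 1-7 Likert scale)
    # for participants exposed to hallucinatory outputs; values are illustrative only.
    pre_trust = np.array([5.8, 6.1, 5.5, 6.0, 5.9, 6.2, 5.7, 6.0])
    post_trust = np.array([4.2, 4.8, 3.9, 4.5, 4.1, 5.0, 4.3, 4.6])

    # Paired-samples t-test: did trust change significantly after the interaction?
    t_stat, p_value = stats.ttest_rel(pre_trust, post_trust)

    # Cohen's d on the paired differences as a simple effect-size estimate
    diff = pre_trust - post_trust
    cohens_d = diff.mean() / diff.std(ddof=1)

    print(f"t = {t_stat:.2f}, p = {p_value:.4f}, d = {cohens_d:.2f}")

A non-parametric alternative such as the Wilcoxon signed-rank test would be the usual substitution if the Likert-scale differences cannot be treated as approximately normal.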

Identifier oai:union.ndltd.org:UPSALLA1/oai:DiVA.org:lnu-130539
Date January 2024
Creators Oelschlager, Richard
Publisher Linnéuniversitetet, Institutionen för datavetenskap och medieteknik (DM)
Source Sets DiVA Archive at Upsalla University
Language English
Detected Language English
Type Student thesis, info:eu-repo/semantics/bachelorThesis, text
Format application/pdf
Rights info:eu-repo/semantics/openAccess