ChatGPT has become a popular technology and gained a considerable user base because of its ability to generate effective responses to users' requests. However, as ChatGPT's popularity has grown and as other natural language processing systems (NLPs) are developed and adopted, several concerns have been raised about the technology that could have implications for user trust. Because trust plays a central role in users' willingness to adopt artificial intelligence (AI) systems, and there is no research consensus on what facilitates trust, more research is needed to identify the factors that affect user trust in AI systems, especially modern technologies such as NLPs. The aim of the study was therefore to identify the factors that affect user trust in NLPs. Findings from the literature on trust and artificial intelligence indicated that there may be a relationship between trust and transparency, explainability, accuracy, reliability, automation, augmentation, anthropomorphism and data privacy. These factors were studied together quantitatively in order to uncover what affects user trust in NLPs. The results indicated that transparency, accuracy, reliability, automation, augmentation, anthropomorphism and data privacy all have a positive impact on user trust in NLPs, which both supported and opposed previous findings in the literature.
Identifier | oai:union.ndltd.org:UPSALLA1/oai:DiVA.org:uu-532416 |
Date | January 2024 |
Creators | Aronsson Bünger, Morgan |
Publisher | Uppsala universitet, Informationssystem |
Source Sets | DiVA Archive at Uppsala University |
Language | English |
Detected Language | English |
Type | Student thesis, info:eu-repo/semantics/bachelorThesis, text |
Format | application/pdf |
Rights | info:eu-repo/semantics/openAccess |