Return to search

Secure Interactions with Large Language Models in Financial Services : A Study on Implementing Safeguards for Large Language Models

This thesis examines the use of Large Language Models (LLMs) in the financial sector, highlighting the risks and necessary safety measures for their application in financial services. As these models become more common in various financial tools, they bring both new opportunities and significant challenges, such as potential errors in financial advice and privacy issues. This work introduces a detailed safeguarded framework designed to improve the reliability, security, and ethical use of LLMs in financial applications. The framework includes specific safety features like checking user inputs, detecting incorrect information, and preventing security breaches to tackle these challenges effectively. Using quantitative testing benchmarks and case studies with a financial chatbot, this thesis shows that this framework helps reduce operational risks and increases trust among users. The results show that while LLMs already have some built-in safety features, adding tailored security measures greatly strengthens these systems against complex threats. This study advances the discussion on AI safety in financial settings and provides a practical guide for implementing strong safety measures that ensure reliable and ethical financial services.

Identiferoai:union.ndltd.org:UPSALLA1/oai:DiVA.org:uu-532635
Date January 2024
CreatorsFredrikson, Gustav
PublisherUppsala universitet, Avdelningen för beräkningsvetenskap
Source SetsDiVA Archive at Upsalla University
LanguageEnglish
Detected LanguageEnglish
TypeStudent thesis, info:eu-repo/semantics/bachelorThesis, text
Formatapplication/pdf
Rightsinfo:eu-repo/semantics/openAccess
RelationUPTEC F, 1401-5757 ; 24019

Page generated in 0.002 seconds