This thesis examines the use of Large Language Models (LLMs) in the financial sector, highlighting the risks of their application in financial services and the safety measures those risks demand. As these models become more common in financial tools, they bring new opportunities alongside significant challenges, such as erroneous financial advice and privacy breaches. This work introduces a detailed safeguard framework designed to improve the reliability, security, and ethical use of LLMs in financial applications. The framework combines specific safety features, including user-input validation, detection of incorrect information, and prevention of security breaches, to address these challenges effectively. Through quantitative benchmarks and case studies with a financial chatbot, the thesis demonstrates that the framework reduces operational risk and increases user trust. The results indicate that while LLMs already ship with some built-in safety features, layering tailored security measures on top substantially hardens these systems against sophisticated threats. This study advances the discussion on AI safety in financial settings and provides a practical guide for implementing strong safeguards that support reliable and ethical financial services.
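As a rough illustration of the layered safeguards the abstract describes (screening user inputs before they reach the model and screening model outputs before they reach the user), the sketch below wraps a generic chatbot call in two checks. This is a minimal illustration only, not the thesis's implementation: every name (`guarded_chat`, `check_input`, `check_output`, `GuardrailResult`) and all pattern lists are hypothetical, and a production system would use trained classifiers and retrieval-based fact checking rather than regular expressions.

```python
import re
from dataclasses import dataclass, field

# Hypothetical pattern lists for illustration; a real deployment would
# replace these with trained classifiers and policy engines.
INJECTION_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"you are now",
    r"pretend to be",
]
PII_PATTERNS = [
    r"\b\d{3}-\d{2}-\d{4}\b",     # SSN-like digit pattern
    r"\b(?:\d[ -]?){13,19}\b",    # card-number-like digit run
]
UNHEDGED_ADVICE = [
    r"\bguaranteed returns?\b",
    r"\brisk[- ]free\b",
]

@dataclass
class GuardrailResult:
    allowed: bool
    reasons: list = field(default_factory=list)

def check_input(user_message: str) -> GuardrailResult:
    """Screen the user prompt before it reaches the LLM."""
    reasons = []
    lowered = user_message.lower()
    for pat in INJECTION_PATTERNS:
        if re.search(pat, lowered):
            reasons.append(f"possible prompt injection: {pat!r}")
    for pat in PII_PATTERNS:
        if re.search(pat, user_message):
            reasons.append("possible PII detected; refusing to forward")
    return GuardrailResult(allowed=not reasons, reasons=reasons)

def check_output(model_reply: str) -> GuardrailResult:
    """Screen the LLM reply before it reaches the user."""
    reasons = []
    for pat in UNHEDGED_ADVICE:
        if re.search(pat, model_reply.lower()):
            reasons.append(f"unhedged financial claim: {pat!r}")
    return GuardrailResult(allowed=not reasons, reasons=reasons)

def guarded_chat(user_message: str, call_llm) -> str:
    """Wrap an arbitrary `call_llm(prompt) -> str` with both checks."""
    pre = check_input(user_message)
    if not pre.allowed:
        return "Request blocked by input safeguards: " + "; ".join(pre.reasons)
    reply = call_llm(user_message)
    post = check_output(reply)
    if not post.allowed:
        return "Reply withheld by output safeguards: " + "; ".join(post.reasons)
    return reply

if __name__ == "__main__":
    # Stand-in model that returns an unhedged claim, to show the output check firing.
    echo = lambda prompt: "Our fund offers guaranteed returns every year."
    print(guarded_chat("What is a diversified portfolio?", echo))
```

The design point this sketch makes is the one the abstract argues: the input and output filters sit outside the model, so tailored safeguards can be tightened or audited independently of whatever built-in safety behavior the underlying LLM provides.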
Identifier | oai:union.ndltd.org:UPSALLA1/oai:DiVA.org:uu-532635
Date | January 2024
Creators | Fredrikson, Gustav
Publisher | Uppsala universitet, Avdelningen för beräkningsvetenskap
Source Sets | DiVA Archive at Uppsala University
Language | English
Detected Language | English
Type | Student thesis, info:eu-repo/semantics/bachelorThesis, text
Format | application/pdf
Rights | info:eu-repo/semantics/openAccess
Relation | UPTEC F, 1401-5757 ; 24019