
Exploring the impact of varying prompts on the accuracy of database querying with an LLM

Large language models (LLMs) and their text-to-SQL abilities are a highly relevant topic today, as using an LLM as a database interface would give users easy access to the data without any prior knowledge of SQL. This thesis studies how best to structure a prompt to increase the accuracy of an LLM on a text-to-SQL task. The study experimented with 5 different prompts and a total of 22 questions asked about the database, with difficulties ranging from easy to extra hard. The results showed that a simpler, less descriptive prompt performed better on the easy and medium questions, while a more descriptive prompt performed better on the hard and extra hard questions. The findings did not fully align with the hypothesis that the most descriptive prompts would produce the most correct outputs. In conclusion, prompts that contained less "clutter" and were more straightforward were more effective on easy questions, while on harder questions a prompt with a better description and examples performed better.
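The experimental setup described above — several prompt templates scored against a fixed set of questions bucketed by difficulty — can be sketched roughly as below. This is an illustrative outline, not the thesis's actual code: the prompt texts, the `query_llm` stand-in, and the exact-match scoring are all assumptions made for the example.

```python
# Illustrative sketch (not the thesis code): score several prompt
# templates on question/gold-SQL pairs, grouped by difficulty.
from collections import defaultdict

def evaluate(prompts, questions, query_llm):
    """Return accuracy per (prompt name, difficulty) bucket.

    prompts:   {name: template containing a {question} placeholder}
    questions: [(question_text, gold_sql, difficulty), ...]
    query_llm: stand-in for the model call; returns a SQL string
    """
    hits = defaultdict(int)
    totals = defaultdict(int)
    for name, template in prompts.items():
        for text, gold, difficulty in questions:
            predicted = query_llm(template.format(question=text))
            totals[(name, difficulty)] += 1
            # Naive exact-match check; real text-to-SQL evaluation
            # typically compares query execution results instead.
            if predicted.strip().lower() == gold.strip().lower():
                hits[(name, difficulty)] += 1
    return {key: hits[key] / totals[key] for key in totals}

# Toy run with a fake model so the sketch is self-contained.
prompts = {
    "minimal": "Write SQL: {question}",
    "descriptive": "Schema: users(id, name). Write SQL: {question}",
}
questions = [("List all users", "SELECT * FROM users", "easy")]
fake_llm = lambda prompt: "SELECT * FROM users"
acc = evaluate(prompts, questions, fake_llm)
```

Comparing the resulting per-difficulty accuracies across prompt names is what lets one observe the pattern the abstract reports, i.e. simpler prompts winning on easy questions and more descriptive ones on hard questions.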

Identifier: oai:union.ndltd.org:UPSALLA1/oai:DiVA.org:umu-226806
Date: January 2024
Creators: Lövlund, Pontus
Publisher: Umeå universitet, Institutionen för datavetenskap
Source Sets: DiVA Archive at Upsalla University
Language: English
Detected Language: English
Type: Student thesis, info:eu-repo/semantics/bachelorThesis, text
Format: application/pdf
Rights: info:eu-repo/semantics/openAccess
Relation: UMNAD ; 1478
