Large Language Models (LLMs) and their text-to-SQL capabilities are a highly relevant topic today, as using an LLM as a database interface would give easy access to the data in a database without any prior knowledge of SQL. This thesis studies how to best structure a prompt to increase the accuracy of an LLM on a text-to-SQL task. The study experimented with 5 different prompts and a total of 22 questions asked about the database, with difficulties ranging from easy to extra hard. The results showed that a simpler, less descriptive prompt performed better on the easy and medium questions, while a more descriptive prompt performed better on the hard and extra hard questions. The findings did not fully align with the hypothesis that more descriptive prompts would produce the most correct outputs. In conclusion, prompts that contained less "clutter" and were more straightforward seemed more effective on easy questions, while on harder questions a prompt with a better description and examples had a greater impact.
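To illustrate the contrast the abstract describes, the following is a minimal sketch of two hypothetical prompt styles for a text-to-SQL task: a terse, low-"clutter" prompt containing only the schema and the question, and a more descriptive prompt with column notes and a worked example. The schema, question, and `ask_llm` stub are illustrative assumptions, not the prompts or database used in the thesis.

```python
# Hypothetical sketch of two prompt styles for text-to-SQL.
# Schema, question, and ask_llm() are illustrative assumptions only.

SCHEMA = """CREATE TABLE employee (
    id INTEGER PRIMARY KEY,
    name TEXT,
    department TEXT,
    salary REAL
);"""

QUESTION = "What is the average salary per department?"

# Style 1: terse, low-"clutter" prompt -- schema plus question only.
simple_prompt = f"""{SCHEMA}

-- Write a SQL query that answers the question below.
-- Question: {QUESTION}
SELECT"""

# Style 2: descriptive prompt with column notes and a worked example.
descriptive_prompt = f"""You are an expert SQL assistant.

Database schema:
{SCHEMA}

Column notes:
- department: the employee's department name
- salary: yearly salary in USD

Example:
Question: How many employees are there?
SQL: SELECT COUNT(*) FROM employee;

Now answer the following.
Question: {QUESTION}
SQL:"""


def ask_llm(prompt: str) -> str:
    """Placeholder for a call to whichever LLM is being evaluated."""
    raise NotImplementedError("plug in your model client here")


if __name__ == "__main__":
    # Print both prompt variants so they can be compared side by side.
    for name, prompt in [("simple", simple_prompt), ("descriptive", descriptive_prompt)]:
        print(f"--- {name} prompt ---\n{prompt}\n")
```

In an evaluation like the one described, each prompt variant would be sent to the model for every question and the generated SQL compared against a reference query, grouped by question difficulty.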
Identifier | oai:union.ndltd.org:UPSALLA1/oai:DiVA.org:umu-226806 |
Date | January 2024 |
Creators | Lövlund, Pontus |
Publisher | Umeå universitet, Institutionen för datavetenskap |
Source Sets | DiVA Archive at Uppsala University |
Language | English |
Detected Language | English |
Type | Student thesis, info:eu-repo/semantics/bachelorThesis, text |
Format | application/pdf |
Rights | info:eu-repo/semantics/openAccess |
Relation | UMNAD ; 1478 |