This thesis investigates the application of Large Language Models (LLMs) in automated test generation for software development, focusing on their challenges, benefits, and suitability for businesses. The study employs a mixed-methods approach, combining a literature review with empirical evaluations through surveys, interviews, and focus groups involving software developers and testers. Key findings indicate that LLMs enhance the efficiency and speed of test case generation, offering substantial improvements in test coverage and reducing development costs. However, the integration of LLMs poses several challenges, including technical complexities, the need for extensive customization, and concerns about the quality and reliability of the generated test cases. Additionally, ethical issues such as data biases and the potential impact on job roles were highlighted. The results show that while LLMs excel at generating test cases for routine tasks, their effectiveness diminishes in complex scenarios requiring deep domain knowledge and intricate system interactions. The study concludes that with proper training, continuous feedback, and iterative refinement, LLMs can be effectively integrated into existing workflows to complement traditional testing methods.
Identifier | oai:union.ndltd.org:UPSALLA1/oai:DiVA.org:bth-26519 |
Date | January 2024 |
Creators | Hurani, Muaz; Idris, Hamzeh |
Publisher | Blekinge Tekniska Högskola, Institutionen för programvaruteknik |
Source Sets | DiVA Archive at Uppsala University
Language | English |
Detected Language | English |
Type | Student thesis, info:eu-repo/semantics/bachelorThesis, text |
Format | application/pdf |
Rights | info:eu-repo/semantics/openAccess |