
Enhancing Software Maintenance with Large Language Models: A Comprehensive Study

This study investigates the potential of Large Language Models (LLMs) to automate and enhance software maintenance tasks, focusing on bug detection and code refactoring. Traditional software maintenance, which includes debugging and code optimization, is time-consuming and prone to human error. With advances in artificial intelligence, LLMs such as ChatGPT and Copilot offer promising capabilities for automating these tasks. Through a series of quasi-experiments, we evaluate the effectiveness of ChatGPT 3.5, ChatGPT 4 (Grimoire GPT), and GitHub Copilot. Each model was tested on a set of code snippets to measure its ability to identify and correct bugs and to refactor code while preserving its original functionality. The results indicate that ChatGPT 4 (Grimoire GPT) outperforms the other models, demonstrating superior accuracy and effectiveness, with success rates of 87.5% in bug detection and 75% in code refactoring. This research highlights the potential of advanced LLMs to significantly reduce the time and cost associated with software maintenance, though human oversight remains necessary to ensure code integrity. The findings contribute to the understanding of LLM capabilities in real-world software engineering tasks and pave the way for more intelligent and efficient software maintenance practices.
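To make the evaluation setup concrete, the following is a hypothetical illustration (not one of the study's actual snippets) of the kind of bug-detection task described above: a model is given a small function with a subtle defect and asked to identify and correct it while preserving the intended behavior.

```python
# Hypothetical maintenance task: the function is meant to return the
# arithmetic mean of a list, but contains an off-by-one bug that a
# model under test would be asked to find and fix.

def average_buggy(values):
    total = 0
    for i in range(len(values) - 1):  # bug: the last element is skipped
        total += values[i]
    return total / len(values)

def average_fixed(values):
    # Corrected version: sum over every element before dividing.
    return sum(values) / len(values)

print(average_buggy([2, 4, 6]))  # 2.0 (wrong: last element omitted)
print(average_fixed([2, 4, 6]))  # 4.0 (correct mean)
```

A correct model response would both localize the faulty loop bound and produce a fix that keeps the function's signature and purpose intact, which is the "maintaining original functionality" criterion used in the study.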

Identifier: oai:union.ndltd.org:UPSALLA1/oai:DiVA.org:lnu-130254
Date: January 2024
Creators: Younes, Youssef; Nassrallah, Tareq
Publisher: Linnéuniversitetet, Institutionen för datavetenskap och medieteknik (DM)
Source Sets: DiVA Archive at Upsalla University
Language: English
Detected Language: English
Type: Student thesis, info:eu-repo/semantics/bachelorThesis, text
Format: application/pdf
Rights: info:eu-repo/semantics/openAccess
