1

AI Supported Software Development: Moving Beyond Code Completion

Pudari, Rohith, 30 August 2022
AI-supported programming has arrived, as shown by the introduction and successes of large language models for code, such as Copilot/Codex (GitHub/OpenAI) and AlphaCode (DeepMind). Above-average human performance on programming challenges is now possible. However, software development is much more than solving programming contests. Moving beyond code completion to AI-supported software development will require an AI system that can, among other things, understand how to avoid code smells, follow language idioms, and eventually (maybe!) propose rational software designs. In this study, we explore the current limitations of Copilot and offer a simple taxonomy for classifying AI-supported code completion tools in this space. We first perform an exploratory study on Copilot’s code suggestions for language idioms and code smells. In most of our test scenarios, Copilot neither follows language idioms nor avoids code smells. We then conduct additional investigation to determine the current boundaries of Copilot by introducing a taxonomy of software abstraction hierarchies, in which ‘basic programming functionality’ such as code compilation and syntax checking sits at the least abstract level, while software architecture analysis and design sit at the most abstract level. We conclude with a discussion of the challenges that future AI-supported code completion tools must overcome to reach the design level of abstraction in our taxonomy.
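As a concrete illustration of the idiom gap the thesis probes (this example is ours, not drawn from the study), consider a non-idiomatic loop next to the list comprehension an idiom-aware completion tool should prefer:

```python
# Non-idiomatic: manual index loop with explicit accumulation.
def squares_verbose(numbers):
    result = []
    for i in range(len(numbers)):
        result.append(numbers[i] ** 2)
    return result

# Idiomatic Python: a list comprehension states the same intent directly.
def squares_idiomatic(numbers):
    return [n ** 2 for n in numbers]

print(squares_idiomatic([1, 2, 3]))  # [1, 4, 9]
```

Both functions compute the same result; the study's point is that a code-completion model suggesting the first form, or code with smells such as duplicated logic, still falls short of idiomatic assistance.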
2

Chatbots in Software Development / Chattbotar inom mjukvaruutveckling

Friström, Alex, Wallén, Daniel, January 2023
This work examines the utilization of chatbots in programming and their effects on developer productivity, code quality, and problem-solving. The surge in AI technology and the popularity of chatbots has been remarkable since the end of 2022, when OpenAI introduced ChatGPT, capable of providing rapid and accurate responses to inquiries. This introduces novel opportunities for information accessibility without human interaction. Previous research within this domain has explored the usability of earlier chatbots in design-related professions, revealing a certain degree of utility. Now, with the advancement of AI, new prospects arise for investigating their utility, as emerging technologies often provide functionalities that facilitate or simplify specific tasks. Therefore, the aim of this study is to explore and analyze how chatbots such as ChatGPT and GitHub Copilot can function as interactive aids to streamline programming and systems development. Conducted as a qualitative study within the realms of programming and systems development, this work employs semi-structured qualitative interviews as its primary method of data collection. To analyze the information gathered from these interviews, a thematic analysis approach is adopted, facilitating the identification of commonalities and disparities in the responses. The findings demonstrate that AI tools are effective and beneficial in areas such as information retrieval and fundamental programming tasks, yet exhibit limitations in advanced programming endeavors and complex problem-solving. The study's respondents have employed these tools in their work and possess the expertise and experience to offer insights into developers' utilization of these tools in software development.
3

Problem Solving Using Automatically Generated Code / Problemlösning med automatiskt genererad kod

Catir, Emir, Claesson, Robin, January 2023
Usage of natural language processing tools to generate code is increasing together with advances in artificial intelligence. These tools could improve the efficiency of software development, if the generated code can be shown to be trustworthy enough to solve a given problem. This thesis examines which problems can be solved using automatically generated code such that the results can be trusted. A set of six problems was chosen for testing two automatic code generators and the accuracy of their generated code. The problems were chosen to span a range from introductory programming assignments to complex problems with no known efficient algorithm. The problems also varied in how direct their descriptions were: some described exactly what should be done, while others described a real-world scenario with a desired result. The problems were used as prompts to the automatic code generators to generate code in three different programming languages. A testing framework was built that could execute the generated code, feed problem instances to the resulting processes, and verify the solutions they produced. The data from these tests were then used to calculate the accuracy of the generated code, based on how many of the problem instances were correctly solved. The experimental results show that most solutions either got all outputs correct or had few or no correct outputs. Problems with direct explanations, or simple and well-known algorithms such as sorting, resulted in code with high accuracy. For problems that were wrapped in a scenario, the accuracy was the lowest. Hence, we believe that identifying the underlying problem before resorting to code generators should increase the accuracy of the code.
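The verification loop described in the abstract can be sketched roughly as follows. This is a minimal illustration with invented names; the thesis's actual framework ran generated programs as separate processes in three languages, whereas here a Python callable stands in for a generated program:

```python
def accuracy(solver, instances):
    """Fraction of problem instances the candidate solver answers correctly.

    `solver` stands in for a generated program; `instances` is a list of
    (input, expected_output) pairs, like the per-problem test sets
    the thesis describes.
    """
    correct = sum(1 for inp, expected in instances if solver(inp) == expected)
    return correct / len(instances)

# Hypothetical example: a generated sorting routine checked on three instances.
generated_sort = lambda xs: sorted(xs)
cases = [([3, 1, 2], [1, 2, 3]), ([5], [5]), ([], [])]
print(accuracy(generated_sort, cases))  # 1.0 — all instances solved
```

The bimodal outcome the thesis reports (all instances correct, or almost none) would show up here as accuracy values clustering near 1.0 or near 0.0.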
4

Enhancing Software Maintenance with Large Language Models : A comprehensive study

Younes, Youssef, Nassrallah, Tareq, January 2024
This study investigates the potential of Large Language Models (LLMs) to automate and enhance software maintenance tasks, focusing on bug detection and code refactoring. Traditional software maintenance, which includes debugging and code optimization, is time-consuming and prone to human error. With advancements in artificial intelligence, LLMs like ChatGPT and Copilot offer promising capabilities for automating these tasks. Through a series of quasi-experiments, we evaluate the effectiveness of ChatGPT 3.5, ChatGPT 4 (Grimoire GPT), and GitHub Copilot. Each model was tested on various code snippets to measure their ability to identify and correct bugs and refactor code while maintaining its original functionality. The results indicate that ChatGPT 4 (Grimoire GPT) outperforms the other models, demonstrating superior accuracy and effectiveness, with success percentages of 87.5% and 75% in bug detection and code refactoring, respectively. This research highlights the potential of advanced LLMs to significantly reduce the time and cost associated with software maintenance, though human oversight is still necessary to ensure code integrity. The findings contribute to the understanding of LLM capabilities in real-world software engineering tasks and pave the way for more intelligent and efficient software maintenance practices.
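The success percentages reported above are simple proportions of successful trials. As a small sketch (the trial counts below are illustrative only; the abstract does not state how many snippets were tested):

```python
def success_percentage(successes: int, trials: int) -> float:
    """Share of trials that succeeded, expressed as a percentage."""
    return 100.0 * successes / trials

# Illustrative counts, not taken from the study:
print(success_percentage(7, 8))  # 87.5
print(success_percentage(6, 8))  # 75.0
```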
5

AI i systemutveckling: En undersökning av användarupplevelser : En kvalitativ undersökning på ett svenskt universitet / AI in System Development: An Investigation of User Experiences : A Qualitative Study at a Swedish University

Söderholm, Leo, Tönnesen, Douglas, January 2024
The development of generative AI has made great strides, and more and more organizations are looking into implementing this new technology to increase productivity and efficiency. One of these new AI system-development tools is GitHub Copilot. The tool has shown great promise by offering functions such as automatic code generation, but this does not come without faults, as the generated code may be lacking in quality. How system developers within organizations experience this new technology is unknown, as is whether it is a worthwhile investment for the organizations in question. A qualitative study with semi-structured interviews has been carried out to capture the experiences of system developers concerning GitHub Copilot. The study was based on the theoretical framework Technology Acceptance Model 2 (TAM 2), from which selected factors were used to describe the intention to use the system. The aim was to identify factors that increase or decrease user acceptance, which we believe would provide insights into the contexts in which GitHub Copilot would lead to increased productivity and efficiency. Based on the four factors studied, perceived usefulness, perceived ease of use, job relevance, and output quality, the study concludes with factors that affect a user’s intention to use GitHub Copilot. The study reveals that system developers perceive the usage of GitHub Copilot as positive. They believe that it has the potential to increase both productivity and efficiency, and they perceive the tool as easy to get started with and easy to use. The quality of the generated code is perceived as somewhat lacking, but this did not affect their acceptance of the system.
