  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Prompt engineering and its usability to improve modern psychology chatbots / Prompt engineering och dess användbarhet för att förbättra psykologichatbottar

Nordgren, Isak, E. Svensson, Gustaf January 2023 (has links)
As advancements in chatbots and Large Language Models (LLMs) such as GPT-3.5 and GPT-4 continue, their applications in diverse fields, including psychology, expand. This study investigates the effectiveness of LLMs optimized through prompt engineering, aiming to enhance their performance in psychological applications. To this end, two distinct versions of a GPT-3.5-based chatbot were developed: a version similar to the base model, and a version equipped with a more extensive system prompt detailing expected behavior. A panel of professional psychologists evaluated these models based on a predetermined set of questions, providing insight into their potential future use as psychological tools. Our results indicate that an overly prescriptive system prompt can unintentionally limit the versatility of the chatbot, making a careful balance in instruction specificity essential. Furthermore, while our study suggests that current LLMs such as GPT-3.5 are not capable of fully replacing human psychologists, they can provide valuable assistance in tasks such as basic question answering, consolation and validation, and triage. These findings provide a foundation for future research into the effective integration of LLMs in psychology and contribute valuable insights into the promising field of AI-assisted psychological services.
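The two chatbot variants described above differ only in their system prompt. A minimal sketch of how such conditions can be set up for a chat-completion API follows; the prompt wording and model name are illustrative assumptions, not the authors' actual prompts.

```python
# Sketch of the two system-prompt conditions: a near-baseline prompt
# versus an extensive prompt detailing expected behavior.
# Prompt texts and model name are assumptions for illustration only.

MINIMAL_PROMPT = "You are a helpful assistant."

EXTENSIVE_PROMPT = (
    "You are a supportive psychological assistant. "
    "Answer questions empathetically, validate the user's feelings, "
    "avoid giving diagnoses, and refer users in crisis to professional help."
)

def build_request(system_prompt: str, user_message: str) -> dict:
    """Assemble a chat-completion request body for one condition."""
    return {
        "model": "gpt-3.5-turbo",
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message},
        ],
    }

baseline = build_request(MINIMAL_PROMPT, "I feel anxious before exams.")
detailed = build_request(EXTENSIVE_PROMPT, "I feel anxious before exams.")
```

Holding the user message fixed and varying only the system prompt, as above, is what lets an evaluation panel attribute behavioral differences to the prompt itself.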
12

Large language models as an interface to interact with API tools in natural language

Tesfagiorgis, Yohannes Gebreyohannes, Monteiro Silva, Bruno Miguel January 2023 (has links)
In this research project, we explore the use of Large Language Models (LLMs) as an interface for interacting with API tools in natural language. Bubeck et al. [1] shed some light on how LLMs could be used to interact with API tools. Since then, new versions of LLMs have been launched, and the question of how reliable an LLM can be at this task remains unanswered. The main goal of our thesis is to investigate the designs of the available system prompts for LLMs, identify the best-performing prompts, and evaluate the reliability of different LLMs when using those prompts. We employ a multi-stage controlled experiment: a literature review surveying the system prompts used in the scientific community and in open-source projects; an analysis of the precision and recall of these system prompts, using F1-score as the metric, to select the best-performing prompts for interacting with API tools; and a final stage in which we compare a selection of LLMs using the best-performing prompts identified earlier. From these experiments, we find that, with GPT-4, AI-generated system prompts outperform the prompts currently used in open-source projects and the literature; that zero-shot prompts perform better on this specific task with GPT-4; and that a good system prompt for one model does not generalize well to other models.
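The scoring described above can be made concrete: treat the API calls the LLM emits as predictions and the calls a correct answer requires as the reference set, then compute precision, recall, and their harmonic mean (F1). The tool names below are hypothetical examples, not from the thesis.

```python
def precision_recall_f1(predicted: set, expected: set) -> tuple:
    """Score one system prompt by comparing the API calls the LLM
    produced (predicted) against the calls required by a correct
    answer (expected)."""
    true_pos = len(predicted & expected)
    precision = true_pos / len(predicted) if predicted else 0.0
    recall = true_pos / len(expected) if expected else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Toy example: the model made two of the three required calls
# plus one spurious call.
p, r, f1 = precision_recall_f1(
    predicted={"get_weather", "send_email", "get_time"},
    expected={"get_weather", "send_email", "search_web"},
)
```

Here both precision and recall are 2/3, so F1 is also 2/3; aggregating this score over a benchmark of tasks is one way to rank competing system prompts.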
13

Går det att lita på ChatGPT? En kvalitativ studie om studenters förtroende för ChatGPT i lärandesammanhang / Can ChatGPT be trusted? A qualitative study of students' trust in ChatGPT in learning contexts

Härnström, Alexandra, Bergh, Isak Eljas January 2023 (has links)
The world's technological development is advancing rapidly, especially when it comes to "smart" machines and algorithms with the ability to adapt to their surroundings. This is partly due to the enormous amount of available data and partly thanks to increased storage capacity. In November 2022, one of the latest AI-based programs was released: the chatbot ChatGPT. Within two months, ChatGPT had over 100 million users. This web-based software can engage in real-time conversations with users by answering text-based questions. By quickly, and often accurately, answering users' questions in a human-like and convincing manner, the service has generated a lot of attention in a short period of time. Several studies show that a large number of people lack a general trust in AI. Some studies argue that the responses generated by ChatGPT cannot always be assumed to be completely accurate and should therefore be followed up with extensive fact-checking, as they may otherwise contribute to the spread of false information. Since trust in AI has been shown to be an important factor in how well the technology develops and integrates, a lack of trust in services like ChatGPT can be a hindrance to effective usage. Despite the increased productivity observed when companies adopt AI technology, it has not been integrated to the same extent within higher education as an aid for students. By determining the level of trust that students have in ChatGPT in a learning context, valuable information can be obtained to assist in the integration of such AI technology. However, there is a lack of specific research on students' trust in ChatGPT in a learning context. Therefore, this study aims to fill this knowledge gap by conducting an exploratory survey. Our research question is: "What trust do students have in ChatGPT in a learning context?". The survey was conducted through semi-structured interviews with eight students who had used ChatGPT in a learning context. The interviews generated qualitative data that were analyzed using thematic analysis, and the results showed that students' trust in ChatGPT in a learning context depends on several factors. During the analysis, six themes were identified as relevant for answering the research question: experiences, usage, ChatGPT's character, external influences, organizations, and future trust.
14

The future of IT Project Management & Delivery: NLP AI opportunities & challenges

Viznerova, Ester January 2023 (has links)
This thesis explores the opportunities and challenges of integrating recent Natural Language Processing (NLP) Artificial Intelligence (AI) advancements into IT project management and delivery (PM&D). Using a qualitative design based on a hermeneutic phenomenological strategy, the study employs a semi-systematic literature review and semi-structured interviews to examine NLP AI's potential impacts on IT PM&D from both theoretical and practical standpoints. The results revealed numerous opportunities for NLP AI application across Project Performance Domains, enhancing areas such as stakeholder engagement, team productivity, project planning, performance measurement, project work, delivery, and risk management. However, challenges were identified in areas including system integration, value definition, team- and stakeholder-related issues, environmental considerations, and ethical concerns. In-house and third-party model usage also presented their own set of challenges, notably cost implications, data privacy and security, result quality, and dependence issues. The research concludes that the immense potential of NLP AI in IT PM&D is tempered by these challenges, and calls for robust strategies, sound ethics, comprehensive training, new ROI evaluation frameworks, and responsible AI usage to manage these issues effectively. This thesis provides valuable insights to academics, practitioners, and decision-makers navigating the rapidly evolving landscape of NLP AI in IT PM&D.
15

Large Language Models : Bedömning av ChatGPT:s potential som verktyg för kommentering av kod / Large Language Models : Assessment of ChatGPT's Potential as a Tool for Code Commenting

Svensson, Tom, Vuk, Dennis January 2023 (has links)
The usage of Artificial Intelligence (AI) is widespread among both companies and individuals today. It has become an integrated part of our society, often going unnoticed. From face recognition and self-driving cars to automation in work-related areas, AI has undeniably impacted the world. As AI models continue to evolve, concerns arise about their impact on jobs, associated security risks, and ethical dilemmas. The literature in this thesis helps portray AI historically and in the present, and provides an insight into its future direction. The AI model that has currently garnered the most attention is ChatGPT. Its potential seems limitless, which prompted the relevance of increasing knowledge about the model. A delimitation was made: the focus area was to investigate how ChatGPT can generate code comments and potentially act as a tool for commenting source code. As part of this focus, the research question was formulated: "Large Language Models: Assessment of ChatGPT's Potential as a Tool for Code Commenting." To answer the research question, the thesis adopted a qualitative approach, with programmers as the selected respondents. The primary data collection was conducted through two semi-structured interviews, where the initial interview captured first impressions of ChatGPT and general information about the interviewees. An observation was then carried out to gain insight into how programmers use the AI model, followed by a post-observation interview to gather the interviewees' thoughts after using ChatGPT to generate code comments. Based on the collected empirical data, the study concluded certain limitations in the current model, particularly the need for clear instructions. Despite these limitations, ChatGPT's performance demonstrates the potential to be a significant resource for code commenting in the future. The results indicate that the model can generate relatively suitable comments for the analyzed code snippets. However, during the concluding interviews, participants generally expressed that the comments were redundant and lacked significant value in enhancing the understanding of the source code. Nevertheless, the respondents discussed the possibilities of using ChatGPT in the future, while emphasizing the need for improvements to establish it as a reliable method in work-related situations.
16

An initial investigation of Automatic Program Repair for Solidity Smart Contracts with Large Language Models / En första undersökning av automatisk lagning av solidity smarta kontrakt med stora språkmodeller

Cruz, Erik January 2023 (has links)
This thesis investigates how Large Language Models can be used to repair Solidity Smart Contracts automatically through its main contribution, the Transformative Repair Tool. The Transformative Repair Tool achieves similar results to current state-of-the-art tools on the Smartbugs Curated Dataset and is the first published tool that uses Large Language Models to repair Solidity Smart Contracts. Moreover, the thesis explores different prompt strategies for repairing Smart Contracts and assesses their performance.
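One way a prompt strategy for contract repair can be framed is as a template that pairs the vulnerable source with a description of the defect. The template, function names, and example contract below are assumptions for illustration; they are not the thesis' actual Transformative Repair Tool prompts.

```python
# Hypothetical sketch of framing an LLM repair prompt for a Solidity
# contract. Template wording and the example contract are assumptions,
# not taken from the thesis.

REPAIR_TEMPLATE = (
    "The following Solidity contract contains a {vulnerability} "
    "vulnerability:\n{source}\n"
    "Return a repaired version of the contract that fixes the "
    "vulnerability without changing its intended behaviour."
)

def build_repair_prompt(source: str, vulnerability: str) -> str:
    return REPAIR_TEMPLATE.format(vulnerability=vulnerability, source=source)

contract = """pragma solidity ^0.8.0;
contract Wallet {
    mapping(address => uint) balances;
    function withdraw() public {
        (bool ok,) = msg.sender.call{value: balances[msg.sender]}("");
        require(ok);
        balances[msg.sender] = 0; // state updated after external call
    }
}"""

prompt = build_repair_prompt(contract, "reentrancy")
```

Varying how much defect information the template includes (none, a vulnerability class, or a localized line) is one axis along which such prompt strategies can be compared.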
17

DEEP LEARNING BASED METHODS FOR AUTOMATIC EXTRACTION OF SYNTACTIC PATTERNS AND THEIR APPLICATION FOR KNOWLEDGE DISCOVERY

Mdahsanul Kabir (16501281) 03 January 2024 (has links)
Semantic pairs, which consist of related entities or concepts, serve as the foundation for comprehending the meaning of language in both written and spoken forms. These pairs enable us to grasp the nuances of relationships between words, phrases, or ideas, forming the basis for more advanced language tasks like entity recognition, sentiment analysis, machine translation, and question answering. They allow us to infer causality, identify hierarchies, and connect ideas within a text, ultimately enhancing the depth and accuracy of automated language processing.

Nevertheless, the task of extracting semantic pairs from sentences poses a significant challenge, which underscores the relevance of syntactic dependency patterns (SDPs). Fortunately, semantic relationships adhere to distinct SDPs when connecting pairs of entities. Recognizing this fact underscores the critical importance of extracting these SDPs, particularly for specific semantic relationships like hyponym-hypernym, meronym-holonym, and cause-effect associations. The automated extraction of such SDPs carries substantial advantages for various downstream applications, including entity extraction, ontology development, and question answering. Unfortunately, this pivotal facet of pattern extraction has remained relatively overlooked by researchers in natural language processing (NLP) and information retrieval.

To address this gap, I introduce an attention-based supervised deep learning model, ASPER. ASPER is designed to extract SDPs that denote semantic relationships between entities within a given sentential context. I rigorously evaluate the performance of ASPER across three distinct semantic relations: hyponym-hypernym, cause-effect, and meronym-holonym, utilizing six datasets. My experimental findings demonstrate ASPER's ability to automatically identify an array of SDPs that mirror the presence of these semantic relationships within sentences, outperforming existing pattern extraction methods by a substantial margin.

Second, I use the SDPs to extract semantic pairs from sentences, choosing to extract cause-effect entities from medical literature. This task is instrumental in compiling various causality relationships, such as those between diseases and symptoms, medications and side effects, and genes and diseases. Existing solutions excel in sentences where cause and effect phrases are straightforward, such as named entities, single-word nouns, or short noun phrases. However, in the complex landscape of medical literature, cause and effect expressions often extend over several words, stumping existing methods and resulting in incomplete extractions that provide low-quality, non-informative, and at times conflicting information. To overcome this challenge, I introduce an innovative unsupervised method for extracting cause and effect phrases, PatternCausality, tailored explicitly for medical literature. PatternCausality employs a set of cause-effect dependency patterns as templates to identify the key terms within cause and effect phrases. It then utilizes a novel phrase extraction technique to produce comprehensive and meaningful cause and effect expressions from sentences. Experiments conducted on a dataset constructed from PubMed articles reveal that PatternCausality significantly outperforms existing methods, achieving an order-of-magnitude improvement in F-score over the best-performing alternatives. I also develop several PatternCausality variants that utilize diverse phrase extraction methods, all of which surpass existing approaches. PatternCausality and its variants also exhibit notable performance improvements in extracting cause and effect entities on a domain-neutral benchmark dataset, wherein cause and effect entities are confined to single-word nouns or noun phrases of one to two words.

Nevertheless, PatternCausality operates within an unsupervised framework and relies heavily on SDPs, motivating me to explore a supervised approach. Although SDPs play a pivotal role in semantic relation extraction, pattern-based methodologies remain unsupervised, and the multitude of potential patterns within a language can be overwhelming. Furthermore, patterns do not consistently capture the broader context of a sentence, leading to the extraction of false-positive semantic pairs. As an illustration, consider the hyponym-hypernym pattern "the w of u", which correctly extracts a semantic pair from a sentence like "the village of Aasu" but fails to do so for the phrase "the moment of impact". The root cause of this limitation lies in the pattern's inability to capture the nuanced meaning of words and phrases in a sentence and their contextual significance. These observations have spurred my exploration of a third model, DepBERT, a dependency-aware supervised transformer model. DepBERT's primary contribution lies in introducing the underlying dependency structure of sentences to a language model with the aim of enhancing token classification performance. To achieve this, I first reframe the task of semantic pair extraction as a token classification problem. The DepBERT model can harness both the tree-like structure of dependency patterns and the masked language architecture of transformers, marking a significant milestone, as most large language models (LLMs) predominantly focus on semantics and word co-occurrence while neglecting the crucial role of dependency architecture.

In summary, my overarching contributions in this thesis are threefold. First, I validate the significance of the dependency architecture within various components of sentences and publish SDPs that incorporate these dependency relationships. Second, I employ these SDPs in a practical medical domain to extract vital cause-effect pairs from sentences. Finally, I integrate dependency relations into a deep learning model, enhancing the understanding of language and the extraction of valuable semantic associations.
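The false-positive problem the abstract illustrates with "the w of u" can be reproduced with a naive surface matcher. This sketch is illustrative only (a regex stand-in, not the thesis' dependency-based patterns): it matches both the valid and the invalid pair, showing why patterns alone over-generate without context.

```python
import re

# Naive surface matcher for the hyponym-hypernym pattern "the w of u".
# It ignores word meaning and context, so it extracts a pair from
# "the moment of impact" just as readily as from "the village of Aasu".
PATTERN = re.compile(r"\bthe (\w+) of (\w+)\b", re.IGNORECASE)

def extract_pairs(sentence: str):
    """Return (hyponym-candidate, hypernym-candidate) tuples."""
    return [(m.group(2), m.group(1)) for m in PATTERN.finditer(sentence)]

good = extract_pairs("They visited the village of Aasu.")
# Valid pair: Aasu is indeed a kind of village.
bad = extract_pairs("At the moment of impact, all was still.")
# Spurious pair: "impact" is not a kind of "moment".
```

A context-aware model such as DepBERT is meant to keep the first extraction and reject the second by conditioning on the full sentence rather than the surface pattern alone.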
18

Leveraging Advanced Large Language Models To Optimize Network Device Configuration

Mark Bogdanov (18429435) 24 April 2024 (has links)
Recent advancements in large language models such as ChatGPT and AU Large allow for the effective integration of LLMs into network devices such as switches and routers, where they can play a role in configuration and management. These devices are an essential part of every network infrastructure, and physical networking topologies are complex by nature, which makes meticulous and precise configuration necessary to ensure optimal network efficiency and security.

The research explores the potential of an AI-driven interface that utilizes AU Large to streamline, enhance, and automate the configuration process of network devices while guaranteeing the security of the whole process by running the entire system on-premise. Three core areas are of primary concern in this study: the effectiveness of integrating AU Large into network management systems; the impact on efficiency, accuracy, and error rates in network configurations; and the scalability and adaptability to more complex requirements and growing network environments.

The key performance metrics evaluated are the error rate in the generated configurations, scalability as more network devices are added, and the ability to accurately generate highly complex configurations. The high-level results show an evident correlation between increased device count and increased prompt complexity on the one hand and degraded performance of the AU Large model from Mistral AI on the other.

This research has significant potential to alter present network management practices by applying AI to make network configuration more efficient, reduce the scope for human error, and create an adaptable tool for diverse and complex networking environments. It contributes to both the AI and network management fields by highlighting a path toward the "future of network management."
19

Introducing Generative Artificial Intelligence in Tech Organizations : Developing and Evaluating a Proof of Concept for Data Management powered by a Retrieval Augmented Generation Model in a Large Language Model for Small and Medium-sized Enterprises in Tech / Introducering av Generativ Artificiell Intelligens i Tech Organisationer : Utveckling och utvärdering av ett Proof of Concept för datahantering förstärkt av en Retrieval Augmented Generation Model tillsammans med en Large Language Model för små och medelstora företag inom Tech

Lithman, Harald, Nilsson, Anders January 2024 (has links)
In recent years, generative AI has made significant strides, likely leaving an irreversible mark on contemporary society. The launch of OpenAI's ChatGPT 3.5 in 2022 demonstrated the capabilities of this innovative technology, highlighting its performance and accessibility. This has led to a demand for implementation solutions across industries, with companies eager to leverage the new opportunities generative AI brings. This thesis explores the common operational challenges faced by a small-scale Tech Enterprise and, with these challenges identified, examines the opportunities that contemporary generative AI solutions may offer. Furthermore, the thesis investigates what type of generative technology is suitable for adoption and how it can be implemented responsibly and sustainably. The authors approach this topic through 14 interviews involving several AI researchers and the employees and executives of a small-scale Tech Enterprise, which served as a case company, combined with a literature review. The information was processed using multiple inductive thematic analyses to establish a solid foundation for the investigation, which led to the development of a Proof of Concept. The findings and conclusions of the authors emphasize the high relevance of having a clear purpose for the implementation of generative technology. Moreover, the authors predict that a sustainable and responsible implementation can create the conditions necessary for the specified small-scale company to grow. When the authors investigated potential operational challenges at the case company, it became clear that the most significant issue arose from unstructured and partially absent documentation.
The conclusion reached by the authors is that a data management system powered by a retrieval model coupled with an LLM presents a potential path forward for significant value creation, as this solution enables data retrieval from unstructured project data and also mitigates a major inherent issue with the technology, namely hallucinations. Furthermore, in terms of implementation circumstances, both empirical and theoretical findings suggest that responsible use of generative technology requires training; hence, the authors have developed an educational framework named "KLART". Looking ahead, the authors describe how sustainable implementation necessitates transparent systems, as transparency increases understanding, which in turn affects trust and secure use. The findings also indicate that sustainability is strongly linked to the user-friendliness of the AI service, leading the authors to emphasize the importance of human-centered design (HCD) while developing and maintaining AI services. Finally, the authors argue for the value of automation, as it allows for continuous data and system updates that can potentially reduce maintenance. In summary, this thesis aims to contribute to an understanding of how small-scale Tech Enterprises can implement generative AI technology sustainably to enhance their competitive edge through innovation and data-driven decision-making.
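The retrieval-augmented approach the authors propose can be sketched at its core as two steps: fetch the most relevant document chunks, then prepend them to the model prompt so answers are grounded in company data rather than the model's parametric memory. The keyword-overlap scoring and example documents below are simplifying assumptions; a production system would use embedding-based retrieval.

```python
# Minimal sketch of the retrieval step in a Retrieval Augmented
# Generation (RAG) pipeline. Scoring is plain keyword overlap for
# illustration; document contents are invented examples.

def retrieve(query: str, documents: list, k: int = 2) -> list:
    """Return the k documents sharing the most terms with the query."""
    query_terms = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(query_terms & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str, documents: list) -> str:
    """Ground the model by restricting it to retrieved context."""
    context = "\n".join(retrieve(query, documents))
    return (
        "Answer using only the context below.\n"
        f"Context:\n{context}\n"
        f"Question: {query}"
    )

docs = [
    "Project Alpha uses PostgreSQL for storage.",
    "Invoices are archived monthly by the finance team.",
    "Project Alpha deadlines are tracked in the shared planner.",
]
prompt = build_prompt("What database does Project Alpha use?", docs)
```

Because the instruction restricts the model to the retrieved context, answers can be traced back to source documents, which is the mechanism by which RAG mitigates hallucination.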
