11

Large language models as an interface to interact with API tools in natural language

Tesfagiorgis, Yohannes Gebreyohannes; Monteiro Silva, Bruno Miguel. January 2023.
In this research project, we explore the use of Large Language Models (LLMs) as an interface for interacting with API tools in natural language. Bubeck et al. [1] shed some light on how LLMs could be used to interact with API tools. Since then, new versions of LLMs have been launched, and the question of how reliable an LLM can be at this task remains unanswered. The main goal of our thesis is to investigate the designs of the available system prompts for LLMs, identify the best-performing prompts, and evaluate the reliability of different LLMs when using those prompts. We employ a multi-stage controlled experiment: a literature review in which we survey the system prompts used in the scientific community and in open-source projects; an analysis, using F1-score as the metric, of the precision and recall of these system prompts, aiming to select the best-performing ones for interacting with API tools; and, in a later stage, a comparison of a selection of LLMs using the best-performing prompts identified earlier. From these experiments, we find that AI-generated system prompts perform better with GPT-4 than the prompts currently used in open-source projects and in the literature, that zero-shot prompts perform better on this specific task with GPT-4, and that a good system prompt for one model does not generalize well to other models.
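The evaluation above scores system prompts by the precision, recall, and F1 of the tool calls a model emits. A minimal, self-contained sketch of that metric; the set representation and the test case are illustrative, not the thesis's actual setup:

```python
# Hedged sketch: score one prompt by comparing predicted (tool, argument)
# pairs against gold-standard pairs. The example data is hypothetical.

def f1_score(predicted: set, gold: set) -> tuple[float, float, float]:
    """Precision, recall and F1 over sets of (tool, argument) pairs."""
    true_positives = len(predicted & gold)
    precision = true_positives / len(predicted) if predicted else 0.0
    recall = true_positives / len(gold) if gold else 0.0
    if precision + recall == 0:
        return precision, recall, 0.0
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# One test case: which API tools should the model have invoked?
gold = {("weather_api", "stockholm"), ("calendar_api", "2023-06-01")}
predicted = {("weather_api", "stockholm"), ("search_api", "stockholm")}

p, r, f1 = f1_score(predicted, gold)
print(f"precision={p:.2f} recall={r:.2f} f1={f1:.2f}")  # 0.50 0.50 0.50
```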
12

Efficient Sentiment Analysis and Topic Modeling in NLP using Knowledge Distillation and Transfer Learning

Malki, George. January 2023.
This thesis presents a study in which knowledge distillation techniques were applied to a Large Language Model (LLM) to create smaller, more efficient models without sacrificing performance. Three configurations of the RoBERTa model were selected as "student" models to gain knowledge from a pre-trained "teacher" model, and the models' performance was measured on two downstream tasks: sentiment analysis and topic modeling. Several steps were taken to improve the knowledge distillation process, such as copying some weights from the teacher to the student model and defining a custom loss function. The task selected for the knowledge distillation process was sentiment analysis on the Amazon Reviews for Sentiment Analysis dataset. The resulting student models showed promising performance on the sentiment analysis task, capturing sentiment-related information from text. The smallest of the student models obtained 98% of the teacher model's performance while being 45% lighter and taking less than a third of the time to analyze the entire IMDB Dataset of 50K Movie Reviews. However, the student models struggled to produce meaningful results on the topic modeling task; these results were consistent with the topic modeling results from the teacher model. In conclusion, the study showcases the efficacy of knowledge distillation techniques in enhancing the performance of LLMs on specific downstream tasks. While the model excelled in sentiment analysis, further improvements are needed to achieve desirable outcomes in topic modeling. These findings highlight the complexity of language understanding tasks and emphasize the importance of ongoing research and development to further advance the capabilities of NLP models.
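A minimal PyTorch sketch of the kind of custom distillation loss described above; the temperature, weighting, and layer-copying strategy are assumptions for illustration, not the thesis's actual settings:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      labels: torch.Tensor,
                      temperature: float = 2.0,
                      alpha: float = 0.5) -> torch.Tensor:
    # Soft targets: match the teacher's softened output distribution.
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    # Hard targets: ordinary cross-entropy against the gold sentiment labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * hard + (1 - alpha) * soft

# One common way to "copy some weights" (a hypothetical strategy, not
# necessarily the authors'): initialize a 6-layer student from every other
# layer of a 12-layer RoBERTa teacher:
# student.roberta.encoder.layer[i].load_state_dict(
#     teacher.roberta.encoder.layer[2 * i].state_dict())
```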
13

ChatGPT as a Socially Disruptive Technology: A case study of directors of studies' attitudes toward ChatGPT and its impact on education

Back, Hampus; Fischer, Fredrik. January 2023.
The technological development of large language models has recently received attention through the launch of OpenAI's ChatGPT. There have been discussions of what this means for society at large, but also of how education at universities is affected. The purpose of this study was to examine how much impact these tools have on education at Uppsala University. Five directors of studies from different departments were interviewed. The data was then analyzed using the theory of socially disruptive technologies to investigate the degree of impact. The results show that it is mainly examinations that have been affected: some directors of studies have had to remove, or will remove, take-home assignments as a consequence of ChatGPT. Differences in how this change is managed exist between departments, which seem to be partly due to a lack of guidelines, but also due to educational structure and personal commitment. However, no systematic differences can be established between the different parts of the university. Furthermore, broader questions have been discussed about students' learning and how a director of studies can relate to this development.
14

Bridging Language & Data: Optimizing Text-to-SQL Generation in Large Language Models

Wretblad, Niklas; Gordh Riseby, Fredrik. January 2024.
Text-to-SQL, which involves translating natural language into Structured Query Language (SQL), is crucial for enabling broad access to structured databases without expert knowledge. However, designing models for such tasks is challenging due to numerous factors, including the presence of 'noise', such as ambiguous questions and syntactical errors. This thesis provides an in-depth analysis of the distribution and types of noise in the widely used BIRD-Bench benchmark, and of the impact of noise on models. While BIRD-Bench was designed to model dirty and noisy database values, it was not intended to contain noise and errors in the questions and gold queries. A manual evaluation showed that noise in questions and gold queries is highly prevalent in the financial domain of the dataset, and a further analysis of the other domains indicates the presence of noise in other parts as well. The presence of incorrect gold SQL queries, which in turn generate incorrect gold answers, has a significant impact on the benchmark's reliability. Surprisingly, when evaluating models on corrected SQL queries, zero-shot baselines surpassed the performance of state-of-the-art prompting methods. The thesis then introduces the concept of classifying noise in natural language questions, aiming to prevent noisy questions from entering text-to-SQL models and to annotate noise in existing datasets. Experiments using GPT-3.5 and GPT-4 on a manually annotated dataset demonstrated the viability of this approach, with classifiers achieving up to 0.81 recall and 80% accuracy. Additionally, the thesis explored the use of LLMs for automatically correcting faulty SQL queries, showing a 100% success rate for specific query corrections and highlighting the potential of LLMs for improving dataset quality. We conclude that informative noise labels and reliable benchmarks are crucial to developing new text-to-SQL methods that can handle varying types of noise.
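A hypothetical sketch of the noise-classification step described above: ask an LLM whether a natural-language question is clean enough to send to a text-to-SQL model. The prompt wording and label set are illustrative assumptions, not the thesis's actual setup:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SYSTEM_PROMPT = (
    "You are a data quality filter for a text-to-SQL system. "
    "Label the user's question as CLEAN, AMBIGUOUS, or SYNTAX_ERROR. "
    "Reply with the label only."
)

def classify_question(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4",  # the thesis experiments with GPT-3.5 and GPT-4
        temperature=0,
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content.strip()

label = classify_question("List the balance of all account with loans?")
print(label)  # e.g. "AMBIGUOUS"
```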
15

The future of IT Project Management & Delivery: NLP AI opportunities & challenges

Viznerova, Ester. January 2023.
This thesis explores the opportunities and challenges of integrating recent Natural Language Processing (NLP) Artificial Intelligence (AI) advancements into IT project management and delivery (PM&D). Using a qualitative design based on a hermeneutic phenomenology strategy, the study employs a semi-systematic literature review and semi-structured interviews to examine NLP AI's potential impacts on IT PM&D from both theoretical and practical standpoints. The results revealed numerous opportunities for applying NLP AI across Project Performance Domains, enhancing areas such as stakeholder engagement, team productivity, project planning, performance measurement, project work, delivery, and risk management. However, challenges were identified in areas including system integration, value definition, team- and stakeholder-related issues, environmental considerations, and ethical concerns. In-house and third-party model usage also presented its own set of challenges, concerning cost implications, data privacy and security, result quality, and dependence. The research concludes that the immense potential of NLP AI in IT PM&D is tempered by these challenges and calls for robust strategies, sound ethics, comprehensive training, new ROI evaluation frameworks, and responsible AI usage to effectively manage these issues. This thesis provides valuable insights to academics, practitioners, and decision-makers navigating the rapidly evolving landscape of NLP AI in IT PM&D.
16

Java Unit Testing with AI: An AI-Driven Prototype for Unit Test Generation

Kahur, Katrin; Su, Jennifer. January 2023.
In recent years, artificial intelligence (AI) has become increasingly popular. An area where AI technology is used, and which has received much attention during the past year, is chatbots. They can simulate an understanding of human language and form text responses to questions. Apart from generating text responses, they can also generate programming code, making them useful for tasks such as testing. Although testing is considered a crucial part of software development, many find it tedious and time-consuming. There are currently few AI-powered tools for generating unit tests in general, and even fewer for the programming language Java. This thesis tackles the lack of such tools for Java by exploring the capabilities of AI, and a research question is introduced to that end. The purpose of this thesis is to address the issue by creating a prototype for generating unit tests in Java based on the AI model GPT-3.5-Turbo. The goal is to provide a basis for other professionals to create tools for generating unit tests, which was done by experimenting with different prompts and values of a randomness parameter and then proposing the prototype JUTAI. A quantitative research method with an experimental and comparative approach, using a comparison model with three criteria, was used to evaluate the results. The findings reveal that JUTAI outperformed the general-purpose AI tool ChatGPT across all three criteria, indicating that the goal of this thesis was achieved and the research question answered.
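A rough sketch of the kind of prompt loop a JUTAI-like prototype might use: feed a Java method to GPT-3.5-Turbo and vary the temperature (randomness) parameter. The prompt text and sample method are illustrations, not JUTAI's actual prompt:

```python
from openai import OpenAI

client = OpenAI()

JAVA_METHOD = """
public static int clamp(int value, int min, int max) {
    return Math.max(min, Math.min(max, value));
}
"""

def generate_unit_tests(source: str, temperature: float) -> str:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        temperature=temperature,  # the randomness parameter experimented with
        messages=[
            {"role": "system",
             "content": "You write complete, compilable JUnit 5 test classes."},
            {"role": "user",
             "content": f"Write unit tests for this Java method:\n{source}"},
        ],
    )
    return response.choices[0].message.content

print(generate_unit_tests(JAVA_METHOD, temperature=0.2))
```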
17

Product Description Generation from User Reviews Using an LLM

Gutierrez, Bruno Frederico Maciel. 04 June 2024.
In the context of e-commerce, product descriptions have a great influence on the shopping experience. Well-made descriptions should ideally inform a potential consumer about relevant product details, clarifying potential doubts and facilitating the purchase. Generating good descriptions, however, is a costly activity that traditionally requires human effort, while a large number of products are launched every day. In this context, this work presents a new methodology for the automated generation of product descriptions, using reviews left by users as the source of information. The proposed method consists of three steps: (i) extraction of sentences suitable for a description from the reviews; (ii) selection of sentences among the candidates; and (iii) generation of the product description from the selected sentences using a Large Language Model (LLM) in a zero-shot manner. We evaluate the quality of the descriptions generated by our method by comparing them with real product descriptions posted by the sellers themselves. In this evaluation, we had the collaboration of 30 evaluators and verified that our descriptions are preferred more often than the original descriptions, being considered more informative, readable, and relevant. Furthermore, in the same evaluation we replicated a method from the recent literature and performed a statistical test comparing its results with those of our method; from this comparison we verified that our method generates more informative and preferred descriptions overall.
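A sketch of the three-step pipeline under stated assumptions: the filtering and selection heuristics and the prompt below are placeholders standing in for the thesis's actual extraction and selection models.

```python
from openai import OpenAI

client = OpenAI()

def extract_candidates(reviews: list[str]) -> list[str]:
    """Step (i): keep declarative, product-focused sentences (toy heuristic)."""
    sentences = [s.strip() for r in reviews for s in r.split(".") if s.strip()]
    return [s for s in sentences if "I" not in s.split() and len(s.split()) > 4]

def select_sentences(candidates: list[str], k: int = 5) -> list[str]:
    """Step (ii): pick the k most informative candidates (here: the longest)."""
    return sorted(candidates, key=lambda s: len(s.split()), reverse=True)[:k]

def generate_description(selected: list[str]) -> str:
    """Step (iii): zero-shot generation with an LLM from the selected facts."""
    facts = "\n".join(f"- {s}" for s in selected)
    response = client.chat.completions.create(
        model="gpt-4",  # assumed model; the thesis does not name one here
        messages=[{
            "role": "user",
            "content": "Write a concise product description using only these "
                       f"facts taken from user reviews:\n{facts}",
        }],
    )
    return response.choices[0].message.content
```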
18

Preventing Health Data from Leaking in a Machine Learning System: Implementing code analysis with LLM and model privacy evaluation testing

Janryd, Balder; Johansson, Tim. January 2024.
Sensitive data leaking from a system can have tremendous negative consequences, such as discrimination, social stigma, and fraudulent economic consequences for those whose data has been leaked. It is therefore of utmost importance that sensitive data not leak from a system. This thesis investigated methods to prevent sensitive patient data from leaking from a machine learning system. Various methods were investigated and evaluated based on previous research; the methods used in this thesis are a large language model (LLM) for code analysis and a membership inference attack on models to test their privacy level. The code analysis results show that the Llama 3 model (an LLM) had an accuracy of 90% in identifying malicious code that attempts to steal sensitive patient data. The model analysis can determine whether sensitive patient data used for training can be inferred from a machine learning model, which is essential for assessing the data leakage a model can pose in a machine learning system. Further work on increasing the determinism and improving the formatting of the LLM's responses is needed to ensure the robustness of a security system that utilizes LLMs before it can be deployed in a production environment. Further studies of the model analysis could apply a wider variety of evaluations, such as a larger range of machine learning model types and a broader range of attack types, before implementation into machine learning systems.
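The simplest form of membership inference is a loss threshold: if a trained model's loss on a record is suspiciously low, guess that the record was in the training set. A minimal sketch of that variant (the thesis's attack may be more elaborate; the losses below are toy data):

```python
import numpy as np

def membership_guess(per_example_loss: np.ndarray, threshold: float) -> np.ndarray:
    """1 = predicted training member, 0 = predicted non-member."""
    return (per_example_loss < threshold).astype(int)

# Members of the training set tend to have lower loss than held-out records.
member_losses = np.array([0.05, 0.10, 0.08, 0.30])      # seen in training
nonmember_losses = np.array([0.90, 0.55, 1.20, 0.40])   # held out

losses = np.concatenate([member_losses, nonmember_losses])
truth = np.array([1] * 4 + [0] * 4)
guesses = membership_guess(losses, threshold=0.35)

# Attack accuracy near 0.5 means the model leaks little membership signal;
# accuracy near 1.0 means it leaks badly.
attack_accuracy = (guesses == truth).mean()
print(f"attack accuracy: {attack_accuracy:.2f}")  # 1.00 on this toy data
```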
19

Large Language Models for application review data: Implementation survey of Large Language Models (LLM) to summarize, extract, and analyze key information from user reviews

von Reybekiel, Algot; Wennström, Emil. January 2024.
Manually reviewing user reviews to extract relevant information can be a time-consuming process. This report investigates whether large language models can be used to summarize, extract, and analyze key information from reviews, and how such an application can be constructed. Different models were found to perform differently well depending on the metrics and on the weighting between recall and precision. Furthermore, fine-tuning language models such as Llama 3 improved performance in classifying useful reviews and, by some metrics, led to higher performance than larger language models like Chat-Bison: for reviews translated into English, Llama 3:8b:Instruct, Chat-Bison, and the fine-tuned Llama 3:8b had F4 macro scores of 0.89, 0.90, and 0.91, respectively. A further finding is that the larger models Chat-Bison, Text-Bison, and Gemini performed better than the smaller models tested at generating summary texts when multiple reviews were input at a time. In general, the language models also performed better if reviews were first translated into English before processing, rather than processed in their original language (the majority were written in Swedish). Another insight from the pre-processing phase is that the number of calls to these language models can be minimized by filtering reviews on word length and rating. Beyond the language models themselves, the results showed that vector databases and embeddings can provide a better overview of useful reviews through the databases' built-in ability to find semantic similarities and gather similar reviews into clusters.
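The F4 macro score used above is the F-beta measure with beta = 4, which weights recall far more heavily than precision (F_beta = (1 + beta^2) * P * R / (beta^2 * P + R), averaged over classes). A minimal scikit-learn sketch with toy labels, not the report's data:

```python
from sklearn.metrics import fbeta_score

y_true = [1, 1, 1, 0, 0, 1, 0, 1]  # 1 = useful review, 0 = not useful
y_pred = [1, 1, 0, 0, 1, 1, 0, 1]

# average="macro": compute F4 per class, then take the unweighted mean.
score = fbeta_score(y_true, y_pred, beta=4, average="macro")
print(f"F4 macro: {score:.2f}")  # 0.73 on this toy data
```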
20

A Method for Automated Assessment of Large Language Model Chatbots : Exploring LLM-as-a-Judge in Educational Question-Answering Tasks

Duan, Yuyao; Lundborg, Vilgot. January 2024.
This study introduces an automated evaluation method for chatbots based on large language models (LLMs) in educational settings, utilizing LLM-as-a-Judge to assess their performance. Our results demonstrate the efficacy of this approach in evaluating the accuracy of three LLM-based chatbots (Llama 3 70B, ChatGPT 4, Gemini Advanced) across two subjects: history and biology. The analysis reveals promising performance across both subjects. On a scale from 1 to 5 describing correctness, the LLM judge's average scores when evaluating each chatbot on history-related questions are 3.92 (Llama 3 70B), 4.20 (ChatGPT 4), and 4.51 (Gemini Advanced); for biology-related questions, the average scores are 4.04 (Llama 3 70B), 4.28 (ChatGPT 4), and 4.09 (Gemini Advanced). This underscores the potential of leveraging the LLM-as-a-Judge strategy to evaluate the correctness of responses from other LLMs.
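A hypothetical sketch of an LLM-as-a-Judge call: ask a judge model to grade a chatbot's answer against a reference answer on the 1-to-5 correctness scale. The rubric wording, judge model, and example question are illustrative assumptions, not the study's actual prompt:

```python
from openai import OpenAI

client = OpenAI()

JUDGE_RUBRIC = (
    "You are grading a chatbot's answer to an exam question. Compare it with "
    "the reference answer and reply with a single integer from 1 (entirely "
    "incorrect) to 5 (fully correct), and nothing else."
)

def judge(question: str, reference: str, answer: str) -> int:
    response = client.chat.completions.create(
        model="gpt-4",
        temperature=0,  # deterministic grading
        messages=[
            {"role": "system", "content": JUDGE_RUBRIC},
            {"role": "user", "content": (
                f"Question: {question}\n"
                f"Reference answer: {reference}\n"
                f"Chatbot answer: {answer}"
            )},
        ],
    )
    return int(response.choices[0].message.content.strip())

score = judge("In which year did the Viking Age begin?",
              "Traditionally 793 AD, with the raid on Lindisfarne.",
              "It is usually dated to the 793 raid on Lindisfarne.")
print(score)  # e.g. 5
```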
