11

Generative Language Models for Automated Programming Feedback

Hedberg Segeholm, Lea, Gustafsson, Erik January 2023 (has links)
In recent years, Generative Language Models have exploded into the mainstream with household names like BERT and ChatGPT, showing that text generation has the potential to solve a variety of tasks. As the number of students enrolled in programming classes has increased significantly, providing adequate feedback for everyone has become a pressing logistical issue. In this work, we evaluate the ability of near state-of-the-art Generative Language Models to provide such feedback on an automated basis. Our results show that the latest publicly available model, GPT-3.5, has a significant aptitude for finding errors in code, while the older GPT-3 is noticeably more uneven in its analysis. It is our hope that future, potentially fine-tuned models could help fill the role of providing early feedback for beginners, thus significantly alleviating the pressure put on instructors.
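As a rough illustration of the kind of automated-feedback pipeline evaluated here, the sketch below sends a student submission to a hosted generative model and asks it to point out errors. It is a minimal sketch, assuming the OpenAI Python client; the model name, prompt wording, and helper function are illustrative and not the authors' actual setup.

```python
# Minimal sketch: ask a hosted generative model for feedback on a student
# submission. Model name, prompt wording, and helper name are illustrative.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def feedback_for_submission(code: str, task_description: str) -> str:
    """Ask the model to point out errors and give short, constructive feedback."""
    prompt = (
        "You are a teaching assistant for an introductory programming course.\n"
        f"Task description:\n{task_description}\n\n"
        f"Student submission:\n{code}\n\n"
        "List any errors you find and give short, constructive feedback."
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed; the thesis compares GPT-3 and GPT-3.5
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # deterministic output suits feedback-style use
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    buggy = "def mean(xs):\n    return sum(xs) / len(xs) + 1  # off-by-one bug\n"
    print(feedback_for_submission(buggy, "Compute the arithmetic mean of a list."))
```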
12

Simplify news summary using AI : An analysis of the capabilities and limitations of GPT-3 models

Pålsmark, Josefhina, A. Viklund, Teodor January 2023 (has links)
Every day we are flooded with news from all around the world, and this information can be overwhelming. In our study we analyze the possibilities of implementing GPT-3 models for news summarization in Swedish and of automating this process. We also consider the ethical point of view: whether we can trust these GPT-3 models and give them the responsibility of producing news summaries. We studied three different GPT-3 models: ChatGPT, Megatron and GPT-SW3. We used a quantitative survey method in which participants rated the news summaries produced by the GPT-3 models on three criteria: language, content and structure. We then took the mean value of the survey ratings to obtain the results. The results showed that ChatGPT was clearly the best of the three GPT models on all three criteria, while Megatron and GPT-SW3 performed significantly worse, which shows that these models still need further development to reach the level of ChatGPT. Although ChatGPT was the best-performing GPT-3 model, it still had weaknesses. We noticed this in one article that involved many factors, which meant a great deal of information for the GPT-3 models to condense. Through this study we could confirm that GPT-3 models that are further along in their development, like ChatGPT, can be used for news summarization, but with caution regarding which articles they are given to summarize. This means that GPT-3 models still require human supervision for articles with too much information to condense.
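Purely as an illustration of the aggregation step described above (averaging survey ratings per model and criterion), a sketch along these lines could be used; the column names and example ratings are hypothetical and not the study's data.

```python
# Hypothetical sketch of the aggregation step: mean survey rating per model
# and criterion. Column names and values are invented for illustration.
import pandas as pd

ratings = pd.DataFrame(
    {
        "model": ["ChatGPT", "ChatGPT", "Megatron", "GPT-SW3", "GPT-SW3"],
        "criterion": ["language", "content", "language", "structure", "language"],
        "rating": [4, 5, 2, 3, 2],
    }
)

# Average each model's ratings per criterion, as described in the abstract.
mean_ratings = ratings.groupby(["model", "criterion"])["rating"].mean().unstack()
print(mean_ratings)
```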
13

AI – groundbreaking opportunity, or threatening danger? : A study of Swedish news reporting on AI after the launch of Chat GPT-3.

Rogström, Sanna, Nilsson, Ann-Sofie January 2023 (has links)
The purpose of this study is to examine how AI was portrayed in the two largest Swedish evening newspapers from the launch of ChatGPT-3 in November 2022 until March 2023. The study aims to determine whether the portrayal of AI has been predominantly negative or positive by examining 50 published articles from each newspaper, followed by a closer analysis of two articles. Methodologically, we used quantitative content analysis together with Fairclough's three-dimensional critical discourse analysis, with the focus on textual analysis. Our main finding is that the majority of the analyzed articles portray AI in a predominantly negative way and that the newspapers' headlines tend to be forceful through strong word choices. Many of the articles contained words with deeply negative associations, such as deadly, criminal, and a risk to humanity.
14

Personalization of Automotive Human Machine Interface (HMI) using Machine Learning Algorithms

Rastogi, Utkarsh 30 October 2023 (has links)
In this thesis, a context-aware, personalized virtual assistant for use in automobiles is presented. With the increasing use of technology in automobiles, there is a growing need for safer and more practical ways for drivers to access information and perform tasks while driving. Voice-based interfaces built on natural language processing provide a solution to this problem as they do not require visual or manual input. In this thesis, a fine-tuned model of GPT-3 is used to understand user intentions and identify the user's needs. The voice assistant is trained to understand the environment and the actions it can perform. Triggers such as drowsiness detection are also implemented to make the virtual assistant proactive in ensuring the user's safety. User testing and evaluation were conducted to demonstrate the effectiveness of the context-aware, personalized virtual assistant in improving the driving experience and promoting safe driving practices.
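A highly simplified sketch of the two mechanisms described above (reactive intent understanding via a fine-tuned model, and a proactive drowsiness trigger) is given below; the fine-tuned model identifier, the intent labels, and the sensor hook are hypothetical placeholders, and the chat-style API call stands in for whatever interface the thesis actually used with its fine-tuned GPT-3 model.

```python
# Simplified sketch: reactive intent handling plus a proactive drowsiness trigger.
# The fine-tuned model id, intent labels, and sensor hook are hypothetical.
from openai import OpenAI

client = OpenAI()
FINE_TUNED_MODEL = "ft:automotive-hmi-demo"  # placeholder, not a real model id

def classify_intent(utterance: str) -> str:
    """Map a driver utterance to one of the assistant's supported action labels."""
    response = client.chat.completions.create(
        model=FINE_TUNED_MODEL,
        messages=[
            {"role": "system",
             "content": "Reply with exactly one label: navigation, media, climate, call, other."},
            {"role": "user", "content": utterance},
        ],
        temperature=0,
    )
    return response.choices[0].message.content.strip()

def on_drowsiness_detected() -> str:
    """Proactive prompt issued when a (hypothetical) drowsiness signal fires."""
    return "You seem tired. Should I look for a rest stop nearby?"

if __name__ == "__main__":
    print(classify_intent("Take me to the nearest charging station"))
    print(on_drowsiness_detected())
```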
15

Can ChatGPT be trusted? A qualitative study of students' trust in ChatGPT in a learning context

Härnström, Alexandra, Bergh, Isak Eljas January 2023 (has links)
The world's technological development is advancing rapidly, especially when it comes to "smart" machines and algorithms with the ability to adapt to their surroundings. This is partly due to the enormous amount of available data and partly thanks to increased storage capacity. In November 2022, one of the latest AI-based programs was released: the chatbot ChatGPT. This web-based software can engage in real-time conversations with users by answering text-based questions. By quickly, and often accurately, answering users' questions in a human-like and convincing manner, the service generated a great deal of attention in a short period of time; within two months, ChatGPT had over 100 million users. Several studies show that a large number of people lack a general trust in AI. Some studies argue that the responses generated by ChatGPT cannot always be assumed to be completely accurate and should therefore be followed up with extensive fact-checking, as they may otherwise contribute to the spread of false information. Since trust in AI has been shown to be an important part of how well the technology develops and integrates, a lack of trust in services like ChatGPT can be a hindrance to effective use. Despite the increased productivity observed when companies adopt AI technology, it has not been integrated to the same extent within higher education as an aid for students. By determining the level of trust that students have in ChatGPT in a learning context, valuable information can be obtained to assist in the integration of such AI technology. However, there is a lack of specific research on students' trust in ChatGPT in a learning context. This study therefore aims to fill this knowledge gap by conducting a survey. Our research question is: "What trust do students have in ChatGPT in a learning context?". The survey was conducted through semi-structured interviews with eight students who had used ChatGPT in a learning context. The interviews generated qualitative data that was analyzed using thematic analysis, and the results showed that students' trust in ChatGPT in a learning context depends on several factors. During the analysis, six themes were identified as relevant for answering the research question:
• Experiences
• Usage
• ChatGPT's character
• Influences
• Organizations
• Future trust
16

Contextual short-term memory for LLM-based chatbot / Kontextuellt korttidsminne för en LLM-baserad chatbot

Lauri Aleksi Törnwall, Mikael January 2023 (has links)
The evolution of Language Models (LMs) has enabled building chatbot systems that are capable of human-like dialogues without the need for fine-tuning the chatbot for a specific task. LMs are stateless, which means that an LM-based chatbot has no recollection of the past conversation unless it is explicitly included in the input prompt. LMs have limits on the length of the input prompt, and longer input prompts require more computational and monetary resources, so for longer conversations it is often infeasible to include the whole conversation history in the input prompt. In this project a short-term memory module is designed and implemented to provide the chatbot with context from the past conversation. We introduce two methods, the LimContext method and the FullContext method, for producing an abstractive summary of the conversation history, which captures much of the relevant conversation history in a compact form that can then be supplied with the input prompt in a resource-effective way. To test these short-term memory implementations in practice, a user study was conducted in which the two methods were introduced to 9 participants. Data was collected during the user study and each participant answered a survey after the conversation. These results are analyzed to assess the user experience of each method and to compare the two methods, and to assess the effectiveness of the prompt design for both the answer generation and abstractive summarization tasks. According to the statistical analysis, the FullContext method produced a better user experience, and this finding was in line with the user feedback.
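The core idea, replacing the full conversation history with a running abstractive summary that is prepended to each prompt, might look roughly like the sketch below; the prompt wording, model choice, and OpenAI client usage are assumptions, and the thesis's LimContext and FullContext variants differ in how much of the history they summarize.

```python
# Sketch of a short-term memory module: maintain an abstractive summary of the
# conversation and prepend it to each new prompt instead of the full history.
# Prompt wording and model choice are illustrative assumptions.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-3.5-turbo"  # assumed model

def update_summary(previous_summary: str, user_msg: str, bot_msg: str) -> str:
    """Fold the latest exchange into a running abstractive summary."""
    response = client.chat.completions.create(
        model=MODEL,
        messages=[{
            "role": "user",
            "content": (
                "Update this conversation summary so it stays short but keeps "
                f"all important facts.\nSummary so far: {previous_summary}\n"
                f"User said: {user_msg}\nAssistant said: {bot_msg}"
            ),
        }],
        temperature=0,
    )
    return response.choices[0].message.content

def answer(summary: str, user_msg: str) -> str:
    """Generate a reply using only the compact summary as conversational context."""
    response = client.chat.completions.create(
        model=MODEL,
        messages=[
            {"role": "system",
             "content": f"Context of the conversation so far: {summary}"},
            {"role": "user", "content": user_msg},
        ],
    )
    return response.choices[0].message.content
```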
17

Improving customer support efficiency through decision support powered by machine learning

Boman, Simon January 2023 (has links)
More and more aspects of today's healthcare are becoming integrated with medical technology and dependent on medical IT systems, which consequently puts stricter requirements on the companies delivering these solutions. As a result, companies delivering medical technology solutions need to spend a lot of resources maintaining high-quality, responsive customer support. In this report, possible ways of increasing customer support efficiency using machine learning and NLP are examined at Sectra, a medical technology company. This is done through a qualitative case study, where empirical data collection methods are used to elicit requirements and find ways of adding decision support. Next, a prototype is built featuring a ticket recommendation system powered by GPT-3 and based on 65 000 available support tickets, integrated into the customer support workflow. Lastly, this is evaluated by having six end users test the prototype for five weeks, followed by a qualitative evaluation consisting of interviews and a quantitative measurement of the user-perceived usability of the proposed prototype. The results provide some support for the claim that machine learning can be used to create decision support in a customer support context, as six out of six test users believed that the prototype could improve their long-term efficiency by reducing the average ticket resolution time. However, one of the six test users expressed some skepticism towards the relevance of the recommendations generated by the system, indicating that improvements to the model must be made. The study also indicates that the use of state-of-the-art NLP models for semantic textual similarity can possibly outperform keyword searches.
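The closing point about semantic textual similarity could be illustrated with an embedding-based nearest-neighbour lookup such as the sketch below. Note that it uses the sentence-transformers library rather than the GPT-3-based system described in the thesis, and that the model name and toy tickets are assumptions.

```python
# Sketch of embedding-based ticket recommendation: embed historical tickets once,
# then rank them by cosine similarity to a new incoming ticket.
# Uses sentence-transformers for illustration; model name and tickets are made up.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

historical_tickets = [
    "Image viewer crashes when opening a large CT series",
    "Cannot log in after password reset",
    "Worklist is slow to refresh in the morning",
]
ticket_embeddings = model.encode(historical_tickets, convert_to_tensor=True)

def recommend(new_ticket: str, top_k: int = 2):
    """Return the most semantically similar past tickets with their scores."""
    query = model.encode(new_ticket, convert_to_tensor=True)
    scores = util.cos_sim(query, ticket_embeddings)[0]
    ranked = sorted(zip(historical_tickets, scores.tolist()), key=lambda pair: -pair[1])
    return ranked[:top_k]

if __name__ == "__main__":
    for text, score in recommend("Viewer freezes when loading a big study"):
        print(f"{score:.2f}  {text}")
```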
