81 |
ChatGPT in English Class : Perspectives of students and teachers from Swedish Upper Secondary schools
Zeng, Yuchen; Mahmud, Tanzima, January 2023 (has links)
The study explored the application of the artificial intelligence chatbot ChatGPT in English language teaching (ELT) and learning, examining how Swedish upper secondary school students and teachers perceived ChatGPT in English class. The study collected quantitative data from 63 upper secondary school students through an online questionnaire, and qualitative data from interviews with two upper secondary ELT teachers. The research explores the extent and purposes of students' use of ChatGPT, the changes in ELT instructional practices, and the affordances and challenges of ChatGPT from the teachers' perspectives. The study adopts the Unified Theory of Acceptance and Use of Technology (UTAUT), language teacher cognition, and learner autonomy as theoretical frameworks. The results indicate that students primarily use ChatGPT for brainstorming and inspiration; however, using ChatGPT for English learning has not become popular among students. Changes in instructional practices are noticeable in in-class assessments and activities, and in assistance with lesson planning and material preparation. The affordances of ChatGPT include brainstorming and the promotion of learner autonomy, while the challenges include reliability concerns, limited learning, and issues of academic dishonesty. This underscores the need for careful consideration when incorporating ChatGPT in pedagogical contexts.
|
82 |
Utveckling av AI-verktyg för textgenerering: Ingresser och produktbeskrivningar / Development of an AI Tool for Text Generation: Intros and Product Descriptions
Falkman, Hugo; Sturesson, William, January 2024 (has links)
This research aims to evaluate the potential of a GPT model to streamline editors' work in generating textual content for various products. The main research question is: "Is it possible to integrate a GPT model into the TinyMCE editing platform to streamline editors' work in generating text content for various products within an e-commerce company?" The focus is on facilitating the editing process for editors by providing an integrated solution for text generation, which is expected to increase productivity and the quality of the generated texts. The work resulted in the development of an AI tool integrated with the TinyMCE editing platform, where the GPT model serves as the engine for text generation. The findings demonstrate that the developed tool can effectively produce textual content of satisfactory quality and relevance. By offering a user-friendly, integrated solution for editors, the tool is expected to contribute to increased productivity and efficiency in the editing process. However, because the GPT model tends to generalize and draw its own conclusions when the provided information is insufficiently clear, the tool is not autonomous. It is crucial for editors to carefully review the output to ensure the accuracy and truthfulness of the rendered textual content.
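To make the role of the GPT model as the text-generation engine concrete, the following is a minimal sketch of the kind of model call such a tool could rely on. It assumes the OpenAI Python SDK, an invented product record, an assumed model name, and assumed prompt wording; the actual tool described above is a TinyMCE (JavaScript) plugin, and none of these details are taken from the thesis.

```python
# Minimal sketch of a GPT call that could serve as the text-generation
# engine behind such a tool. Prompt wording, field names, and the model
# name are assumptions, not details from the thesis.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def generate_product_description(product: dict) -> str:
    """Ask the model for a short e-commerce product description."""
    prompt = (
        "Write a concise, factual product description for an e-commerce "
        "site. Use only the information provided and do not invent details.\n\n"
        f"Product data: {product}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; the thesis only specifies "a GPT model"
        messages=[{"role": "user", "content": prompt}],
        temperature=0.3,  # low temperature to limit fabricated details
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    draft = generate_product_description(
        {"name": "Trail Runner 2", "material": "mesh", "weight_g": 240}
    )
    print(draft)  # an editor still has to review the draft before publishing
```

As the abstract notes, such a call is not autonomous: a low temperature and an instruction not to invent details can only reduce, not remove, the need for editorial review of the generated text.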
|
83 |
ChatGPT - Möjligheter och utmaningar : En studie om ChatGPT:s påverkan inom skolans värld / ChatGPT: Opportunities and challenges : A study on the impact of ChatGPT in the world of education
Yousif Harut, Eva, January 2024 (has links)
This study examines both the opportunities and challenges of including ChatGPT in education from the perspective of middle and high school teachers. The main objective is to highlight not only the challenges posed by AI but also its potential benefits in education, as negative opinions about ChatGPT are more commonly heard. The study applies the epistemological perspectives of sociocultural theory and pragmatism to gain a deeper understanding of teachers' views on the opportunities and challenges that ChatGPT can bring to education. The results show that ChatGPT offers several opportunities as well as challenges, which are explained in more detail in the study. The most interesting finding is that critical thinking emerges as a common factor in the teachers' opinions: the ability to critically evaluate information has become more important with the introduction of ChatGPT. Teachers believe that ChatGPT can serve as a valuable complement in education, provided that students learn to use the tool in a critical and responsible manner. This can enhance students' critical thinking and analytical skills, which are crucial for their development.
|
84 |
Decoding Minds: Mentalistic Inference in Autism Spectrum Disorders and ChatGPT Models
Albergo, Dalila, 01 March 2024 (has links)
Mentalistic inference, the process of deducing others’ mental states from behaviour, is a key element of social interactions, especially when challenges arise. Just by observing an action or listening to a verbal description of it, adults and infants are able to make robust and rapid inferences about an agent’s intentions, desires, and beliefs. This thesis considers perspectives from Autism Spectrum Disorders (ASDs) and large language models, specifically GPT models.
Individuals with ASDs struggle to read intentions from movements, but the mechanisms underlying these difficulties remain unknown. In a set of experiments, we combined motion tracking, psychophysics, and computational analyses to examine intention reading in ASDs with single-trial resolution. Single-trial analyses revealed that challenges in intention reading arise both from differences in kinematics between typically developing individuals and those with ASD, and from a diminished sensitivity of intention reading to variations in movement kinematics. This aligns with the idea that internal readout models are tuned to specific action kinematics, supporting the role of sensorimotor processes in shaping cognitive understanding and emphasizing motor resonance, a key aspect of embodied cognition. Targeted training may enhance this ability.
In a second set of experiments, we compared Theory of Mind, a core feature of mentalistic inference, in GPT models and a large sample of human participants. We found that GPT models exhibited human-level abilities in detecting indirect requests, false beliefs, and misdirection, but failed on faux pas. Rigorous hypothesis testing enabled us to show that this failure was only apparent and stemmed from a cautious approach to drawing conclusions rather than from an inference deficit.
Collectively, the results presented in this thesis suggest that the convergence of insights from clinical research and advancements in technology is essential for fostering a more inclusive understanding of mentalistic inferences.
|
85 |
Secure Coding Practice in Java: Automatic Detection, Repair, and Vulnerability Demonstration
Zhang, Ying, 12 October 2023 (has links)
The Java platform and third-party open-source libraries provide various Application Programming Interfaces (APIs) to facilitate secure coding. However, using these APIs securely is challenging for developers who lack cybersecurity training. Prior studies show that many developers use APIs insecurely, thereby introducing vulnerabilities in their software. Despite the availability of various tools designed to identify insecure API usage, their effectiveness in helping developers with secure coding practices remains unclear. This dissertation focuses on two main objectives: (1) exploring the strengths and weaknesses of the existing automated detection tools for API-related vulnerabilities, and (2) creating better tools that detect, repair, and demonstrate these vulnerabilities.
Our research started with investigating the effectiveness of current tools in helping developers with secure coding practices. We systematically explored the strengths and weaknesses of existing automated tools for detecting API-related vulnerabilities. Through comprehensive analysis, we observed that most existing tools merely report misuses, without suggesting any customized fixes. Moreover, developers often rejected tool-generated vulnerability reports due to their concerns about the correctness of detection and the exploitability of the reported issues. To address these limitations, the second work proposed SEADER, an example-based approach to detect and repair security-API misuses. Given an exemplar ⟨insecure, secure⟩ code pair, SEADER compares the snippets to infer any API-misuse template and corresponding fixing edit. Based on the inferred information, given a program, SEADER performs inter-procedural static analysis to search for security-API misuses and to propose customized fixes. The third work leverages ChatGPT-4.0 to automatically generate security test cases. These test cases can demonstrate how vulnerable API usage facilitates supply chain attacks on specific software applications. By running such test cases during software development and maintenance, developers can gain more relevant information about exposed vulnerabilities, and may better create secure-by-design and secure-by-default software. / Doctor of Philosophy / The Java platform and third-party open-source libraries provide various Application Programming Interfaces (APIs) to facilitate secure coding. However, using these APIs securely can be challenging, especially for developers who aren't trained in cybersecurity. Prior work shows that many developers use APIs insecurely, consequently introducing vulnerabilities in their software. Despite the availability of various tools designed to identify insecure API usage, it is still unclear how well they help developers with secure coding practices.
This dissertation focuses on (1) exploring the strengths and weaknesses of the existing automated detection tools for API-related vulnerabilities, and (2) creating better tools that detect, repair, and demonstrate these vulnerabilities. We first systematically evaluated the strengths and weaknesses of the existing automated API-related vulnerability detection tools. We observed that most existing tools merely report misuses, without suggesting any customized fixes. Additionally, developers often reject tool-generated vulnerability reports due to their concerns about the correctness of detection, and whether the reported vulnerabilities are truly exploitable. To address the limitations found in our study, the second work proposed a novel example-based approach, SEADER, to detect and repair insecure API usage. The third work leverages ChatGPT-4.0 to automatically generate security test cases, and to demonstrate how vulnerable API usage facilitates supply chain attacks on given software applications.
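To make the ⟨insecure, secure⟩ exemplar idea concrete, the sketch below shows what such a code pair and its fixing edit might look like. It is purely illustrative: SEADER itself targets Java security APIs and infers templates through inter-procedural static analysis, whereas this is a minimal Python analogue built around an assumed password-hashing misuse, not an example taken from the dissertation.

```python
# Illustrative <insecure, secure> exemplar pair (a Python analogue; SEADER
# itself targets Java security APIs). The "fixing edit" is the change from
# the insecure call pattern to the secure one.
import hashlib
import os

# Insecure exemplar: a fast, unsalted hash used for password storage.
def store_password_insecure(password: str) -> str:
    return hashlib.md5(password.encode()).hexdigest()

# Secure exemplar: a salted, deliberately slow key-derivation function.
def store_password_secure(password: str) -> str:
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt.hex() + ":" + digest.hex()

# An example-based tool in SEADER's spirit would generalize this pair into
# a template ("md5 over a password" -> "pbkdf2_hmac with a random salt")
# and search other programs for code matching the insecure side, proposing
# the secure side as the customized fix.
```

The point of the pair is that the difference between the two snippets, rather than either snippet alone, is what an example-based tool generalizes into a detection-and-repair template.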
|
86 |
Artificiell Intelligens i arbetet : En kvalitativ studie av HR-anställdas upplevelser och erfarenheter av verktyget ChatGPT / Artificial Intelligence in the Workplace : A Qualitative Study of HR Employees' Experiences with ChatGPT
Elfström, Sarah; Nergård, Lova, January 2024 (has links)
The rapid development of Artificial Intelligence (AI) has led organizations and companies to increasingly integrate AI-driven solutions into their operations in order to remain competitive and increase productivity. Advances in AI have had a significant impact on the HR field, and previous research has shown that AI has the potential to make certain HR processes more efficient. Against this background, the purpose of the study was to examine HR employees' experiences of using the AI-driven tool ChatGPT in their work. The study was carried out through qualitative data collection with semi-structured interviews. The results were analysed through a thematic analysis that yielded four themes: efficiency and time savings, quality, ChatGPT as an interactive resource, and concerns about AI development. Both positive and negative experiences of using ChatGPT in HR emerged from the results, including AI's potential to improve efficiency and save time, as well as the risks associated with the tool's bias, privacy protection, and a possible loss of the human aspect of HR. The study contributes to a broader understanding of AI's role in HR, along with the potential opportunities and challenges that come with its implementation.
|
87 |
Usage of Generative AI Based Plugin in Unit Testing : Evaluating the Trustworthiness of Generated Test Cases by Codiumate, an IDE Plugin Powered by GPT-3.5 & 4
Nazari, Ali Reza; Nannicha Thunell, Bow, January 2024 (has links)
Background: Unit testing is essential in software development, ensuring the functionality of individual components such as functions and classes. However, manual creation of unit test cases is time-consuming and tedious, impacting testing efficiency and reliability. Problem: Automated unit test generation tools such as EvoSuite and Randoop have addressed some challenges, but they are limited by language specificity and predefined algorithms. Generative AI tools like ChatGPT and GitHub Copilot, powered by OpenAI's GPT-3.5/4, offer alternatives but face limitations such as reliance on user input and operational inconveniences. Solution: CodiumAI's Codiumate IDE plugin aims to mitigate these limitations, making code quality assurance easier for developers. This study evaluates Codiumate's trustworthiness in generating unit tests for Python functions. Method: We randomly selected thirty functions from OpenAI's HumanEval dataset and wrote selection criteria for relevant test cases based on each function's docstring, evaluating Codiumate's trustworthiness using metrics such as Relevance Score, false positive rate, and result consistency rate. Result: Of all the test cases suggested by Codiumate, 208 unit tests, corresponding to 48% of the suggestions, were relevant. 70% of the assertions from these test cases strictly met the selection criteria, while the remaining 30%, although relevant, were selected based on our judgment and experience in software testing. The average false positive rate is 15%. Function groups with higher Relevance Scores are those of a non-mathematical nature and with simple dependencies. High false positive rates arise in functions with string and float parameters. All generated unit tests are free of syntax errors, with 20% failing and 80% passing across all five test executions. Conclusion: Codiumate demonstrates potential in automating unit test generation, offering a convenient means to support developers. However, it is not yet fully reliable for critical applications without developer oversight. Continued refinement and exploration of its capabilities are essential for Codiumate to become an indispensable asset in unit test generation, enhancing its trustworthiness and effectiveness in the software development process.
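To illustrate the evaluation setup, the sketch below shows a HumanEval-style Python function whose docstring drives the selection criteria, together with one generated-style test that would count as relevant and one assertion that could register as a false positive. The function, tests, and criteria are invented for illustration and are not taken from the study or from Codiumate's actual output.

```python
# Hypothetical HumanEval-style function and generated-style tests.
# The function, docstring, and assertions are illustrative assumptions,
# not material from the study or from Codiumate.
import pytest

def truncate_number(number: float) -> float:
    """Given a positive floating point number, return its decimal part,
    i.e. the remainder after removing the integer part."""
    return number - int(number)

# A "relevant" test: every assertion follows directly from the docstring,
# so it would meet the kind of selection criteria described in the study.
def test_truncate_number_returns_decimal_part():
    assert truncate_number(3.5) == 0.5
    assert truncate_number(1.25) == 0.25
    assert truncate_number(7.0) == 0.0

# A candidate false positive: the docstring only promises behaviour for
# positive inputs, so this assertion checks unspecified behaviour and
# fails even though the function satisfies its specification.
@pytest.mark.xfail(reason="asserts behaviour the docstring never specifies")
def test_truncate_number_negative_input():
    assert truncate_number(-3.5) == 0.5
```

In the study's terms, assertions that follow strictly from the docstring meet the selection criteria, while assertions about behaviour the docstring never specifies are the kind that can inflate the false positive rate.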
|
88 |
Användning av ChatGPT i matematikundervisning : En studie om möjligheter och utmaningar med ChatGPT / Use of ChatGPT in Mathematics Education : A Study on Opportunities and Challenges with ChatGPT
Hussain, Leon, January 2024 (has links)
This degree project examines the use of ChatGPT in mathematics education, focusing on mapping the opportunities and challenges that arise. Through a qualitative literature study, a range of articles was analysed in order to identify how ChatGPT is integrated and what consequences this may have for both teachers and students. The study shows that ChatGPT has the potential to offer tailored feedback that can lead to more effective learning for students. AI such as ChatGPT can facilitate lesson planning, create customised practice problems, and help students develop problem-solving skills and analytical thinking. The work also found that ChatGPT can contribute to a deeper understanding of mathematics, for example through collaborative learning. Beyond these opportunities, several challenges were also identified. These included the risk of students becoming dependent on AI aids, which can lead to reduced independent thinking. There is also a risk that ChatGPT gives incorrect answers and may need to be complemented by guidance from a teacher. In summary, this degree project shows that ChatGPT can serve as a valuable tool for both students and teachers in mathematics education, but that it is important to balance its use with traditional teaching strategies.
|
89 |
ChatGPT, lärarens dröm eller mardröm? : En studie om ChatGPT:s möjligheter och risker inom svenskämnet / ChatGPT, the Teacher's Dream or Nightmare? : A Study of ChatGPT's Opportunities and Risks in the Subject of Swedish
Berndtsson, Mathias; Utne, Dante, January 2024 (has links)
The aim of the study is to examine the integration of artificial intelligence (AI) in the subject of Swedish, focusing on the tool ChatGPT. Using a qualitative method comprising semi-structured interviews with nine practising teachers, this study explores both the opportunities and the challenges of integrating ChatGPT into teaching. To analyse the results and understand their pedagogical opportunities and risks, the study uses the TPACK model as its theoretical framework. The results show that the teachers identify several advantages of using ChatGPT, such as the possibility of individualised teaching and real-time formative assessment. At the same time, they also highlight several potential risks, including the risk of academically dishonest behaviour and reduced personal guidance in the classroom. The study emphasises the importance of clear guidelines and policies to regulate the use of ChatGPT in schools and advocates teacher training to promote an ethical and effective use of AI tools in teaching. Finally, the study shows the importance of teachers' participation and pedagogical knowledge in the assessment process, even when AI tools are used. By understanding both the positive and negative aspects of integrating ChatGPT into teaching, the school system can better manage the challenges and opportunities that come with the rise of AI technology.
|
90 |
Getting the general public to create phishing emails : A study on the persuasiveness of AI-generated phishing emails versus human methods
Ekekihl, Elias, January 2024 (has links)
Artificial Intelligence (AI) is becoming increasingly widespread and is, for the most part, freely available to anyone. While AI can be used for both good and bad, the potential for misuse exists. This study focuses on the intersection of AI and cybersecurity, with an emphasis on AI-generated phishing emails. A mixed-method approach was applied, and an experiment, interviews, and a survey were conducted. The experiment and interviews were conducted with nine participants from various backgrounds, all novices in phishing. In the experiment, phishing emails were created in three distinct ways: Human-Crafted, Internet-aided, and AI-generated. The emails were evaluated during semi-structured interviews, and each participant reviewed six emails in total, two of which were real phishing emails. The results from the interviews indicate that AI-generated phishing emails are as persuasive as those created in the Human-Crafted task. By contrast, in the survey, which was answered by 100 participants, respondents ranked the AI-generated phishing email as the most persuasive, followed by the Human-Crafted one. Familiarity plays a crucial part in both persuasiveness and willingness to go along with the requests in the phishing emails, which was highlighted in both the interviews and the survey. Urgency was seen as very negative by both respondents and interviewees. The results highlight the potential for misuse, specifically through the creation of AI-generated phishing emails, and research into protection measures should not be overlooked. Adversaries can already use AI, as it is right now, to their advantage.
|