21

ChatGPT as a Software Development Tool : The Future of Development

Hörnemalm, Adam January 2023 (has links)
The purpose of this master’s thesis was to research and evaluate how ChatGPT can be used as a tool in software developers’ daily work. The thesis was conducted in two phases: an initial exploration phase and a data collection phase. In the exploration phase, five senior developers were interviewed about their day-to-day work, their opinions on generative AI, and the software development profession as a whole. From these interviews, a theoretical foundation for software development was formed, categorizing a developer’s daily work tasks into coding, communication, or planning. This foundation then served as the basis for the tasks and interviews used in the data collection phase, in which seven developers, ranging from students to industry veterans, completed a set of representative tasks with the help of ChatGPT and afterwards participated in an interview. The tasks were derived from the theoretical foundation and were intended to represent the work software developers do in their day-to-day activities. Based on the tasks and interviews, ChatGPT did help make developers more effective at coding- and planning-based tasks, but not without risk: junior developers trusted and relied more heavily on the answers given by ChatGPT. Although ChatGPT showed a positive effect, the tooling still needs improvement; the developers had trouble with text formatting in the communication-based tasks and expressed a desire for more integrated tooling. This desire was not unexpected, since all of the developers involved showed interest in working with generative AI tooling for work-related tasks in the future.
22

Generativ AI & kommunikatörer : En kvalitativ analys: om ny teknologi och hur förutsättningar förändras / Generative AI & communicators : A qualitative study: how new technology change conditions

Palomaa, Anton, Berggren, Lukas January 2024 (has links)
This study examines how new technologies, particularly generative artificial intelligence (AI), have become an innovative tool for communicators and how they affect communicators’ productivity and creativity. Through a combination of literature review, theoretical frameworks, and empirical research, we analyze how communicators integrate generative AI into their work process and how this affects their workflow and working conditions. The study is based on the following questions: How are software-based text chat robots used by communicators in their professional role? To what extent do communicators perceive an impact on creativity and productivity when co-writing between human and machine? What opportunities and challenges do communicators imagine software-based text chat robots can contribute? The findings indicate that generative AI has the potential to transform the communications industry by increasing efficiency and freeing up time for more strategic thinking and creativity. Communicators report increased productivity and that generative AI helps them manage large bodies of text in an agile way. At the same time, the study identifies challenges and potential risks with the use of generative AI, among them issues related to ethics, quality assurance, and the need to maintain human control and creative input in the creation process. Communicators are aware of these challenges and emphasize the importance of balancing automation with human skills and insight. Finally, the study highlights the opportunities and challenges of the use of generative AI for communicators and identifies areas for future research and development. By understanding the potential benefits and limitations of this technology, communicators can develop strategies to maximize its positive effects and manage its challenges effectively.
23

Generative AI and EU Copyright Law: Exploring Exceptions and the Derivative Works Concept

Danda, Clemens 28 November 2023 (has links)
The text explores the challenges that generative AI poses to EU copyright law, focusing on two main issues: the use of copyrighted materials in developing AI models and the publication of generated digital content. The inquiry assesses the applicability of existing copyright exceptions for tasks like data mining, temporary reproduction, and database rights during the development of AI models. For the publication of generated content, the focus is on determining conditions for legal recognition as a derivative work. The text argues that generative AI falls under the flexible concepts of Arts. 3 and 4 CDSMD, with potential support for AI models generating marketing or entertainment content. However, existing exceptions do not fully support the generative AI development process. Commercial deployment of generated output may not be covered by exceptions, and its classification as a lawful derivative work depends on further clarification from the EU legislator or CJEU. The text suggests that non-authorial output should be allowed as derivative works, considering the low threshold for originality and recognizability criteria. To be lawful, derivative AI works should incorporate original parts that fade into the background, with personal style not protected by copyright but considered in an adapted derivatives test. Fair remuneration is proposed for generative AI services to address economic impacts on creatives.
24

An In-Depth Study on the Utilization of Large Language Models for Test Case Generation

Johnsson, Nicole January 2024 (has links)
This study investigates the utilization of Large Language Models for test case generation. It uses the large language model and embedding model provided by Llama, specifically Llama 2 at the 7B size, to generate test cases from a defined input. The implementation uses two customization techniques: Retrieval Augmented Generation (RAG) and Prompt Engineering. RAG, as used in this study, stores organisation information locally and uses it when creating test cases. This stored data complements the data the large language model was pre-trained on, so the implementation can draw on specific organisation data and thereby gain a better understanding of the required domains. The objective of the study is to investigate how AI-driven test case generation impacts overall software quality and development efficiency. This is evaluated by comparing the output of the AI-based system to manually created test cases, which were the company standard at the time of the study. The AI-driven test cases are analyzed mainly in terms of coverage and time: coverage measures to which degree the AI system can generate test cases compared to the manually created ones, and time is taken into consideration to understand how development efficiency is affected. The results reveal that by using Retrieval Augmented Generation in combination with Prompt Engineering, the system is able to identify test cases to a certain degree: 66.67% of a specific project was identified using the AI, although minor noise could appear and results may differ depending on the project’s complexity. Overall, the results show how the system can positively impact development efficiency and can be argued to have a positive effect on software quality. However, the implementation at its current stage is not sufficient to be used independently; rather, it should be used as a tool to create test cases more efficiently.
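A minimal sketch of the retrieval-augmented setup described above, assuming a generic sentence-embedding model and placeholder organisation documents; the thesis’s actual Llama 2 models, data, and prompts are not reproduced here:

```python
# Illustrative RAG sketch for test-case generation. The embedding model and
# documents below are stand-ins, not the models or data used in the study.
import numpy as np
from sentence_transformers import SentenceTransformer  # stand-in embedding model

encoder = SentenceTransformer("all-MiniLM-L6-v2")

# Locally stored organisation information (requirements, specs, old test plans).
org_docs = [
    "Requirement R-12: the login service must lock an account after 5 failed attempts.",
    "Spec S-3: exported reports must be available as CSV and PDF.",
]
doc_vecs = encoder.encode(org_docs, normalize_embeddings=True)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k organisation documents most similar to the query."""
    q = encoder.encode([query], normalize_embeddings=True)[0]
    scores = doc_vecs @ q                      # cosine similarity (vectors are normalized)
    return [org_docs[i] for i in np.argsort(scores)[::-1][:k]]

def build_prompt(feature: str) -> str:
    """Augment the generation prompt with retrieved organisation context."""
    context = "\n".join(retrieve(feature))
    return (
        "You are a QA engineer. Using the context below, write test cases "
        f"(title, steps, expected result) for: {feature}\n\nContext:\n{context}"
    )

# The resulting prompt would then be passed to the local Llama 2 7B model.
print(build_prompt("account lockout after repeated failed logins"))
```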
25

Exploring artificial intelligence bias : a comparative study of societal bias patterns in leading AI-powered chatbots.

Udała, Katarzyna Agnieszka January 2023 (has links)
The development of artificial intelligence (AI) has revolutionised the way we interact with technology and each other, both in society and in professional careers. Although they come with great potential for productivity and automation, AI systems have been found to exhibit biases that reflect and perpetuate existing societal inequalities. With the recent rise of artificial intelligence tools built on large language model (LLM) technology, such as ChatGPT, Bing Chat and Bard AI, this research project aims to investigate the extent of AI bias in these tools and explore its ethical implications. By reviewing and analysing the responses that three different AI chatbot tools generate to carefully crafted prompts, the author intends to determine whether the content generated by these tools indeed exhibits patterns of bias related to various social identities, and to compare the extent to which such bias is present across all three tools. This study will contribute to the growing body of literature on AI ethics and inform efforts to develop more equitable and inclusive AI systems. By exploring the ethical dimensions of AI bias in selected LLMs, this research will shed light on the broader societal implications of AI and the role of technology in shaping our future.
26

Co-creating Futures for Integrating Generative AI into the Designers’ Workflow / Samskapa framtider för att integrera generativ AI i designers arbetsflöde

Popova, Victoria January 2023 (has links)
In recent years Generative AI tools have become increasingly ubiquitous and have given rise to much discussion concerning their impact on jobs, both in personal use and in corporate settings. Despite Generative AI being a rapidly growing field, there is currently a research gap regarding the adoption of these tools across different domains. This study aims to fill this gap by contributing knowledge on how designers might integrate Generative AI into their workflows. Using a Research through Design (RtD) approach, three workshops were held in which designers used generative AI tools to co-create design fictions envisioning how AI tools might permeate their future workflows. Thematic analysis of the workshop data revealed both desirable and undesirable futures from the designers’ perspectives, situating AI at various stages of design, from assisting designers with mundane tasks to helping with ideation and testing. The futures prompted reflections on designers’ control of the workflow, the dynamics of human-AI collaboration, and the evolving role of the designer. The study contributes knowledge about the forms human-AI interaction could take in the near future and highlights the importance of careful consideration when deploying these tools in a human-centric manner.
27

Anonymizing Faces without Destroying Information

Rosberg, Felix January 2024 (has links)
Anonymization is a broad term meaning that personal data, or rather data that identifies a person, is redacted or obscured. In video and image data, the most palpable such information is the face. Faces barely change compared to other aspects of a person, such as clothes, and people already have a strong sense for recognizing faces. Computers are also adroit at recognizing faces, with facial recognition models being exceptionally powerful at identifying and comparing them. It is therefore generally considered important to obscure the faces in video and image data when aiming to keep it anonymized. Traditionally this is done simply through blurring or masking, but that destroys useful information such as eye gaze, pose, expression, and the fact that it is a face at all. This is a particular issue because our society is data-driven in many respects. One obvious example is autonomous driving and driver monitoring, where necessary algorithms such as object detectors rely on deep learning to function. Given the data hunger of deep learning, together with society’s call for privacy and integrity through regulations such as the General Data Protection Regulation (GDPR), anonymization that preserves useful information becomes important. This thesis investigates the potential and possible limitations of anonymizing faces without destroying the aforementioned useful information. The base approach is face swapping and face manipulation, where current research focuses on changing the face (or identity) while keeping the original attribute information, all while remaining incorporated and consistent in the image and/or video. Specifically, this thesis demonstrates how target-oriented and subject-agnostic face swapping methodologies can be utilized for realistic anonymization that preserves attributes. In doing so, the thesis presents several approaches that are: 1) controllable, meaning the proposed models do not naively change the identity; the kind and magnitude of identity change is adjustable and thus tunable to guarantee anonymization; 2) subject-agnostic, meaning the models can handle any identity; and 3) fast, meaning the models run efficiently and thus have the potential to run in real time. The end product is an anonymizer that achieved state-of-the-art performance on identity transfer, pose retention, and expression retention while remaining realistic. Apart from identity manipulation, the thesis demonstrates potential security issues, specifically reconstruction attacks, in which a bad-actor model learns the convolutional traces/patterns in anonymized images in such a way that it can completely reconstruct the original identity. The bad-actor network can do this with simple black-box access to the anonymization model by constructing a pair-wise dataset of unanonymized and anonymized faces. To alleviate this issue, different defense measures that disrupt the traces in the anonymized images were investigated. The main takeaway is that an anonymization which qualitatively looks convincing does not necessarily hide the identity, which makes robust quantitative evaluation important.
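A minimal sketch of the reconstruction-attack setup described above, assuming a placeholder anonymize() function standing in for black-box access to the anonymizer and synthetic tensors standing in for face crops; the thesis’s actual models and data are not reproduced here:

```python
# Illustrative reconstruction attack: build a pair-wise dataset via black-box
# queries, then train a small "bad-actor" network to undo the anonymization.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

def anonymize(x: torch.Tensor) -> torch.Tensor:
    """Placeholder for black-box queries to the anonymization model."""
    return x  # in a real attack, this would return the anonymized face

# 1) Pair-wise dataset of (anonymized face, original face).
originals = torch.rand(256, 3, 128, 128)            # stand-in face crops
anonymized = torch.stack([anonymize(x) for x in originals])
pairs = DataLoader(TensorDataset(anonymized, originals), batch_size=32, shuffle=True)

# 2) A small convolutional network that learns to exploit traces/patterns
#    left in the anonymized images to reconstruct the original identity.
reconstructor = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 3, padding=1),
)
opt = torch.optim.Adam(reconstructor.parameters(), lr=1e-3)

for anon, orig in pairs:                             # one epoch, for illustration
    loss = nn.functional.l1_loss(reconstructor(anon), orig)
    opt.zero_grad()
    loss.backward()
    opt.step()
```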
28

Prompting for progression : How well can GenAI create a sense of progression in a set of multiple-choice questions? / Prompt för progression : Hur bra kan GenAI skapa progression i en uppsättning flervalsfrågor?

Jönsson, August January 2024 (has links)
Programming education is on the rise, leading to an increase in the learning resources needed by universities and online courses. Questions are crucial for promoting good learning and for giving students ample practice opportunities, and learning a subject relies heavily on a structured progression of topics and complexity. Yet creating large numbers of questions has proven to be a time-consuming task. Recently, the technology world has been introduced to Generative AI (GenAI) systems using Large Language Models (LLMs) capable of generating large amounts of text and performing other text-related tasks. How can GenAI be used to solve problems related to creating learning materials while ensuring good quality? This study investigates how well GenAI can create a sense of progression in a set of programming questions based on different prompt strategies. The method involves three question-generation cases using the Chat-GPT API, followed by a qualitative evaluation of the questions’ complexity, order, and quality. The first case is the simplest way of asking Chat-GPT to generate 10 MCQs about a specific topic. The second case introduces defined complexity levels and a desire for logical order and progression in complexity. The final case is a more advanced prompt that builds on the second case and adds a skill map as inspiration for the LLM. The skill map is a structured outline that highlights the key points in learning a topic. According to the results, providing more instructions together with a skill map had a better impact on the progression of the generated questions than a simpler prompt did. The first-case prompt still resulted in questions with good order, but they lacked increasing complexity. The results indicate that while GenAI is capable of creating questions with a good progression that could be used in a real teaching context, quality control of the content is still required to find outliers. Further research should investigate optimal prompts and what constitutes a good skill map.
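A minimal sketch of the three prompt cases, assuming the openai Python SDK; the topic, skill map, model name, and prompt wording are illustrative stand-ins rather than the prompts used in the study:

```python
# Illustrative comparison of the three question-generation cases via the Chat-GPT API.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
TOPIC = "Python for-loops"
SKILL_MAP = "1. loop syntax  2. range()  3. iterating collections  4. nested loops  5. loop pitfalls"

prompts = {
    # Case 1: the simplest possible request.
    "simple": f"Generate 10 multiple-choice questions about {TOPIC}.",
    # Case 2: defined complexity levels plus an explicit wish for logical order and progression.
    "complexity": (
        f"Generate 10 multiple-choice questions about {TOPIC}. Order them in a logical "
        "teaching order, progressing from complexity level 1 (recall) to level 5 (analysis)."
    ),
    # Case 3: case 2 plus a skill map as inspiration for the LLM.
    "skill_map": (
        f"Generate 10 multiple-choice questions about {TOPIC}. Order them logically with "
        f"increasing complexity (levels 1-5), using this skill map as inspiration:\n{SKILL_MAP}"
    ),
}

for case, prompt in prompts.items():
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {case} ---\n{reply.choices[0].message.content}\n")
```

The generated question sets would then be evaluated qualitatively for complexity, order, and quality, as described above.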
29

Unpacking a Hierarchy of Trust : The Impacts of Trust in Mediating User Experiences with AI Avatar Technology

McTaggart, Christopher January 2024 (has links)
This research addresses the growing applications and impacts of AI-generated digital human avatars from software suites like HeyGen. By exploring the role of trust in mediating user interaction with such technology, this study establishes a basic hierarchical model which supports some foundational theories of human-computer interaction, while also calling into question some more recent theories and models previously used to evaluate avatar technology. By modeling user behavior and user preference through the lens of trust, this study is able to demonstrate how this emerging technology is similar to its predecessors and their relevant theories, while also establishing this technology as something distinctly new and largely untested. This research serves as an exploratory study, using notions of social presence, anthropomorphic design, social trust, technological trust, and human source-bias to separate this generation of AI Avatar technology from its predecessors, and determine what theories and models govern the use of this new technology. The findings from this study and their impacts on use-cases are then applied, speculating on prosocial as well as potentially unethical uses of such technology. Finally, this study problematizes the loss of “primary trust” that this technology may afford, highlighting the importance not only of continued research, but also rapid oversight in the deployment of this emerging technology.
30

The Intersection of AI-Generated Content and Digital Capital : An Exploration of Factors Impacting AI-Detection and its Consequences

Basta, Zofie January 2024 (has links)
This thesis investigates the capacity of individuals to detect AI-generated text and the indicators that enable them to do so. The inquiry is situated in the broader theoretical context of digital capital, the digitization of society, deep mediatization, and AI literacy. Using a quantitative correlational approach, the study tested participants’ accuracy in detecting AI content and looked for factors shared by participants who scored highly on this task. Participants were assessed on a number of self-reported demographic, digital capital, and digital society-based benchmarks in conjunction with AI detection accuracy. The study employed a mix of statistical methods, including logistic regression and point-biserial correlation matrices. However, only a few specific questions within the digital capital and digital society framework had a statistically significant impact on a participant being in the high-accuracy group, and these correlations were weak. Furthermore, two aspects of digital capital actually had a negative effect on the odds of scoring high on the text detection task. The findings reveal that there is room for more research into which indicators influence human AI-detection capabilities and whether these skills are learnable or inherent to certain individuals. Moreover, the research highlights the necessity of fostering AI literacy, particularly if such literacy improves human AI detection. While AI systems can ‘catch’ AI-generated text, their efficacy is mixed, and producers and evaluators of AI text are constantly locked in a game of cat-and-mouse, using evolving AI to recognize evolving AI. Thus, human skills are pivotal, lest we become even more dependent on technology in our deeply mediatized society.
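A minimal sketch of the two statistical methods mentioned above, run on synthetic data since the study’s survey items and accuracy threshold are not reproduced here:

```python
# Illustrative use of point-biserial correlation and logistic regression with a
# binary "high-accuracy group" outcome and self-reported predictor scores.
import numpy as np
from scipy.stats import pointbiserialr
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200

# Synthetic predictors: self-reported digital-capital / digital-society items (1-7 Likert).
X = rng.integers(1, 8, size=(n, 3)).astype(float)
# Synthetic outcome: 1 = participant placed in the high-accuracy detection group.
high_accuracy = (rng.random(n) < 0.4).astype(int)

# Point-biserial correlation between the binary outcome and each predictor.
for j in range(X.shape[1]):
    r, p = pointbiserialr(high_accuracy, X[:, j])
    print(f"item {j}: r={r:.2f}, p={p:.3f}")

# Logistic regression: how the predictors affect the odds of being in the high-accuracy group.
model = LogisticRegression().fit(X, high_accuracy)
print("odds ratios:", np.exp(model.coef_).round(2))
```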
