121

Why won't you let (A)I help you? : A quantitative study that explains the effects of AI perceptions on willingness to disclose personal information to AI

Benda, Tim, Lind, Vincent January 2021 (has links)
Purpose: The purpose of the study is to explain the effects of perceived benefits of AI and perceived privacy concerns of AI on willingness to disclose personal information to AI, and to examine the moderating effect of perceived knowledge of AI.

Design/methodology/approach: In line with the explanatory purpose, a deductive research approach was applied. The researchers used a quantitative method in the form of a questionnaire, from which 193 valid responses were collected. Ten hypotheses were formulated to investigate the relationships under study.

Findings: Perceived knowledge of AI does not positively moderate the effect of any of the perceived benefits of AI, or of perceived privacy concerns of AI, on willingness to disclose personal information to AI. Perceived privacy concerns of AI have a negative effect on willingness to disclose personal information to AI, while perceived personalization, health, and financial benefits of AI have a positive effect.

Research contributions/limitations: The research contributes to current work by highlighting the importance of context in the privacy calculus, improving the model's ability to explain variation. It is limited by data skewed toward younger respondents, so the study is representative of a younger Swedish sample.

Practical implications: Both businesses and policymakers should take into account that individuals weigh perceived privacy concerns of AI more heavily than its benefits when it comes to disclosing personal information to AI. This highlights the importance of educating individuals in how AI actually functions: the benefits are valued, but valuing them does not by itself make individuals more willing to disclose personal information to AI.

Originality/value: The study applies the privacy calculus in the context of AI, which has not been done before. Additionally, by incorporating specific benefits rather than a general perception of AI benefits, the study explains more precisely how different benefits of AI affect individuals' willingness to disclose personal information to AI.
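For readers unfamiliar with how a moderating effect like the ones hypothesized here is tested statistically, a minimal sketch follows. This is not the thesis's own analysis code; the variable names, scales, and simulated data are invented for illustration, using Python and statsmodels:

```python
# Hypothetical sketch of testing a moderation hypothesis: does perceived
# knowledge of AI moderate the effect of a perceived benefit on willingness
# to disclose? All names and data below are illustrative placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 193  # sample size matching the study's valid response count

# Simulated 7-point Likert-style scores (not the study's data)
df = pd.DataFrame({
    "benefit": rng.integers(1, 8, n),    # e.g. perceived personalization benefit
    "knowledge": rng.integers(1, 8, n),  # perceived knowledge of AI
})
df["disclose"] = (
    0.4 * df["benefit"] - 0.1 * df["knowledge"]
    + rng.normal(0, 1, n)                # no true interaction in this toy data
)

# Mean-center predictors so the interaction coefficient is interpretable
df["benefit_c"] = df["benefit"] - df["benefit"].mean()
df["knowledge_c"] = df["knowledge"] - df["knowledge"].mean()

# The benefit_c:knowledge_c coefficient estimates the moderating effect;
# a non-significant estimate would mirror the study's null moderation finding
model = smf.ols("disclose ~ benefit_c * knowledge_c", data=df).fit()
print(model.summary())
```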
122

Artificial Intelligence from a Stakeholder Perspective : A Qualitative Study of How Stakeholders Are Handled and Affected by Implementing AI-Systems.

Johansson, Julia, Schwabe, Stephanie January 2021 (has links)
Research question: In what ways are an organization's stakeholders handled and affected by the implementation of AI systems?

Purpose: The purpose of this study is to map, based on the perceptions of an organization's stakeholders, in what ways stakeholders are handled and affected by the implementation of AI systems.

Method: The study is based on a qualitative research strategy with a deductive approach. It is a case study of Länsförsäkringar, and the empirical material was collected through ten semi-structured interviews.

Conclusion: Our study shows that the implementation of Länsförsäkringar's chatbot affects the employees, whereas the potential further development of AI tends to affect several stakeholder groups. The results also reveal difficulties in identifying the organization's stakeholders, as well as difficulties in prioritizing and valuing them, which is broadly in line with established theory on the stakeholder model. We therefore conclude that Länsförsäkringar should identify its stakeholders and how they are affected by the development of AI in order to know how those stakeholders should be handled.
123

Technology and the Value of Trust : Can Society Trust AI?

Janus, Dominika January 2022 (has links)
Ensuring "public trust" in AI seems to be a priority for policymakers and the private sector. It is expected that without public trust, such innovations cannot be implemented with legitimacy, and there is a risk of potential public backlash or resistance (for example cases of Cambridge Analytica, predictive policing, or Clearview AI). There is a rich body of research relating to public trust in data use that suggests that "building public trust" can too often place the burden on the public to be "more trusting" and will do little to address other concerns, including whether trust is a desirable and attainable characteristic of human-AI relation. I argue that there is good reason for the public not to trust AI, especially in the absence of regulatory structures that afford genuine accountability, but at the same time AI can be considered reliable. To that end, the main argument of this paper is 1. We are asked to trust an entity that cannot enter the trust relationship, because it doesn’t fulfil the conditions spelled out by the definitions of trust. 2. We are presented with a misdescription of the agent. Who we trust in fact are developers or policy makers. I also argue that the term "reliance" should be used instead of "trust", as by definition it is more fitting current AI applications. Additionally, the focus should be on framing trust as part of practices expected from AI solution providers, developers and regulators.
124

Beating Humans at their own Game

Stöckel, Frank January 2022 (has links)
Creating an artificial intelligence that can play games such as chess against humans has been a popular subject of research since its roots in the 1950s. Over the following 70 years, strategies and algorithms evolved to the point where an average computer running a state-of-the-art chess AI easily beats the best human players. This was not always the case: it took until 1997 for the reigning world champion, Garry Kasparov, to lose to the chess engine Deep Blue. This thesis presents a survey of the different techniques developed and refined over time in chess AI, along with their strengths and weaknesses. Finally, an implementation of a chess engine built for this thesis project is presented.
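For context on the kind of technique such a survey covers, below is a minimal sketch of minimax search with alpha-beta pruning, the classical backbone of chess engines. The toy move generator and evaluation function are placeholders standing in for real chess rules; this is not the engine built for the thesis:

```python
# Minimax with alpha-beta pruning over a toy game tree. A real chess engine
# would supply legal-move generation and a positional evaluation function.
import math

def alphabeta(state, depth, alpha, beta, maximizing, moves, evaluate):
    children = moves(state)
    if depth == 0 or not children:
        return evaluate(state)
    if maximizing:
        value = -math.inf
        for child in children:
            value = max(value, alphabeta(child, depth - 1, alpha, beta, False, moves, evaluate))
            alpha = max(alpha, value)
            if alpha >= beta:
                break  # beta cutoff: the minimizing opponent avoids this line
        return value
    else:
        value = math.inf
        for child in children:
            value = min(value, alphabeta(child, depth - 1, alpha, beta, True, moves, evaluate))
            beta = min(beta, value)
            if beta <= alpha:
                break  # alpha cutoff: the maximizer already has a better option
        return value

# Toy game tree: a "state" is a list of child states, or a leaf score
moves = lambda s: s if isinstance(s, list) else []
evaluate = lambda s: s
tree = [[3, 5], [6, [9, 1]], [1, 2]]
print(alphabeta(tree, 3, -math.inf, math.inf, True, moves, evaluate))  # -> 6
```

Modern engines layer move ordering, transposition tables, and learned evaluation functions on top of this core, which is the kind of evolution the survey traces.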
125

ChatGPT as a Supporting Tool for System Developers : Understanding User Adoption

Andersson, Mattias, Marshall Olsson, Tom January 2023 (has links)
Background: AI, specifically conversational AI like OpenAI's ChatGPT, is rapidly expanding in personal and professional settings, offering cost-cutting and modernization opportunities for businesses. This technology, capable of simulating human-like conversations, holds promise across various industries, potentially enhancing productivity through human-AI collaboration. The main research problem is to identify factors influencing system developers' adoption of ChatGPT, considering its design and implementation to mitigate potential negative impacts.

Aim: This study aims to investigate the factors that influence user adoption of ChatGPT as a tool to support system developers. Additionally, it aims to identify how ChatGPT can aid system developers in their daily work, and the challenges associated with incorporating ChatGPT in this context.

Method: Using a case study approach with qualitative and quantitative data collection methods, the study employs positivist and interpretivist philosophical paradigms.

Results: The perceived ability of ChatGPT to enhance efficiency and generate accurate responses significantly impacts adoption intentions, whereas aspects related to time saving, productivity enhancement, and user-friendliness yielded no statistically significant results. Among developers, ChatGPT is considered valuable for simplifying tasks and assisting junior developers, but there are concerns regarding its capability to handle complex tasks and potential security issues. Suggestions for improvement include better integration with integrated development environments (IDEs) and enhanced accuracy.

Conclusions: The findings highlight perceived accuracy and efficiency as the driving factors for user adoption of ChatGPT. ChatGPT can support tasks like debugging, code generation, code refactoring, code optimization, and technical documentation, though with potential limitations when dealing with overly complex code. Barriers to adoption include concerns about integrity and security, lack of awareness, and functional limitations.

Implications: The insights gained can indirectly benefit companies, including our business partner CGI, by guiding decision-making processes related to the effective adoption and utilization of ChatGPT.
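To make concrete how a developer workflow might call on ChatGPT for a task like code refactoring, here is a hedged sketch using the openai Python SDK (v1-style client). The model name, prompt, and code snippet are illustrative assumptions, not artifacts of the study:

```python
# Hedged sketch of wiring ChatGPT into a developer workflow, e.g. asking it
# to refactor a snippet. Exact SDK surface may differ between versions.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

snippet = """
def total(xs):
    t = 0
    for i in range(len(xs)):
        t = t + xs[i]
    return t
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; any chat-capable model works
    messages=[
        {"role": "system",
         "content": "You are a code reviewer. Refactor for clarity; keep behavior identical."},
        {"role": "user", "content": f"Refactor this Python function:\n{snippet}"},
    ],
)

print(response.choices[0].message.content)  # suggested refactoring, to be
                                            # reviewed by a human before use
```

Keeping a human review step, as in the final comment, reflects the study's finding that accuracy concerns are a barrier to adoption.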
126

Evaluating Trust in AI-Assisted Bridge Inspection through VR

Pathak, Jignasu Yagnesh 29 January 2024 (has links)
The integration of Artificial Intelligence (AI) in collaborative tasks has gained momentum, with particular implications for critical infrastructure maintenance. This study examines the assurance goals of AI—security, explainability, and trustworthiness—within Virtual Reality (VR) environments for bridge maintenance. Adopting a within-subjects design approach, this research leverages VR environments to simulate real-world bridge maintenance scenarios and gauge user interactions with AI tools. With the industry transitioning from paper-based to digital bridge maintenance, this investigation underscores the imperative roles of security and trust in adopting AI-assisted methodologies. Recent advancements in AI assurance within critical infrastructure highlight its monumental role in ensuring safe, explainable, and trustworthy AI-driven solutions. / Master of Science / In today's rapidly advancing world, the traditional methods of inspecting and maintaining our bridges are being revolutionized by digital technology and artificial intelligence (AI). This study delves into the emerging role of AI in bridge maintenance, a field historically reliant on manual inspection. With the implementation of AI, we aim to enhance the efficiency and accuracy of assessments, ensuring that our bridges remain safe and functional. Our research employs virtual reality (VR) to create a realistic setting for examining how users interact with AI during bridge inspections. This immersive approach allows us to observe the decision-making process in a controlled environment that closely mimics real-life scenarios. By doing so, we can understand the potential benefits and challenges of incorporating AI into maintenance routines. One of the critical challenges we face is the balance of trust in AI. Too little trust could undermine the effectiveness of AI assistance, while too much could lead to overreliance and potential biases. Furthermore, the use of digital systems introduces the risk of cyber threats, which could compromise the security and reliability of the inspection data. Our research also investigates the impact of AI-generated explanations on users' decisions. In essence, we explore whether providing rationale behind AI's recommendations helps users make better judgments during inspections. The ultimate objective is to develop AI tools that are not only advanced but also understandable and reliable for those who use them, even if they do not have a deep background in technology. As we integrate AI into bridge inspections, it's vital to ensure that such systems are protected against cyber threats and that they function as reliable companions to human inspectors. This study seeks to pave the way for AI to become a trusted ally in maintaining the safety and integrity of our infrastructure.
127

From Studio to Algorithm : Copyright, Authenticity, and Creativity in the Era of AI-Generated Images

Björk, Elin Eira January 2023 (has links)
This essay delves into the world of AI-generated images and the tools that create them. These types of tools became widely available in 2022, allowing users to generate images based on instructions written in plain text. The objective of the essay is to explore the landscape of AI-generated images based on text instructions, focusing on the perspectives of key stakeholders: the artist community, the AI companies that create the tools, and the individuals utilising them. The essay draws upon Walter Benjamin's thoughts on politics, mass production, and authenticity as articulated in his 1935 essay "The Work of Art in the Age of Mechanical Reproduction". The material has been analysed through a phenomenological lens, considering how AI-generated images are perceived and how they relate to representation. The text reflects on who benefits from AI tools and whether those benefits come at the expense of others. Consequently, the essay also examines power structures and the distribution of power among users of AI tools, the AI companies developing them, and existing artists not utilising them. The subject is complex, and the essay identifies concerns within the artist community about potentially being replaced by AI tools built on the work of human artists without their knowledge or consent. At the same time, the essay recognises that AI tools can offer individuals who would not typically engage in image creation a pathway to creative expression.
128

Can Tort Law Keep Artificial Intelligence in Check? : Tort Liability for AI Programmers

Lundgren, Johan January 2024 (has links)
Products today use AI with automatic learning systems. Because of the lack of transparency in AI systems, challenges arise in tracing the cause of a damage once it has occurred. This lack of transparency is known as the black box problem, and it makes it difficult to establish liability in tort.

The business of an AI programmer can be held liable for AI technology if that technology forms part of a product under the Swedish Product Liability Act (PAL). However, evidentiary difficulties arise in establishing that the damage was caused by the AI technology. It is nevertheless possible to impose liability for an AI programmer's component product, if the component's defect can be tied to a safety deficiency. PAL imposes strict liability for products with safety deficiencies, but the act also allows certain exemptions. Strict liability may be excluded for an AI programmer when the safety deficiency arises after the product has been put into circulation. Liability may also be excluded if the manufacturer of the AI technology can show that, given the state of scientific and technical knowledge, it was impossible to discover the safety deficiency in advance. This speaks for an exemption from strict product liability for an AI programmer who has programmed an AI-based product. The exemptions in PAL are, however, applied restrictively, and it remains incumbent on the business to present such evidence.

Beyond the provisions of PAL, it is possible to apply vicarious (principal) liability to AI programmers through the negligence (culpa) assessment expressed in the Swedish Tort Liability Act (SkL). Products with AI systems can act unpredictably and may consequently cause damage, and the black box of AI systems makes it difficult to establish the relevant damage through causation. Because of the black box, AI programmers are in the best position to prevent damage caused by an AI-based product, since they have the greatest ability to trace the cause of the damage. It is correspondingly difficult for consumers to secure evidence when an AI-related damage occurs; that AI programmers are best placed to secure evidence speaks for an eased burden of proof for the injured party. After the negligence assessment of AI programmers' liability carried out in this thesis, the outcome can be either liability or no liability for the AI programmer. An inquiry into AI-related liability issues consequently appears necessary.

Various problems arise in applying current law to AI programmers' liability for AI-based products, and the thesis therefore discusses possible solutions to them. A transparent AI system could solve the problem of identifying the cause of damage and would build consumer trust, although this must be weighed against the manufacturer's commercial interests. Proposals have been drafted that define high-risk AI systems and attach strict liability to the provider of such systems. Given the increasing use of AI-based products, and the new types of damage these products cause, there is also a need to supplement PAL's concepts of product and damage. In sum, tort legislation needs to be clarified and adapted to AI-related damage, in order to secure the levels of compensation and safety for consumers in liability matters concerning AI-based products.
129

Swedish Digital Marketers' Utilization of AI Tools : A Qualitative Study on How AI Tools Are Used and What the Limitations Are for Swedish Digital Marketers.

Kurman, Rasmus, Blom, Benjamin January 2024 (has links)
This study examined how Swedish digital marketers use AI and its impact on their workflow, as well as the limitations of adopting AI for digital marketing. Semi-structured interviews were conducted with Swedish digital marketing professionals, and a thematic analysis was performed to identify themes and patterns in the collected data. The software used varied, but all seven participants utilized ChatGPT. Five used Google Ads and Google Analytics, three used Adobe software (including Adobe Firefly and Photoshop), and two used Midjourney; other software was also used by individual participants. The findings indicate that AI has enhanced perceived productivity and proved valuable to marketers: those who employed AI technology reported more effective work sessions and shorter completion times. However, the study also identified significant limitations of AI in digital marketing. These include AI's inability to match human creativity, which limits the development of creative brand storylines, campaign designs, and content production. Additionally, issues with the tone and accuracy of generative AI content highlight the need to maintain authenticity and reliability in marketing communications. Marketers expressed concerns about the accuracy and quality of AI-generated information and sought clearer guidelines and regulations regarding the use of AI tools.
130

Toward Designing Active ORR Catalysts via Interpretable and Explainable Machine Learning

Omidvar, Noushin 22 September 2022 (has links)
The electrochemical oxygen reduction reaction (ORR) is a very important catalytic process that is directly used in carbon-free energy systems like fuel cells. However, the lack of active, stable, and cost-effective ORR cathode materials has been a major impediment to the broad adoption of these technologies. The challenge for researchers in catalysis is therefore to find catalysts that are electrochemically efficient enough to drive the reaction, made of earth-abundant elements to lower material costs and allow scalability, and stable enough to last. The majority of commercial catalysts in use today have been found through trial-and-error techniques that rely on the chemical intuition of experts. This method of empirical discovery is, however, very challenging, slow, and complicated, because the performance of a catalyst depends on a myriad of factors. Researchers have recently turned to machine learning (ML), together with emerging catalysis databases, to find and design heterogeneous catalysts faster. Many of the ML models used in the field to predict performance-critical catalyst properties, such as adsorption energies of reaction intermediates, are black-box models. Because these models are based on very complicated mathematical functions, it is very hard to understand how they work, and the underlying physics of the desired catalyst properties remains hidden. As a way to open up these black boxes and make them easier to understand, increasing attention is being paid to interpretable and explainable ML. This work aims to speed up the screening and optimization of Pt monolayer alloys for ORR while gaining physical insight. We use a theory-infused machine learning framework in combination with a high-throughput active screening approach to effectively find promising ORR Pt monolayer catalysts. Furthermore, an explainability game-theory approach is employed to find the electronic factors that control surface reactivity. The novel insights in this study can provide new design strategies that could shape the paradigm of catalyst discovery. / Doctor of Philosophy / The electrochemical oxygen reduction reaction (ORR) is a very important catalytic process that is used directly in carbon-free energy systems like fuel cells. But the lack of ORR cathode materials that are active, stable, and cheap has made it hard for these technologies to be widely used. Most commercially used catalysts have been found through trial-and-error methods that rely on the chemical intuition of experts. This way of finding catalysts is hard, slow, and complicated, because a catalyst's performance depends on a variety of factors. Researchers are now using machine learning (ML) and new catalysis databases to find and design heterogeneous catalysts faster. But because black-box ML models are based on very complicated mathematical formulas, it is very hard to figure out how they work, and the physics behind the desired catalyst properties remains hidden. In recent years, more attention has been paid to ML that can be understood and explained as a way to decode these "black boxes". The goal of this work is to speed up the screening and optimization of Pt monolayer alloys for ORR. We find promising ORR Pt monolayer catalysts by using a machine learning framework that is grounded in theory and a high-throughput active screening method. A game-theory approach is also used to find the electronic factors that control surface reactivity. The new ideas in this study can lead to new design approaches that could change how researchers find catalysts.
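The game-theoretic explainability step described here is commonly realized with Shapley-value attribution, for example via the SHAP library. The following sketch is purely illustrative: the descriptors, simulated data, and surrogate model are invented stand-ins, not the dissertation's actual features or pipeline:

```python
# Hedged sketch of Shapley-value explainability over a surrogate model that
# predicts an adsorption energy from electronic descriptors. Feature names
# and data are invented placeholders.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(42)
n = 200
X = pd.DataFrame({
    "d_band_center": rng.normal(-2.0, 0.5, n),        # eV, illustrative
    "coordination": rng.integers(6, 12, n).astype(float),
    "electronegativity": rng.normal(1.9, 0.2, n),
})
# Toy target loosely mimicking an adsorption-energy trend
y = 0.8 * X["d_band_center"] - 0.05 * X["coordination"] + rng.normal(0, 0.05, n)

model = GradientBoostingRegressor().fit(X, y)

# Shapley values attribute each prediction to the input descriptors,
# pointing at which electronic factors drive the predicted reactivity
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
print(pd.DataFrame(shap_values, columns=X.columns).abs().mean())
```

Mean absolute SHAP values give a rough global ranking of which descriptors the surrogate model leans on most heavily.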
