1

Tools for responsible decision-making in machine learning

Rastegarpanah, Bashir, 03 March 2022
Machine learning algorithms are increasingly used by decision-making systems that affect individual lives in a wide variety of ways. Consequently, in recent years concerns have been raised about the social and ethical implications of using such algorithms. Particular concerns include issues surrounding privacy, fairness, and transparency in decision systems. This dissertation introduces new tools and measures for improving the social desirability of data-driven decision systems, and consists of two main parts. The first part provides a useful tool for an important class of decision-making algorithms: collaborative filtering in recommender systems. In particular, it introduces the idea of improving socially relevant properties of a recommender system by augmenting the input with additional training data, an approach inspired by prior work on data poisoning attacks and adapted here to generate 'antidote data' for social good. We provide an algorithmic framework for this strategy and show that it can efficiently improve the polarization and fairness metrics of factorization-based recommender systems. In the second part, we focus on fairness notions that incorporate the data inputs used by decision systems. In particular, we draw attention to 'data minimization', an existing principle in data protection regulations that restricts a system to using the minimal information necessary for performing the task at hand. First, we propose an operationalization of this principle based on classification accuracy, and we show how a natural dependence of accuracy on data inputs can be expressed as a trade-off between fair-inputs and fair-outputs. Next, we address the problem of auditing black-box prediction models for data minimization compliance. For this problem, we suggest a metric for data minimization based on model instability under simple imputations, and we extend its applicability from a finite-sample model to a distributional setting by introducing a probabilistic data minimization guarantee. Finally, assuming limited system queries, we formulate the problem of allocating a query budget to simple imputations for investigating model instability as a multi-armed bandit framework, for which we design efficient exploration strategies.
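The last contribution above, allocating a limited query budget across simple imputations, is concrete enough to sketch. The following is a minimal illustration, not the dissertation's actual algorithm: it assumes each input feature defines one "arm" (mean imputation of that feature), uses the standard UCB1 exploration strategy, and defines a reward as the audited black-box model changing its prediction under the imputation. The function name and the reward definition are illustrative assumptions.

```python
import numpy as np

def ucb_imputation_audit(predict, X, imputation_values, budget, seed=0):
    """Hypothetical auditing sketch: spend `budget` imputation trials
    (each trial issues two queries to the black-box `predict`) across
    features, steering queries toward features whose imputation most
    often flips the prediction, i.e., the strongest instability signal.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    pulls = np.zeros(d)   # trials spent on each feature (arm)
    flips = np.zeros(d)   # observed prediction changes per feature

    for t in range(budget):
        if t < d:
            j = t  # play every arm once before using confidence bounds
        else:
            ucb = flips / pulls + np.sqrt(2.0 * np.log(t) / pulls)
            j = int(np.argmax(ucb))  # UCB1: optimism under uncertainty

        x = X[rng.integers(n)].copy()       # sample an audit point
        y_orig = predict(x[None, :])[0]     # query 1: original input
        x[j] = imputation_values[j]         # simple (e.g., mean) imputation
        y_imp = predict(x[None, :])[0]      # query 2: imputed input

        pulls[j] += 1
        flips[j] += float(y_orig != y_imp)

    # A high flip rate is evidence the model relies on that feature,
    # which bears on whether the system satisfies data minimization.
    return flips / np.maximum(pulls, 1)
```

In the distributional setting the abstract mentions, the flip-rate estimates would feed a probabilistic guarantee rather than be read off directly; the sketch only shows the budget-allocation side.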
2

Taking Responsible AI from Principle to Practice : A study of challenges when implementing Responsible AI guidelines in an organization and how to overcome them

Hedlund, Matilda; Henriksson, Hanna, January 2023
The rapid advancement of AI technology emphasizes the importance of developing practical and ethical frameworks to guide its evolution and deployment in a responsible manner. In light of increasingly complex AI and its capacity to influence society, AI researchers and other prominent voices are now arguing that AI development must be regulated to a greater extent. This study examines the practical implementation of Responsible AI guidelines in an organization by investigating the challenges encountered and proposing solutions to overcome them. Previous research has primarily focused on conceptualizing Responsible AI guidelines, resulting in a large body of abstract, high-level recommendations; there is an emerging demand to shift the focus toward studying their practical implementation. This study addresses the research question: 'How can an organization overcome challenges that may arise when implementing Responsible AI guidelines in practice?'. The study uses the guidelines produced by the European Commission's High-Level Expert Group on AI as a reference point, given their influence on shaping future AI policy and regulation in the EU. The study was conducted in collaboration with the telecommunications company Ericsson, henceforth 'the case organization', which has a large global workforce and is headquartered in Sweden. The focus is on the department that develops AI internally for other units in order to simplify operations and processes, henceforth 'the AI unit'. Through an inductive interpretive approach, data from 16 semi-structured interviews and organization-specific documents were analyzed using thematic analysis. The findings reveal challenges related to (1) understanding and defining Responsible AI, (2) technical conditions and complexity, (3) organizational structures and barriers, and (4) inconsistent and overlooked ethics. Proposed solutions include (1) education and awareness, (2) integration and implementation, (3) governance and accountability, and (4) alignment and values. The findings contribute to a deeper understanding of Responsible AI implementation and offer practical recommendations for organizations navigating the rapidly evolving landscape of AI technology.
3

Fairness in AI : Discussion of a Unified Approach to Ensure Responsible AI Development

Kessing, Maria, January 2021
Besides bringing various benefits, AI technologies have also raised ethical concerns. Owing to this growing attention, a large number of frameworks discussing responsible AI development have been released since 2016. This work analyzes a selection of these proposals to answer the question (1) 'Which approaches can be found to ensure responsible AI development?' To this end, the theory section examines approaches from (inter-)governmental regulators, research organizations, and private companies. Further, expert interviews were conducted to answer the second research question (2) 'How can a unified solution be reached to ensure responsible AI development?' The results identify governments as the main driver of this process. Overall, a detailed plan is necessary that brings together the public and private sectors as well as research organizations. The paper also points out the importance of education in making AI explainable and comprehensible for everyone.
4

Advancing the Understanding of the Role of Responsible AI in the Continued Use of IoMT in Healthcare

Al-Dhaen, Fatema; Hou, Jiachen; Rana, Nripendra P.; Weerakkody, Vishanth J.P., 15 September 2021
This paper examines healthcare professionals' continuous intention to use the Internet of Medical Things (IoMT) in combination with responsible artificial intelligence (AI). Using the theory of Diffusion of Innovation (DOI), a model was developed to determine the continuous intention to use IoMT, taking into account the risks and complexity involved in using AI. Data was gathered from 276 healthcare professionals through a survey questionnaire across hospitals in Bahrain. Empirical outcomes reveal nine significant relationships amongst the constructs. The findings show that despite contradictions associated with AI, continuous intention-to-use behaviour can be predicted during the diffusion of IoMT. This study advances the understanding of the role of responsible AI in the continued use of IoMT in healthcare and extends DOI to address the diffusion of two innovations concurrently.
5

AI IMPLEMENTATION AND USAGE: A qualitative study of managerial challenges in implementation and use of AI solutions from the researchers’ perspective.

Umurerwa, Janviere; Lesjak, Maja, January 2021
Artificial intelligence (AI) technologies are developing rapidly and causing radical changes at the level of organizations, companies, society, and individuals. Managers face new challenges that they might not be prepared for. In this work, we explore the managerial challenges experienced while implementing and using AI technologies, from the researchers' perspective. Moreover, we explore how appropriate ethical deliberation should be applied when using big data with AI, and what it means to understand or define it. We describe a qualitative study based on triangulation: related literature, in-depth interviews with researchers working on related topics in various fields, and a focus group discussion. Our findings show that AI algorithms are not universal, objective, or neutral; researchers therefore believe that managers need a solid understanding of the complexity of AI technologies and the nature of big data. Such understanding is necessary to develop sufficient procurement capabilities and to apply appropriate ethical considerations. Based on our results, we believe researchers are aware that these issues should be handled, but that they have so far received too little attention. We therefore suggest further discussion and encourage research in this field.
6

Cyber Security Risks and Opportunities of Artificial Intelligence: A Qualitative Study: How AI would form the future of cyber security

Kirov, Martin, January 2023
Cybercriminals' digital threats to security are increasing, and organisations seek smarter solutions to combat them. Many organisations are using artificial intelligence (AI) to protect their assets. Statistics show that the adoption of AI in cyber security worldwide has grown steadily over the past few years, demonstrating that more and more companies are searching for methods more effective than traditional ones; at the same time, some are cautious about its implementation. Previous research shows this is a topic of discussion in the cyber security branch, and researchers seek to understand further how AI is used, how it may benefit security, and which challenges organisations face. Sweden, a country known for its high level of technological advancement and innovation, has seen a particularly significant increase in the integration of AI in cyber security practices. Using semi-structured interviews as the primary research method, a diverse range of companies were interviewed about their viewpoints on the topic, both those implementing AI-based cyber security solutions and those that do not. The research objectives were to examine how companies in Sweden understand and perceive AI in cyber security, to identify the risks and opportunities they perceive in AI adoption, and to explore possible future developments in the field. Through in-depth interviews, participants discussed their experiences, concerns, and expectations, with companies not utilising AI in cyber security expressing mixed to negative opinions. This study shows that more research is needed to advance our understanding of AI in cyber security and how it is implemented in companies. The study concludes that organisations seeking to strengthen their security with the help of AI should consider ethical and legal issues as well as the importance of choosing the right AI solutions. Professionals recommend AI implementation for companies wishing to strengthen their cyber security defences in the growing and ever-changing cyber threat landscape.
7

Developing a Responsible AI Instructional Framework for Enhancing AI Legislative Efficacy in the United States

Leonard, Kylie Ann Kristine, 09 December 2023
<p dir="ltr">Artificial Intelligence (AI) is anticipated to exert a considerable impact on the global Gross Domestic Product (GDP), with projections estimating a contribution of 13 trillion dollars by the year 2030 (IEEE Board of Directors, 2019). In light of this influence on economic, societal, and intellectual realms, it is imperative for Policy Makers to acquaint themselves with the ongoing developments and consequential impacts of AI. The exigency of their preparedness lies in the potential for AI to evolve in unpredicted directions should proactive measures not be promptly instituted.</p><p dir="ltr">This paper endeavors to address a pivotal research question: " Do United States Policy Makers have a sufficient knowledgebase to understand Responsible AI in relation to Machine Learning to pass Artificial Intelligence legislation; and if they do not, how should a pedological instructional framework be created to give them the necessary knowledge?" The pursuit of answers to this question unfolded through the systematic review, gap analysis, and formulation of an instructional framework specifically tailored to elucidate the intricacies of Machine Learning. The findings of this study underscore the imperative for policymakers to undergo educational initiatives in the realm of artificial intelligence. Such educational interventions are deemed essential to empower policymakers with the requisite understanding for formulating effective regulatory frameworks that ensure the development of Responsible AI. The ethical dimensions inherent in this technological landscape warrant consideration, and policymakers must be equipped with the necessary cognitive tools to navigate these ethical quandaries adeptly.</p><p dir="ltr">In response to this exigency, the present study has undertaken the design and development of an instructional framework. This framework is conceived as a strategic intervention to address the evident cognitive gap existing among policymakers concerning the nuances of AI. By imparting an understanding of AI-related concepts, the framework aspires to cultivate a more informed and discerning governance ethos among policymakers, thus contributing to the responsible and ethical deployment of AI technologies.</p>
8

Trustworthy AI: Ensuring Explainability and Acceptance

Kaur, Davinder, 03 January 2024
<p dir="ltr">In the dynamic realm of Artificial Intelligence (AI), this study explores the multifaceted landscape of Trustworthy AI with a dedicated focus on achieving both explainability and acceptance. The research addresses the evolving dynamics of AI, emphasizing the essential role of human involvement in shaping its trajectory.</p><p dir="ltr">A primary contribution of this work is the introduction of a novel "Trustworthy Explainability Acceptance Metric", tailored for the evaluation of AI-based systems by field experts. Grounded in a versatile distance acceptance approach, this metric provides a reliable measure of acceptance value. Practical applications of this metric are illustrated, particularly in a critical domain like medical diagnostics. Another significant contribution is the proposal of a trust-based security framework for 5G social networks. This framework enhances security and reliability by incorporating community insights and leveraging trust mechanisms, presenting a valuable advancement in social network security.</p><p dir="ltr">The study also introduces an artificial conscience-control module model, innovating with the concept of "Artificial Feeling." This model is designed to enhance AI system adaptability based on user preferences, ensuring controllability, safety, reliability, and trustworthiness in AI decision-making. This innovation contributes to fostering increased societal acceptance of AI technologies. Additionally, the research conducts a comprehensive survey of foundational requirements for establishing trustworthiness in AI. Emphasizing fairness, accountability, privacy, acceptance, and verification/validation, this survey lays the groundwork for understanding and addressing ethical considerations in AI applications. The study concludes with exploring quantum alternatives, offering fresh perspectives on algorithmic approaches in trustworthy AI systems. This exploration broadens the horizons of AI research, pushing the boundaries of traditional algorithms.</p><p dir="ltr">In summary, this work significantly contributes to the discourse on Trustworthy AI, ensuring both explainability and acceptance in the intricate interplay between humans and AI systems. Through its diverse contributions, the research offers valuable insights and practical frameworks for the responsible and ethical deployment of AI in various applications.</p>
