1. The Governance of AI-based Information Technologies within Corporate Environments. Lobana, Jodie. January 2021.
Artificial Intelligence (AI) has made significant progress in recent years and is gaining a strong foothold in business. Currently, there is no generally accepted scholarly framework for the governance of AI-based information technologies within corporate environments. Boards of directors, who have the responsibility of overseeing corporate operations, need to know how best to govern AI technologies within their companies. In response, this dissertation aims to identify the key elements that can assist boards in the governance of AI-based information technologies. Further, it seeks to understand how AI governance elements dynamically interact within a holistic system.
As AI governance is a novel phenomenon, an exploratory investigation was conducted using a qualitative approach. Specifically, the study adopted a grounded theory methodology within the constructivist paradigm, with the intent of generating theory rather than validating existing theory. Data collection included in-depth interviews with key experts in AI research, development, management, and governance in corporate and academic settings. These data were supplemented with material from conference presentations given by AI experts.
Findings from this dissertation yielded a theoretical model of AI governance that shows various AI governance areas and their constituent elements, their dynamic interaction, and the impact of these elements in enhancing the organizational performance of AI-based projects and reducing the risks associated with those projects. The dissertation makes a scholarly contribution by comparing governance elements in the established IT governance domain with those in the new AI governance domain. In addition to these theoretical contributions, the study provides practical contributions for the benefit of boards of directors. These include a holistic AI governance framework that pictorially represents twenty-two AI governance elements, which boards can use to build their own custom AI governance frameworks. In addition, recommendations are provided to assist boards in starting or enhancing their AI governance journeys.

Thesis / Doctor of Philosophy (PhD)

Artificial Intelligence (AI) refers to a set of technologies that seek to perform cognitive functions associated with human minds, such as learning, planning, and problem-solving. AI brings abundant opportunities as well as substantial risks. Major companies are trying to figure out how best to benefit from AI technologies. Boards of directors, with the responsibility of overseeing company operations, need to know how best to govern such technologies.
In response, this study was conducted to uncover key AI governance elements that can assist boards in the governance of AI. Data were collected through in-depth interviews with AI experts and by attending AI conference presentations.
Findings yield a theoretical model of AI governance that can assist scholars in deepening their understanding of this emerging governance area. Findings also provide a holistic AI governance framework that boards can use as a practical tool to enhance the effectiveness of their AI governance processes.
2. Taking Responsible AI from Principle to Practice: A study of challenges when implementing Responsible AI guidelines in an organization and how to overcome them. Hedlund, Matilda; Henriksson, Hanna. January 2023.
The rapid advancement of AI technology underscores the importance of developing practical and ethical frameworks to guide its evolution and deployment in a responsible manner. As AI grows more complex and more capable of influencing society, AI researchers and other prominent figures are calling for its evolution to be regulated to a greater extent. This study examines the practical implementation of Responsible AI guidelines in an organization by investigating the challenges encountered and proposing solutions to overcome them. Previous research has primarily focused on conceptualizing Responsible AI guidelines, resulting in a tremendous number of abstract, high-level recommendations; there is now an emerging demand to shift the focus toward studying their practical implementation. This study addresses the research question: 'How can an organization overcome challenges that may arise when implementing Responsible AI guidelines in practice?' The study uses the guidelines produced by the European Commission's High-Level Expert Group on AI as a reference point, given their influence in shaping future AI policy and regulation in the EU. The study was conducted in collaboration with the telecommunications company Ericsson (henceforth 'the case organization'), which has a large global workforce and is headquartered in Sweden. Focus is narrowed to the department that develops AI internally for other units to simplify operations and processes (henceforth 'the AI unit'). Through an inductive interpretive approach, data from 16 semi-structured interviews and organization-specific documents were analyzed using thematic analysis. The findings reveal challenges related to (1) understanding and defining Responsible AI, (2) technical conditions and complexity, (3) organizational structures and barriers, and (4) inconsistent and overlooked ethics. Proposed solutions include (1) education and awareness, (2) integration and implementation, (3) governance and accountability, and (4) alignment and values. The findings contribute to a deeper understanding of Responsible AI implementation and offer practical recommendations for organizations navigating the rapidly evolving landscape of AI technology.
3. EU Entering the Era of AI: A Qualitative Text Analysis on the European Union's Policy on Artificial Intelligence. Parviala, Tuulia. January 2019.
In December 2018, two documents central to the European Union's artificial intelligence policy were released: the European Commission's Coordinated Plan on Artificial Intelligence, and the High-Level Expert Group on Artificial Intelligence's Draft Ethics Guidelines for Trustworthy AI. These two documents serve both as an internal signal to the member states and as an international statement of the role the EU aspires to take in emerging AI development. They also constitute the research material for this thesis. The question this thesis seeks to answer is: 'What role(s) does the European Union aspire to take in the global rise of AI?' The question is answered by utilizing role theory, through a qualitative manifest content analysis with a deductive approach. The main finding is that the EU's AI policy reflects the roles the EU has traditionally taken: civilian power, soft power, and normative power. Normative power, however, appears to be the dominant role conception within the AI policy.
4. Technoethics and Sensemaking: Risk Assessment and Knowledge Management of Ethical Hacking in a Sociotechnical Society. Abu-Shaqra, Baha. 17 April 2020.
Cyber attacks by domestic and foreign threat actors are increasing in frequency and sophistication. Cyber adversaries exploit a cybersecurity skill/knowledge gap and an open society, undermining the information security and privacy of citizens and businesses, eroding trust in governments, and thus threatening social and political stability. The use of open digital hacking technologies in ethical hacking, in higher education and within broader society, raises ethical, technical, social, and political challenges for liberal democracies. Programs teaching ethical hacking in higher education are steadily growing, but there is concern that teaching students hacking skills increases crime risk to society by drawing students toward criminal acts. At the same time, a cybersecurity skill gap undermines the security and viability of business and government institutions. The thesis examines the opportunities and risks involved in using AI-powered intelligence gathering and surveillance technologies in ethical hacking teaching practices in Canada. Taking a qualitative exploratory case study approach, technoethical inquiry theory (Bunge-Luppicini) and Weick's sensemaking model were applied as a sociotechnical theory (STEI-KW) to explore ethical hacking teaching practices in two Canadian universities. In-depth interviews with ethical hacking university experts, industry practitioners, and policy experts were conducted, along with a document review. Findings pointed to a skill/knowledge gap in the ethical hacking literature regarding the meanings, ethics, values, skills/knowledge, roles and responsibilities, and practices of ethical hacking and ethical hackers, a gap that underlies an identity and legitimacy crisis for professional ethical hacking practitioners, as well as a teaching-versus-practice cybersecurity skill gap in ethical hacking curricula. Two main S&T innovation risk mitigation initiatives were explored: an OSINT Analyst cybersecurity role with an associated body-of-knowledge foundation framework as an interdisciplinary research area, and a networked centre of excellence of ethical hacking communities of practice as a knowledge management and governance/policy innovation approach focused on the systematization and standardization of an ethical hacking body of knowledge.
5. Developing a Responsible AI Instructional Framework for Enhancing AI Legislative Efficacy in the United States. Leonard, Kylie Ann Kristine. 09 December 2023.
<p dir="ltr">Artificial Intelligence (AI) is anticipated to exert a considerable impact on the global Gross Domestic Product (GDP), with projections estimating a contribution of 13 trillion dollars by the year 2030 (IEEE Board of Directors, 2019). In light of this influence on economic, societal, and intellectual realms, it is imperative for Policy Makers to acquaint themselves with the ongoing developments and consequential impacts of AI. The exigency of their preparedness lies in the potential for AI to evolve in unpredicted directions should proactive measures not be promptly instituted.</p><p dir="ltr">This paper endeavors to address a pivotal research question: " Do United States Policy Makers have a sufficient knowledgebase to understand Responsible AI in relation to Machine Learning to pass Artificial Intelligence legislation; and if they do not, how should a pedological instructional framework be created to give them the necessary knowledge?" The pursuit of answers to this question unfolded through the systematic review, gap analysis, and formulation of an instructional framework specifically tailored to elucidate the intricacies of Machine Learning. The findings of this study underscore the imperative for policymakers to undergo educational initiatives in the realm of artificial intelligence. Such educational interventions are deemed essential to empower policymakers with the requisite understanding for formulating effective regulatory frameworks that ensure the development of Responsible AI. The ethical dimensions inherent in this technological landscape warrant consideration, and policymakers must be equipped with the necessary cognitive tools to navigate these ethical quandaries adeptly.</p><p dir="ltr">In response to this exigency, the present study has undertaken the design and development of an instructional framework. This framework is conceived as a strategic intervention to address the evident cognitive gap existing among policymakers concerning the nuances of AI. By imparting an understanding of AI-related concepts, the framework aspires to cultivate a more informed and discerning governance ethos among policymakers, thus contributing to the responsible and ethical deployment of AI technologies.</p>