The rapid advancement of AI technology emphasizes the importance of developing practical and ethical frameworks to guide its evolution and deployment in a responsible manner. As AI systems grow more complex and their capacity to influence society increases, AI researchers and other prominent voices are calling for greater regulation of AI development. This study examines the practical implementation of Responsible AI guidelines in an organization by investigating the challenges encountered and proposing solutions to overcome them. Previous research has primarily focused on conceptualizing Responsible AI guidelines, resulting in a tremendous number of abstract and high-level recommendations; however, there is an emerging demand to shift the focus toward studying the practical implementation of these guidelines. This study addresses the research question: ‘How can an organization overcome challenges that may arise when implementing Responsible AI guidelines in practice?’. The study uses the guidelines produced by the European Commission’s High-Level Expert Group on AI as a reference point, given their influence on shaping future AI policy and regulation in the EU. The study is conducted in collaboration with the telecommunications company Ericsson (henceforth referred to as ‘the case organization’), which has a large global workforce and is headquartered in Sweden. The study focuses specifically on the department that develops AI internally for other units with the purpose of simplifying operations and processes (henceforth referred to as ‘the AI unit’). Through an inductive interpretive approach, data from 16 semi-structured interviews and organization-specific documents were analyzed through a thematic analysis.
The findings reveal challenges related to (1) understanding and defining Responsible AI, (2) technical conditions and complexity, (3) organizational structures and barriers, as well as (4) inconsistent and overlooked ethics. Proposed solutions include (1) education and awareness, (2) integration and implementation, (3) governance and accountability, and (4) alignment and values. The findings contribute to a deeper understanding of Responsible AI implementation and offer practical recommendations for organizations navigating the rapidly evolving landscape of AI technology.
Identifier | oai:union.ndltd.org:UPSALLA1/oai:DiVA.org:liu-196242 |
Date | January 2023 |
Creators | Hedlund, Matilda, Henriksson, Hanna |
Publisher | Linköpings universitet, Informationssystem och digitalisering, Linköpings universitet, Filosofiska fakulteten |
Source Sets | DiVA Archive at Uppsala University |
Language | English |
Detected Language | English |
Type | Student thesis, info:eu-repo/semantics/bachelorThesis, text |
Format | application/pdf |
Rights | info:eu-repo/semantics/openAccess |