Systems Integration Model for AI Solutions with Explainability Requirements

Sandra Smyth (20978657), 02 April 2025
Automated processes are helping companies become more efficient at an accelerated pace. According to Burke et al. (2017), automation processes are dynamic, and that dynamism is data-driven. The insights that result from data processing enable real-time decision-making tasks that companies use to adjust their business strategies, keep their market share, and compete for more. However, these data-driven environments do not work in isolation; they are digital, data-driven setups in which connectivity and integration are key elements of collaboration.

As Daniel (2023) explained, a fundamental requirement for integrating systems is understanding how each connected system works. This understanding includes a comprehensive picture of each system’s functionality, architecture, protocols, data structures, and other components that inform the design of the integration. However, automated decision-making models based on artificial intelligence (AI) algorithms are considered “black boxes” that lack transparency and interpretability.

Explainability is a concept derived from the EU General Data Protection Regulation (GDPR), implemented in May 2018 by the European Commission. This law requires describing the logic behind automated decision-making processes that can affect data subjects’ interests. As Selbst and Powles (2017) suggested, AI solutions can defy human understanding because of the complexity of their models. Thus, knowing how a system works is difficult to accomplish when AI solutions are involved.

With the European Parliament’s approval of the EU Artificial Intelligence Act in March 2024, the explainability requirements initially applicable only to decision-making systems that process personal data were extended to all AI algorithms that process data in systems categorized as high-risk (R. Jain, 2024). The EU AI Act lists new legal obligations for high-risk systems, including adapting or producing AI systems that are transparent, explainable, and designed to allow human oversight.

Under the EU AI Act, high-risk systems are those that could negatively affect the safety or fundamental rights of an individual, a group of individuals, society, or the environment in general. Kempf and Rauer (2024) explained that malfunctioning essential systems could put people’s lives and health at risk or disrupt social and economic activities. Thus, according to the EU AI Act (2024), critical infrastructure systems such as water supply, gas, and electricity fall into this high-risk category.

Within the energy sector, as Niet et al. (2021) defined, the power grid’s ‘System Operators’ are those who plan, build, and maintain the electricity distribution or transmission network and provide a fair electricity market and network connections. Under Articles 13 and 14 of the EU AI Act, the system operator, or any deployer in charge of overseeing an AI system, should be trained and equipped with transparency and explainability elements to understand the capabilities and limitations of the AI solution so they can stop, confirm, or override the recommendations made by such a model.

This dissertation presents a qualitative study, starting with exploratory research to explain the different concepts involved and their relationships. Document analysis, Grounded Theory (GT), and triangulation were used as the primary qualitative research methods to comprehensively explain the challenges of the Systems Integration (SI) of AI solutions that require Explainability (XAI) modules. As part of the data triangulation methods, informal conversations with subject matter experts were conducted to share the findings of this research and gather insights into the current state of XAI’s applicability in the energy sector.

The population and sample for this qualitative research comprised various types of data sources, including regulations, guidelines, standards, frameworks, newspaper reports, dissertations, journals, business journals, and government publications. A total of 902 bibliographic references were collected in Zotero and then transferred to NVivo for data analysis. The study addressed the following research questions:

1. What XAI requirements need to be incorporated as part of enterprise and systems integration frameworks for high-risk AI implementations?

2. What are the critical operational challenges in integrating Explainability modules with AI systems and business processes?

The purpose of this study was to identify missing elements in the Enterprise Integration framework proposed by Lam and Shankararaman (2007) that are necessary to comply with the XAI legal and ethical requirements delineated by the EU GDPR and the EU AI Act. The findings included elements such as (a) monitoring AI industry regulation and establishing an AI policy, (b) executing fundamental rights impact assessments, (c) defining clauses to share accountability with third-party contributors to the solution, (d) changing the project management approach to be data-centric, and (e) defining post-deployment processes to monitor and improve the performance of AI models. This contribution aims to reduce implementation costs by offering a standard set of steps to follow in AI integration projects, facilitating communication between the project team and AI subject matter experts.

During this study, Lam and Shankararaman’s framework was reviewed against the new legal explainability and transparency obligations imposed by the EU GDPR and the EU AI Act. The missing elements needed to operationalize those legal obligations were then extracted from AI frameworks such as (a) the NIST AI Risk Management Framework (AI RMF 1.0), (b) the ISO/IEC 42001:2023 standard on Artificial Intelligence Management Systems, (c) the Singapore Model AI Governance Framework, and (d) the Japanese Machine Learning Quality Management Guideline. Finally, a new version of the Enterprise Integration framework incorporating those missing elements is offered, which can guide the gathering of explainability and transparency requirements for the enterprise and system integration of high-risk AI solutions, specifically in the electricity sector.

This dissertation explores the concept of AI explainability from technical and regulatory perspectives. The researcher expects the presented findings to contribute to and trigger industry and academic discussions about the challenges of this emerging topic and to guide those in charge of implementing and operationalizing XAI solutions.