1

Quantifying Trust in Deep Learning with Objective Explainable AI Methods for ECG Classification / Evaluating Trust and Explainability for Deep Learning Models

Siddiqui, Mohammad Kashif (has links)
Trustworthiness is a roadblock in the mass adoption of artificial intelligence (AI) in medicine. This thesis developed a framework to explore trustworthiness as it applies to AI in medicine with respect to common stakeholders in medical device development. Within this framework, the element of explainability of AI models was explored by evaluating explainable AI (XAI) methods. In the current literature, a litany of XAI methods is available that provide a variety of insights into the learning and function of AI models. XAI methods provide a human-readable output for the AI's learning process. These XAI methods tend to be bespoke and provide very subjective outputs of varying quality. Currently, there are no metrics or methods for objectively evaluating XAI outputs against outputs from different types of XAI methods. This thesis presents a set of constituent elements (similarity, stability and novelty) to explore the concept of explainability, and then presents a series of metrics to evaluate those constituent elements, thus providing a repeatable and testable framework to evaluate XAI methods and their generated explanations. This is accomplished using subject matter expert (SME) annotated ECG signals (time-series signals), represented as images, as inputs to AI models and XAI methods. A small subset of the available XAI methods (Vanilla Saliency, SmoothGrad, GradCAM and GradCAM++) was used to generate XAI outputs for a VGG-16 based deep learning classification model. The framework provides insights into the explanations each XAI method generates for the AI and how closely that learning corresponds to SME decision making. It also objectively evaluates how closely explanations generated by any XAI method resemble outputs from other XAI methods. Lastly, the framework provides insights about possible novel learning done by the deep learning model beyond what was identified by the SMEs in their decision making. / Thesis / Master of Applied Science (MASc) / The goal of this thesis was to develop a framework for how trustworthiness can be improved for a variety of stakeholders in the use of AI in medical applications. Trust was broken down into basic elements (Explainability, Verifiability, Fairness & Robustness) and 'Explainability' was further explored. This was done by determining how explainability (offered by XAI methods) can address the needs (Accuracy, Safety, and Performance) of stakeholders and how those needs can be evaluated. Methods of comparison (similarity, stability, and novelty) were developed that allow an objective evaluation of the explanations from various XAI methods using repeatable metrics (Jaccard, Hamming, Pearson Correlation, and TF-IDF). Combining the results of these measurements into the framework of trust works towards improving AI trustworthiness and provides a way to evaluate and compare the utility of explanations.
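To make the comparison concrete, below is a minimal sketch of how the similarity metrics named in the lay summary (Jaccard, Hamming and Pearson correlation) could be computed for a pair of saliency maps produced by two XAI methods. The 0.5 binarization threshold, the 224x224 map size and the use of SciPy are illustrative assumptions rather than details taken from the thesis.

```python
import numpy as np
from scipy.stats import pearsonr

def similarity_scores(map_a: np.ndarray, map_b: np.ndarray, threshold: float = 0.5):
    """Compare two same-shaped saliency maps with values scaled to [0, 1].

    Jaccard and Hamming similarity are computed on thresholded (binary) maps;
    Pearson correlation is computed on the raw attribution values.
    """
    bin_a, bin_b = map_a >= threshold, map_b >= threshold

    union = np.logical_or(bin_a, bin_b).sum()
    jaccard = np.logical_and(bin_a, bin_b).sum() / union if union else 1.0

    # Hamming similarity: fraction of pixels on which the binary maps agree.
    hamming = (bin_a == bin_b).mean()

    pearson, _ = pearsonr(map_a.ravel(), map_b.ravel())
    return {"jaccard": jaccard, "hamming": hamming, "pearson": pearson}

# Hypothetical usage with two 224x224 attribution maps (e.g. GradCAM vs SmoothGrad):
rng = np.random.default_rng(0)
print(similarity_scores(rng.random((224, 224)), rng.random((224, 224))))
```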
2

Towards eXplainable Artificial Intelligence (XAI) in cybersecurity

Lopez, Eduardo January 2024 (has links)
A 2023 cybersecurity research study highlighted the risk of increased technology investment not being matched by a proportional investment in cybersecurity, exposing organizations to greater cyber identity compromise vulnerabilities and risk. A survey of security professionals found an expected 240% growth in digital identities; 68% were concerned about insider threats from employee layoffs and churn; 99% expected identity compromise due to financial cutbacks, geopolitical factors, cloud adoption and hybrid work; and 74% were concerned about confidential data loss through employees, ex-employees and third-party vendors. In light of the continuing growth of this type of criminal activity, those responsible for keeping such risks under control have no alternative but to adopt ever stronger defensive measures to prevent these incidents and the unnecessary business losses they cause. This research project explores a real-life case study: an Artificial Intelligence (AI) information systems solution implemented in a mid-size organization facing significant cybersecurity threats. A holistic approach was taken, where AI was complemented with key non-technical elements such as organizational structures, business processes, standard operating documentation and training, oriented towards driving behaviours conducive to a strong cybersecurity posture for the organization. Using Design Science Research (DSR) guidelines, the process for conceptualizing, designing, planning and implementing the AI project was richly described from both a technical and an information systems perspective. In alignment with DSR, key artifacts are documented in this research, such as a model for AI implementation that can create significant value for practitioners. The research results illustrate how an iterative, data-driven approach to development and operations is essential, with explainability and interpretability taking centre stage in driving adoption and trust. This case study highlighted how critical communication, training and cost-containment strategies can be to the success of an AI project in a mid-size organization. / Thesis / Doctor of Science (PhD) / Artificial Intelligence (AI) is now pervasive in our lives, intertwined with myriad other technology elements in the fabric of society and organizations. Instant translations, complex fraud detection and AI assistants are no longer the stuff of science fiction. However, realizing AI's benefits in an organization can be challenging. Current AI implementations are different from traditional information systems development. AI models need to be trained with large amounts of data, iteratively focusing on outcomes rather than business requirements. AI projects may require an atypical set of skills and significant financial resources, while creating risks such as bias, security, interpretability, and privacy. The research explores a real-life case study in a mid-size organization using Generative AI to improve its cybersecurity posture. A model for successful AI implementations is proposed, including the non-technical elements that practitioners should consider when pursuing AI in their organizations.
3

Evaluating Trust in AI-Assisted Bridge Inspection through VR

Pathak, Jignasu Yagnesh 29 January 2024 (has links)
The integration of Artificial Intelligence (AI) in collaborative tasks has gained momentum, with particular implications for critical infrastructure maintenance. This study examines the assurance goals of AI—security, explainability, and trustworthiness—within Virtual Reality (VR) environments for bridge maintenance. Adopting a within-subjects design approach, this research leverages VR environments to simulate real-world bridge maintenance scenarios and gauge user interactions with AI tools. With the industry transitioning from paper-based to digital bridge maintenance, this investigation underscores the imperative roles of security and trust in adopting AI-assisted methodologies. Recent advancements in AI assurance within critical infrastructure highlight its monumental role in ensuring safe, explainable, and trustworthy AI-driven solutions. / Master of Science / In today's rapidly advancing world, the traditional methods of inspecting and maintaining our bridges are being revolutionized by digital technology and artificial intelligence (AI). This study delves into the emerging role of AI in bridge maintenance, a field historically reliant on manual inspection. With the implementation of AI, we aim to enhance the efficiency and accuracy of assessments, ensuring that our bridges remain safe and functional. Our research employs virtual reality (VR) to create a realistic setting for examining how users interact with AI during bridge inspections. This immersive approach allows us to observe the decision-making process in a controlled environment that closely mimics real-life scenarios. By doing so, we can understand the potential benefits and challenges of incorporating AI into maintenance routines. One of the critical challenges we face is the balance of trust in AI. Too little trust could undermine the effectiveness of AI assistance, while too much could lead to overreliance and potential biases. Furthermore, the use of digital systems introduces the risk of cyber threats, which could compromise the security and reliability of the inspection data. Our research also investigates the impact of AI-generated explanations on users' decisions. In essence, we explore whether providing rationale behind AI's recommendations helps users make better judgments during inspections. The ultimate objective is to develop AI tools that are not only advanced but also understandable and reliable for those who use them, even if they do not have a deep background in technology. As we integrate AI into bridge inspections, it's vital to ensure that such systems are protected against cyber threats and that they function as reliable companions to human inspectors. This study seeks to pave the way for AI to become a trusted ally in maintaining the safety and integrity of our infrastructure.
4

Visualization design for improving layer-wise relevance propagation and multi-attribute image classification

Huang, Xinyi 01 December 2021 (has links)
No description available.
5

Explainable AI by Training Introspection

Dastkarvelayati, Rozhin, Ghafourian, Soudabeh January 2023 (has links)
Deep Neural Networks (DNNs) are known as black box algorithms that lack transparency and interpretability for humans. eXplainable Artificial Intelligence (XAI) is introduced to tackle this problem. Most XAI methods are utilized post-training, providing explanations of the model to clarify its predictions and inner workings for human understanding. However, there is a shortage of methods that utilize XAI during training to not only observe the model's behavior but also exploit this information for the benefit of the model. In our approach, we propose a novel method that leverages XAI during the training process itself. Incorporating feedback from XAI can give us insights into important features of the input data that impact model decisions. This work explores focusing more on specific features during training, which could potentially improve model performance introspectively throughout the training phase. We analyze the stability of feature explanations during training and find that the model's attention to specific features is consistent in the MNIST dataset. However, unimportant features lack stability. The OCTMNIST dataset, on the other hand, has stable explanations for important features but less consistent explanations for less significant features. Based on this observation, two types of masks, namely fixed and dynamic, are applied to the model's structure using XAI's feedback with minimal human intervention. These masks separate the more important features from the less important ones and set the pixels associated with less significant features to zero. The fixed mask is generated based on XAI feedback after the model is fully trained, and then it is applied to the output of the first convolutional layer of a new model (with the same architecture), which is trained from scratch. The dynamic mask, on the other hand, is generated based on XAI feedback during training and is applied to the model while it is still training; as a result, these masks change across epochs. Examining these two methods on both deep and shallow models, we find that both masking methods, particularly the fixed one, reduce the focus of all models on the least important parts of the input data. This results in improved accuracy and reduced loss in all models. As a result, this approach enhances the model's interpretability and performance by incorporating XAI into the training process.
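Below is a minimal PyTorch-style sketch of the fixed-mask idea described above: XAI-derived importance scores are thresholded into a binary mask that zeroes the output of the first convolutional layer at less important positions. The layer sizes, the median threshold and the random placeholder saliency are assumptions for illustration, not details from the thesis.

```python
import torch
import torch.nn as nn

class MaskedFirstConv(nn.Module):
    """Wraps a model's first convolutional layer and zeroes activations at
    spatial positions that XAI feedback marked as less important."""

    def __init__(self, first_conv: nn.Conv2d, importance_map: torch.Tensor, keep_quantile: float = 0.5):
        super().__init__()
        self.first_conv = first_conv
        # Binary mask: 1 where the aggregated saliency exceeds the chosen quantile.
        threshold = importance_map.quantile(keep_quantile)
        self.register_buffer("mask", (importance_map >= threshold).float())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.first_conv(x)   # (N, C, H, W)
        return out * self.mask     # (H, W) mask broadcasts over batch and channels

# Usage: importance_map would come from an XAI method (e.g. saliency averaged over
# the training set), resized to the first layer's output resolution.
conv1 = nn.Conv2d(1, 16, kernel_size=3, padding=1)
importance_map = torch.rand(28, 28)                # hypothetical aggregated saliency
masked_conv1 = MaskedFirstConv(conv1, importance_map)
features = masked_conv1(torch.randn(8, 1, 28, 28))
```

The dynamic variant described in the abstract would differ only in that the importance map, and therefore the registered mask, is refreshed from XAI feedback as training progresses.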
6

Utvärdering av tolkningsbara maskininlärningsmodeller för att prediktera processegenskaper vid kartongtillverkning / Evaluation of interpretable machine learning models for predicting process characteristics in paperboard manufacturing

Åström, Olle January 2023 (has links)
Producing paperboard is a complex process that requires sophisticated monitoring to achieve paperboard of high quality. Holmen Iggesund is a company in the paperboard manufacturing industry, aiming to produce paperboard of world-leading quality. Therefore, they continuously develop their knowledge of the production process. In this study, conducted at Holmen Iggesund, the focus is on the delamination property, which is tested with a method called Scott bond. Seven different input signals, measured over a two-year period, were used as inputs to six different models to predict the output (Scott bond). The results showed that a Random Forest model provided the best prediction performance among the tested models. eXplainable Artificial Intelligence (XAI) was then used to better understand the predictions of the Random Forest model. It provided an understanding of which input signals were most significant for the model predictions and of the values that the input signals should have to predict a high or low value of the output signal. The results of the work give an increased understanding of the process behavior, which may help improve the monitoring of the process and indicate how to counteract a process disturbance when it occurs. The work also shows the potential of combining complex machine learning models with XAI algorithms.
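As a rough illustration of the modelling step described above, the sketch below fits a Random Forest to tabular process data and ranks the input signals by permutation importance. The file name, column layout and the choice of permutation importance as the explanation technique are assumptions made here for illustration; the study does not specify which XAI algorithm was used.

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# df is assumed to hold the seven process signals plus the measured Scott bond value.
df = pd.read_csv("scott_bond_measurements.csv")     # hypothetical file
feature_cols = [c for c in df.columns if c != "scott_bond"]

X_train, X_test, y_train, y_test = train_test_split(
    df[feature_cols], df["scott_bond"], test_size=0.2, random_state=42
)

model = RandomForestRegressor(n_estimators=300, random_state=42)
model.fit(X_train, y_train)
print("R^2 on held-out data:", model.score(X_test, y_test))

# One possible explanation step: permutation importance ranks which input
# signals the predictions depend on most.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=42)
for name, score in sorted(zip(feature_cols, result.importances_mean), key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")
```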
7

Explainable Deep Learning Methods for Market Surveillance / Förklarbara Djupinlärningsmetoder för Marknadsövervakning

Jonsson Ewerbring, Marcus January 2021 (has links)
Deep learning methods have the ability to accurately predict and interpret what data represents. However, the decision making of a deep learning model is not comprehensible to humans. This is a problem for sectors like market surveillance, which need clarity in the decision making of the algorithms they use. This thesis aimed to investigate how a deep learning model can be constructed so that its decision making is humanly comprehensible, and to investigate the potential impact on classification performance. A literature study was performed and publicly available explanation methods were collected. The explanation methods LIME, SHAP, model distillation and SHAP TreeExplainer were implemented and evaluated on a ResNet trained on three different time-series datasets. A decision tree was used as the student model for model distillation, where it was trained with both soft and hard labels. A survey was conducted to evaluate whether the explanation methods could increase comprehensibility. The results were that all methods could improve comprehensibility for people with experience in machine learning. However, none of the methods could provide full comprehensibility and clarity of the decision making. Model distillation reduced performance compared to the ResNet model and did not improve the performance of the student model. / Deep learning methods have the ability to predict and interpret the meaning of data. However, the decisions of deep learning methods are not understandable to humans. This is a problem for sectors such as market surveillance, which need clarity in the decision process of the algorithms used. The goal of this thesis is to investigate how a deep learning model can be constructed to make it comprehensible to a human, and to investigate any impact on classification performance. A literature study was carried out and publicly available explanation methods were collected. The explanation methods LIME, SHAP, model distillation and SHAP TreeExplainer were implemented and evaluated with a ResNet model trained on three different datasets. A decision tree was used as the student model for model distillation, and it was trained on both soft and hard labels. A survey was conducted to evaluate whether the explanation methods can improve the understanding of the model's decisions. The result was that all methods can improve understanding for people with prior knowledge of machine learning. However, none of the methods could give full understanding and insight into how the decision process worked. Model distillation reduced performance compared with the ResNet model and did not improve the performance of the student model.
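Of the methods listed above, model distillation is the most self-contained to sketch. The code below fits two decision-tree students to a teacher's predictions, one on hard labels and one on soft labels (class probabilities). The use of a regression tree for the soft labels and the synthetic teacher outputs are assumptions made for illustration, not details taken from the thesis.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, DecisionTreeRegressor

def distill(teacher_probs: np.ndarray, X: np.ndarray, max_depth: int = 5):
    """Fit decision-tree student models to a teacher's predictions.

    teacher_probs: (n_samples, n_classes) class probabilities from the teacher (e.g. a ResNet).
    X:             (n_samples, n_features) flattened time-series windows.
    """
    # Hard-label student: trained on the teacher's argmax decisions.
    hard_student = DecisionTreeClassifier(max_depth=max_depth)
    hard_student.fit(X, teacher_probs.argmax(axis=1))

    # Soft-label student: a regression tree fitted to the full probability vectors,
    # which preserves more of the teacher's uncertainty.
    soft_student = DecisionTreeRegressor(max_depth=max_depth)
    soft_student.fit(X, teacher_probs)
    return hard_student, soft_student

# Hypothetical usage with teacher outputs for 1000 windows of length 128 and 3 classes:
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 128))
logits = rng.normal(size=(1000, 3))
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
hard_student, soft_student = distill(probs, X)
```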
8

Explainable Artificial Intelligence for Radio Resource Management Systems : A diverse feature importance approach / Förklarande Artificiell Intelligens inom System för Hantering av Radioresurser : Metoder för klassifisering av betydande predikatorer

Marcu, Alexandru-Daniel January 2022 (has links)
The field of wireless communications is arguably one of the most rapidly developing technological fields. Therefore, with each new advancement in this field, the complexity of wireless systems can grow significantly. This phenomenon is most visible in mobile communications, where the current 5G and 6G radio access networks (RANs) have reached unprecedented complexity levels to satisfy diverse increasing demands. In such increasingly complex environments, managing resources is becoming more and more challenging. Thus, experts employed performant artificial intelligence (AI) techniques to aid radio resource management (RRM) decisions. However, these AI techniques are often difficult for humans to understand, and may receive unimportant inputs which unnecessarily increase their complexity. In this work, we propose an explainability pipeline meant to be used for increasing humans' understanding of AI models for RRM, as well as for reducing the complexity of these models, without loss of performance. To achieve this, the pipeline generates diverse feature importance explanations of the models with the help of three explainable artificial intelligence (XAI) methods: Kernel SHAP, CERTIFAI, and Anchors, and performs an importance-based feature selection using one of three different strategies. In the case of Anchors, we formulate and utilize a new way of computing feature importance scores, since no current publication in the XAI literature suggests a way to do this. Finally, we applied the proposed pipeline to a reinforcement learning (RL)-based RRM system. Our results show that we could reduce the complexity of the RL model by between ∼27.5% and ∼62.5% according to different metrics, without loss of performance. Moreover, we showed that the explanations produced by our pipeline can be used to answer some of the most common XAI questions about our RL model, thus increasing its understandability. Lastly, we achieved an unprecedented result showing that our RL agent could be completely replaced with Anchors rules when taking RRM decisions, without a significant loss of performance, but with a considerable gain in understandability. / Wireless communication is one of the fastest-developing technological fields, and every advance risks bringing a significant increase in the complexity of wireless networks. This phenomenon is clearest in mobile communications, above all in the 5G and 6G radio access networks (RANs), which have reached unprecedented levels of complexity in order to meet the growing demands placed on the system. In these complex systems, resource management becomes a growing problem, and artificial intelligence (AI) is therefore used more and more to make radio resource management (RRM) decisions. These AI techniques are, however, often difficult for humans to understand, and may be given unimportant input, which increases the complexity of the AI models. This work proposes an explainability pipeline intended to increase human understanding of AI models for RRM, and also to reduce the complexity of the models without losing performance. To achieve this, the pipeline generates feature importance explanations of the model with the help of three explainable artificial intelligence (XAI) methods: Kernel SHAP, CERTIFAI and Anchors. A selection of features based on their importance is then performed using one of three strategies. For the Anchors method, a new way of computing feature importance is formulated, since previous research does not propose any method for this. Finally, the proposed pipeline is applied to a reinforcement learning (RL) based RRM system. The results show that the complexity of the RL model could be reduced by between ∼27.5% and ∼62.5% according to different metrics, without any loss of performance. In addition, it was shown that the explanations produced can be used to answer the most common XAI questions about the RL model, thereby also increasing the understanding of the model. Lastly, an outstanding result was achieved showing that the RL model could be completely replaced with rules produced by the Anchors method for RRM decisions, without any major loss of performance but with a large gain in understandability.
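As a sketch of the importance-based feature selection in the pipeline above, the code below ranks inputs by mean absolute Kernel SHAP value and keeps the top k. The linear stand-in for the RL agent's output, the feature count and the keep-top-k strategy are assumptions for illustration; the thesis evaluates three selection strategies and two further XAI methods (CERTIFAI and Anchors) not shown here.

```python
import numpy as np
import shap  # provides the Kernel SHAP implementation used below

def keep_top_k_features(predict_fn, background: np.ndarray, X: np.ndarray, k: int):
    """Rank features by mean absolute SHAP value and return the indices of the top k."""
    explainer = shap.KernelExplainer(predict_fn, background)
    shap_values = explainer.shap_values(X)          # (n_samples, n_features)
    importance = np.abs(shap_values).mean(axis=0)
    return np.argsort(importance)[::-1][:k]

# Hypothetical usage with a stand-in for the agent's value output over 12 radio-state features:
rng = np.random.default_rng(0)
weights = rng.normal(size=12)
predict_fn = lambda x: x @ weights                  # placeholder for the RL agent's prediction function
background = rng.normal(size=(50, 12))              # reference samples for Kernel SHAP
X = rng.normal(size=(200, 12))

selected = keep_top_k_features(predict_fn, background, X, k=6)
print("Indices of the retained input features:", selected)
```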
9

AI-system för sjukvården - en studie kring design av förklaringar till AI-modeller och dess inverkan på sjukvårdspersonalens förståelse och tillit / AI systems for healthcare - a study on the design of explanations for AI models and its impact on healthcare professionals' understanding and trust

Bohlander, Joacim January 2021 (has links)
The fields of application for artificial intelligence are constantly expanding, which is not surprising, since AI's ability to solve complicated problems often exceeds that of its human counterpart. The implementation of AI systems has in some cases gone so far that the developers themselves no longer know how the system reached a conclusion, which makes the possibility of examining, understanding and troubleshooting outcomes nearly non-existent. Because today's AI systems do not offer explanations for their outcomes, the result has been reluctance on the part of end users. The research field of eXplainable AI (XAI) holds that, by using generated explanations, AI systems can become more understandable to the end user. One area in great need of AI systems is healthcare, especially for sepsis, where rapid diagnosis drastically reduces the mortality of the disease. The purpose of this study was to produce design guidelines for developing explanations intended to promote trust in, and understanding of, AI-based clinical decision support for diagnosing sepsis. The study began with a pre-study consisting of a questionnaire and a literature review; a mid-fi prototype was then developed, followed by user experience tests. The collected data were analysed using a top-down and an inductive analysis method, after which a final result was produced. The result confirmed that there are several factors that need to be incorporated when producing explanations of an AI system's recommendations in order to promote trust and understanding. For increased trust, an explanation needs to be supplemented with data that allows the end user to validate the explanation and that meets the user's information needs. For increased understanding, an explanation should contain information that allows the user to understand the reason for the explanation's main content, for example "X depends on Z and Y". Trust and understanding in this study were measured only on one occasion, so the question of how the guidelines would affect trust in and understanding of AI systems over time remains. / The fields of application for artificial intelligence are constantly increasing, which is not surprising as AI's ability to solve complex problems often exceeds that of its human counterpart. The development of AI systems has come so far that sometimes not even the developers themselves can explain how the system came to its conclusion, which has made the possibility of examining, understanding and troubleshooting outcomes almost non-existent. Since today's AI systems do not offer explanations for their outcomes, this has resulted in resistance on the part of the end user. The research area of eXplainable AI (XAI) holds that, by using generated explanations, AI systems can become more understandable to the end user. One area that is in great need of AI systems is healthcare, especially for diagnosing sepsis, where a rapid diagnosis drastically reduces the mortality of the disease. The purpose of this study was to develop design guidelines for explanations that are intended to promote trust in and understanding of AI-based clinical decision support for the diagnosis of sepsis. The study began with a feasibility study consisting of a questionnaire and a literature study; a mid-fi prototype was then developed, followed by user experience tests. Collected data were analyzed using a top-down and an inductive analysis, after which a final result was obtained. The results showed that there are several factors that need to be incorporated in the development of explanations to promote trust and understanding. For increased trust, an explanation needs to be supplemented with data that allows the end user to validate the explanation and meets the user's information needs. For increased understanding, an explanation should contain information that allows the user to understand the reason for the main content of the explanation, for example "X depends on Z and Y". Trust and understanding in this study were measured only on one occasion; as such, the question of how the guidelines would affect trust in and understanding of AI systems over time remains.
10

Explainability Methods for Transformer-based Artificial Neural Networks: : a Comparative Analysis / Förklaringsmetoder för Transformer-baserade artificiella neurala nätverk : en jämförande analys

Remmer, Eliott January 2022 (has links)
The increasing complexity of Artificial Intelligence (AI) models is accompanied by an increase in the difficulty of interpreting model predictions. This thesis work provides insights into and understanding of the differences and similarities between explainability methods for AI models. Opening up black-box models is important, especially if AI is applied in sensitive domains, for example to aid medical professionals. In recent years, the use of Transformer-based artificial neural network architectures such as Bidirectional Encoder Representations from Transformers (BERT) has become common in the field of Natural Language Processing (NLP), showing human-level performance on tasks such as sentiment classification and question answering. In addition, a growing portion of research within eXplainable AI (XAI) has shown success in using explainability methods to output auxiliary explanations at inference time together with predictions made by these complex models. When scoping the different methods, there is a distinction to be made as to whether the explanations emerge as part of the prediction process or subsequently via a separate model. These two categories of explainability methods are referred to as self-explaining and post-hoc, respectively. The goal of this work is to evaluate, analyze and compare these two categories of methods for assisting BERT models with explanations in the context of sentiment classification. A comparative analysis was therefore conducted in order to investigate quantitative and qualitative differences. To measure the quality of explanations, the Intersection Over Union (IOU) and Precision-Recall Area Under the Curve (PR-AUC) scores were used together with Explainable NLP (ExNLP) datasets containing human-annotated explanations. Apart from discussing the benefits, drawbacks and assumptions of the different methods, the results of the work indicated that the self-explaining method proved more successful in some instances while the post-hoc method performed better in others. Given the subjective nature of explanation quality, however, this work should be extended in several proposed directions in order to fully capture the nuances of the explainability methods. / The increasing complexity of artificial intelligence (AI) models is accompanied by an increasing difficulty in interpreting the predictions the models make. This thesis focuses on the differences and similarities between explainability methods for AI models. Creating more transparency around the models is important, especially if AI is to be applied in sensitive areas such as healthcare. In recent years, the use of Transformer-based artificial neural networks such as Bidirectional Encoder Representations from Transformers (BERT) has become common in Natural Language Processing (NLP). The results these models achieve on tasks such as sentiment classification and question answering are at a human level. Moreover, a growing portion of research within eXplainable AI (XAI) has shown great progress in using explanation methods to accompany the predictions these complex models make with explanations. Categorizations of the methods often distinguish between whether the explanations emerge as part of the prediction, together with the model, or whether they are created afterwards via a separate model. These two categories of explanation methods are called self-explaining and post-hoc. The goal of this work is to evaluate, analyze and compare these two categories of methods used to assist BERT models with explanations in connection with sentiment classification of text. A comparative analysis was therefore conducted to investigate quantitative and qualitative differences. To measure the quality of explanations, Intersection Over Union (IOU) and Precision-Recall Area Under the Curve (PR-AUC) were used together with datasets tailored for Explainable NLP (ExNLP) containing human-annotated explanations. In addition to discussing the advantages, disadvantages and assumptions of the different methods, the results indicated that the self-explaining method performed better in some cases while the post-hoc method performed better in others. Given that the quality of explanations is largely a matter of subjective judgment, however, this work should be extended in several directions, proposed in this work, in order to capture all the nuances of the explanation methods.
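The evaluation described above can be made concrete with a small sketch that scores one explanation against a human-annotated rationale using IOU and PR-AUC. The per-token importance scores and the 0.5 binarization threshold are invented for illustration; the thesis itself works with ExNLP datasets of human annotations.

```python
import numpy as np
from sklearn.metrics import average_precision_score

def iou(pred_tokens: set, gold_tokens: set) -> float:
    """Intersection over union of predicted and human-annotated rationale tokens."""
    if not pred_tokens and not gold_tokens:
        return 1.0
    return len(pred_tokens & gold_tokens) / len(pred_tokens | gold_tokens)

# Hypothetical per-token importance scores from an explainability method, and a
# binary human annotation of which tokens justify the sentiment label.
token_scores = np.array([0.05, 0.80, 0.10, 0.95, 0.20, 0.70])
human_rationale = np.array([0, 1, 0, 1, 0, 0])

pred_rationale = {i for i, s in enumerate(token_scores) if s >= 0.5}
gold_rationale = {i for i, g in enumerate(human_rationale) if g == 1}

print("IOU:", iou(pred_rationale, gold_rationale))
# PR-AUC treats the task as ranking tokens by importance against the human labels.
print("PR-AUC:", average_precision_score(human_rationale, token_scores))
```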
