21

Interpretation, Verification and Privacy Techniques for Improving the Trustworthiness of Neural Networks

Dethise, Arnaud 22 March 2023 (has links)
Neural Networks are powerful tools used in Machine Learning to solve complex problems across many domains, including biological classification, self-driving cars, and automated management of distributed systems. However, practitioners' trust in Neural Network models is limited by their inability to answer important questions about their behavior, such as whether they will perform correctly or if they can be entrusted with private data. One major issue with Neural Networks is their "black-box" nature, which makes it challenging to inspect the trained parameters or to understand the learned function. To address this issue, this thesis proposes several new ways to increase the trustworthiness of Neural Network models. The first approach focuses specifically on Piecewise Linear Neural Networks, a popular flavor of Neural Networks used to tackle many practical problems. The thesis explores several different techniques to extract the weights of trained networks efficiently and use them to verify and understand the behavior of the models. The second approach shows how strengthening the training algorithms can provide guarantees that are theoretically proven to hold even for the black-box model. The first part of the thesis identifies errors that can exist in trained Neural Networks, highlighting the importance of domain knowledge and the pitfalls to avoid with trained models. The second part aims to verify the outputs and decisions of the model by adapting the technique of Mixed Integer Linear Programming to efficiently explore the possible states of the Neural Network and verify properties of its outputs. The third part extends the Linear Programming technique to explain the behavior of a Piecewise Linear Neural Network by breaking it down into its linear components, generating model explanations that are both continuous on the input features and without approximations. Finally, the thesis addresses privacy concerns by using Trusted Execution and Differential Privacy during the training process. The techniques proposed in this thesis provide strong, theoretically provable guarantees about Neural Networks, despite their black-box nature, and enable practitioners to verify, extend, and protect the privacy of expert domain knowledge. By improving the trustworthiness of models, these techniques make Neural Networks more likely to be deployed in real-world applications.
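As an illustration of the verification idea sketched in this abstract, the following hedged example encodes a single ReLU unit as a mixed-integer program and bounds a toy network's worst-case output over an input box. The network, its weights, and the bound are invented for the example and are not the author's models or tooling; PuLP is used here only as a convenient solver interface.

```python
# Minimal sketch: MILP bound on the output of a tiny piecewise-linear (ReLU) network.
# Weights, bounds, and the input box [0, 1]^2 are illustrative assumptions.
from pulp import LpProblem, LpVariable, LpMaximize, LpBinary, value

w1, b1 = [2.0, -3.0], 0.5      # hidden pre-activation z = w1 . x + b1
w2, b2 = 1.5, -0.2             # output y = w2 * relu(z) + b2
L, U = -3.0 + 0.5, 2.0 + 0.5   # crude bounds on z for x in [0, 1]^2

prob = LpProblem("relu_output_bound", LpMaximize)
x = [LpVariable(f"x{i}", lowBound=0.0, upBound=1.0) for i in range(2)]
z = LpVariable("z", lowBound=L, upBound=U)   # pre-activation
h = LpVariable("h", lowBound=0.0)            # post-activation relu(z)
a = LpVariable("a", cat=LpBinary)            # ReLU phase indicator

prob += z == w1[0] * x[0] + w1[1] * x[1] + b1
# Big-M encoding of h = max(0, z); exact because the network is piecewise linear.
prob += h >= z
prob += h <= z - L * (1 - a)
prob += h <= U * a

y = w2 * h + b2
prob += y              # objective: worst-case output over the whole input box
prob.solve()

print("maximum output over the input box:", value(y))
# If this maximum stays below a safety threshold, the property holds for all
# inputs in the box, which sampling alone cannot guarantee.
```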
22

Using XAI Tools to Detect Harmful Bias in ML Models

Virtanen, Klaus January 2022 (has links)
In the past decade, machine learning (ML) models have become far more powerful and are increasingly being used in many important contexts. At the same time, ML models have become more complex and harder to understand on their own, which has spurred interest in explainable AI (XAI), a field concerned with ensuring that ML and other AI systems can be understood by human users and practitioners. One aspect of XAI is the development of "explainers", tools that take a more complex system (here: an ML model) and generate a simpler but sufficiently accurate model of this system — either globally or locally — to yield insight into the behaviour of the original system. As ML models have become more complex and prevalent, concerns that they may embody and perpetuate harmful social biases have also risen, with XAI being one proposed tool for bias detection. This paper investigates the ability of two explainers, LIME and SHAP, which explain the predictions of potentially more complex models by way of locally faithful linear models, to detect harmful social bias (here in the form of the influence of the racial makeup of a neighbourhood on property values) in a simple experiment involving two kinds of ML models, linear regression and an ensemble method, trained on the well-known Boston housing dataset. The results show that LIME and SHAP appear to be helpful in bias detection, while also revealing an instance where the explanations do not quite reflect the inner workings of the model yet still yield accurate insight into the predictions it makes.
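The experiment described above can be approximated, very loosely, with off-the-shelf tooling. The sketch below assumes a housing dataset in a file `housing.csv` with a placeholder column `race_feature` encoding neighbourhood racial makeup and a `price` target; it is not the paper's actual setup, only an indication of how SHAP and LIME attributions for a sensitive feature might be inspected.

```python
# Hedged sketch: probing a sensitive feature's influence with SHAP and LIME.
# File name, column names, and the model choice are placeholder assumptions.
import pandas as pd
import shap
from sklearn.ensemble import RandomForestRegressor
from lime.lime_tabular import LimeTabularExplainer

df = pd.read_csv("housing.csv")                   # placeholder path
X, y = df.drop(columns=["price"]), df["price"]    # placeholder target column
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# SHAP: additive attributions for tree ensembles.
shap_values = shap.TreeExplainer(model).shap_values(X)
sensitive_idx = X.columns.get_loc("race_feature")
print("mean |SHAP| of sensitive feature:",
      abs(shap_values[:, sensitive_idx]).mean())

# LIME: a locally faithful linear surrogate around one prediction.
lime_explainer = LimeTabularExplainer(
    X.values, feature_names=list(X.columns), mode="regression")
explanation = lime_explainer.explain_instance(X.values[0], model.predict,
                                              num_features=5)
print(explanation.as_list())
# A consistently large attribution on the sensitive feature is a red flag,
# though explanations can diverge from the model's true mechanics, so treat
# them as evidence rather than proof.
```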
23

CEFYDRA: Cluster-first Explainable FuzzY-based Deep Reorganizing Algorithm

Viana, Javier 23 August 2022 (has links)
No description available.
24

Under the Guise of Machine Neutrality : Machine Learning Uncertainty Exploration as Design Material to Identify Gender Bias in AI Systems

Veloso, Gelson January 2022 (has links)
Structural gendered inequality permeates intelligent systems, shaping everyday lives and reinforcing gender oppression. This study investigates how uncertainty, as an inherent characteristic of Machine Learning (ML) models, can be translated as a design material to highlight gender bias in Artificial Intelligence (AI) systems. It follows an HCI feminist methodology with a threefold horizon: the re-conceptualisation of the design space that considers human and non-human perspectives (Giaccardi & Redström, 2020); the exploration of ML uncertainty as design materiality (Benjamin et al., 2020) to underscore imbued gender inequality in intelligent systems; and the disputed relations of ML uncertainty as materiality with unpredictability in Explainable AI systems, more specifically Graspable AI (Ghajargar et al., 2021, 2022). As a critical exploratory process, the knowledge contribution is the development of a set of guidelines for the design of better and more equal ML systems.
25

FAMTILE: An Algorithm For Learning High-level Tactical Behavior From Observation

Stensrud, Brian 01 January 2005 (has links)
This research focuses on the learning of a class of behaviors defined as high-level behaviors. High-level behaviors are defined here as behaviors that can be executed using a sequence of identifiable behaviors. Represented by low-level contexts, these behaviors are known a priori to learning and can be modeled separately by a knowledge engineer. The learning task, which is achieved by observing an expert within simulation, then becomes the identification and representation of the low-level context sequence executed by the expert. To learn this sequence, this research proposes FAMTILE - the Fuzzy ARTMAP / Template-Based Interpretation Learning Engine. This algorithm attempts to achieve this learning task by constructing rules that govern the low-level context transitions made by the expert. By combining these rules with models for these low-level context behaviors, it is hypothesized that an intelligent model for the expert can be created that can adequately model his behavior. To evaluate FAMTILE, four testing scenarios were developed that attempt to achieve three distinct evaluation goals: assessing the learning capabilities of Fuzzy ARTMAP, evaluating the ability of FAMTILE to correctly predict expert actions and context choices given an observation, and creating a model of the expert's behavior that can perform the high-level task at a comparable level of proficiency.
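FAMTILE itself builds on Fuzzy ARTMAP; the sketch below deliberately swaps in a plain decision tree to keep the illustration short, showing only the shape of the learning task described above: predicting the expert's next low-level context from the current context and situation features. The observation log, features, and context names are invented.

```python
# Not FAMTILE: a simplified stand-in for learning low-level context transitions
# from observation. A decision tree replaces Fuzzy ARTMAP purely for brevity.
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical observation log: (situation features, current context) -> next context
observations = [
    # dist_to_enemy, fuel, current_context, next_context
    (0.9, 0.8, "patrol",  "patrol"),
    (0.2, 0.7, "patrol",  "attack"),
    (0.1, 0.2, "attack",  "retreat"),
    (0.8, 0.3, "retreat", "refuel"),
    (0.9, 0.9, "refuel",  "patrol"),
]

contexts = sorted({row[2] for row in observations} | {row[3] for row in observations})
ctx_id = {c: i for i, c in enumerate(contexts)}

X = [[d, f, ctx_id[cur]] for d, f, cur, _ in observations]
y = [ctx_id[nxt] for _, _, _, nxt in observations]

model = DecisionTreeClassifier(max_depth=3).fit(X, y)

# The learned transition rules are human-readable, which is one way to check that
# the model of the expert is plausible before plugging it into a simulation.
print(export_text(model,
      feature_names=["dist_to_enemy", "fuel", "current_context"]))
```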
26

Explainable AI by Training Introspection

Dastkarvelayati, Rozhin, Ghafourian, Soudabeh January 2023 (has links)
Deep Neural Networks (DNNs) are known as black box algorithms that lack transparency and interpretability for humans. eXplainable Artificial Intelligence (XAI) is introduced to tackle this problem. Most XAI methods are utilized post-training, providing explanations of the model to clarify its predictions and inner workings for human understanding. However, there is a shortage of methods that utilize XAI during training to not only observe the model's behavior but also exploit this information for the benefit of the model. In our approach, we propose a novel method that leverages XAI during the training process itself. Incorporating feedback from XAI can give us insights into important features of input data that impact model decisions. This work explores focusing more on specific features during training, which could potentially improve model performance introspectively throughout the training phase. We analyze the stability of feature explanations during training and find that the model's attention to specific features is consistent in the MNIST dataset. However, unimportant features lack stability. The OCTMNIST dataset, on the other hand, has stable explanations for important features but less consistent explanations for less significant features. Based on this observation, two types of masks, namely fixed and dynamic, are applied to the model's structure using XAI's feedback with minimal human intervention. These masks separate the more important features from the less important ones and set the pixels associated with less significant features to zero. The fixed mask is generated based on XAI feedback after the model is fully trained, and then it is applied to the output of the first convolutional layer of a new model (with the same architecture), which is trained from scratch. On the other hand, the dynamic mask is generated based on XAI feedback during training, and it is applied to the model while the model is still training. As a result, these masks change across epochs. Examining these two methods on both deep and shallow models, we find that both masking methods, particularly the fixed one, reduce the focus of all models on the least important parts of the input data. This results in improved accuracy and reduced loss in all models. As a result, this approach enhances the model's interpretability and performance by incorporating XAI into the training process.
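A rough sketch of the fixed-mask variant is given below, under stated assumptions: plain input-gradient saliency stands in for the XAI feedback, shapes are MNIST-like, and the 25th-percentile threshold is arbitrary. The thesis's actual models and masking details are not reproduced here.

```python
# Sketch of a fixed mask derived from a trained model and applied to the first
# conv layer of a second model trained from scratch. Saliency stands in for XAI
# feedback; architecture, shapes, and threshold are assumptions.
import torch
import torch.nn as nn

def make_model():
    return nn.Sequential(
        nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
        nn.Flatten(), nn.Linear(8 * 28 * 28, 10))

trained = make_model()          # assume this one has already been trained
images = torch.rand(64, 1, 28, 28, requires_grad=True)   # stand-in batch

# "XAI feedback": average input-gradient magnitude as a crude importance map.
trained(images).sum().backward()
importance = images.grad.abs().mean(dim=0, keepdim=True)        # (1, 1, 28, 28)
fixed_mask = (importance > importance.quantile(0.25)).float()   # keep top 75%

# New model trained from scratch, with the mask applied after its first conv layer.
fresh = make_model()
fresh[0].register_forward_hook(lambda module, inp, out: out * fixed_mask)

logits = fresh(torch.rand(4, 1, 28, 28))   # masked forward pass; training loop omitted
print(logits.shape)
```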
27

Enhancement of an Ad Reviewal Process through Interpretable Anomaly Detecting Machine Learning Models / Förbättring av en annonsgranskingsprocess genom tolkbara och avvikelsedetekterande maskinsinlärningsmodeller

Dahlgren, Eric January 2022 (has links)
Technological advancements made in recent decades in the fields of artificial intelligence (AI) and machine learning (ML) have led to further automation of tasks previously performed by humans. Manually reviewing and assessing content uploaded to social media and marketplace platforms is one such task that is both tedious and expensive to perform, and could possibly be automated through ML based systems. When introducing ML model predictions to a human decision making process, interpretability and explainability of models have been shown to be important factors for humans to trust individual sample predictions. This thesis project aims to explore the performance of interpretable ML models used together with humans in an ad review process for a rental marketplace platform. Utilizing the XGBoost framework and SHAP for interpretable ML, a system was built with the ability to score an individual ad and explain the prediction with human readable sentences based on feature importance. The model reached an ROC AUC score of 0.90 and an Average Precision score of 0.64 on a held out test set. An end user survey was conducted which indicated some trust in the model and an appreciation for the local prediction explanations, but low general impact and helpfulness. While most related work focuses on model performance, this thesis contributes a smaller model usability study which can provide grounds for utilizing interpretable ML software in any manual decision making process.
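The pipeline described above might look roughly like the following sketch, which pairs an XGBoost classifier with SHAP and turns the top attributions into plain-language sentences. The file name, target column, feature names, and wording are placeholders rather than the thesis's actual features or phrasing.

```python
# Illustrative ad-scoring sketch: XGBoost + SHAP with human-readable explanations.
# Data loading, column names, and hyperparameters are placeholder assumptions.
import pandas as pd
import shap
from xgboost import XGBClassifier

df = pd.read_csv("ads.csv")                          # placeholder path
X, y = df.drop(columns=["is_anomalous"]), df["is_anomalous"]
model = XGBClassifier(n_estimators=300, max_depth=4).fit(X, y)

explainer = shap.TreeExplainer(model)

def explain_ad(row: pd.Series, top_k: int = 3) -> list[str]:
    """Return plain-language sentences for the top contributing features of one ad."""
    contrib = explainer.shap_values(row.to_frame().T)[0]
    ranked = sorted(zip(X.columns, contrib), key=lambda fc: abs(fc[1]), reverse=True)
    return [
        f"'{feat}' = {row[feat]!r} {'raised' if c > 0 else 'lowered'} "
        f"the anomaly score (impact {c:+.3f})"
        for feat, c in ranked[:top_k]
    ]

score = model.predict_proba(X.iloc[[0]])[0, 1]
print(f"anomaly score: {score:.2f}")
for sentence in explain_ad(X.iloc[0]):
    print(" -", sentence)
```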
28

Explainable Artificial Intelligence for Image Segmentation and for Estimation of Optical Aberrations

Vinogradova, Kira 18 December 2023 (has links)
State-of-the-art machine learning methods such as convolutional neural networks (CNNs) are frequently employed in computer vision. Despite their high performance on unseen data, CNNs are often criticized for lacking transparency — that is, providing very limited if any information about the internal decision-making process. In some applications, especially in healthcare, such transparency of algorithms is crucial for end users, as trust in diagnosis and prognosis is important not only for the satisfaction and potential adherence of patients, but also for their health. Explainable artificial intelligence (XAI) aims to open up this “black box,” often perceived as a cryptic and inconceivable algorithm, to increase understanding of the machines’ reasoning. XAI is an emerging field, and techniques for making machine learning explainable are becoming increasingly available. XAI for computer vision mainly focuses on image classification, whereas interpretability in other tasks remains challenging. Here, I examine explainability in computer vision beyond image classification, namely in semantic segmentation and 3D multitarget image regression. This thesis consists of five chapters. In Chapter 1 (Introduction), the background of artificial intelligence (AI), XAI, computer vision, and optics is presented, and the definitions of the terminology for XAI are proposed. Chapter 2 is focused on explaining the predictions of U-Net, a CNN commonly used for semantic image segmentation, and variations of this architecture. To this end, I propose the gradient-weighted class activation mapping for segmentation (Seg-Grad-CAM) method based on the well-known Grad-CAM method for explainable image classification. In Chapter 3, I present the application of deep learning to estimation of optical aberrations in microscopy biodata by identifying the present Zernike aberration modes and their amplitudes. A CNN-based approach PhaseNet can accurately estimate monochromatic aberrations in images of point light sources. I extend this method to objects of complex shapes. In Chapter 4, an approach for explainable 3D multitarget image regression is reported. First, I visualize how the model differentiates the aberration modes using the local interpretable model-agnostic explanations (LIME) method adapted for 3D image classification. Then I “explain,” using LIME modified for multitarget 3D image regression (Image-Reg-LIME), the outputs of the regression model for estimation of the amplitudes. In Chapter 5, the results are discussed in a broader context. The contribution of this thesis is the development of explainability methods for semantic segmentation and 3D multitarget image regression of optical aberrations. The research opens the door for further enhancement of AI’s transparency.
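The Seg-Grad-CAM idea from Chapter 2 can be conveyed with a short, unofficial sketch: Grad-CAM's channel weighting, but with the score defined as the sum of a class's logits over a chosen pixel region rather than a single classification logit. The tiny fully convolutional model and the region of interest below are assumptions, not the author's implementation.

```python
# Unofficial Seg-Grad-CAM-style sketch for a toy fully convolutional segmenter.
# Model, region of interest, and class count are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(                       # stand-in for a U-Net-like segmenter
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 5, 1))                     # 5-class per-pixel logits

features = {}
model[2].register_forward_hook(lambda m, i, o: features.update(maps=o))

def seg_grad_cam(image, target_class, roi_mask):
    """Heatmap of evidence for `target_class` inside the pixel region `roi_mask`."""
    logits = model(image)                                    # (1, C, H, W)
    score = (logits[:, target_class] * roi_mask).sum()       # sum of logits over ROI
    grads = torch.autograd.grad(score, features["maps"])[0]  # d score / d feature maps
    weights = grads.mean(dim=(2, 3), keepdim=True)           # global-average pooling
    cam = F.relu((weights * features["maps"]).sum(dim=1, keepdim=True))
    return F.interpolate(cam, size=image.shape[-2:], mode="bilinear",
                         align_corners=False)

image = torch.rand(1, 3, 64, 64)
roi = torch.zeros(1, 64, 64)
roi[:, 16:48, 16:48] = 1                     # explain this patch only
heatmap = seg_grad_cam(image, target_class=2, roi_mask=roi)
print(heatmap.shape)                         # (1, 1, 64, 64), ready to overlay
```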
29

User Preference-Based Evaluation of Counterfactual Explanation Methods

Akram, Muhammad Zain January 2023 (has links)
Explainable AI (XAI) has grown as an important field over the years. As more complicated AI systems are utilised in decision-making situations, the necessity for explanations for such systems is also increasing in order to ensure transparency and stakeholder trust. This study focuses on a specific type of explanation method, namely counterfactual explanations. Counterfactual explanations provide feedback that outlines what changes should be made to the input to reach a different outcome. This study expands on a previous dissertation in which a proof-of-concept tool was created for comparing several counterfactual explanation methods. This thesis investigates the properties of counterfactual explanation methods along with some appropriate metrics. The identified metrics are then used to evaluate and compare the desirable properties of the counterfactual approaches. The proof-of-concept tool is extended with a properties-metrics mapping module, and a user preference-based system is developed, allowing users to evaluate different counterfactual approaches depending on their preferences. This addition to the proof-of-concept tool is a critical step in providing field researchers with a standardised benchmarking tool.
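As a reminder of what is being evaluated, the toy sketch below produces a counterfactual by greedily nudging one feature at a time until a classifier's prediction flips, then reports two of the properties such methods are commonly scored on (proximity and sparsity). The greedy search, classifier, and data are placeholders, not any of the methods compared in the thesis or its benchmarking tool.

```python
# Toy counterfactual search plus two common evaluation metrics.
# Data, model, and the naive search strategy are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                        # placeholder features
y = (X[:, 0] + X[:, 1] > 0).astype(int)              # placeholder labels
clf = LogisticRegression().fit(X, y)

def greedy_counterfactual(x, step=0.1, max_steps=100):
    """Nudge one feature at a time until the predicted class flips."""
    original_class = clf.predict([x])[0]
    for feature in range(x.size):
        for direction in (+1, -1):
            candidate = x.copy()
            for _ in range(max_steps):
                candidate[feature] += direction * step
                if clf.predict([candidate])[0] != original_class:
                    return candidate
    return None

x = X[0]
cf = greedy_counterfactual(x)
if cf is not None:
    proximity = np.linalg.norm(cf - x)               # how far did we move?
    sparsity = int((~np.isclose(cf, x)).sum())       # how many features changed?
    print(f"flip achieved by changing {sparsity} feature(s), L2 distance {proximity:.2f}")
# Different users weight such properties differently, which is what a
# preference-based evaluation of counterfactual methods has to account for.
```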
30

Explainable Intrusion Detection Systems using white box techniques

Ables, Jesse 08 December 2023 (has links) (PDF)
Artificial Intelligence (AI) has found increasing application in various domains, revolutionizing problem-solving and data analysis. However, in decision-sensitive areas like Intrusion Detection Systems (IDS), trust and reliability are vital, posing challenges for traditional black box AI systems. These black box IDS, while accurate, lack transparency, making it difficult to understand the reasons behind their decisions. This dissertation explores the concept of eXplainable Intrusion Detection Systems (X-IDS), addressing the issue of trust in X-IDS. It examines the limitations of common black box IDS and the complexities of explainability methods, leading to the fundamental question of trusting explanations generated by black box explainer modules. To address these challenges, this dissertation presents the concept of white box explanations, which are innately explainable. While white box algorithms are typically simpler and more interpretable, they often sacrifice accuracy. However, this work utilizes white box Competitive Learning (CL), which can achieve competitive accuracy in comparison to black box IDS. We introduce Rule Extraction (RE) as another white box technique that can be applied to explain black box IDS. It involves training decision trees on the inputs, weights, and outputs of black box models, resulting in human-readable rulesets that serve as global model explanations. These white box techniques offer the benefits of accuracy and trustworthiness, which are challenging to achieve simultaneously. This work aims to address gaps in the existing literature, including the need for highly accurate white box IDS, a methodology for understanding explanations, small testing datasets, and comparisons between white box and black box models. To achieve these goals, the study employs CL and eclectic RE algorithms. CL models offer innate explainability and high accuracy in IDS applications, while eclectic RE enhances trustworthiness. The contributions of this dissertation include a novel X-IDS architecture featuring Self-Organizing Map (SOM) models that adhere to DARPA’s guidelines for explainable systems, an extended X-IDS architecture incorporating three CL-based algorithms, and a hybrid X-IDS architecture combining a Deep Neural Network (DNN) predictor with a white box eclectic RE explainer. These architectures create more explainable, trustworthy, and accurate X-IDS systems, paving the way for enhanced AI solutions in decision-sensitive domains.
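The rule-extraction step can be illustrated with a small surrogate-model sketch: fit a decision tree to a black-box classifier's own predictions so its branches form a global, human-readable ruleset. The dissertation's eclectic approach additionally draws on the network's weights; the simplified version below uses only inputs and outputs, and the MLP, features, and labels are placeholders rather than real IDS data or architectures.

```python
# Simplified surrogate rule extraction: a decision tree learns the black box's
# predictions and yields a readable ruleset. Data, features, and model are placeholders.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
X = rng.random((1000, 2))                              # e.g. scaled flow features
y = ((X[:, 0] > 0.7) & (X[:, 1] < 0.3)).astype(int)    # toy "attack" label

black_box = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500).fit(X, y)

# The surrogate learns from the black box's outputs, not from the ground truth.
surrogate = DecisionTreeClassifier(max_depth=3).fit(X, black_box.predict(X))

fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"surrogate fidelity to the black box: {fidelity:.2%}")
print(export_text(surrogate, feature_names=["bytes_per_sec", "conn_duration"]))
```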
