11

Learning acyclic probabilistic logic programs from data. / Aprendizado de programas lógico-probabilísticos acíclicos.

Francisco Henrique Otte Vieira de Faria 12 December 2017 (has links)
To learn a probabilistic logic program is to find a set of probabilistic rules that best fits some data, in order to explain how attributes relate to one another and to predict the occurrence of new instantiations of these attributes. In this work, we focus on acyclic programs, because in this case the meaning of the program is quite transparent and easy to grasp. We propose that the learning process for an acyclic probabilistic logic program should be guided by a scoring function imported from the literature on Bayesian network learning. We suggest novel parameter-learning techniques that lead to orders-of-magnitude improvements over the current state of the art represented by the ProbLog package. In addition, we present novel techniques for learning the structure of acyclic probabilistic logic programs.
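As a rough illustration of the objects being learned (not the thesis's own programs or data), the sketch below writes a tiny acyclic probabilistic logic program and evaluates a query with the ProbLog Python package; the facts, rules, and probabilities are invented for the example.

```python
# A minimal acyclic probabilistic logic program, evaluated with the
# ProbLog package (pip install problog). The rules and probabilities
# below are illustrative only, not taken from the thesis.
from problog.program import PrologString
from problog import get_evaluatable

model = PrologString("""
0.3::burglary.
0.1::earthquake.
% Acyclic structure: alarm depends on burglary/earthquake, never the reverse.
0.9::alarm :- burglary.
0.4::alarm :- earthquake.
query(alarm).
""")

# Compile the program and compute the marginal probability of each query.
result = get_evaluatable().create_from(model).evaluate()
for query, probability in result.items():
    print(query, probability)
```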
12

Foundations of Human-Aware Planning -- A Tale of Three Models

January 2018 (has links)
A critical challenge in the design of AI systems that operate with humans in the loop is to be able to model the intentions and capabilities of the humans, as well as their beliefs and expectations of the AI system itself. This allows the AI system to be "human-aware" -- i.e. the human task model enables it to envisage desired roles of the human in joint action, while the human mental model allows it to anticipate how its own actions are perceived from the point of view of the human. In my research, I explore how these concepts of human-awareness manifest themselves in the scope of planning or sequential decision making with humans in the loop. To this end, I will show (1) how the AI agent can leverage the human task model to generate symbiotic behavior; and (2) how the introduction of the human mental model in the deliberative process of the AI agent allows it to generate explanations for a plan or resort to explicable plans when explanations are not desired. The latter is in addition to traditional notions of human-aware planning, which typically use the human task model alone, and thus enables a new suite of capabilities for a human-aware AI agent. Finally, I will explore how the AI agent can leverage emerging mixed-reality interfaces to realize effective channels of communication with the human in the loop. / Dissertation/Thesis / Doctoral Dissertation Computer Science 2018
13

Human Understandable Interpretation of Deep Neural Networks Decisions Using Generative Models

Alabdallah, Abdallah January 2019 (has links)
Deep Neural Networks (DNNs) have long been considered black box systems, and their lack of interpretability is a concern when they are applied in safety-critical systems. In this work, a novel approach to interpreting the decisions of DNNs is proposed. The approach exploits generative models and the interpretability of their latent space. Three methods for ranking features are explored: two depend on sensitivity analysis, and the third depends on a Random Forest model. The Random Forest model was the most successful at ranking the features, given its accuracy and inherent interpretability.
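As a hedged sketch of the third ranking method mentioned above, the snippet below fits a Random Forest to synthetic stand-in latent features and reads off its impurity-based importances; the data, dimensionality, and labels are invented and do not come from the thesis.

```python
# Rank latent features by Random Forest importance -- a generic sketch of
# the third method mentioned in the abstract, run on synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_samples, n_latent = 500, 8
Z = rng.normal(size=(n_samples, n_latent))         # stand-in latent codes
y = (Z[:, 0] + 0.5 * Z[:, 3] > 0).astype(int)      # labels driven by dims 0 and 3

forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(Z, y)

# Higher importance -> the latent dimension mattered more for the decision.
ranking = np.argsort(forest.feature_importances_)[::-1]
for i in ranking:
    print(f"latent dim {i}: importance {forest.feature_importances_[i]:.3f}")
```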
14

Explainable Neural Networks based Anomaly Detection for Cyber-Physical Systems

Amarasinghe, Kasun 01 January 2019 (has links)
Cyber-Physical Systems (CPSs) are the core of modern critical infrastructure (e.g., power grids) and securing them is of paramount importance. Anomaly detection in data is crucial for CPS security. While Artificial Neural Networks (ANNs) are strong candidates for the task, they are seldom deployed in safety-critical domains due to the perception that ANNs are black boxes. Therefore, to leverage ANNs in CPSs, cracking open the black box through explanation is essential. The main objective of this dissertation is to develop explainable ANN-based Anomaly Detection Systems for Cyber-Physical Systems (CP-ADS). The main objective was broken down into three sub-objectives: 1) identifying key requirements that an explainable CP-ADS should satisfy, 2) developing supervised ANN-based explainable CP-ADSs, 3) developing unsupervised ANN-based explainable CP-ADSs. In achieving those objectives, this dissertation provides the following contributions: 1) a set of key requirements that an explainable CP-ADS should satisfy, 2) a methodology for deriving summaries of the knowledge of a trained supervised CP-ADS, 3) a methodology for validating derived summaries, 4) an unsupervised neural network methodology for learning cyber-physical (CP) behavior, 5) a methodology for visually and linguistically explaining the learned CP behavior. All the methods were implemented on real-world and benchmark datasets. The set of key requirements presented in the first contribution was used to evaluate the performance of the presented methods. The successes and limitations of the presented methods were identified. Furthermore, steps that can be taken to overcome the limitations were proposed. Therefore, this dissertation takes several necessary steps toward developing explainable ANN-based CP-ADSs and serves as a framework that can be expanded to develop trustworthy ANN-based CP-ADSs.
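The unsupervised side of such an anomaly detector is commonly built on reconstruction error; the sketch below illustrates that general idea on synthetic sensor readings with a small autoencoder-style network, and is not the dissertation's specific methodology.

```python
# Generic reconstruction-error anomaly detection for sensor data -- an
# illustration of the unsupervised idea only, not the dissertation's method.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
normal = rng.normal(0, 1, size=(1000, 10))      # synthetic "normal" CPS readings
anomalous = rng.normal(4, 1, size=(20, 10))     # synthetic faults/attacks

# Train the network to reproduce its own input (autoencoder-style bottleneck).
ae = MLPRegressor(hidden_layer_sizes=(4,), max_iter=2000, random_state=0)
ae.fit(normal, normal)

def score(x):
    """Mean squared reconstruction error; larger means more anomalous."""
    return np.mean((ae.predict(x) - x) ** 2, axis=1)

threshold = np.percentile(score(normal), 99)     # calibrate on normal data only
print("flagged anomalies:", np.sum(score(anomalous) > threshold), "of", len(anomalous))
```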
15

Comparing Human Reasoning and Explainable AI

Helgstrand, Carl Johan, Hultin, Niklas January 2022 (has links)
Explainable AI (XAI) is a research field dedicated to formulating ways of breaching the black box nature of many of today’s machine learning models. As society finds new ways of applying these models in everyday life, certain risk thresholds are crossed when human decision making is replaced with autonomous systems. How can we trust the algorithms to make sound judgements when all we provide is input and all they provide is an output? XAI methods examine different data points in the machine learning process to determine what factors influenced the decision. While these post-hoc explanation methods may provide certain insights, previous studies into XAI have found that their designs are often biased towards the designers and do not incorporate the interdisciplinary perspectives needed to improve user understanding. In this thesis, we look at animal classification and at which features in animal images humans find important. We use a novel approach of first letting participants create their own post-hoc explanations before asking them to evaluate real XAI explanations as well as a pre-made human explanation generated from a test group. The results show strong cohesion in the participants' answers and can provide guidelines for designing XAI explanations that are more closely related to human reasoning. The data also indicates a preference for human-like explanations within the context of this study. Additionally, a potential bias was identified: participants preferred explanations marking large portions of an image as important, even when many of the marked areas coincided with what the participants themselves considered to be unimportant. While the sample pool and data-gathering tools are limiting, the results point toward a need for additional research into comparisons of human reasoning and XAI explanations and into how such comparisons may affect the evaluation of, and bias towards, explanation methods.
16

Global Translation of Machine Learning Models to Interpretable Models

Almerri, Mohammad 12 1900 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / The widespread and growing usage of machine learning models, especially in highly critical areas such as law, predicates the need for interpretable models. Models that cannot be audited are vulnerable to inheriting biases from the dataset, and even locally interpretable models are vulnerable to adversarial attack. To address this issue, a new methodology is proposed to translate any existing machine learning model into a globally interpretable one. This methodology, MTRE-PAN, is designed as a hybrid SVM-decision tree model and leverages the interpretability of linear hyperplanes. MTRE-PAN uses this hybrid model to create polygons that act as intermediates for the decision boundary. MTRE-PAN is compared to a previously proposed model, TRE-PAN, on three non-synthetic datasets: Abalone, Census, and Diabetes. TRE-PAN translates a machine learning model into a 2-3 decision tree in order to provide global interpretability for the target model. Each dataset is used to train a Neural Network that represents the non-interpretable target model. For all target models, the results show that MTRE-PAN generates interpretable decision trees that have fewer leaves and higher parity compared to TRE-PAN.
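Stripped of MTRE-PAN's specifics, the underlying "global translation" idea is to fit an interpretable surrogate to the predictions of a black-box model over its input space. The sketch below does this with a small neural network and a shallow decision tree on synthetic data; it illustrates the generic surrogate approach, not MTRE-PAN or TRE-PAN themselves.

```python
# Global surrogate sketch: approximate a black-box model with a decision
# tree trained on the black box's own predictions. Illustrative only.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, size=(2000, 2))
y = ((X[:, 0] ** 2 + X[:, 1] ** 2) < 0.5).astype(int)   # synthetic task

black_box = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=1000,
                          random_state=0).fit(X, y)

# Label a dense probe sample with the black box, then fit an interpretable
# tree to mimic its decision boundary globally.
X_probe = rng.uniform(-1, 1, size=(20000, 2))
surrogate = DecisionTreeClassifier(max_depth=4, random_state=0)
surrogate.fit(X_probe, black_box.predict(X_probe))

fidelity = np.mean(surrogate.predict(X_probe) == black_box.predict(X_probe))
print(f"surrogate fidelity to black box: {fidelity:.3f}")
print(export_text(surrogate, feature_names=["x0", "x1"]))
```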
17

Pruning GHSOM to create an explainable intrusion detection system

Kirby, Thomas Michael 12 May 2023 (has links) (PDF)
Intrusion Detection Systems (IDS) that provide high detection rates but are black boxes lead to models that make predictions a security analyst cannot understand. Self-Organizing Maps (SOMs) have been used to predict intrusion to a network, while also explaining predictions through visualization and identifying significant features. However, they have not been able to compete with the detection rates of black box models. Growing Hierarchical Self-Organizing Maps (GHSOMs) have been used to obtain high detection rates on the NSL-KDD and CIC-IDS-2017 network traffic datasets, but they neglect creating explanations or visualizations, which results in another black box model. This paper offers a high-accuracy, Explainable Artificial Intelligence (XAI) approach based on GHSOMs. One obstacle to creating a white box hierarchical model is the model growing too large and complex to understand. Another contribution this paper makes is a pruning method used to cut down on the size of the GHSOM, which provides a model that can provide insights and explanation while maintaining a high detection rate.
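For orientation, the sketch below trains a minimal flat Self-Organizing Map from scratch on synthetic traffic features and flags records that lie far from their best-matching unit. It illustrates only the basic SOM idea; the GHSOM growth procedure and the pruning method contributed by the thesis are not reproduced here.

```python
# Minimal flat SOM on synthetic "traffic features" -- basic SOM idea only;
# the thesis's GHSOM growth and pruning are not shown here.
import numpy as np

rng = np.random.default_rng(3)
normal = rng.normal(0, 1, size=(1000, 5))       # synthetic benign records
attacks = rng.normal(5, 1, size=(30, 5))        # synthetic attack records

grid_h, grid_w, dim = 6, 6, normal.shape[1]
weights = rng.normal(0, 1, size=(grid_h, grid_w, dim))
coords = np.stack(np.meshgrid(np.arange(grid_h), np.arange(grid_w),
                              indexing="ij"), axis=-1)

def bmu(x):
    """Grid coordinates of the best-matching unit for sample x."""
    d = np.linalg.norm(weights - x, axis=-1)
    return np.unravel_index(np.argmin(d), d.shape)

for t in range(3000):                            # stochastic training
    x = normal[rng.integers(len(normal))]
    lr = 0.5 * (1 - t / 3000)                    # decaying learning rate
    sigma = 2.0 * (1 - t / 3000) + 0.5           # decaying neighborhood radius
    b = np.array(bmu(x))
    # Pull the BMU and its grid neighbours toward the sample.
    neigh = np.exp(-np.sum((coords - b) ** 2, axis=-1) / (2 * sigma ** 2))
    weights += lr * neigh[..., None] * (x - weights)

def quantization_error(x):
    return np.linalg.norm(weights[bmu(x)] - x)

threshold = np.percentile([quantization_error(x) for x in normal], 99)
flagged = sum(quantization_error(x) > threshold for x in attacks)
print("attacks flagged:", flagged, "of", len(attacks))
```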
18

Estimating Brain Maturation in Very Preterm Neonates : An Explainable Machine Learning Approach / Estimering av hjärnmognad i mycket prematura spädbarn : En ansats att tillämpa förklarbar maskininlärning

Svensson, Patrik January 2023 (has links)
Introduction: Assessing brain maturation in preterm neonates is essential for the health of the neonates. Machine learning methods have been introduced as a prospective assessment tool for neonatal electroencephalogram (EEG) signals. Explainable methods are essential in the medical field, and more research on explainability is needed when machine learning is used for neonatal EEG analysis. Methodology: This thesis develops an explainable machine learning model that estimates postmenstrual age in very preterm neonates from EEG signals and investigates the importance of the features used in the model. Dual-channel EEG signals were collected from 14 healthy preterm neonates of postmenstrual age spanning 25 to 32 weeks. The signals were converted to amplitude-integrated EEG (aEEG) and a list of features was extracted from the signals. A regression tree model was developed and the feature importance of the model was assessed using permutation importance and Shapley additive explanations. Results: The model had an RMSE of 1.73 weeks (R2=0.45, PCC=0.676). The best feature was the mean amplitude of the lower envelope of the signal, followed by signal time spent over 100 µV. Conclusion: The model performs comparably to human experts, and as it can be improved in multiple ways, this result indicates a promising outlook for explainable machine learning model applications in neonatal EEG analysis.
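A hedged sketch of the modelling step described above: a regression tree predicting postmenstrual age from synthetic aEEG-style features, explained with permutation importance. The feature names, data, and target relationship are invented and are not the thesis's recordings or results.

```python
# Regression tree + permutation importance on synthetic aEEG-style features.
# Data and feature names are invented for illustration only.
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(4)
n = 300
features = {
    "lower_envelope_mean_uV": rng.uniform(3, 15, n),
    "time_over_100uV_frac":   rng.uniform(0.0, 0.3, n),
    "bandwidth_uV":           rng.uniform(10, 40, n),
}
X = np.column_stack(list(features.values()))
# Synthetic postmenstrual age (weeks), loosely driven by the first feature.
y = 25 + 0.5 * features["lower_envelope_mean_uV"] + rng.normal(0, 1, n)

tree = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X, y)

# Permutation importance: how much does shuffling a feature hurt the fit?
imp = permutation_importance(tree, X, y, n_repeats=20, random_state=0)
for name, score in zip(features, imp.importances_mean):
    print(f"{name}: {score:.3f}")
```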
19

What do you mean? : The consequences of different stakeholders’ logics in machine learning and how disciplinary differences should be managed within an organization

Eliasson, Nina January 2022 (has links)
This research paper identifies the disciplinary differences between stakeholders and their effects on cross-functional work in the context of machine learning. The study specifically focused on 1) how stakeholders with different disciplinary backgrounds interpret a search system, and 2) how these multiple disciplines should be managed in an organization. This was studied through 12 interviews with stakeholders from design disciplines, product management, data science, and machine learning engineering, followed by a focus group with a participant from each of the different disciplines. The findings were analyzed through thematic analysis and the lens of institutional logics, and the study concluded that the different logics had a high impact on the stakeholders' understanding of the search system. The research also concluded that bridging the gap between the multi-disciplinary stakeholders is of high importance in the context of machine learning.
20

The Contribution of Visual Explanations in Forensic Investigations of Deepfake Video : An Evaluation

Fjellström, Lisa January 2021 (has links)
Videos manipulated by machine learning have rapidly increased online in the past years. So-called deepfakes can depict people who never participated in a video recording by transposing their faces onto others in it. This raises concerns about the authenticity of media, which demands higher-performing detection methods in forensics. The introduction of AI detectors has been of interest, but is held back today by their lack of interpretability. The objective of this thesis was therefore to examine what the explainable AI method local interpretable model-agnostic explanations (LIME) could contribute to forensic investigations of deepfake video. An evaluation was conducted in which three multimedia forensics experts assessed the contribution of visual explanations of classifications when investigating deepfake video frames. The estimated contribution was not significant, yet answers showed that LIME may be used to indicate areas at which to start examining a frame. LIME was, however, not considered to provide sufficient proof of why a frame was classified as 'fake', and would, if introduced, be used as one of several methods in the process. Issues were apparent regarding the interpretability of the explanations, as well as LIME's ability to indicate features of manipulation with superpixels.
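For readers unfamiliar with the method, the sketch below shows how LIME is typically applied to a single frame using the lime package with a placeholder classifier; the classifier, frame, and parameter values are stand-ins and do not reproduce the thesis's detector or evaluation setup.

```python
# Sketch of LIME on a single video frame, with a stand-in classifier.
# Requires the `lime` package; the "detector" here is a dummy function.
import numpy as np
from lime import lime_image

def fake_detector(images):
    """Placeholder deepfake detector: returns [p_real, p_fake] per image.
    The score just reacts to mean brightness, purely for illustration."""
    p_fake = images.mean(axis=(1, 2, 3)) / 255.0
    return np.column_stack([1 - p_fake, p_fake])

frame = np.random.default_rng(5).integers(0, 256, size=(224, 224, 3)).astype(np.uint8)

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(frame, fake_detector,
                                         top_labels=2, num_samples=500)

# Superpixels that pushed the prediction toward the top label.
image, mask = explanation.get_image_and_mask(explanation.top_labels[0],
                                             positive_only=True,
                                             num_features=5, hide_rest=False)
print("highlighted superpixels:", np.unique(mask).size - 1)
```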
