21. Explainable Multimodal Fusion. Alvi, Jaweriah (January 2021)
Recently, there has been a lot of interest in explainable predictions, with new explainability approaches being created for specific data modalities such as images and text. However, there is a dearth of understanding and minimal exploration of explainability in the multimodal machine learning domain, where diverse data modalities are fused together in the model. In this thesis project, we look into two multimodal model architectures, namely single-stream and dual-stream, for the Visual Entailment (VE) task, which comprises image and text modalities. The models considered in this project are UNiversal Image-TExt Representation Learning (UNITER), Visual-Linguistic BERT (VLBERT), Vision-and-Language BERT (ViLBERT), and Learning Cross-Modality Encoder Representations from Transformers (LXMERT). Furthermore, we conduct three different experiments for multimodal explainability by applying the Local Interpretable Model-agnostic Explanations (LIME) technique. Our results show that UNITER has the best accuracy among these models for the VE problem; however, the explainability of all these models is similar.
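To make the LIME experiments above concrete, the following is a minimal sketch of how the LimeTextExplainer from the `lime` package can be pointed at the text input of an image-text entailment model. The `ve_model` wrapper, its `predict` method, and the class names are hypothetical placeholders; only the `lime` calls reflect the actual library interface.

```python
# Minimal sketch: applying LIME to the text input of an image-text entailment model.
# `ve_model` and its predict method are hypothetical stand-ins for one of the
# fused models (e.g. UNITER); only the `lime` calls reflect the real library API.
import numpy as np
from lime.lime_text import LimeTextExplainer

CLASS_NAMES = ["contradiction", "neutral", "entailment"]

def make_classifier_fn(ve_model, image):
    """Return a function mapping perturbed hypothesis texts to class probabilities."""
    def classifier_fn(texts):
        # ve_model.predict is assumed to return a length-3 probability vector for
        # the fixed premise image paired with each perturbed hypothesis.
        return np.asarray([ve_model.predict(image, t) for t in texts])
    return classifier_fn

def explain_hypothesis(ve_model, image, hypothesis):
    explainer = LimeTextExplainer(class_names=CLASS_NAMES)
    explanation = explainer.explain_instance(
        hypothesis,
        make_classifier_fn(ve_model, image),
        num_features=6,      # report the six most influential words
        num_samples=1000,    # number of perturbed samples LIME draws
    )
    return explanation.as_list()  # [(word, weight), ...]
```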
22. Interpretable Early Classification of Multivariate Time Series. Ghalwash, Mohamed (January 2013)
Recent advances in technology have led to an explosion in data collected over time rather than in a single snapshot. For example, microarray technology allows us to measure gene expression levels in different conditions over time. Such temporal data gives data miners the opportunity to develop algorithms that address domain-related problems; for example, a time series of several different classes can be created by observing various patient attributes over time, and the task is to classify an unseen patient based on their temporal observations. In time-sensitive applications such as medicine, certain aspects have to be considered besides providing accurate classification. The first aspect is providing early classification. Accurate and timely diagnosis is essential for allowing physicians to design appropriate therapeutic strategies at early stages of diseases, when therapies are usually the most effective and the least costly. We propose a probabilistic hybrid method that allows for early, accurate, and patient-specific classification of multivariate time series and that, by training on full time series, offers classification at a very early time point during the diagnosis phase, while staying competitive in terms of accuracy with models that use full time series in both training and testing. The method attained very promising results and outperformed the baseline models on a dataset of response to drug therapy in multiple sclerosis patients and on a sepsis therapy dataset. Although accurate classification is the primary goal of a data mining task, in medical applications it is important that decisions are not only accurate and obtained early, but can also be easily interpreted, which is the second aspect of medical applications. Physicians tend to prefer interpretable methods over black-box methods. For that purpose, we propose interpretable methods for early classification that extract interpretable patterns from the raw time series to help physicians provide an early diagnosis and to give them insight into, and confidence in, the classification results. The proposed methods have been shown to be more accurate and to provide classifications earlier than three alternative state-of-the-art methods when evaluated on human viral infection datasets and a larger myocardial infarction dataset. The third aspect that has to be considered in medical applications is the need for predictions to be accompanied by a measure that allows physicians to judge the uncertainty, or belief, in the prediction. Knowing the uncertainty associated with a given prediction is especially important in clinical diagnosis, where data mining methods assist clinical experts in making decisions and optimizing therapy. We propose an effective method to provide uncertainty estimates for the proposed interpretable early classification methods. The method was evaluated on four challenging medical applications by characterizing the decrease in prediction uncertainty over time. We showed that our proposed method meets the requirements of uncertainty estimates: the proposed uncertainty measure takes values in the range [0, 1] and propagates over time. We believe this thesis strengthens the link between the data mining community and medical domain experts and gives physicians sufficient confidence to put the proposed methods into real practice. / Computer and Information Science
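As a rough illustration of the early-classification idea described above (not the thesis's probabilistic hybrid method), the sketch below classifies a multivariate time series from growing prefixes and stops as soon as a crude nearest-neighbour confidence estimate crosses a threshold. All function names and thresholds are hypothetical.

```python
# Minimal sketch of confidence-thresholded early classification of a
# multivariate time series. Generic illustration only; not the thesis's method.
import numpy as np

def prefix_distance(prefix, train_series):
    """Euclidean distance between a prefix and the same-length prefix of a training series."""
    t = prefix.shape[0]
    return np.linalg.norm(prefix - train_series[:t])

def classify_early(stream, train_X, train_y, confidence_threshold=0.8, min_len=5):
    """Observe the stream one time step at a time and emit a label as soon as
    the soft nearest-neighbour vote exceeds the confidence threshold."""
    classes = np.unique(train_y)
    for t in range(min_len, stream.shape[0] + 1):
        prefix = stream[:t]
        dists = np.array([prefix_distance(prefix, x) for x in train_X])
        weights = 1.0 / (dists + 1e-8)            # closer series vote more strongly
        scores = np.array([weights[train_y == c].sum() for c in classes])
        probs = scores / scores.sum()             # crude class-probability estimate
        if probs.max() >= confidence_threshold:
            return classes[probs.argmax()], t, probs.max()
    # fall back to the full-length decision if the threshold is never reached
    return classes[probs.argmax()], stream.shape[0], probs.max()
```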
23. A Machine Learning-Based Heuristic to Explain Game-Theoretic Models. Baswapuram, Avinashh Kumar (17 July 2024)
This paper introduces a novel methodology that integrates Machine Learning (ML), Operations Research (OR), and Game Theory (GT) to develop an interpretable heuristic for principal-agent models (PAM). We extract solution patterns from ensemble tree models trained on solved instances of a PAM. Using these patterns, we develop a hierarchical tree-based approach that forms an interpretable ML-based heuristic to solve the PAM. This method ensures the interpretability, feasibility, and generalizability of ML predictions for game-theoretic models. The predicted solutions from this ensemble model-based heuristic are consistently high quality and feasible, significantly reducing computational time compared to traditional optimization methods for solving PAMs. Specifically, the computational results demonstrate the generalizability of the ensemble heuristic across varying problem sizes, achieving high prediction accuracy with optimality gaps of 1-2% and significant improvements in solution times. Our ensemble model-based heuristic, on average, requires only 4.5 of the 9 input features to explain its predictions effectively for a particular application. Therefore, our ensemble heuristic enhances the interpretability of game-theoretic optimization solutions, simplifying explanations and making them accessible to those without expertise in ML or OR. Our methodology adds to the approaches for interpreting ML predictions while also improving the numerical tractability of PAMs, thereby enhancing policy design and operational decisions and advancing real-time decision support, where understanding and justifying decisions is crucial. / Master of Science / This paper introduces a new method that combines Machine Learning (ML) with Operations Research (OR) to create a clear and understandable approach for solving a principal-agent model (PAM). We use patterns from a group of decision trees to form an ML-based strategy for predicting solutions that greatly reduces the time to solve the problem compared to traditional optimization techniques. Our approach works well for different problem sizes, maintaining high accuracy with very small differences in objective function value from the best possible solutions (1-2%). The solutions predicted are consistently high quality and practical, significantly reducing the time needed compared to traditional optimization methods. Remarkably, our heuristic typically uses only 4.5 of the 9 input features to explain its predictions, making it much simpler and more interpretable than other methods. The results show that our method is both efficient and effective, with faster solution times and better accuracy. Our method can make complex game-theoretic optimization solutions more understandable, even for those without expertise in ML or OR. By improving interpretability and making PAMs analytically explainable, our approach supports better policy design and operational decision-making, advancing real-time decision support where clarity and justification of decisions are essential.
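A minimal sketch of the general recipe, learning from solved instances with a tree ensemble and then distilling it into a small interpretable surrogate, is given below. The synthetic features and target are placeholders for solved principal-agent instances; this is not the thesis's specific hierarchical heuristic.

```python
# Minimal sketch of learning a heuristic from solved optimization instances and
# distilling it into a small, interpretable tree. Feature/target arrays are
# hypothetical placeholders for solved principal-agent model instances.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.tree import DecisionTreeRegressor, export_text

rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 9))            # 9 instance parameters per solved PAM instance
y = X[:, 0] * 2.0 + X[:, 3] - X[:, 7]     # stand-in for the optimal decision variable

# 1. Fit an accurate but opaque ensemble on the solved instances.
ensemble = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# 2. Distill the ensemble into a shallow surrogate tree (the interpretable heuristic).
surrogate = DecisionTreeRegressor(max_depth=3, random_state=0)
surrogate.fit(X, ensemble.predict(X))

# 3. Inspect the handful of features the heuristic actually uses.
print(export_text(surrogate, feature_names=[f"theta_{i}" for i in range(9)]))
```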
24. Interpreting embedding models of knowledge bases. Gusmão, Arthur Colombini (26 November 2018)
Knowledge bases are employed in a variety of applications, from natural language processing to semantic web search; alas, in practice, their usefulness is hurt by their incompleteness. To address this issue, several techniques aim at performing knowledge base completion, among which embedding models are efficient, attain state-of-the-art accuracy, and eliminate the need for feature engineering. However, embedding model predictions are notoriously hard to interpret. In this work, we propose model-agnostic methods that allow one to interpret embedding models by extracting weighted Horn rules from them. More specifically, we show how the so-called "pedagogical techniques" from the neural network literature can be adapted to take into account the large-scale, relational aspects of knowledge bases, and we show experimentally their strengths and weaknesses.
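The sketch below illustrates one way a pedagogical extraction could look in practice: the embedding model labels entity pairs, and a sparse linear surrogate fit on rule-body features yields weighted rules. The `embedding_score` and `body_holds` callables and the candidate rule bodies are hypothetical; only the scikit-learn calls are real.

```python
# Minimal sketch of a "pedagogical" extraction of weighted rules from a trained
# embedding model: the embedding model labels entity pairs, and a sparse linear
# surrogate is fit on rule-body features to mimic it. Generic recipe only.
import numpy as np
from sklearn.linear_model import LogisticRegression

def extract_weighted_rules(pairs, head_relation, candidate_bodies,
                           embedding_score, body_holds, threshold=0.5):
    """pairs: list of (subject, object) entity pairs.
    candidate_bodies: names of candidate Horn-rule bodies, e.g. relation paths.
    embedding_score(s, r, o): the trained model's confidence for triple (s, r, o).
    body_holds(body, s, o): whether the rule body is satisfied in the knowledge base."""
    # Binary feature matrix: which candidate bodies hold for each pair.
    X = np.array([[float(body_holds(b, s, o)) for b in candidate_bodies]
                  for s, o in pairs])
    # Labels come from the embedding model, not from the ground truth.
    y = np.array([int(embedding_score(s, head_relation, o) >= threshold)
                  for s, o in pairs])
    surrogate = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
    surrogate.fit(X, y)
    # Positive coefficients read as weighted rules: body => head_relation.
    return sorted(
        [(b, w) for b, w in zip(candidate_bodies, surrogate.coef_[0]) if w > 0],
        key=lambda bw: -bw[1],
    )
```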
26. A Fuzzy Software Prototype For Spatial Phenomena: Case Study Precipitation Distribution. Yanar, Tahsin Alp (1 October 2010)
As the complexity of a spatial phenomenon increases, traditional modeling becomes impractical. Alternatively, data-driven modeling, which is based on the analysis of data characterizing the phenomenon, can be used. In this thesis, the generation of understandable and reliable spatial models from observational data is addressed, and an interpretability-oriented, data-driven fuzzy modeling approach is proposed. The methodology is based on the construction of fuzzy models from data, tuning, and fuzzy model simplification. Mamdani-type fuzzy models with triangular membership functions are considered. Fuzzy models are constructed using fuzzy clustering algorithms, and the simulated annealing metaheuristic is adapted for the tuning step. To obtain compact and interpretable fuzzy models, a simplification methodology is proposed, which reduces the number of fuzzy sets for each variable and simplifies the rule base. Prototype software is developed, and mean annual precipitation data of Turkey is examined as a case study to assess the results of the approach in terms of both precision and interpretability. In the first step of the approach, in which fuzzy models are constructed from data, the "Fuzzy Clustering and Data Analysis Toolbox", developed for use with MATLAB, is used. For the other steps, namely the optimization of the fuzzy models obtained from data using the adapted simulated annealing algorithm and the generation of compact and interpretable fuzzy models by the simplification algorithm, the developed prototype software is used. If accuracy is the primary objective, the proposed approach can produce more accurate solutions for training data than the geographically weighted regression method: the minimum training error value produced by the proposed approach is 74.82 mm, while the error obtained by geographically weighted regression is 106.78 mm. The minimum error value on test data is 202.93 mm. An understandable fuzzy model for annual precipitation is generated with only 12 membership functions and 8 fuzzy rules. Furthermore, more interpretable fuzzy models are obtained when the Gath-Geva fuzzy clustering algorithm is used during fuzzy model construction.
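For readers unfamiliar with Mamdani inference, the following is a minimal sketch of a Mamdani-style rule evaluation with triangular membership functions and centroid defuzzification. The two toy rules relating elevation and distance to sea to precipitation are invented for illustration; the thesis derives its rules from fuzzy clustering instead.

```python
# Minimal sketch of Mamdani-style fuzzy inference with triangular membership
# functions. The two toy rules and their parameters are hypothetical.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-12),
                                 (c - x) / (c - b + 1e-12)), 0.0)

def predict_precipitation(elevation, dist_to_sea):
    out = np.linspace(200.0, 1200.0, 501)          # candidate precipitation values (mm)
    # Rule 1: IF elevation is low AND distance-to-sea is small THEN precipitation is high.
    w1 = min(tri(elevation, 0, 0, 800), tri(dist_to_sea, 0, 0, 60))
    # Rule 2: IF elevation is high AND distance-to-sea is large THEN precipitation is low.
    w2 = min(tri(elevation, 500, 1500, 2500), tri(dist_to_sea, 40, 200, 400))
    # Clip each rule's output fuzzy set by its firing strength (Mamdani implication).
    high = np.minimum(tri(out, 700, 1000, 1300), w1)
    low = np.minimum(tri(out, 200, 400, 700), w2)
    agg = np.maximum(high, low)                    # aggregate rule outputs
    if agg.sum() == 0:
        return float(out.mean())                   # no rule fired; fall back to the midpoint
    return float((out * agg).sum() / agg.sum())    # centroid defuzzification

print(predict_precipitation(elevation=100.0, dist_to_sea=10.0))
```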
27. Interactive Object Retrieval using Interpretable Visual Models. Rebai, Ahmed (18 May 2011)
This thesis is an attempt to improve visual object retrieval by allowing users to interact with the system. Our solution lies in constructing an interactive system that allows users to define their own visual concept from a concise set of visual patches given as input. These patches, which represent the most informative clues of a given visual category, are trained beforehand with a supervised learning algorithm in a discriminative manner. Then, in order to specialize their models, users have the possibility to give feedback on the model itself by choosing and weighting the patches they are confident in. The real challenge lies in generating concise and visually interpretable models. Our contribution rests on two points. First, in contrast to the state-of-the-art approaches that use bag-of-words, we propose embedding local visual features without any quantization, which means that each component of the high-dimensional feature vectors used to describe an image is associated with a unique and precisely localized image patch. Second, we suggest using regularization constraints in the loss function of our classifier to favor sparsity in the models produced. Sparsity is indeed preferable for concision (a reduced number of patches in the model) as well as for decreasing prediction time. To meet these objectives, we developed a multiple-instance learning scheme using a modified version of the BLasso algorithm. BLasso is a boosting-like procedure that behaves in the same way as Lasso (Least Absolute Shrinkage and Selection Operator); it efficiently regularizes the loss function with an additive L1-constraint by alternating between forward and backward steps at each iteration. The method we propose here is generic in the sense that it can be used with any local features or feature sets representing the content of an image region.
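The following is a simplified sketch of a BLasso-style procedure on a plain linear model with squared loss, alternating greedy forward steps with L1-penalized backward steps. It is schematic only and omits the multiple-instance learning setting used in the thesis.

```python
# Simplified sketch of a BLasso-style procedure on a linear model with squared
# loss: forward steps greedily grow one coefficient by a small step size, and
# backward steps shrink a coefficient whenever that lowers the L1-penalized loss.
import numpy as np

def squared_loss(X, y, beta):
    r = y - X @ beta
    return 0.5 * float(r @ r) / len(y)

def blasso(X, y, step=0.01, xi=1e-6, n_iter=500):
    n, d = X.shape
    beta = np.zeros(d)
    lam = np.inf
    for _ in range(n_iter):
        # Candidate forward steps: +/- step on every coordinate.
        cand = []
        for j in range(d):
            for s in (+1.0, -1.0):
                b = beta.copy(); b[j] += s * step
                cand.append((squared_loss(X, y, b), b))
        fwd_loss, fwd_beta = min(cand, key=lambda c: c[0])
        lam = min(lam, (squared_loss(X, y, beta) - fwd_loss) / step)
        # Candidate backward steps: shrink a nonzero coordinate toward zero.
        back = []
        for j in np.flatnonzero(beta):
            b = beta.copy(); b[j] -= np.sign(b[j]) * step
            back.append((squared_loss(X, y, b) + lam * np.abs(b).sum(), b))
        penalized = squared_loss(X, y, beta) + lam * np.abs(beta).sum()
        if back and min(back, key=lambda c: c[0])[0] < penalized - xi:
            beta = min(back, key=lambda c: c[0])[1]   # backward step
        else:
            beta = fwd_beta                            # forward step
    return beta
```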
28. Relational Approach to Universal Algebra. Opršal, Jakub (January 2016)
Title: Relational Approach to Universal Algebra. Author: Jakub Opršal. Department: Department of Algebra. Supervisor: doc. Libor Barto, Ph.D., Department of Algebra. Abstract: We give some descriptions of certain algebraic properties using relations and relational structures. In the first part, we focus on Neumann's lattice of interpretability types of varieties. First, we prove a characterization of varieties defined by linear identities, and we prove that some conditions cannot be characterized by linear identities. Next, we provide a partial result on Taylor's modularity conjecture, and we discuss several related problems. Namely, we show that the interpretability join of two idempotent varieties that are not congruence modular is not congruence modular either, and the analogue for idempotent varieties with a cube term. In the second part, we give a relational description of higher commutator operators, which were introduced by Bulatov, in varieties with a Mal'cev term. Furthermore, we use this result to prove that for every algebra with a Mal'cev term there exists a largest clone containing the Mal'cev operation and having the same congruence lattice and the same higher commutator operators as the original algebra, and to describe an explicit (though infinite) set of identities describing supernilpotence...
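For reference, the standard definition of a Mal'cev term and an example of linear identities (textbook material, not quoted from the thesis) can be written as:

```latex
% A ternary term $m$ is a Mal'cev term if it satisfies
\begin{align*}
  m(x, y, y) &\approx x, & m(y, y, x) &\approx x.
\end{align*}
% Both identities are linear: no operation symbol occurs nested inside another
% on either side, which is the shape of identity discussed in the first part.
```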
29. Interpretable Superhuman Machine Learning Systems: An explorative study focusing on interpretability and detecting Unknown Knowns using GAN. Hermansson, Adam; Generalao, Stefan (January 2020)
In a future where predictions and decisions made by machine learning systems outperform humans, we need the systems to be interpretable in order for us to trust and understand them. Our study explores the realm of interpretable machine learning through designing and examining artifacts. We conduct experiments to explore explainability, interpretability, and the technical challenges of creating machine learning models that identify objects that appear similar but are unique. Lastly, we conduct a user test to evaluate current state-of-the-art visual explanatory tools in a human setting. From these insights, we discuss the potential future of this field.
30. Multivariate analysis of the parameters in a handwritten digit recognition LSTM system. Zervakis, Georgios (January 2019)
Throughout this project, we perform a multivariate analysis of the parameters of a long short-term memory (LSTM) system for handwritten digit recognition in order to understand the model's behaviour. In particular, we are interested in explaining how this behaviour arises from its parameters, and what in the network is responsible for the model arriving at a certain decision. This problem is often referred to as the interpretability problem, and falls under the scope of Explainable AI (XAI). The motivation is to make AI systems more transparent, so that we can establish trust between humans and these systems. For this purpose, we make use of the MNIST dataset, which has been successfully used in the past for tackling the digit recognition problem. Moreover, the balance and simplicity of the data make it an appropriate dataset for carrying out this research. We start by investigating the linear output layer of the LSTM, which is directly associated with the model's predictions. The analysis includes several experiments in which we apply various methods from linear algebra, such as principal component analysis (PCA) and singular value decomposition (SVD), to interpret the parameters of the network. For example, we experiment with different setups of low-rank approximations of the output weight matrix in order to see the importance of each singular vector for each digit class. We found that when the fifth left and right singular vectors are cut off, the model practically loses its ability to predict eights. Finally, we present a framework for analysing the parameters of the hidden layer, along with our implementation of an LSTM-based variational autoencoder that serves this purpose.
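The singular-component probe described above can be sketched in a few lines of NumPy: decompose the output weight matrix, zero one singular component, and compare the resulting class logits. The weight matrix and hidden state below are random placeholders rather than the trained LSTM's parameters.

```python
# Minimal sketch of the SVD-based probe: remove one singular component from the
# output weight matrix and observe how the class logits change. All arrays here
# are random placeholders, not the trained LSTM's actual parameters.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(128, 10))       # hidden-to-output weights (hidden_dim x n_classes)
b = rng.normal(size=10)              # output biases
h = rng.normal(size=128)             # a hidden state produced for some input digit

U, s, Vt = np.linalg.svd(W, full_matrices=False)   # W = U @ diag(s) @ Vt

def logits_without_component(k):
    """Class logits after zeroing the k-th singular component of W."""
    s_mod = s.copy()
    s_mod[k] = 0.0
    W_mod = U @ np.diag(s_mod) @ Vt
    return h @ W_mod + b

baseline = h @ W + b
for k in range(5):
    delta = np.abs(logits_without_component(k) - baseline)
    print(f"component {k}: most affected class = {delta.argmax()}, max |change| = {delta.max():.3f}")
```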