1

Evolutionary Optimization of Decision Trees for Interpretable Reinforcement Learning

Custode, Leonardo Lucio 27 April 2023 (has links)
While Artificial Intelligence (AI) is making giant steps, it is also raising concerns about its trustworthiness, because the widely used black-box models cannot be fully understood by humans. One way to improve humans’ trust in AI is to use interpretable AI models, i.e., models that can be thoroughly understood by humans and thus trusted. However, interpretable AI models are rarely used in practice, as they are thought to perform worse than black-box models. This is especially evident in Reinforcement Learning, where relatively little work addresses the problem of performing Reinforcement Learning with interpretable models. In this thesis, we address this gap, proposing methods for Interpretable Reinforcement Learning. For this purpose, we optimize Decision Trees by combining Reinforcement Learning with Evolutionary Computation techniques, which allows us to overcome some of the challenges tied to optimizing Decision Trees in Reinforcement Learning scenarios. The experimental results show that these approaches are competitive with state-of-the-art scores while being far easier to interpret. Finally, we show the practical importance of Interpretable AI by digging into the inner workings of the solutions obtained.
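A minimal sketch of the general recipe the abstract describes: evolve an interpretable tree-structured policy by scoring candidates on episodic return. The toy task, the depth-1 tree encoding, and the (mu + lambda) evolution loop are illustrative assumptions, not the thesis's actual algorithms or benchmarks.

```python
import random

# Toy episodic task: an agent on a line should stay near the origin.
# A policy is a depth-1 decision tree: "if obs < threshold -> left action else right action".
ACTIONS = [-1, +1]

def episode_return(policy, steps=20):
    """Run one episode and return the accumulated reward (negative distance to origin)."""
    threshold, a_lt, a_ge = policy
    pos, total = random.uniform(-5, 5), 0.0
    for _ in range(steps):
        action = ACTIONS[a_lt] if pos < threshold else ACTIONS[a_ge]
        pos += action
        total += -abs(pos)                  # reward: stay close to the origin
    return total

def fitness(policy, episodes=20):
    return sum(episode_return(policy) for _ in range(episodes)) / episodes

def mutate(policy, sigma=0.5):
    threshold, a_lt, a_ge = policy
    threshold += random.gauss(0, sigma)                    # perturb the split value
    if random.random() < 0.1: a_lt = random.randrange(2)   # occasionally flip a leaf action
    if random.random() < 0.1: a_ge = random.randrange(2)
    return (threshold, a_lt, a_ge)

# Simple (mu + lambda) evolution loop over decision-tree policies.
population = [(random.uniform(-5, 5), random.randrange(2), random.randrange(2))
              for _ in range(20)]
for generation in range(30):
    parents = sorted(population, key=fitness, reverse=True)[:5]
    population = parents + [mutate(random.choice(parents)) for _ in range(15)]

best = max(population, key=fitness)
print("best tree: if obs < %.2f go %+d else %+d" % (best[0], ACTIONS[best[1]], ACTIONS[best[2]]))
```

The printed policy reads as a single if-then-else rule, which is the interpretability payoff this line of work targets.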
2

Interpretable Machine Learning in Alzheimer’s Disease Dementia

Kadem, Mason January 2023 (has links)
Alzheimer’s disease (AD) is among the top 10 causes of global mortality, and dementia imposes a yearly $1 trillion USD economic burden. Of particular importance, women and minoritized groups are disproportionately affected by AD, with females having a higher risk of developing AD than male cohorts. Differentiating stable mild cognitive impairment (MCI-Stable) from MCI that progresses to early-stage Alzheimer’s disease (MCI-AD) is vital worldwide. Despite genetic markers such as apolipoprotein E (APOE), identifying patients before they develop early stages of MCI-AD, a critical period for possible pharmaceutical intervention, is not yet possible. Based on a review of the literature, three key limitations in existing AD-specific prediction models are apparent: 1) models developed with traditional statistics overlook nonlinear relationships and complex interactions between features, 2) machine learning models are based on difficult-to-acquire, occasionally invasive, manually selected, and costly data, and 3) machine learning models often lack interpretability. Rapid, accurate, low-cost, easily accessible, non-invasive, interpretable and early clinical evaluation of AD is critical if an intervention is to have any hope of success. To support healthcare decision-making and planning, and potentially reduce the burden of AD, this research leverages the Alzheimer’s Disease Neuroimaging Initiative (ADNI1/GO/2/3) database and a mathematical modelling approach based on supervised machine learning to identify 1) predictive markers of AD and 2) patients at the highest risk of AD. Specifically, we implemented a supervised XGBoost classifier with diagnostic (Exp 1) and prognostic (Exp 2) objectives. In Experiment 1 (n=441), classification of AD (n=72) was performed against healthy controls (n=369), while Experiment 2 (n=738) involved classification of MCI-Stable (n=444) compared to MCI-AD (n=294). In Experiment 1, machine learning tools identified three features (Everyday Cognition Questionnaire (study partner) - Total, Alzheimer’s Disease Assessment Scale (13 items), and Delayed Total Recall) with ROC AUC scores consistently above 97%. Low performance on delayed recall alone appears to distinguish most AD patients. This finding is consistent with the pathophysiology of AD, in which individuals have problems storing new information in long-term memory. In Experiment 2, the algorithm identified the major indicators of MCI-to-AD progression by integrating genetic, cognitive-assessment, demographic and brain-imaging features to achieve ROC AUC scores consistently above 87%. This speaks to the multi-faceted nature of MCI progression and the utility of comprehensive feature selection. These features are important because they are non-invasive and easily collected. As an important focus of this research, the interpretability of the ML models and their predictions was investigated. The interpretable models for both experiments matched the performance of their complex counterparts while improving interpretability, and they provide an intuitive explanation of the decision process, a vital step towards the clinical adoption of machine learning tools for AD evaluation. The models can reliably predict patient diagnosis (Exp 1) and prognosis (Exp 2). In summary, our work extends beyond the identification of high-risk factors for developing AD.
We identified accessible clinical features, together with clinically operable decision routes, to reliably and rapidly predict the patients at highest risk of developing Alzheimer’s disease. We addressed the aforementioned limitations by providing an intuitive explanation of the decision process over the non-invasive, accessible clinical features that drive a patient’s risk. / Thesis / Master of Science in Biomedical Engineering / Early identification of patients at the highest risk of Alzheimer’s disease (AD) is crucial for possible pharmaceutical intervention. Existing prediction models have limitations, including inaccessible data and lack of interpretability. This research used a machine learning approach to identify patients at the highest risk of Alzheimer’s disease and found that certain clinical features, such as specific executive function-related cognitive testing (i.e., task switching), combined with genetic predisposition, brain imaging, and demographics, were important contributors to AD risk. The models were able to reliably predict patient diagnosis and prognosis and were designed to be low-cost, non-invasive, clinically operable and easily accessible. The interpretable models provided an intuitive explanation of the decision process, making them a valuable tool for healthcare decision-making and planning.
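A rough sketch of the kind of pipeline described above: a supervised XGBoost classifier evaluated with ROC AUC, followed by a simple feature-importance readout. The synthetic data, the placeholder feature names, and the use of built-in gain importances (rather than any particular explanation method from the thesis) are assumptions for illustration only.

```python
import pandas as pd
import xgboost as xgb
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Placeholder stand-ins for ADNI-style tabular features (cognitive scores, demographics, ...).
feature_names = ["ecog_total", "adas13", "delayed_recall", "age", "apoe4"]
X, y = make_classification(n_samples=441, n_features=5, n_informative=3, random_state=0)
X = pd.DataFrame(X, columns=feature_names)

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

# Gradient-boosted tree classifier, as in the diagnostic experiment.
model = xgb.XGBClassifier(n_estimators=200, max_depth=3, learning_rate=0.1,
                          eval_metric="auc")
model.fit(X_train, y_train)

auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"ROC AUC: {auc:.3f}")

# A first interpretability readout: which features the trees rely on most.
for name, score in sorted(zip(feature_names, model.feature_importances_),
                          key=lambda t: -t[1]):
    print(f"{name:>15}: {score:.3f}")
```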
3

Inference and synthesis of temporal logic properties for autonomous systems

Aasi, Erfan 17 January 2024 (has links)
Recently, formal methods have gained significant traction for describing, checking, and synthesizing the behaviors of cyber-physical systems. Among these methods, temporal logics stand out as they offer concise mathematical formulas to express desired system properties. In this thesis, our focus revolves around two primary applications of temporal logics in describing the behavior of autonomous systems. The first involves integrating temporal logics with machine learning techniques to deduce a temporal logic specification from the system's execution traces. The second concerns using temporal logics to define traffic rules and developing a control scheme that guarantees compliance with these rules for autonomous vehicles. Ultimately, our objective is to combine these approaches, infer a specification that characterizes the desired behaviors of autonomous vehicles, and ensure that these behaviors are upheld at runtime. In the first study of this thesis, our focus is on learning Signal Temporal Logic (STL) specifications from system execution traces. Our approach involves two main phases. Initially, we address an offline supervised learning problem, leveraging the availability of system traces and their corresponding labels. Subsequently, we introduce a time-incremental learning framework. This framework is designed for a dataset containing labeled signal traces with a common time horizon, and it provides a method to predict the label of a signal as it is received incrementally over time. To tackle both problems, we propose two decision-tree-based approaches, with the aim of enhancing the interpretability and classification performance of existing methods. The simulation results demonstrate the efficiency of our proposed approaches. In the next study, we address the challenge of guaranteeing compliance with traffic rules expressed as STL specifications within the domain of autonomous driving. Our focus is on developing control frameworks for a fully autonomous vehicle operating in a deterministic or stochastic environment. Our frameworks effectively translate the traffic rules into high-level decisions and accomplish low-level vehicle control with good real-time performance. Compared to the existing literature, our approaches demonstrate significant enhancements in runtime performance. / 2025-01-17T00:00:00Z
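For readers unfamiliar with Signal Temporal Logic, the sketch below computes the quantitative (robustness) semantics of two simple temporal operators over a sampled trace, the kind of quantity an STL learner or rule-compliant controller reasons about. The operators shown, the speed trace, and the traffic-rule thresholds are illustrative assumptions, not formulas from the thesis.

```python
# Quantitative (robustness) semantics for two simple STL operators on a sampled trace.
# rho > 0 means the trace satisfies the formula; the thesis works with richer fragments.

def always(pred, trace, a, b):
    """Robustness of G_[a,b] pred: worst-case margin over the window."""
    return min(pred(x) for x in trace[a:b + 1])

def eventually(pred, trace, a, b):
    """Robustness of F_[a,b] pred: best-case margin over the window."""
    return max(pred(x) for x in trace[a:b + 1])

# Example: vehicle speed trace (m/s), rule "always within [0,10]: speed <= 15".
speed = [12.0, 13.5, 14.2, 14.8, 13.9, 12.5, 11.0, 10.2, 9.8, 9.5, 9.1]
speed_margin = lambda v: 15.0 - v          # margin to the speed limit

rho = always(speed_margin, speed, 0, 10)
print(f"robustness of 'always speed <= 15' on [0,10]: {rho:.2f}")   # positive => satisfied

# Rule "eventually within [0,10]: speed <= 10" (e.g., slow down before an intersection).
slow_margin = lambda v: 10.0 - v
print(f"robustness of 'eventually speed <= 10': {eventually(slow_margin, speed, 0, 10):.2f}")
```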
4

Towards Interpretable Vision Systems

Zhang, Peng 06 December 2017 (has links)
Artificial intelligence (AI) systems today are booming, and they are used to solve new tasks or to improve performance on existing ones. However, most AI systems work in a black-box fashion, which prevents users from accessing their inner modules. This leads to two major problems: (i) users have no idea when the underlying system will fail, so it can fail abruptly without any warning or explanation, and (ii) users' lack of understanding of the system can keep them from pushing AI progress to a new state of the art. In this work, we address these problems in the following directions. First, we develop a failure prediction system that acts as an input filter: it raises a flag when the system is likely to fail on the given input. Second, we develop a portfolio computer vision system that predicts which of the candidate computer vision systems will perform best on the input. Both systems have the benefit of only looking at the inputs without running the underlying vision systems, and both are applicable to any vision system. By equipping different applications with such systems, we confirm the improved performance. Finally, instead of identifying errors, we develop more interpretable AI systems, which reveal their inner modules directly. We take two tasks as examples: word semantic matching and Visual Question Answering (VQA). In VQA, we start with binary questions on abstract scenes and then extend to all question types on real images. In both cases, we treat attention as an important intermediate output: by explicitly forcing the systems to attend to the correct regions, we ensure correctness in the systems. For semantic matching, we build a neural network that directly learns the matching instead of relying on relational similarity between words. Across all the above directions, we show that by diagnosing errors and building more interpretable systems, we are able to improve the performance of current models. / Ph. D.
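A minimal sketch of the input-filter idea from the first direction: a lightweight classifier is trained to predict, from input features alone, whether the downstream vision system is likely to fail, and raises a flag above a chosen threshold. The synthetic features, the gradient-boosting model, and the 0.7 threshold are assumptions for illustration, not the thesis's actual failure predictor.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score

rng = np.random.default_rng(0)

# Stand-in "input quality" features per image (e.g., blur score, brightness, clutter).
n = 2000
features = rng.normal(size=(n, 3))
# Synthetic ground truth: the downstream system tends to fail on blurry, dark inputs.
failed = (0.8 * features[:, 0] - 0.6 * features[:, 1] + rng.normal(0.0, 0.5, n)) > 1.0

X_train, X_test, y_train, y_test = train_test_split(features, failed, random_state=0)

# The filter only ever sees the input, never the downstream system's output.
filter_model = GradientBoostingClassifier().fit(X_train, y_train)

# Raise a flag when the predicted failure probability exceeds a chosen threshold.
flags = filter_model.predict_proba(X_test)[:, 1] > 0.7
print("inputs flagged:", int(flags.sum()), "of", len(y_test))
print("precision of the flag:", round(precision_score(y_test, flags), 3))
```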
5

Explaining the output of a black box model and a white box model: an illustrative comparison

Joel, Viklund January 2020 (has links)
The thesis investigates how one should determine the appropriate transparency of an information processing system from a receiver perspective. Past research has suggested that a model should be maximally transparent for what are labeled "high-stakes decisions". Instead of motivating the choice of a model's transparency by the non-rigorous criterion that the model contributes to a high-stakes decision, this thesis explores an alternative method. The suggested method is to let the transparency depend on how well an explanation of the model's output satisfies the purpose of an explanation. As a result, we do not need to ask whether it is a high-stakes decision; instead, we should make sure the model is sufficiently transparent to provide an explanation that satisfies the expressed purpose of an explanation.
6

Understanding through games : Life Philosophies and Socratic Dialogue in an unusual Medium / Förståelse genom spel : Livsfilosofier och Sokratisk dialog i ett ovanligt medium

Levall, Michael, Boström, Carl January 2014 (has links)
Games as a medium are about to change, and with this change comes a search for themes outside the normal range of what is seen as acceptable in the medium. In this paper we, Michael Levall and Carl Boström, use debate and Socratic dialogue to portray the value of looking at a topic from several different angles, the topic of choice for this project being life philosophies. During production, we create a game that sets out to affect its player even after he or she has finished playing it, possibly teaching the player the value of looking at a problem from different perspectives. Playtests indicate that, in order to affect the player, the game should be tailored to the player's skill in interpreting games, and that interpretable design can be used to influence how strongly the game affects the player. / This work is the reflection component accompanying a digital media production.
7

Older Preschool Children's Negotiations in Play / Äldre förskolebarns förhandlingar i lek

Persson, Ulrika, Arvidsson, Emma January 2018 (has links)
The aim of this degree project is to develop a deeper understanding of how children aged 3-5 negotiate during free play. The study answers three research questions: Which negotiation strategies do the older preschool children use? What do children negotiate about during free play? What do the children's negotiations lead to? The study draws on theories of perspective-taking, coping strategies, turn-taking and interpretive reproduction, as well as current research on children's negotiations and negotiation strategies, to interpret the different negotiation strategies. By observing the children on a number of occasions during free play, we collected data that allowed us to analyse the children's negotiation situations at different times. In the role of observers, we chose to remain passive and not influence the children's negotiations. During the observations, the children were aware that we were studying their play. The results indicate that the older preschool children primarily use verbal negotiation strategies. Not responding to a request was, however, a non-verbal strategy that many children also turned out to use, most often in attempts to get their own way. If the children could not agree on which activity the play should involve, they often switched to another activity as a compromise. The children most often negotiated about play scenarios, play objects or place, where place refers to where children or play objects should be positioned in the play. Few negotiations did not touch on these three aspects, which may be because we only observed the children during their free play.
8

Anchor-based Topic Modeling with Human Interpretable Results / Tolkningsbara ämnesmodeller baserade på ankarord

Andersson, Henrik January 2020 (has links)
Topic models are useful tools for exploring large data sets of textual content by exposing a generative process from which the text was produced. Anchor-based topic models use the anchor-word assumption to define a set of algorithms with provable guarantees that recover the underlying topics with a run time practically independent of corpus size. A number of extensions to the initial anchor-word-based algorithms, and enhancements made to tangential models, have been proposed which improve the intrinsic characteristics of the models, making them more interpretable to humans. This thesis evaluates improvements to human interpretability due to: low-dimensional word embeddings in combination with a regularized objective function, automatic topic merging using tandem anchors, and utilizing word embeddings to synthetically increase corpus density. Results show that tandem anchors are viable vehicles for automatic topic merging, and that using word embeddings significantly improves the original anchor method across all measured metrics. Combining low-dimensional embeddings and a regularized objective results in computational downsides with little or no improvement to the measured metrics.
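A stripped-down sketch of the anchor-word recovery idea, including tandem anchors formed by averaging the co-occurrence rows of several words; it omits the Bayes-rule inversion back to topic-word distributions and all of the guarantees of the full algorithm. The tiny synthetic corpus, the vocabulary, and the anchor sets are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)

# Tiny synthetic word-word co-occurrence matrix Q (vocabulary of 8 "words", 2 latent topics).
vocab = ["goal", "match", "league", "team", "stock", "market", "price", "share"]
topics = np.array([[0.30, 0.25, 0.20, 0.15, 0.04, 0.03, 0.02, 0.01],   # sports topic
                   [0.01, 0.02, 0.03, 0.04, 0.20, 0.25, 0.25, 0.20]])  # finance topic
docs = rng.dirichlet([0.2, 0.2], size=500) @ topics             # word distribution per doc
Q = (docs.T @ docs) / len(docs)                                  # empirical co-occurrence
Q_bar = Q / Q.sum(axis=1, keepdims=True)                         # row-normalize: p(w2 | w1)

# Tandem anchors: each topic is anchored by a *set* of words whose rows are averaged.
anchor_sets = [["goal", "league"], ["market", "price"]]
anchor_rows = np.array([Q_bar[[vocab.index(w) for w in ws]].mean(axis=0)
                        for ws in anchor_sets])

# Recovery step: write every word's row as a nonnegative combination of the anchor rows;
# the normalized coefficients approximate p(topic | word).
for i, word in enumerate(vocab):
    coef, _ = nnls(anchor_rows.T, Q_bar[i])
    coef = coef / coef.sum()
    print(f"{word:>7}: p(sports|w)={coef[0]:.2f}  p(finance|w)={coef[1]:.2f}")
```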
9

The Contribution of Visual Explanations in Forensic Investigations of Deepfake Video : An Evaluation

Fjellström, Lisa January 2021 (has links)
Videos manipulated by machine learning have rapidly increased online in the past years. So-called deepfakes can depict people who never participated in a video recording by transposing their faces onto others in it. This raises concerns about the authenticity of media, which demands higher-performing detection methods in forensics. The introduction of AI detectors has been of interest, but is held back today by their lack of interpretability. The objective of this thesis was therefore to examine what the explainable AI method local interpretable model-agnostic explanations (LIME) could contribute to forensic investigations of deepfake video. An evaluation was conducted in which three multimedia forensics experts assessed the contribution of visual explanations of classifications when investigating deepfake video frames. The estimated contribution was not significant, yet the answers showed that LIME may be used to indicate areas in which to start examining. LIME was, however, not considered to provide sufficient proof of why a frame was classified as 'fake', and, if introduced, would be used as one of several methods in the process. Issues were apparent regarding the interpretability of the explanations, as well as LIME's ability to indicate features of manipulation with superpixels.
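A sketch of how LIME's image explainer can be pointed at a frame-level detector, producing the superpixel highlights discussed above. The classifier here is a dummy stand-in (a real study would plug in the trained deepfake detector), and the frame, sample count, and number of highlighted features are arbitrary choices.

```python
import numpy as np
from lime import lime_image
from skimage.segmentation import mark_boundaries

def classifier_fn(images):
    """Stand-in for a deepfake detector: returns [p(real), p(fake)] per frame.
    A real study would call the trained detector here."""
    scores = images[..., 0].mean(axis=(1, 2)) / 255.0   # dummy score from the red channel
    return np.stack([1.0 - scores, scores], axis=1)

frame = np.random.randint(0, 256, size=(224, 224, 3), dtype=np.uint8)  # placeholder frame

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(frame, classifier_fn,
                                         top_labels=2, num_samples=1000)

# Highlight the superpixels that pushed the prediction toward the top class.
img, mask = explanation.get_image_and_mask(explanation.top_labels[0],
                                           positive_only=True, num_features=5,
                                           hide_rest=False)
overlay = mark_boundaries(img / 255.0, mask)
print("overlay shape:", overlay.shape)   # visualize with matplotlib in practice
```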
10

TOWARD ROBUST AND INTERPRETABLE GRAPH AND IMAGE REPRESENTATION LEARNING

Juan Shu (14816524) 27 April 2023 (has links)
Although deep learning models continue to gain momentum, their robustness and interpretability have always been a big concern because of the complexity of such models. In this dissertation, we studied several topics on the robustness and interpretability of convolutional neural networks (CNNs) and graph neural networks (GNNs). We first identified the structural problem of deep convolutional neural networks that leads to adversarial examples, and defined DNN uncertainty regions. We also argued that the generalization error, the large-sample theoretical guarantee established for DNNs, cannot adequately capture the phenomenon of adversarial examples. Secondly, we studied dropout in GNNs, which is an effective regularization approach to prevent overfitting. Unlike CNNs, GNNs usually have a shallow structure, because a deep GNN normally sees performance degradation. We studied different dropout schemes and established a connection between dropout and over-smoothing in GNNs. We therefore developed layer-wise compensation dropout, which allows GNNs to go deeper without suffering performance degradation. We also developed a heteroscedastic dropout that effectively deals with a large number of missing node features due to heavy experimental noise or privacy issues. Lastly, we studied the interpretability of graph neural networks. We developed a self-interpretable GNN structure that denoises useless edges or features, leading to a more efficient message-passing process. The GNN prediction and explanation accuracy were boosted compared with baseline models.
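As an illustration of the kind of dropout scheme the dissertation studies, the sketch below applies DropEdge-style edge dropout before each propagation step of a plain NumPy GCN-style forward pass. The toy ring graph, the two-layer network, and the drop rate are assumptions; this is not the layer-wise compensation or heteroscedastic dropout developed in the dissertation.

```python
import numpy as np

rng = np.random.default_rng(0)

def gcn_layer(A_hat, H, W):
    """One GCN-style propagation: normalized adjacency times features times weights, with ReLU."""
    return np.maximum(A_hat @ H @ W, 0.0)

def drop_edges(A, p, rng):
    """Randomly remove a fraction p of undirected edges (DropEdge-style regularization)."""
    keep = rng.random(A.shape) > p
    keep = np.triu(keep, 1)                 # sample each undirected edge once
    return A * (keep + keep.T)

def normalize(A):
    """Symmetric normalization with self-loops: D^-1/2 (A + I) D^-1/2."""
    A_tilde = A + np.eye(len(A))
    d = A_tilde.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_tilde @ D_inv_sqrt

# Toy graph: 6 nodes in a ring, 4-dimensional node features.
A = np.zeros((6, 6))
for i in range(6):
    A[i, (i + 1) % 6] = A[(i + 1) % 6, i] = 1.0
H = rng.normal(size=(6, 4))
W1, W2 = rng.normal(size=(4, 8)), rng.normal(size=(8, 2))

# Training-time forward pass: resample the dropped edges at every layer.
H1 = gcn_layer(normalize(drop_edges(A, p=0.2, rng=rng)), H, W1)
out = gcn_layer(normalize(drop_edges(A, p=0.2, rng=rng)), H1, W2)
print("output node embeddings:\n", out.round(3))
```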
