11

Towards Explainable Decision-making Strategies of Deep Convolutional Neural Networks : An exploration into explainable AI and potential applications within cancer detection

Hammarström, Tobias January 2020 (has links)
The influence of Artificial Intelligence (AI) on society is increasing, with applications in highly sensitive and complicated areas. Examples include using Deep Convolutional Neural Networks within healthcare for diagnosing cancer. However, the inner workings of such models are often unknown, limiting the much-needed trust in the models. To combat this, Explainable AI (XAI) methods aim to provide explanations of the models' decision-making. Two such methods, Spectral Relevance Analysis (SpRAy) and Testing with Concept Activation Vectors (TCAV), were evaluated on a deep learning model classifying cat and dog images that contained introduced artificial noise. The task was to assess the methods' capabilities to explain the importance of the introduced noise for the learnt model. The task was constructed as an exploratory step, with the future aim of using the methods on models diagnosing oral cancer. In addition to using the TCAV method as introduced by its authors, this study also utilizes the CAV-sensitivity to introduce and perform a sensitivity-magnitude analysis. Both methods proved useful in discerning between the model's two decision-making strategies, based on either the animal or the noise, although greater insight into the intricacies of those strategies is still desired. Additionally, the methods provided a deeper understanding of the model's learning, as the model did not seem to properly distinguish between the noise and the animal conceptually. The methods thus accentuated the limitations of the model, thereby increasing our trust in its abilities. In conclusion, the methods show promise for the task of detecting visually distinctive noise in images, which could extend to other distinctive features present in more complex problems. Consequently, more research should be conducted on applying these methods to more complex areas with specialized models and tasks, e.g. oral cancer diagnosis.
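For readers unfamiliar with TCAV, the sketch below illustrates its core computation in Python: a Concept Activation Vector (CAV) is the normal of a linear classifier separating layer activations of concept examples (here, the introduced noise) from random counterexamples, and the CAV-sensitivity is the directional derivative of the class score along that vector. All arrays are random placeholders standing in for activations and gradients extracted from the thesis's cat/dog model, which is not reproduced here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Placeholder layer activations for concept examples (e.g. the introduced
# noise patches) versus random counterexamples.
rng = np.random.default_rng(0)
concept_acts = rng.normal(size=(50, 128))
random_acts = rng.normal(size=(50, 128))

X = np.vstack([concept_acts, random_acts])
y = np.array([1] * 50 + [0] * 50)

# The CAV is the normal of a linear classifier separating concept from random.
clf = LogisticRegression(max_iter=1000).fit(X, y)
cav = clf.coef_.ravel()
cav /= np.linalg.norm(cav)

# CAV-sensitivity: gradient of the class logit w.r.t. the layer activations,
# projected onto the CAV. The gradients here are placeholders for values
# obtained from the trained model.
grads = rng.normal(size=(200, 128))
sensitivities = grads @ cav

# TCAV score: fraction of inputs with positive sensitivity; the magnitudes
# support the kind of sensitivity-magnitude analysis mentioned above.
tcav_score = float((sensitivities > 0).mean())
print(tcav_score, float(np.abs(sensitivities).mean()))
```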
12

Interpretation of Dimensionality Reduction with Supervised Proxies of User-defined Labels

Leoni, Cristian January 2021 (has links)
Research on Machine Learning (ML) explainability has received a lot of attention in recent times. The interest, however, has mostly focused on supervised models, while other ML fields have not had the same level of attention. Despite its usefulness in a variety of fields, explainability for unsupervised learning is still an open issue. In this paper, we present a Visual Analytics framework based on eXplainable AI (XAI) methods to support the interpretation of Dimensionality Reduction (DR) methods. The framework provides the user with an interactive and iterative process to investigate and explain user-perceived patterns for a variety of DR methods, by using XAI methods to explain a supervised model trained on the selected data. To evaluate the effectiveness of the proposed solution, we focus on two main aspects: the quality of the visualization and the quality of the explanation. This challenge is tackled using both quantitative and qualitative methods, and due to the lack of pre-existing test data, a new benchmark was created. The quality of the visualization is established using a well-known survey-based methodology, while the quality of the explanation is evaluated using both case studies and a controlled experiment, where the accuracy of the generated explanations is measured on the proposed benchmark. The results show a strong capacity of our framework to generate accurate explanations, with an accuracy of 89% in the controlled experiment. The explanations generated for the two case studies closely matched pre-existing, well-known literature used as ground truth. Finally, the user experiment yielded high overall quality scores for all assessed aspects of the visualization.
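As a rough illustration of the framework's core idea, explaining a user-perceived pattern in a DR embedding by training a supervised proxy on user-defined labels and then applying an XAI method to that proxy, here is a minimal Python sketch. The dataset, the crude label assignment, and the choice of PCA, random forest, and permutation importance are stand-ins; the thesis's actual DR methods, interaction loop, and XAI methods are not specified here.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, _ = load_iris(return_X_y=True)

# 1. Project the data with a DR method; the user would label the patterns
#    they perceive in the 2-D embedding (here a crude split on the first
#    component stands in for user-defined labels).
embedding = PCA(n_components=2).fit_transform(X)
user_labels = (embedding[:, 0] > 0).astype(int)   # placeholder for user selection

# 2. Train a supervised proxy on the original features to imitate the labels.
proxy = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, user_labels)

# 3. Explain the proxy: its feature importances become an explanation of which
#    original dimensions drive the user-perceived pattern.
imp = permutation_importance(proxy, X, user_labels, n_repeats=10, random_state=0)
print(imp.importances_mean)
```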
13

Improving nuclear medicine with deep learning and explainability: two real-world use cases in parkinsonian syndrome and safety dosimetry

Nazari, Mahmood 17 March 2022 (has links)
Computer vision in the area of medical imaging has rapidly improved during recent years as a consequence of developments in deep learning and explainability algorithms. In addition, imaging in nuclear medicine is becoming increasingly sophisticated, with the emergence of targeted radiotherapies that enable treatment and imaging on a molecular level (“theranostics”), where radiolabeled targeted molecules are injected directly into the bloodstream. Based on our recent work, we present two use cases in nuclear medicine: first, the impact of the automated organ segmentation required for personalized dosimetry in patients with neuroendocrine tumors, and second, purely data-driven identification and verification of brain regions for the diagnosis of Parkinson's disease. A convolutional neural network was used for automated organ segmentation on computed tomography images. The segmented organs were used to calculate the energy deposited in the organs-at-risk of patients treated with a radiopharmaceutical. Our method resulted in faster and cheaper dosimetry and differed by only 7% from dosimetry performed by two medical physicists. The identification of brain regions, in turn, was analyzed on dopamine-transporter single-photon emission computed tomography (SPECT) images using a convolutional neural network and an explainability method, the layer-wise relevance propagation algorithm. Our findings confirm that the extra-striatal brain regions, i.e., insula, amygdala, ventromedial prefrontal cortex, thalamus, anterior temporal cortex, superior frontal lobe, and pons, contribute to the interpretation of the images beyond the striatal regions. In current common diagnostic practice, however, only the striatum serves as the reference region, while extra-striatal regions are neglected. We further demonstrate that deep learning-based diagnosis combined with an explainability algorithm can be recommended to support the interpretation of this imaging modality in the clinical routine for parkinsonian syndromes, with a total computation time of three seconds, which is compatible with a busy clinical workflow. Overall, this thesis shows for the first time that deep learning with explainability can achieve results competitive with human performance and generate novel hypotheses, thus paving the way towards improved diagnosis and treatment in nuclear medicine.
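For context, layer-wise relevance propagation redistributes a network's output score backwards, layer by layer, so that each input voxel (or pixel) receives a relevance value. The NumPy sketch below shows the widely used LRP-epsilon rule on a tiny fully connected ReLU network; the random weights are placeholders, and the thesis's actual CNN architecture and SPECT preprocessing are not reproduced.

```python
import numpy as np

def lrp_epsilon(a, W, b, R_out, eps=1e-6):
    """Redistribute relevance R_out from a dense layer's output to its input
    using the LRP-epsilon rule (a are the layer's input activations)."""
    z = a @ W + b                    # pre-activations of the layer
    z = z + eps * np.sign(z)         # epsilon term stabilises small denominators
    s = R_out / z                    # relevance per unit of pre-activation
    return a * (s @ W.T)             # input relevance, approximately conserving the total

# Tiny two-layer ReLU network standing in for the thesis CNN; weights are
# random placeholders (in practice they come from the trained classifier).
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(10, 6)), np.zeros(6)
W2, b2 = rng.normal(size=(6, 1)), np.zeros(1)

x = rng.normal(size=10)
a1 = np.maximum(0.0, x @ W1 + b1)     # hidden activations
out = a1 @ W2 + b2                    # network output: the relevance to explain

R_hidden = lrp_epsilon(a1, W2, b2, out)
R_input = lrp_epsilon(x, W1, b1, R_hidden)
print(R_input)                        # per-input relevance scores
```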
14

Statistical and Machine Learning Approaches For Visualizing and Analyzing Large-Scale Simulation Data

Hazarika, Subhashis January 2019 (has links)
No description available.
15

A Design Thinking Framework for Human-Centric Explainable Artificial Intelligence in Time-Critical Systems

Stone, Paul Benjamin January 2022 (has links)
No description available.
16

Automated Tactile Sensing for Quality Control of Locks Using Machine Learning

Andersson, Tim January 2024 (has links)
This thesis delves into the use of Artificial Intelligence (AI) for quality control in manufacturing systems, with a particular focus on anomaly detection through the analysis of torque measurements in rotating mechanical systems. The research specifically examines the effectiveness of torque measurements for quality control of locks, challenging the traditional method that relies on the human tactile sense to detect mechanical anomalies. This conventional approach, while widely used, has been found to yield inconsistent results and places physical strain on operators. A key aspect of this study involves conducting experiments on locks using torque measurements to identify mechanical anomalies. This method represents a shift from the subjective and physically demanding practice of manually testing each lock. The research aims to demonstrate that an automated, AI-driven approach can offer more consistent and reliable results, thereby improving overall product quality. The development of a machine learning model for this purpose starts with the collection of training data, a process that can be costly and disruptive to the normal workflow. Therefore, this thesis also investigates strategies for predicting and minimizing the sample size used for training. Additionally, it addresses the critical need for trustworthiness in AI systems used for final quality control. The research explores how to utilize machine learning models that are not only effective in detecting anomalies but also offer a level of interpretability, avoiding the pitfalls of black-box AI models. Overall, this thesis contributes to advancing automated quality control by exploring state-of-the-art machine learning algorithms for mechanical fault detection, focusing on sample-size prediction and minimization as well as model interpretability. To the best of the author's knowledge, it is the first study to evaluate an AI-driven solution for quality control of mechanical locks, marking an innovation in the field.
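As a hedged sketch of how torque-based anomaly detection might look in code, not the thesis's actual pipeline (whose models, features, and data are not public), the example below extracts a few interpretable features from simulated torque curves and flags outliers with an Isolation Forest. The feature choices and the contamination rate are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Placeholder torque curves: each row is one lock cycle (torque sampled over angle).
rng = np.random.default_rng(0)
torque_curves = rng.normal(1.0, 0.05, size=(500, 360))

# Simple, interpretable features per cycle rather than a black box on raw signals.
features = np.column_stack([
    torque_curves.mean(axis=1),                           # mean torque
    torque_curves.max(axis=1),                            # peak torque
    torque_curves.std(axis=1),                            # friction variation
    np.abs(np.diff(torque_curves, axis=1)).max(axis=1),   # largest sample-to-sample jump
])

# Unsupervised anomaly detector on the engineered features.
detector = IsolationForest(contamination=0.01, random_state=0).fit(features)
flags = detector.predict(features)   # -1 marks suspected anomalous locks
print(int((flags == -1).sum()), "locks flagged for inspection")
```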
17

Human Factors Involved in Explainability of Autonomous Driving : Master’s Thesis

Arisoni, Abriansyah January 2023 (has links)
Autonomous cars (ACs) have become more common in recent years. Despite the rapid development of the driving capabilities of ACs, researchers still need to improve the overall experience of AC passengers and boost their willingness to adopt the technology. When riding in an AC, passengers need good situation awareness to feel comfortable and to trust the system. One way to improve situation awareness is to give passengers an explanation of the situation. This study investigates how the situational risk of specific driving scenarios and the availability of visual environment information affect the type of explanation AC passengers need. The study was conducted through a series of scenario tests presented to online study participants and focused on human interaction with level 4 and 5 ACs. The primary goal is to further understand human-AC interaction and thereby improve the passenger experience. The results show that the availability of visual information affects the type of explanation passengers need. When no visual information is available, passengers are more satisfied with explanations that describe the cause of the AC's action (causal explanations). When visual information is available, passengers are more satisfied with explanations that convey the intention behind the AC's actions (intentional explanations). Although no significant differences in trust were found between the groups, participants showed slightly higher trust in the AC that provided causal explanations in situations without visual information. This study contributes to a better understanding of the explanation types AC passengers need under varying degrees of situational risk and visual information availability. Leveraging this can create a better experience for AC passengers and eventually boost the adoption of ACs on the road.
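A between-groups comparison like the one described above could be analyzed along the following lines; the ratings below are synthetic placeholders (the study's data is not available), and the Mann-Whitney U test is an assumption about the analysis, not a description of the thesis's method.

```python
import numpy as np
from scipy import stats

# Hypothetical satisfaction ratings (1-7 Likert) for one scenario, split by
# explanation type, in the condition without visual information available.
rng = np.random.default_rng(1)
causal = rng.integers(4, 8, size=40)        # placeholder ratings, causal group
intentional = rng.integers(3, 7, size=40)   # placeholder ratings, intentional group

# Ordinal ratings from independent groups: a Mann-Whitney U test is a common,
# assumption-light way to compare the two explanation types.
u, p = stats.mannwhitneyu(causal, intentional, alternative="two-sided")
print(f"U = {u:.1f}, p = {p:.3f}")
```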
18

Automatic Analysis of Peer Feedback using Machine Learning and Explainable Artificial Intelligence

Huang, Kevin January 2023 (has links)
Peer assessment is a process where learners evaluate and provide feedback on one another’s performance, which is critical to the student learning process. Earlier research has shown that it can improve student learning outcomes in various settings, including engineering education, in which collaborative teaching and learning activities are common. Peer assessment activities in computer-supported collaborative learning (CSCL) settings are becoming more and more common. When digital technologies are used for these activities, much student data (e.g., peer feedback text entries) is generated automatically. These large data sets can be analyzed (through, e.g., computational methods) and used to improve our understanding of how students regulate their learning in CSCL settings, in order to improve their conditions for learning by, for example, providing timely feedback. Yet there is currently a need to automate the coding of these large volumes of student text data, since it is a very time- and resource-consuming task. In this regard, recent developments in machine learning could prove beneficial. To understand how the affordances of machine learning technologies can be harnessed to classify student text data, this thesis examines the application of five models to a data set containing peer feedback from 231 students in the setting of a large technical university course. The models evaluated on the dataset are the traditional models Multi-Layer Perceptron (MLP) and Decision Tree, and the transformer-based models BERT, RoBERTa, and DistilBERT. To evaluate each model’s performance, Cohen’s κ, accuracy, and F1-score were used as metrics. The data was preprocessed by removing stopwords, and it was examined whether removing them improved the performance of the models. The results showed that this preprocessing improved performance only for the Decision Tree, while it decreased performance for all other models. RoBERTa was the model with the best performance on the dataset across all metrics used. Explainable artificial intelligence (XAI) was applied to RoBERTa, as the best-performing model, and it was found that the words considered stopwords made a difference to the predictions.
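To make the evaluation setup concrete, here is a minimal sketch of one of the traditional baselines (TF-IDF features with a Decision Tree) scored with the same metrics named above. The feedback texts and codes are tiny invented placeholders, since the 231-student dataset is not public, and the preprocessing shown (English stopword removal inside the vectorizer) is only an approximation of the thesis's pipeline.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.tree import DecisionTreeClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import cohen_kappa_score, accuracy_score, f1_score

# Placeholder peer-feedback entries and codes standing in for the real dataset.
texts = ["Great structure, but add more references.",
         "The results section needs clearer figures.",
         "Well written introduction."] * 20
labels = [0, 1, 2] * 20

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.3, random_state=0, stratify=labels)

# One of the traditional baselines: TF-IDF features + Decision Tree,
# with stopword removal handled by the vectorizer.
model = make_pipeline(TfidfVectorizer(stop_words="english"),
                      DecisionTreeClassifier(random_state=0))
model.fit(X_train, y_train)
pred = model.predict(X_test)

print("kappa:   ", cohen_kappa_score(y_test, pred))
print("accuracy:", accuracy_score(y_test, pred))
print("macro F1:", f1_score(y_test, pred, average="macro"))
```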
19

Interpreting Multivariate Time Series for an Organization Health Platform

Saluja, Rohit January 2020 (has links)
Machine learning-based systems are rapidly becoming popular because it has been realized that machines are more efficient and effective than humans at certain tasks. Although machine learning algorithms are extremely popular, they are also very literal and undeviating. This has led to a surge of research in the field of interpretability in machine learning, to ensure that machine learning models are reliable, fair, and can be held accountable for their decision-making process. Moreover, in most real-world problems, just making predictions with machine learning algorithms only solves the problem partially. Time series is one of the most popular and important data types because of its dominant presence in business, economics, and engineering. Despite this, interpretability for time series is still relatively unexplored compared to tabular, text, and image data. With the growing research in interpretability in machine learning, there is also a pressing need to quantify the quality of the explanations produced when interpreting machine learning models. For this reason, evaluation of interpretability is extremely important, yet the evaluation of interpretability for models built on time series appears almost unexplored in research circles. This thesis work focused on achieving and evaluating model-agnostic interpretability in a time series forecasting problem. The use case discussed in this thesis focused on solving a problem faced by a digital consultancy company. The consultancy wants to take a data-driven approach to understanding the effect of various sales-related activities on the sales deals closed by the company. The solution involved framing the problem as a time series forecasting problem to predict the sales deals and interpreting the underlying forecasting model. Interpretability was achieved using two model-agnostic interpretability techniques, Local Interpretable Model-agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP). The explanations produced were then evaluated through human evaluation of interpretability. The results of the human evaluation studies clearly indicate that the explanations produced by LIME and SHAP greatly helped lay users understand the predictions made by the machine learning model. The results also indicated that LIME and SHAP explanations were almost equally understandable, with LIME performing slightly better, but only by a very small margin. The work done during this project can easily be extended to any time series forecasting or classification scenario for achieving and evaluating interpretability. Furthermore, it offers a good framework for achieving and evaluating interpretability in any machine learning-based regression or classification problem.
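The following sketch shows one way the framing described above can be reproduced: lag features turn forecasting into supervised regression, and SHAP values attribute each prediction to the activity and lag features (LIME can be applied to the same predictor analogously). The synthetic weekly counts, the choice of a random forest, and the use of shap.TreeExplainer are assumptions for illustration; the thesis's actual data and forecasting model are not shown.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
import shap

# Placeholder weekly series of closed sales deals plus sales-activity counts.
rng = np.random.default_rng(0)
n = 120
df = pd.DataFrame({
    "calls": rng.poisson(30, n),
    "meetings": rng.poisson(10, n),
    "deals": rng.poisson(5, n).astype(float),
})

# Frame forecasting as supervised regression with lagged target features.
for lag in (1, 2, 3):
    df[f"deals_lag{lag}"] = df["deals"].shift(lag)
df = df.dropna()

X = df.drop(columns="deals")
y = df["deals"]

model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, y)

# SHAP values attribute each forecast to the activity and lag features.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
print(pd.Series(np.abs(shap_values).mean(axis=0), index=X.columns).sort_values())
```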
20

Explainable AI For Predictive Maintenance

Karlsson, Nellie, Bengtsson, My January 2022 (has links)
As the complexity of deep learning models increases, the transparency of the systems decreases. It may be hard to understand the predictions a deep learning model makes, but even harder to understand why those predictions are made. Using eXplainable AI (XAI), we can gain greater knowledge of how the model operates and how the input it receives can change its predictions. In this thesis, we apply Integrated Gradients (IG), an XAI method primarily used on image data, to datasets containing tabular and time-series data. We also evaluate how the results of IG differ across various types of models and how changing the baseline can change the outcome. We observe that IG can be applied to both sequenced and non-sequenced data, with varying results. The gradient baseline does not affect the results of IG as much on models such as RNN, LSTM, and GRU, where the data contains time series, as it does for models like MLP with non-sequenced data. To confirm this, we also applied IG to SVM models, which showed that the choice of gradient baseline has a significant impact on the results of IG.
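For reference, Integrated Gradients attributes a prediction by accumulating gradients along a straight-line path from a baseline to the input, scaled by the difference between input and baseline. The PyTorch sketch below is a generic implementation for a differentiable model; the tiny demo network, the zero baseline, and the number of steps are illustrative assumptions and do not reflect the thesis's models or data.

```python
import torch

def integrated_gradients(model, x, baseline=None, steps=64):
    """Approximate IG_i(x) = (x_i - baseline_i) * mean over alpha of
    d model(baseline + alpha * (x - baseline)) / d x_i."""
    if baseline is None:
        baseline = torch.zeros_like(x)                 # common zero baseline
    alphas = torch.linspace(0.0, 1.0, steps).view(-1, *([1] * x.dim()))
    path = (baseline + alphas * (x - baseline)).detach().requires_grad_(True)
    out = model(path).sum()                            # sum keeps per-point gradients intact
    grads = torch.autograd.grad(out, path)[0]          # gradients at each path point
    return (x - baseline) * grads.mean(dim=0)          # average gradient times input delta

# Tiny demo model standing in for the thesis models (MLP, RNN, etc. are assumed).
model = torch.nn.Sequential(torch.nn.Linear(4, 8), torch.nn.ReLU(), torch.nn.Linear(8, 1))
x = torch.randn(4)
print(integrated_gradients(model, x))
```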
