21

Unsupervised Online Anomaly Detection in Multivariate Time-Series / Oövervakad online-avvikelsedetektering i flerdimensionella tidsserier

Segerholm, Ludvig January 2023 (has links)
This research aims to identify a method for unsupervised online anomaly detection in multivariate time series in dynamic systems in general, and for the case study of Devwards IoT-system in particular. Requirements on the solution are explainability, online learning, and low computational expense. A comprehensive literature review was conducted, leading to experimentation with and analysis of various anomaly detection approaches. Of the methods evaluated, a single recurrent neural network autoencoder emerged as the most promising, emphasizing a simple model structure that encourages stable performance with consistent outputs, regardless of the average output. Other approaches, such as Hierarchical Temporal Memory models and an ensemble strategy of adaptive model pooling, yielded suboptimal results. A modified version of the Residual Explainer method for enhancing explainability in autoencoders for online scenarios showed promising outcomes. The use of the Mahalanobis distance for anomaly detection was explored, and feature extraction and its implications in the context of the proposed approach are discussed. Conclusively, a single, streamlined recurrent neural network appears to be the superior approach for this application, though further investigation into online learning methods is warranted. The research contributes results to the field of unsupervised online anomaly detection in multivariate time series and to the Residual Explainer method for online autoencoders. Additionally, it offers data on the ineffectiveness of the Mahalanobis distance in an online anomaly detection environment.
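As an illustration of the kind of approach described above, the sketch below shows a minimal online recurrent autoencoder for multivariate streams, assuming PyTorch; the window length, hidden size, and 3-sigma error threshold are illustrative assumptions rather than details taken from the thesis.

```python
# Minimal sketch (not the thesis implementation): a GRU autoencoder updated
# online on sliding windows of a multivariate series, flagging windows whose
# reconstruction error exceeds a running 3-sigma threshold.
import torch
import torch.nn as nn

class GRUAutoencoder(nn.Module):
    def __init__(self, n_features, hidden=16):
        super().__init__()
        self.encoder = nn.GRU(n_features, hidden, batch_first=True)
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_features)

    def forward(self, x):                      # x: (batch, window, n_features)
        _, h = self.encoder(x)                 # h: (1, batch, hidden)
        z = h.transpose(0, 1).repeat(1, x.size(1), 1)   # repeat code per step
        dec, _ = self.decoder(z)
        return self.out(dec)

def online_detect(stream, n_features, window=32, lr=1e-3):
    """Yield (t, error, is_anomaly) while training on each incoming window."""
    model = GRUAutoencoder(n_features)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    errors, buf = [], []
    for t, x_t in enumerate(stream):           # x_t: sequence of n_features floats
        buf.append(list(x_t))
        buf = buf[-window:]                    # keep only the current window
        if len(buf) < window:
            continue
        x = torch.tensor([buf], dtype=torch.float32)
        recon = model(x)
        err = torch.mean((recon - x) ** 2)
        # flag before the gradient step, using a 3-sigma threshold on past errors
        if len(errors) > 30:
            mu = sum(errors) / len(errors)
            sd = (sum((e - mu) ** 2 for e in errors) / len(errors)) ** 0.5
            yield t, err.item(), err.item() > mu + 3 * sd
        errors.append(err.item())
        opt.zero_grad()
        err.backward()
        opt.step()
```

The model is updated on every incoming window, so detection and learning happen online; each window is scored before the gradient step so the anomaly itself does not immediately shift the threshold.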
22

Leveraging Word Embeddings to Enrich Linguistics and Natural Language Understanding

Aljanaideh, Ahmad 22 July 2022 (has links)
No description available.
23

Explaining Turbulence Predictions from Deep Neural Networks: Finding Important Features with Approximate Shapley Values / Förklaring av förutsägelser för turbulent strömning från djupa neurala nätverk: Identifikation av viktiga egenskaper med approximativa Shapley värden

Plonczak, Antoni January 2022 (has links)
Deep-learning models have been shown to produce accurate predictions in various scientific and engineering applications, such as turbulence modelling, by efficiently learning complex nonlinear relations from data. However, deep networks are often black boxes, and it is not clear from the model parameters which inputs are more important to a prediction. As a result, it is difficult to understand whether models take physically relevant information into account, and little theoretical understanding of the phenomenon modelled by the deep network can be gained. In this work, methods from the field of explainable AI, based on Shapley value approximation, are applied to compute feature attributions in previously trained fully convolutional deep neural networks that predict velocity fluctuations in an open-channel turbulent flow using wall quantities as inputs. The results show that certain regions in the inputs to the model have a higher importance to a prediction, which is verified by computational experiments confirming that, in terms of prediction error, the models are more sensitive to those inputs than to randomly selected ones. These regions correspond to strongly distinguishable features (visible structures) in the model inputs. The correlations between the high-importance regions and visible structures in the model inputs are investigated with a linear regression analysis. The results indicate that certain physical characteristics of these structures are highly correlated with the importance of individual input features within these structures.
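The Shapley value approximation underlying this kind of attribution can be illustrated with a small Monte Carlo sampler. The sketch below is a generic permutation-based estimator, not the exact pipeline used in the thesis; replacing "absent" features with a fixed baseline (here, zeros or a mean) is an assumption made for illustration.

```python
# Generic Monte Carlo approximation of Shapley values for a single prediction:
# features outside the sampled coalition are replaced by a baseline value.
import numpy as np

def approx_shapley(model_fn, x, baseline, n_samples=200, rng=None):
    """model_fn: callable mapping a (n_features,) array to a scalar output."""
    rng = np.random.default_rng(rng)
    d = x.shape[0]
    phi = np.zeros(d)
    for _ in range(n_samples):
        perm = rng.permutation(d)
        z = baseline.copy()
        prev = model_fn(z)
        for j in perm:                 # add features one by one in random order
            z[j] = x[j]
            cur = model_fn(z)
            phi[j] += cur - prev       # marginal contribution of feature j
            prev = cur
    return phi / n_samples

# Toy usage with a quadratic "model"; attributions sum to f(x) - f(baseline).
f = lambda v: 2.0 * v[0] + v[1] * v[2]
x = np.array([1.0, 2.0, 3.0])
phi = approx_shapley(f, x, baseline=np.zeros(3), n_samples=500)
print(phi, phi.sum(), f(x) - f(np.zeros(3)))
```

Because the contributions along each sampled permutation telescope, the estimated attributions sum exactly to f(x) - f(baseline), the efficiency property that makes Shapley values attractive for attributing a prediction to input regions.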
24

Human-AI Sensemaking with Semantic Interaction and Deep Learning

Bian, Yali 07 March 2022 (has links)
Human-AI interaction can improve overall performance, exceeding the performance that either humans or AI could achieve separately, thus producing a whole greater than the sum of the parts. Visual analytics enables collaboration between humans and AI through interactive visual interfaces. Semantic interaction is a design methodology to enhance visual analytics systems for sensemaking tasks. It is widely applied for sensemaking in high-stakes domains such as intelligence analysis and academic research. However, existing semantic interaction systems support collaboration between humans and traditional machine learning models only; they do not apply state-of-the-art deep learning techniques. The contribution of this work is the effective integration of deep neural networks into visual analytics systems with semantic interaction. More specifically, I explore how to redesign the semantic interaction pipeline to enable collaboration between humans and deep learning models for sensemaking tasks. First, I validate that semantic interaction systems with pre-trained deep learning better support sensemaking than existing semantic interaction systems with traditional machine learning. Second, I integrate interactive deep learning into the semantic interaction pipeline to enhance inference ability in capturing analysts' precise intents, thereby promoting sensemaking. Third, I add semantic explanation into the pipeline to interpret the interactively steered deep learning model. With a clear understanding of the deep learning model, analysts can make better decisions. Finally, I present a neural design of the semantic interaction pipeline to further boost collaboration between humans and deep learning for sensemaking. / Doctor of Philosophy / Human-AI interaction can harness the separate strengths of human and machine intelligence to accomplish tasks neither can solve alone. Analysts are good at making high-level hypotheses and reasoning from their domain knowledge. AI models are better at data computation based on low-level input features. Successful human-AI interactions can perform real-world, high-stakes tasks, such as issuing medical diagnoses, making credit assessments, and determining cases of discrimination. Semantic interaction is a visual methodology providing intuitive communications between analysts and traditional machine learning models. It is commonly utilized to enhance visual analytics systems for sensemaking tasks, such as intelligence analysis and scientific research. The contribution of this work is to explore how to use semantic interaction to achieve collaboration between humans and state-of-the-art deep learning models for complex sensemaking tasks. To do this, I first evaluate the straightforward solution of integrating the pretrained deep learning model into the traditional semantic interaction pipeline. Results show that the deep learning representation matches human cognition better than hand-engineered features via semantic interaction. Next, I look at methods for supporting semantic interaction systems with interactive and interpretable deep learning. The new pipeline provides effective communication between humans and deep learning models. Interactive deep learning enables the system to better capture users' intents. Interpretable deep learning lets users have a clear understanding of models. Finally, I improve the pipeline to better support collaboration using a neural design.
I hope this work can contribute to future designs for human-in-the-loop analysis with deep learning and visual analytics techniques.
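The core inference step in a semantic interaction pipeline can be sketched briefly: given (pre-trained) embeddings and the analyst's rearranged 2-D layout, infer per-dimension weights so that weighted embedding distances match the distances the analyst implied. The toy code below only illustrates that idea and is not the dissertation's deep-learning pipeline; the least-squares formulation and nonnegativity clipping are assumptions.

```python
# Sketch of a semantic-interaction weight update: after the analyst repositions
# documents, infer embedding-dimension weights so weighted distances better
# match the distances implied by the new layout.
import numpy as np

def infer_weights(emb, layout, eps=1e-8):
    """emb: (n, d) document embeddings; layout: (n, 2) user-arranged positions.
    Returns nonnegative weights w minimising || sum_k w_k * D_k - D_layout ||."""
    n, d = emb.shape
    iu = np.triu_indices(n, k=1)
    diffs = (emb[:, None, :] - emb[None, :, :]) ** 2        # per-dimension squared gaps
    A = diffs[iu]                                            # (n_pairs, d)
    target = np.sum((layout[:, None, :] - layout[None, :, :]) ** 2, axis=-1)[iu]
    w, *_ = np.linalg.lstsq(A, target, rcond=None)
    w = np.clip(w, 0.0, None)                                # keep weights nonnegative
    return w / (w.sum() + eps)

# Toy usage: the analyst pulls documents 0 and 1 together; the dimension on
# which that pair already agrees gains weight in the inferred metric.
emb = np.array([[1.0, 0.0], [1.0, 5.0], [0.0, 5.0]])
layout = np.array([[0.0, 0.0], [0.1, 0.0], [3.0, 0.0]])
print(infer_weights(emb, layout))
```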
25

Which product description phrases affect sales forecasting? An explainable AI framework by integrating WaveNet neural network models with multiple regression

Chen, S., Ke, S., Han, S., Gupta, S., Sivarajah, Uthayasankar 03 September 2023 (has links)
The rapid rise of e-commerce platforms for individual consumers has generated a large amount of text-based data, and researchers have begun to apply text mining techniques to extract information from this textual data to assist in sales forecasting. The existing literature focuses on textual data from product reviews; however, consumer reviews are not something that companies can directly control, and here we argue that textual product descriptions are also important determinants of consumer choice. We construct an artificial intelligence (AI) framework that combines text mining, WaveNet neural networks, multiple regression, and the SHAP model to explain the impact of product descriptions on sales forecasting. Using nearly 200,000 sales records obtained from a cross-border e-commerce firm, an empirical study showed that the product description presented to customers can influence sales forecasting: about 44% of the key phrases greatly affect the forecasting results, and forecasting models that included key product description phrases achieved improved accuracy. This paper provides explainable sales forecasting results, which can guide firms in designing product descriptions that reflect the market demand captured by these phrases; adding these phrases to product descriptions can help win more customers. / The full text of this article will be released for public view at the end of the publisher embargo on 24 Feb 2025.
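The explanation step described above can be illustrated with a small sketch: binary phrase-presence features, a multiple regression, and exact SHAP values for the linear part (for a linear model with independent features, the Shapley value of feature j at x is coef_j * (x_j - mean_j)). The phrases, the simulated data, and the omission of the WaveNet component are assumptions made purely for illustration.

```python
# Illustrative sketch (not the paper's pipeline): phrase-presence features,
# a multiple regression on simulated sales, and exact per-phrase SHAP values.
import numpy as np
from sklearn.linear_model import LinearRegression

phrases = ["free shipping", "limited edition", "waterproof"]   # hypothetical phrases
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(500, len(phrases))).astype(float)  # phrase present?
sales = 50 + X @ np.array([8.0, 3.0, -1.0]) + rng.normal(0, 2, 500)  # toy target

reg = LinearRegression().fit(X, sales)
x = np.array([1.0, 0.0, 1.0])                   # one product description
shap_values = reg.coef_ * (x - X.mean(axis=0))  # exact SHAP for a linear model
for p, v in zip(phrases, shap_values):
    print(f"{p:>16}: {v:+.2f}")
# Contributions sum to this prediction minus the average prediction:
print(reg.predict(x[None])[0] - reg.predict(X).mean())
```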
26

Designing Explainable In-vehicle Agents for Conditionally Automated Driving: A Holistic Examination with Mixed Method Approaches

Wang, Manhua 16 August 2024 (has links)
Automated vehicles (AVs) are promising applications of artificial intelligence (AI). While human drivers benefit from AVs, including long-distance support and collision prevention, we do not always understand how AV systems function and make decisions. Consequently, drivers might develop inaccurate mental models and form unrealistic expectations of these systems, leading to unwanted incidents. Although efforts have been made to support drivers' understanding of AVs through in-vehicle visual and auditory interfaces and warnings, these may not be sufficient or effective in addressing user confusion and overtrust in in-vehicle technologies, sometimes even creating negative experiences. To address this challenge, this dissertation conducts a series of studies to explore the possibility of using the in-vehicle intelligent agent (IVIA) in the form of the speech user interface to support drivers, aiming to enhance safety, performance, and satisfaction in conditionally automated vehicles. First, two expert workshops were conducted to identify design considerations for general IVIAs in the driving context. Next, to better understand the effectiveness of different IVIA designs in conditionally automated driving, a driving simulator study (n=24) was conducted to evaluate four types of IVIA designs varying by embodiment conditions and speech styles. The findings indicated that conversational agents were preferred and yielded better driving performance, while robot agents caused greater visual distraction. Then, contextual inquiries with 10 drivers owning vehicles with advanced driver assistance systems (ADAS) were conducted to identify user needs and the learning process when interacting with in-vehicle technologies, focusing on interface feedback and warnings. Subsequently, through expert interviews with seven experts from AI, social science, and human-computer interaction domains, design considerations were synthesized for improving the explainability of AVs and preventing associated risks. With information gathered from the first four studies, three types of adaptive IVIAs were developed based on human-automation function allocation and investigated in terms of their effectiveness on drivers' response time, driving performance, and subjective evaluations through a driving simulator study (n=39). The findings indicated that although drivers preferred more information provided to them, their response time to road hazards might be degraded when receiving more information, indicating the importance of the balance between safety and satisfaction. Taken together, this dissertation indicates the potential of adopting IVIAs to enhance the explainability of future AVs. It also provides key design guidelines for developing IVIAs and constructing explanations critical for safer and more satisfying AVs. / Doctor of Philosophy / Automated vehicles (AVs) are an exciting application of artificial intelligence (AI). While these vehicles offer benefits like helping with long-distance driving and preventing accidents, people often do not understand how they work or make decisions. This lack of understanding can lead to unrealistic expectations and potentially dangerous situations. Even though there are visual and sound alerts in these cars to help drivers, they are not always sufficient to prevent confusion and over-reliance on technology, sometimes making the driving experience worse. 
To address this challenge, this dissertation explores the use of in-vehicle intelligent agents (IVIAs), in the form of a speech assistant, to help drivers better understand and interact with AVs, aiming to improve safety, performance, and overall satisfaction in semi-automated vehicles. First, two expert workshops helped identify key design features for IVIAs. Then, a driving simulator study with 24 participants tested four different designs of IVIAs varying in appearance and how they spoke. The results showed that people preferred conversational agents, which led to better driving behaviors, while robot-like agents caused more visual distractions. Next, through contextual inquiries with 10 drivers who own vehicles with advanced driver assistance systems (ADAS), I identified user needs and how they learn to interact with in-car technologies, focusing on feedback and warnings. Subsequently, I conducted expert interviews with seven professionals from AI, social science, and human-computer interaction fields, which provided further insights into facilitating the explainability of AVs and preventing associated risks. With the information gathered, three types of adaptive IVIAs were developed based on whether the driver was actively in control of the vehicle, or the driving automation system was in control. The effectiveness of these agents was evaluated in another driving simulator study with 39 participants, in terms of drivers' brake and steering response times, driving performance, and user satisfaction. The findings indicate that although drivers appreciated more detailed explanations, their response time to road hazards slowed down, highlighting the need to balance safety and satisfaction. Overall, this research shows the potential of using IVIAs to make AVs easier to understand and safer to use. It also offers important design guidelines for creating these IVIAs and their speech content to improve the driving experience.
27

Human-Centered Explainability Attributes In Ai-Powered Eco-Driving : Understanding Truck Drivers' Perspective

Gjona, Ermela January 2023 (has links)
The growing presence of algorithm-generated recommendations in AI-powered services highlights the importance of responsible systems that explain outputs in a human-understandable form, especially in an automotive context. Implementing explainability in the recommendations of AI-powered eco-driving is important in ensuring that drivers understand the underlying reasoning behind the recommendations. Previous literature on explainable AI (XAI) has been primarily technology-centered, and only a few studies involve the end-user perspective. There is a lack of knowledge of drivers' needs and requirements for explainability in an AI-powered eco-driving context. This study addresses the attributes that make a “satisfactory” explanation, i.e., a satisfactory interface between humans and AI. It uses scenario-based interviews to understand the explainability attributes that influence truck drivers' intention to use eco-driving recommendations. Thematic analysis was used to categorize seven attributes into context-dependent (Format, Completeness, Accuracy, Timeliness, Communication) and generic (Reliability, Feedback loop) categories. The study contributes context-dependent attributes along three design dimensions: Presentational, Content-related, and Temporal aspects of explainability. The findings provide an empirical foundation for end-users' explainability needs and offer valuable insights for UX and system designers in eliciting end-user requirements.
28

Interpreting embedding models of knowledge bases. / Interpretando modelos de embedding de bases de conhecimento.

Gusmão, Arthur Colombini 26 November 2018 (has links)
Knowledge bases are employed in a variety of applications, from natural language processing to semantic web search; alas, in practice, their usefulness is hurt by their incompleteness. To address this issue, several techniques aim at performing knowledge base completion, among which embedding models are efficient, attain state-of-the-art accuracy, and eliminate the need for feature engineering. However, embedding models' predictions are notoriously hard to interpret. In this work, we propose model-agnostic methods that allow one to interpret embedding models by extracting weighted Horn rules from them. More specifically, we show how the so-called "pedagogical techniques" from the literature on neural networks can be adapted to take into account the large-scale relational aspects of knowledge bases, and we show their strengths and weaknesses experimentally.
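The pedagogical idea mentioned above can be sketched in a few lines: label candidate triples with the (black-box) link predictor, describe each entity pair by the other relations that already connect it, and fit a simple interpretable surrogate whose decision paths read as Horn-style rules. The relations, facts, and stand-in scoring function below are hypothetical, and a shallow decision tree is used as the surrogate rather than the weighted rule extraction developed in the work.

```python
# Simplified sketch of a pedagogical rule-extraction step (not the thesis method):
# each positive tree path can be read as a Horn-style rule such as
# born_in(X, Y) => lives_in(X, Y).
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

relations = ["born_in", "works_in", "married_to"]   # hypothetical KB relations
target_rel = "lives_in"

def pair_features(kb, pairs):
    """Binary feature per relation: does r(h, t) already hold in the KB?"""
    return np.array([[1.0 if (r, h, t) in kb else 0.0 for r in relations]
                     for (h, t) in pairs])

def black_box_score(h, t):
    # stand-in for an embedding model's link-prediction score for lives_in(h, t)
    return 0.9 if (("born_in", h, t) in kb or ("works_in", h, t) in kb) else 0.1

kb = {("born_in", "ann", "oslo"), ("works_in", "bob", "rome"),
      ("married_to", "cal", "eve")}
pairs = [("ann", "oslo"), ("bob", "rome"), ("cal", "eve"), ("ann", "rome")]

X = pair_features(kb, pairs)
y = np.array([black_box_score(h, t) > 0.5 for (h, t) in pairs])
surrogate = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(export_text(surrogate, feature_names=relations))  # paths ~ rules for lives_in
```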
30

Explaining the output of a black box model and a white box model: an illustrative comparison

Joel, Viklund January 2020 (has links)
The thesis investigates how one should determine the appropriate transparency of an information processing system from a receiver perspective. Past research has suggested that a model should be maximally transparent when it contributes to what is labeled a "high-stakes decision". Instead of motivating the choice of a model's transparency by the non-rigorous criterion that the model contributes to a high-stakes decision, this thesis explores an alternative method. The suggested method lets the transparency depend on how well an explanation of the model's output satisfies the purpose of an explanation. As a result, we do not have to consider whether the decision is high-stakes; we should instead make sure the model is sufficiently transparent to provide an explanation that satisfies the expressed purpose of an explanation.
