41

Towards Explainable Decision-making Strategies of Deep Convolutional Neural Networks : An exploration into explainable AI and potential applications within cancer detection

Hammarström, Tobias January 2020
The influence of Artificial Intelligence (AI) on society is increasing, with applications in highly sensitive and complicated areas. Examples include using Deep Convolutional Neural Networks within healthcare for diagnosing cancer. However, the inner workings of such models are often unknown, limiting the much-needed trust in the models. To combat this, Explainable AI (XAI) methods aim to provide explanations of the models' decision-making. Two such methods, Spectral Relevance Analysis (SpRAy) and Testing with Concept Activation Vectors (TCAV), were evaluated on a deep learning model classifying cat and dog images that contained introduced artificial noise. The task was to assess the methods' capabilities to explain the importance of the introduced noise for the learnt model. The task was constructed as an exploratory step, with the future aim of using the methods on models diagnosing oral cancer. In addition to using the TCAV method as introduced by its authors, this study also uses the CAV sensitivities to introduce and perform a sensitivity-magnitude analysis. Both methods proved useful in discerning between the model's two decision-making strategies, based on either the animal or the noise, although greater insight into the intricacies of those strategies is still desired. Additionally, the methods provided a deeper understanding of the model's learning: the model did not seem to properly distinguish between the noise and the animal conceptually. The methods thus accentuated the limitations of the model, thereby increasing our trust in its abilities. In conclusion, the methods show promise for the task of detecting visually distinctive noise in images, which could extend to other distinctive features present in more complex problems. Consequently, more research should be conducted on applying these methods to more complex domains with specialized models and tasks, e.g. oral cancer detection.
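To make the TCAV mechanics concrete, the following is a minimal sketch of computing a Concept Activation Vector, per-input concept sensitivities, and a TCAV score. It is a sketch under stated assumptions: the layer activations and logit gradients are synthetic stand-ins, not extracted from the thesis's cat/dog model, and all dimensions are invented.

```python
# Minimal TCAV-style sketch. Assumptions: activations and per-input logit
# gradients for a chosen layer have already been extracted; data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Activations for concept examples (e.g., noise patches) vs. random examples.
concept_acts = rng.normal(0.5, 1.0, size=(100, 64))
random_acts = rng.normal(0.0, 1.0, size=(100, 64))

# 1. Fit a linear separator; its normal vector is the Concept Activation Vector.
X = np.vstack([concept_acts, random_acts])
y = np.array([1] * 100 + [0] * 100)
clf = LogisticRegression(max_iter=1000).fit(X, y)
cav = clf.coef_[0] / np.linalg.norm(clf.coef_[0])

# 2. Concept sensitivity: directional derivative of the class logit along the
#    CAV, i.e. grad(logit) . cav, for each test input.
logit_grads = rng.normal(size=(50, 64))  # stand-in for real per-input gradients
sensitivities = logit_grads @ cav

# 3. TCAV score: fraction of inputs with positive sensitivity; a magnitude
#    analysis like the thesis's would additionally inspect |sensitivities|.
tcav_score = float(np.mean(sensitivities > 0))
print(f"TCAV score: {tcav_score:.2f}, "
      f"mean |sensitivity|: {np.abs(sensitivities).mean():.3f}")
```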
42

Requirements Analysis for AI solutions : a study on how requirements analysis is executed when developing AI solutions

Olsson, Anton, Joelsson, Gustaf January 2019
Requirements analysis is an essential part of the System Development Life Cycle (SDLC) and a prerequisite for success in a software development project. There are several methods, techniques and frameworks used for expressing, prioritizing and managing requirements in IT projects. It is widely established that determining requirements is difficult even for traditional systems, so a question naturally arises as to how requirements analysis is executed when AI solutions (which even fewer individuals can grasp) are developed. Little research has been conducted on how the vital requirements phase is executed during the development of AI solutions. This research aims to investigate the requirements analysis phase during the development of AI solutions. To explore this topic, an extensive literature review was made, and, to collect new information, interviews were performed with five suitable organizations (i.e., organizations that develop AI solutions). The research concludes that requirements analysis does not fundamentally differ between the development of AI solutions and the development of traditional systems. However, the research showed some deviations that can be deemed particularly unique to the development of AI solutions and that affect the requirements analysis. These are: (1) the need for an iterative and agile systems development process, with an associated iterative and agile requirements analysis; (2) the importance of having a large set of quality data; (3) the relative deprioritization of user involvement; and (4) the difficulty of establishing the timeframe, results/feasibility and behavior of the AI solution beforehand.
43

Exploring Human-Robot Interaction Through Explainable AI Poetry Generation

Strineholm, Philippe January 2021
As the field of Artificial Intelligence continues to evolve into a tool of societal impact, a need arises to break out of its initial boundaries as a computer science discipline and include humanistic fields as well. The work presented in this thesis revolves around the role that explainable artificial intelligence has in human-robot interaction through the study of poetry generators. To better understand the scope of the project, a study of poetry generators presents the steps involved in the development process and the evaluation methods. In the algorithmic development of poetry generators, the shift from traditional disciplines to transdisciplinarity is identified. In collaboration with researchers from the Research Institutes of Sweden, state-of-the-art generators are tested to showcase the power of artificially enhanced artifacts. A development plateau is discovered, and with the inclusion of Design Thinking methods, potential future human-robot interaction development is identified. A physical prototype capable of verbal interaction, built on top of a poetry generator, is created with the new feature of changing the corpus to any given audio input. Lastly, the strengths of transdisciplinarity are connected with the open-source community with regard to creativity and self-expression, producing an online tool that addresses future work improvements and introduces non-experts to the steps required to self-build an intelligent robotic companion, thus also encouraging public technological literacy. Explainable AI is shown to help with user involvement in the process of creation, alteration and deployment of AI-enhanced applications.
44

Explainable AI techniques for sepsis diagnosis : Evaluating LIME and SHAP through a user study

Norrie, Christian January 2021
Artificial intelligence has had a large impact on many industries and transformed some domains quite radically. There is tremendous potential in applying AI to the field of medical diagnostics. A major issue with applying these techniques to some domains is the inability of AI models to provide an explanation or justification for their predictions. This creates a problem wherein a user may not trust an AI prediction, or legal requirements for justifying decisions are not met. This thesis overviews how two explainable AI techniques (Shapley Additive Explanations and Local Interpretable Model-Agnostic Explanations) can establish a degree of trust for the user in the medical diagnostics field. These techniques are evaluated through a user study. User study results suggest that supplementing classifications or predictions with a post-hoc visualization increases interpretability by a small margin. Further investigation and research utilizing a user study survey or interview is suggested to increase interpretability and explainability of machine learning results.
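As a rough illustration of the kind of post-hoc explanation evaluated in such a user study, the sketch below generates a LIME explanation for one prediction of a tabular classifier. The feature names, data, and model are invented stand-ins, not the thesis's actual sepsis model, and the `lime` package is assumed to be installed.

```python
# Hedged sketch of a LIME tabular explanation for a sepsis-style classifier.
# Assumptions: `lime` is installed; synthetic vitals stand in for patient data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(42)
feature_names = ["heart_rate", "temperature", "resp_rate", "wbc_count"]

# Synthetic cohort: the label is loosely driven by heart rate and temperature.
X = rng.normal(size=(500, 4))
y = (X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    X, feature_names=feature_names, class_names=["no sepsis", "sepsis"],
    mode="classification",
)

# Explain one prediction; this weighted-feature list is the kind of post-hoc
# visualization a study participant would be shown alongside the prediction.
exp = explainer.explain_instance(X[0], model.predict_proba, num_features=4)
print(exp.as_list())
```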
45

Beyond Privacy Concerns: Examining Individual Interest in Privacy in the Machine Learning Era

Brown, Nicholas James 12 June 2023
The deployment of human-augmented machine learning (ML) systems has become a recommended organizational best practice. ML systems use algorithms that rely on training data labeled by human annotators. However, human involvement in reviewing and labeling consumers' voice data to train speech recognition systems for Amazon Alexa, Microsoft Cortana, and the like has raised privacy concerns among consumers and privacy advocates. We use the enhanced APCO model as the theoretical lens to investigate how the disclosure of human involvement during the supervised machine learning process affects consumers' privacy decision making. In a scenario-based experiment with 499 participants, we present various company privacy policies to participants to examine their trust and privacy considerations, then ask them to share reasons why they would or would not opt in to share their voice data to train a company's voice recognition software. We find that the perception of human involvement in the ML training process significantly influences participants' privacy-related concerns, which thereby mediate their decisions to share their voice data. Furthermore, we manipulate four factors of a privacy policy to operationalize various cognitive biases actively present in the minds of consumers and find that default trust and salience biases significantly affect participants' privacy decision making. Our results provide a deeper contextualized understanding of privacy-related concerns that may arise in human-augmented ML system configurations and highlight the managerial importance of considering the role of human involvement in supervised machine learning settings. Importantly, we introduce perceived human involvement as a new construct to the information privacy discourse. Although ubiquitous data collection and increased privacy breaches have elevated the reported concerns of consumers, consumers' behaviors do not always match their stated privacy concerns. Researchers refer to this as the privacy paradox, and decades of information privacy research have identified a myriad of explanations for why this paradox occurs. Yet the underlying crux of the explanations presumes privacy concern to be the appropriate proxy to measure privacy attitude and compare with actual privacy behavior. Often, privacy concerns are situational and can be elicited through the setup of boundary conditions and the framing of different privacy scenarios. Drawing on the cognitive model of empowerment and interest, we propose a multidimensional privacy interest construct that captures consumers' situational and dispositional attitudes toward privacy, which can serve as a more robust measure in conditions leading to the privacy paradox. We define privacy interest as a consumer's general feeling toward reengaging particular behaviors that increase their information privacy. This construct comprises four dimensions (impact, awareness, meaningfulness, and competence) and is conceptualized as a consumer's assessment of contextual factors affecting their privacy perceptions and their global predisposition to respond to those factors. Importantly, interest was originally included in the privacy calculus but is largely absent in privacy studies and theoretical conceptualizations. Following MacKenzie et al. (2011), we developed and empirically validated a privacy interest scale.
This study contributes to privacy research and practice by reconceptualizing a construct in the original privacy calculus theory and offering a renewed theoretical lens through which to view consumers' privacy attitudes and behaviors. / Doctor of Philosophy / The deployment of human-augmented machine learning (ML) systems has become a recommended organizational best practice. ML systems use algorithms that rely on training data labeled by human annotators. However, human involvement in reviewing and labeling consumers' voice data to train speech recognition systems for Amazon Alexa, Microsoft Cortana, and the like has raised privacy concerns among consumers and privacy advocates. We investigate how the disclosure of human involvement during the supervised machine learning process affects consumers' privacy decision making and find that the perception of human involvement in the ML training process significantly influences participants' privacy-related concerns. This thereby influences their decisions to share their voice data. Our results highlight the importance of understanding consumers' willingness to contribute their data to generate complete and diverse data sets to help companies reduce algorithmic biases and systematic unfairness in the decisions and outputs rendered by ML systems. Although ubiquitous data collection and increased privacy breaches have elevated the reported concerns of consumers, consumers' behaviors do not always match their stated privacy concerns. This is referred to as the privacy paradox, and decades of information privacy research have identified a myriad of explanations for why this paradox occurs. Yet the underlying crux of the explanations presumes privacy concern to be the appropriate proxy to measure privacy attitude and compare with actual privacy behavior. We propose privacy interest as an alternative to privacy concern and assert that it can serve as a more robust measure in conditions leading to the privacy paradox. We define privacy interest as a consumer's general feeling toward reengaging particular behaviors that increase their information privacy. We found that privacy interest was more effective than privacy concern in predicting consumers' mobilization behaviors, such as publicly complaining about privacy issues to companies and third-party organizations, requesting to remove their information from company databases, and reducing their self-disclosure behaviors. By contrast, privacy concern was more effective than privacy interest in predicting consumers' behaviors to misrepresent their identity. By developing and empirically validating the privacy interest scale, we offer interest in privacy as a renewed theoretical lens through which to view consumers' privacy attitudes and behaviors.
46

Interactive Explanations in Quantitative Bipolar Argumentation Frameworks / Interaktiva förklaringar i kvantitativa bipolära argumentationsramar

Weng, Qingtao January 2021
Argumentation frameworks are a common technique in Artificial Intelligence and related fields, providing a good way of formalizing and resolving conflicts and supporting defeasible reasoning. This thesis discusses the exploration of the quantitative bipolar argumentation framework applied in multi-agent systems. Different agents in a multi-agent system have various capabilities, and they contribute in different ways to the system goal. The purpose of this study is to explore approaches to explaining the overall behavior and output of a multi-agent system and to enable explainability in such systems. By exploring the properties of the quantitative bipolar argumentation framework using techniques from explainable Artificial Intelligence (AI), the system generates output with explanations given by the argumentation framework. This thesis gives a general overview of argumentation frameworks and common techniques from explainable AI. The study mainly focuses on the properties and interactive algorithms of the quantitative bipolar argumentation framework and introduces explanation techniques to it. A Graphical User Interface (GUI) application is included in order to present the results of the explanation.
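As background for how argument strengths propagate in a quantitative bipolar argumentation framework, the sketch below implements one well-known gradual semantics (DF-QuAD) over an acyclic toy framework. This is an assumption: the thesis's exact semantics and interactive algorithms may differ, and the arguments and base scores are invented.

```python
# Hedged sketch: final strengths in an acyclic quantitative bipolar
# argumentation framework (QBAF) under the DF-QuAD gradual semantics.

def aggregate(strengths):
    """Probabilistic-sum aggregation of attacker or supporter strengths."""
    v = 0.0
    for s in strengths:
        v = v + s - v * s
    return v

def strength(arg, base, attackers, supporters, memo=None):
    """Final strength of `arg` from its base score and attack/support relations."""
    memo = {} if memo is None else memo
    if arg in memo:
        return memo[arg]
    va = aggregate(strength(a, base, attackers, supporters, memo)
                   for a in attackers.get(arg, []))
    vs = aggregate(strength(s, base, attackers, supporters, memo)
                   for s in supporters.get(arg, []))
    tau = base[arg]
    if va >= vs:
        memo[arg] = tau * (1.0 - (va - vs))        # attacks dominate: weaken
    else:
        memo[arg] = tau + (1.0 - tau) * (vs - va)  # supports dominate: strengthen
    return memo[arg]

# Toy multi-agent example: system goal g is supported by agent claim a and
# attacked by agent claim b, which is in turn attacked by agent claim c.
base = {"g": 0.5, "a": 0.8, "b": 0.6, "c": 0.7}
attackers = {"g": ["b"], "b": ["c"]}
supporters = {"g": ["a"]}
print({x: round(strength(x, base, attackers, supporters), 3) for x in base})
```

Tracing which attackers and supporters moved a strength up or down is one way such a framework can yield explanations of a system's output.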
47

Explainable Antibiotics Prescriptions in NLP with Transformer Models

Contreras Zaragoza, Omar Emilio January 2021
The overprescription of antibiotics has resulted in antibiotic resistance, which is considered a major threat to global health. Deciding whether antibiotics should be prescribed, based on individual visits in patients' medical records in Swedish, can be considered a text classification task, one of the applications of Natural Language Processing (NLP). However, medical experts and patients cannot trust a model if explanations for its decisions are not provided. In this work, multilingual and monolingual Transformer models are evaluated on the medical classification task. Furthermore, local explanations are obtained with SHapley Additive exPlanations and Integrated Gradients to compare the models' predictions and evaluate the explainability methods. Finally, the local explanations are also aggregated to obtain global explanations and understand the features that contributed the most to the prediction of each class.
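To illustrate the Integrated Gradients side of that comparison, the sketch below approximates the method's path integral with a Riemann sum over token embeddings. A toy differentiable scorer stands in for a real Transformer; the functions, dimensions, and baseline choice are illustrative assumptions, not the thesis's setup.

```python
# Hedged sketch of Integrated Gradients over token embeddings, with a toy
# differentiable scoring function standing in for a Transformer logit.
import numpy as np

def model_score(emb, w):
    """Toy 'logit': a tanh-squashed linear score summed over tokens."""
    return np.tanh(emb @ w).sum()

def model_grad(emb, w):
    """Analytic gradient of model_score w.r.t. the embeddings."""
    return (1.0 - np.tanh(emb @ w) ** 2)[:, None] * w[None, :]

def integrated_gradients(emb, baseline, w, steps=50):
    """IG_i = (x_i - x'_i) * average gradient along the straight-line path."""
    alphas = np.linspace(0.0, 1.0, steps)
    grads = np.mean(
        [model_grad(baseline + a * (emb - baseline), w) for a in alphas], axis=0)
    return (emb - baseline) * grads

rng = np.random.default_rng(1)
emb = rng.normal(size=(5, 16))    # 5 tokens, 16-dim embeddings
baseline = np.zeros_like(emb)     # zero/padding baseline (a common choice)
w = rng.normal(size=16)

attr = integrated_gradients(emb, baseline, w)
token_relevance = attr.sum(axis=1)  # per-token attribution for the class
print(np.round(token_relevance, 3))
```

Summing per-token attributions like this over a corpus is one way local explanations can be aggregated into the global, per-class view the abstract describes.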
48

Användargränssnitt i självkörande fordon : En kvantitativ enkätundersökning bland potentiella användare / User interface in self-driving cars : A quantitative questionnaire study among potential users

Olofsson, Ludvig, Modjtabaei, Anna Louise January 2023
The purpose of this study is to investigate which user interface potential users prefer for exchanging traffic-related information. The research question to be answered is: which user interface is preferred for communication in a self-driving vehicle? Reading this study gives the reader a deeper insight into how preferred user interfaces can increase acceptance among potential users. A quantitative method was used to conduct a sample survey by means of a web-based questionnaire, distributed through channels such as Facebook, LinkedIn, etc., to answer the study's research question. The empirical data collection resulted in 201 responses. The results showed that 41.3% of the respondents preferred a screen interface and 35.3% preferred a multimodal interface for interacting with a self-driving vehicle. In total, 84.1% of the respondents answered that using their preferred interface would increase efficiency and communication when exchanging information with the vehicle. The conclusion is that the choice of user interface can be influenced by various factors, such as experience and technological expectations. Future development of interfaces and technologies should strive to include a diversity of options in order to accommodate users' needs and preferences when communicating with vehicles.
49

Primary stage Lung Cancer Prediction with Natural Language Processing-based Machine Learning / Tidig lungcancerprediktering genom maskininlärning för textbehandling

Sadek, Ahmad January 2022
Early detection reduces mortality in lung cancer, but it is also considered a challenge for oncologists and healthcare systems. In addition, screening modalities like CT scans come with undesired effects: many suspected patients are wrongly diagnosed with lung cancer. This thesis contributes to solving the challenge of early lung cancer detection by utilizing unique data consisting of self-reported symptoms. The proposed method is a predictive machine learning algorithm based on natural language processing, which handles the data as an unstructured data set. For comparison, a previous study, in which a prediction model was built with conventional multivariate machine learning on the same data, is replicated and presented. After evaluation, validation and interpretation, a set of variables was highlighted as early predictors of lung cancer. The performance of the proposed approach matched that of the conventional approach. This promising result opens the door to further development in which such an approach could be used in clinical decision support systems. Future work could then involve other modalities, in a multimodal machine learning approach.
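As a sketch of how self-reported symptom text can drive such a predictor, the snippet below builds a TF-IDF bag-of-words classifier and inspects its strongest positive terms. The thesis's exact model and features are not specified here; TF-IDF with logistic regression is an assumed stand-in, and the notes and labels are invented.

```python
# Hedged sketch of an NLP-based predictor over self-reported symptom text.
# Assumptions: TF-IDF + logistic regression stand in for the actual model;
# the example notes and labels below are entirely invented.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

notes = [
    "persistent cough and chest pain for three weeks",
    "coughing blood, fatigue, unexplained weight loss",
    "seasonal allergies, runny nose, mild headache",
    "short of breath climbing stairs, hoarse voice",
    "sore throat after a cold, no other complaints",
    "back pain after lifting, otherwise healthy",
]
labels = [1, 1, 0, 1, 0, 0]  # 1 = later diagnosed with lung cancer (toy)

pipe = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
pipe.fit(notes, labels)

# Surface candidate early predictors: terms with the largest positive weights,
# mirroring how variables can be highlighted from a learnt text model.
vec = pipe.named_steps["tfidfvectorizer"]
clf = pipe.named_steps["logisticregression"]
terms = np.array(vec.get_feature_names_out())
top = np.argsort(clf.coef_[0])[-5:][::-1]
print(list(zip(terms[top], np.round(clf.coef_[0][top], 3))))
```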
50

Explainable AI Methods for Enhancing AI-based Network Intrusion Detection Systems

Osvaldo Guilherme Arreche 03 September 2024
In network security, the exponential growth of intrusions stimulates research toward developing advanced artificial intelligence (AI) techniques for intrusion detection systems (IDS). However, the reliance on AI for IDS presents challenges, including the performance variability of different AI models and the lack of explainability of their decisions, hindering the comprehension of outputs by human security analysts. Hence, this thesis proposes end-to-end explainable AI (XAI) frameworks tailored to enhance the understandability and performance of AI models in this context.

The first chapter benchmarks seven black-box AI models across one real-world and two benchmark network intrusion datasets, laying the foundation for subsequent analyses. Subsequent chapters delve into feature selection methods, recognizing their crucial role in enhancing IDS performance by extracting the most significant features for identifying anomalies in network security. Leveraging XAI techniques, novel feature selection methods are proposed, showcasing superior performance compared to traditional approaches.

Also, this thesis introduces an in-depth evaluation framework for black-box XAI-IDS, encompassing global and local scopes. Six evaluation metrics are analyzed, including descriptive accuracy, sparsity, stability, efficiency, robustness, and completeness, providing insights into the limitations and strengths of current XAI methods.

Finally, the thesis addresses the potential of ensemble learning techniques in improving AI-based network intrusion detection by proposing a two-level ensemble learning framework comprising base learners and ensemble methods trained on input datasets to generate evaluation metrics and new datasets for subsequent analysis. Feature selection is integrated into both levels, leveraging XAI-based and Information Gain-based techniques.

Holistically, this thesis offers a comprehensive approach to enhancing network intrusion detection through the synergy of AI, XAI, and ensemble learning techniques, providing open-source code and insights into model performance. It thereby contributes to the advancement of interpretable AI models for network security, empowering security analysts to make informed decisions in safeguarding networked systems.
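To illustrate XAI-based feature selection of the kind this thesis proposes, the sketch below ranks synthetic flow features by mean absolute SHAP value and retrains a detector on the top-k. This is a sketch under stated assumptions: the `shap` package is installed, the single-array return shape of `TreeExplainer.shap_values` holds for this binary log-odds model (it can vary by model type and shap version), and the data is invented, not one of the thesis's IDS datasets.

```python
# Hedged sketch of XAI-based feature selection for an IDS-style model:
# rank features by mean |SHAP| value, keep the top-k, and retrain.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(7)
n, d = 1000, 20
X = rng.normal(size=(n, d))
# Synthetic "intrusion" label driven by a handful of informative flow features.
y = (X[:, 0] - 2 * X[:, 3] + X[:, 7]
     + rng.normal(scale=0.5, size=n) > 0).astype(int)

model = GradientBoostingClassifier(random_state=0).fit(X, y)

# For this binary log-odds model, TreeExplainer is assumed to yield one
# (n_samples, n_features) array; global importance = mean |SHAP| per feature.
shap_values = shap.TreeExplainer(model).shap_values(X)
importance = np.abs(shap_values).mean(axis=0)

k = 5
top_k = np.argsort(importance)[-k:][::-1]
print("Selected feature indices:", top_k)

# Retrain the detector on the XAI-selected subset, echoing the two-level idea
# of feeding selection results back into subsequent learners.
slim = GradientBoostingClassifier(random_state=0).fit(X[:, top_k], y)
print("Accuracy on selected features:", round(slim.score(X[:, top_k], y), 3))
```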
