About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1. Performing Diagnostics & Prognostics On Simulated Engine Failures Using Neural Networks

Macmann, Owen, 28 June 2016

No description available.
2. Building Trustworthy Machine Learning Models using Ensembled Explanations

Prajwal Balasubramani, 16 December 2024
<p dir="ltr">Explainable AI (XAI) is a class of post-hoc analysis tools, which include a large selection of algorithms developed to increase transparency in the decision-making process of Machine Learning (ML) models. These tools aim to provide users with interpretations of the data and the model. However, despite the abundance of options and their potential in identifying and decomposing model behavior, XAI's inability to quantitatively assess trustworthiness, due to the lack of quantifiable metrics, has resulted in low adoption in real-world applications. In contrast, traditional methods to evaluate trust such as uncertainty quantification, robust testing, and user studies scale well with large models and datasets, thanks to their reliance on quantifiable metrics. However, they do not offer the same level of transparency and qualitative assessments as XAI to make the models more interpretable, which are a key component of the multi-faceted trustworthiness assessment.</p><p dir="ltr">To bridge this gap, I propose a framework in which explanations produced by XAI are ensembled across a portfolio of models. These ensembled explanations are then used for both quantitative and qualitative comparison to evaluate trust in the models. The goal is to leverage these explanations to assess trustworthiness driven by transparency. The framework also identifies areas of consensus or disagreement among the ensembled explanations. Further leverage the presence or absence of consensus to bin model reasoning to indicate weaknesses, misalignment to user expectations, and/or distribution shifts.</p><p dir="ltr">A preliminary investigation of the proposed framework is carried out on multivariate time-series data from NASA's Commercial Modular Aero-Propulsion System Simulation (CMAPSS) to model and predict turbojet engine degradation. This approach uses three distinct ML models to forecast the remaining useful life (RUL) of the engine. Using the proposed framework, influential system parameters contributing to engine degradation in each model are identified via XAI. These explanations are ensembled and compared to assess consensus. Ultimately, the models disagree on the extent of certain features contributing to the failure. However, experimental literature supports this finding as modeling engine degradation can be sensitive to the type of failure mode. Additionally, certain model architectures work better for certain types of data patterns, leading to recommendations on expert models. With these results and understanding of the intricacies of the framework, it is revised and implemented on a more complex application with a different data type and task: defect detection in robotic manipulation. The ARMBench (Amazon Robotic Manipulation Benchmark) dataset is used to train computer vision models for an image-based multi-classification problem and explained using activation maps. In this use case, both upstream and downstream influences and benefits of the framework are assessed while assessing the trustworthiness of the model and its predictions. The framework throws light on the strengths and weaknesses of the models, dataset, and deployment. Aiding in identifying strategies to mitigate weak and untrustworthy models. </p>

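For the ARMBench use case, the abstract says only that the classifiers are "explained using activation maps". The sketch below shows one common way to compute such a map, a Grad-CAM-style gradient-weighted activation map over the last convolutional block; the ResNet-18 backbone, target layer, and random input are assumptions, not the thesis's actual setup.

```python
# Minimal Grad-CAM-style sketch of an activation-map explanation for an
# image classifier, in the spirit of the maps applied to ARMBench.
# The ResNet-18 backbone, target layer, and random input are assumptions.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights=None).eval()
captured = {}

def hook(_module, _inputs, output):
    output.retain_grad()            # keep gradients for this activation
    captured["acts"] = output

model.layer4.register_forward_hook(hook)

x = torch.randn(1, 3, 224, 224)     # stand-in for one ARMBench image
logits = model(x)
logits[0, logits.argmax()].backward()  # gradient of the predicted class score

acts = captured["acts"]
weights = acts.grad.mean(dim=(2, 3), keepdim=True)  # pooled gradients per channel
cam = F.relu((weights * acts).sum(dim=1))           # gradient-weighted activation map
cam = F.interpolate(cam[None], size=x.shape[-2:], mode="bilinear",
                    align_corners=False)[0, 0]
print(cam.shape)  # (224, 224) heat map over the input image
```

Ensembling then proceeds as in the regression sketch: compute a map per model for the same image and compare where they localize the defect; overlapping maps indicate consensus, while disjoint ones flag the prediction for review.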