Trustworthy AI: Ensuring Explainability and Acceptance. Davinder Kaur (17508870), 03 January 2024
<p dir="ltr">In the dynamic realm of Artificial Intelligence (AI), this study explores the multifaceted landscape of Trustworthy AI with a dedicated focus on achieving both explainability and acceptance. The research addresses the evolving dynamics of AI, emphasizing the essential role of human involvement in shaping its trajectory.</p><p dir="ltr">A primary contribution of this work is the introduction of a novel "Trustworthy Explainability Acceptance Metric", tailored for the evaluation of AI-based systems by field experts. Grounded in a versatile distance acceptance approach, this metric provides a reliable measure of acceptance value. Practical applications of this metric are illustrated, particularly in critical domains such as medical diagnostics. Another significant contribution is the proposal of a trust-based security framework for 5G social networks. This framework enhances security and reliability by incorporating community insights and leveraging trust mechanisms, presenting a valuable advancement in social network security.</p><p dir="ltr">The study also introduces an artificial conscience-control module model, innovating with the concept of "Artificial Feeling." This model is designed to enhance AI system adaptability based on user preferences, ensuring controllability, safety, reliability, and trustworthiness in AI decision-making. This innovation contributes to fostering increased societal acceptance of AI technologies. Additionally, the research conducts a comprehensive survey of foundational requirements for establishing trustworthiness in AI. Emphasizing fairness, accountability, privacy, acceptance, and verification/validation, this survey lays the groundwork for understanding and addressing ethical considerations in AI applications. The study concludes by exploring quantum alternatives, offering fresh perspectives on algorithmic approaches in trustworthy AI systems. This exploration broadens the horizons of AI research, pushing the boundaries of traditional algorithms.</p><p dir="ltr">In summary, this work significantly contributes to the discourse on Trustworthy AI, ensuring both explainability and acceptance in the intricate interplay between humans and AI systems. Through its diverse contributions, the research offers valuable insights and practical frameworks for the responsible and ethical deployment of AI in various applications.</p>
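The abstract describes the metric only at a high level, so the following is a minimal sketch of what a distance-based acceptance measure could look like, not the dissertation's actual formulation. The function name, the per-case tolerance threshold, and the use of absolute distance between expert and system scores are all illustrative assumptions.

```python
import numpy as np

def acceptance_value(expert_scores, system_scores, threshold=0.2):
    """Hypothetical distance-based acceptance measure (illustrative only).

    Computes the absolute distance between expert judgments and system
    outputs for each case; the acceptance value is the fraction of cases
    whose distance falls within the tolerance threshold.
    """
    expert = np.asarray(expert_scores, dtype=float)
    system = np.asarray(system_scores, dtype=float)
    distances = np.abs(expert - system)
    return float(np.mean(distances <= threshold))

# Example: five hypothetical diagnostic cases rated on a [0, 1] scale
experts = [0.90, 0.80, 0.70, 0.95, 0.60]
model   = [0.85, 0.50, 0.72, 0.90, 0.58]
score = acceptance_value(experts, model, threshold=0.1)  # 4 of 5 within tolerance
```

A distribution-aware distance (or expert-specific weighting) would be a natural refinement, but the thresholded form keeps the "acceptance by field experts" intuition visible.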
ANALYSIS OF LATENT SPACE REPRESENTATIONS FOR OBJECT DETECTION. Ashley S Dale (8771429), 03 September 2024
<p dir="ltr">Deep Neural Networks (DNNs) successfully perform object detection tasks, and the Convolutional Neural Network (CNN) backbone is a commonly used feature extractor before secondary tasks such as detection, classification, or segmentation. In a DNN model, the relationship between the features learned by the model from the training data and the features leveraged by the model during testing and deployment has motivated the area of feature interpretability studies. The work presented here applies equally to white-box and black-box models and to any DNN architecture. The metrics developed do not require any information beyond the feature vector generated by the feature extraction backbone. These methods are therefore the first capable of estimating black-box model robustness in terms of latent space complexity and the first capable of examining feature representations in the latent space of black-box models.</p><p dir="ltr">This work contributes the following four novel methodologies and results. First, a method for quantifying the invariance and/or equivariance of a model using the training data shows that the representation of a feature in the model impacts model performance. Second, a method for quantifying an observed domain gap in a dataset using the latent feature vectors of an object detection model is paired with pixel-level augmentation techniques to close the gap between real and synthetic data. This results in an improvement in the model’s F1 score on a test set of outliers from 0.5 to 0.9. Third, a method for visualizing and quantifying similarities of the latent manifolds of two black-box models is used to correlate similar feature representations with increased success in the transferability of gradient-based attacks. Finally, a method for examining the global complexity of decision boundaries in black-box models is presented, where more complex decision boundaries are shown to correlate with increased model robustness to gradient-based and random attacks.</p>
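The abstract does not specify how the latent-space domain gap is measured, so the sketch below uses a deliberately simple stand-in: the Euclidean distance between the per-domain mean feature vectors. The function name and the distance choice are assumptions; distribution-level measures (covariance-aware, Fréchet-style) would be closer to practice but follow the same pattern of comparing real and synthetic feature sets.

```python
import numpy as np

def latent_domain_gap(real_feats, synth_feats):
    """Illustrative domain-gap estimate between two sets of latent
    feature vectors (rows = samples, columns = feature dimensions).

    Uses the Euclidean distance between the per-domain mean feature
    vectors, a simplified proxy for distribution-level measures.
    """
    real = np.asarray(real_feats, dtype=float)
    synth = np.asarray(synth_feats, dtype=float)
    return float(np.linalg.norm(real.mean(axis=0) - synth.mean(axis=0)))

# Toy example: 2-D latent features from a "real" and a "synthetic" domain
real = np.array([[0.0, 0.0], [2.0, 2.0]])    # mean (1, 1)
synth = np.array([[4.0, 4.0], [6.0, 6.0]])   # mean (5, 5)
gap = latent_domain_gap(real, synth)
```

In the workflow the abstract describes, a measure like this would be recomputed after each pixel-level augmentation to check whether the real-versus-synthetic gap is actually closing.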
Transparent and Scalable Knowledge-based Geospatial Mapping Systems for Trustworthy Urban Studies. Hunsoo Song (18508821), 07 May 2024
<p dir="ltr">This dissertation explores the integration of remote sensing and artificial intelligence (AI) in geospatial mapping, specifically through the development of knowledge-based mapping systems. Remote sensing has revolutionized Earth observation by providing data that far surpasses traditional in-situ measurements. Over the last decade, significant advancements in inferential capabilities have been achieved through the fusion of geospatial sciences and AI (GeoAI), particularly with the application of deep learning. Despite its benefits, the reliance on data-driven AI has introduced challenges, including unpredictable errors and biases due to imperfect labeling and the opaque nature of the processes involved.</p><p dir="ltr">The research highlights the limitations of solely using data-driven AI methods for geospatial mapping, which tend to produce spatially heterogeneous errors and lack transparency, thus compromising the trustworthiness of the outputs. In response, it proposes novel knowledge-based mapping systems that prioritize transparency and scalability. This research has developed comprehensive techniques to extract key Earth and urban features and has introduced a 3D urban land cover mapping system, including a 3D Landscape Clustering framework aimed at enhancing urban climate studies. The developed systems utilize universally applicable physical knowledge of targets, captured through remote sensing, to enhance mapping accuracy and reliability without the typical drawbacks of data-driven approaches.</p><p dir="ltr">The dissertation emphasizes the importance of moving beyond mere accuracy to consider the broader implications of error patterns in geospatial mappings. It demonstrates the value of integrating generalizable target knowledge, explicitly represented in remote sensing data, into geospatial mapping to address the trustworthiness challenges in AI mapping systems. 
By developing mapping systems that are open, transparent, and scalable, this work aims to mitigate the effects of spatially heterogeneous errors, thereby improving the trustworthiness of geospatial mapping and analysis across various fields. Additionally, the dissertation introduces methodologies to support urban pathway accessibility and flood management studies through dependable geospatial systems. These efforts aim to establish a robust foundation for informed urban planning, efficient resource allocation, and enriched environmental insights, contributing to the development of more sustainable, resilient, and smart cities.</p>
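To make the contrast with opaque data-driven classifiers concrete, here is a toy sketch of a transparent, rule-based land-cover labeler; it is not the dissertation's system. The labels, the height-above-ground (nDSM) threshold, and the NDVI threshold are illustrative assumptions, but the structure shows the appeal of knowledge-based rules: every decision traces to an explicit, physically interpretable condition.

```python
import numpy as np

def classify_land_cover(ndsm, ndvi, height_thresh=2.0, veg_thresh=0.3):
    """Toy knowledge-based land-cover classifier (illustrative only).

    Labels: 0 = ground, 1 = low vegetation, 2 = tree, 3 = building.
    Height above ground (nDSM, meters) separates elevated objects from
    the ground; NDVI separates vegetation from built structures.
    """
    ndsm = np.asarray(ndsm, dtype=float)
    ndvi = np.asarray(ndvi, dtype=float)
    labels = np.zeros(ndsm.shape, dtype=int)            # default: ground
    labels[(ndsm < height_thresh) & (ndvi >= veg_thresh)] = 1   # low vegetation
    labels[(ndsm >= height_thresh) & (ndvi >= veg_thresh)] = 2  # tree
    labels[(ndsm >= height_thresh) & (ndvi < veg_thresh)] = 3   # building
    return labels

# Four example pixels: pavement, grass, tree canopy, rooftop
heights = [0.1, 0.5, 5.0, 8.0]
greenness = [0.05, 0.6, 0.7, 0.1]
labels = classify_land_cover(heights, greenness)
```

Because the rules are explicit, error patterns are predictable and spatially interpretable, which is the trustworthiness property the dissertation argues purely data-driven mappings lack.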