Interpretable Frameworks for User-Centered Structured AI Models

Ying-Chun Lin (20371953), 17 December 2024
<p dir="ltr">User-centered structured models are designed to enhance user experience and assist individuals in making more informed decisions. In these models, user behavior is typically represented through graphs that illustrate the relationships between users (graph models) or through conversations that depict user interactions with AI systems (language models). However, even with their success, these complex models often remain opaque in many instances. As a result, there has been a growing focus on the rapid development of interpretable machine learning. Interpretable machine learning takes insights from a model and translates complex model concepts, e.g., node or sentence representations, or decisions, e.g., predicted labels, into concepts that are understandable to humans. Our goal of this thesis is to enhance the interpretability of <i>user-centered structured AI models</i> (graph and language models) through the provision of interpretations and explanations, while simultaneously enhancing the performance of these models on downstream tasks.</p><p dir="ltr">In the field of graphs, nodes represent real-world entities, and their relationships are depicted by edges. Graph models usually produce node representations, which are meaningful and low-dimension vectors that encapsulate the characteristics of the nodes. However, existing research on the interpretation of node representations is limited and lacks empirical validation, raising concerns about the reliability of interpretation methods. To solve this problem, we first introduce a novel evaluation method, IME Process to assess interpretation methods. Subsequently, we propose representations-Node Coherence Rate for Representation Interpretation (NCI)–which provides more accurate interpretation results compared to previous interpretation methods. After understanding the information captured in node representations, we further introduce Task-Aware Contrastive Learning (Task-aware CL) which aims to enhance downstream task performance for graph models by maximizing the mutual information between the downstream task and node representations with a contrastive learning process. Our experimental results demonstrate that Task-aware CL significantly enhances performance across downstream tasks.</p><p dir="ltr">In the context of conversations, user satisfaction estimation (USE) for conversational systems is crucial for ensuring the reliability and safety of the language models involved. We further emphasize that USE should be <i>interpretable </i>to guide continuous improvement during development of these models. Therefore, we propose Supervised Prompting for User satisfaction Rubrics (<i>SPUR</i>) to learn the reasons behind user satisfaction and dissatisfaction with an AI agent and to estimate user satisfaction with Large Language Models. In our experiment results, we demonstrate that <i>SPUR </i>not only offers enhanced interpretability by learning rubrics to understand user satisfaction/dissatisfaction with supervised signals, but it also exhibits superior accuracy via domain-specific in-context learning.</p>
