Prompt-learning and Zero-shot Text Classification with Domain-specific Textual Data

Luo, Hengyu. January 2023.
The rapid growth of textual data in the digital age presents unique challenges for domain-specific text classification, chief among them the scarcity of labeled data in many applications, owing to the high cost of manual annotation. In this thesis, we explore the applicability of prompt-learning, a method known for its suitability in few-shot scenarios and its low data requirements, as an emerging alternative to traditional fine-tuning for domain-specific text classification in the context of customer-agent interactions in the retail sector. Specifically, we implement the entire prompt-learning pipeline for the classification task. Our investigation covers several prompt-learning strategies, including fixed-prompt language model tuning and tuning-free prompting, along with an examination of language model selection, few-shot sampling, prompt template design, and verbalizer design. On this basis, we assess the overall performance of prompt-learning on the classification task. Through a systematic evaluation, we demonstrate that with the fixed-prompt language model tuning strategy, built on relatively small language models (e.g., T5-base, with around 220M parameters), prompt-learning can achieve competitive performance (close to 75% accuracy) even with limited labeled data (no more than 15% of the full dataset). Moreover, with the tuning-free prompting strategy, built on a regular-size language model (e.g., FLAN-T5-large, with around 770M parameters), accuracy can reach around 30% with detailed prompt templates in a zero-shot setting (no additional training data involved). These results offer valuable insights for researchers and practitioners working with domain-specific textual data, prompt-learning, and few-shot/zero-shot learning. The findings of this thesis highlight the potential of prompt-learning as a practical solution for classification problems across diverse domains and set the stage for future research in this area.
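As a rough illustration of the tuning-free prompting strategy the abstract describes, the sketch below runs zero-shot classification with FLAN-T5-large through the Hugging Face transformers library. The label set, prompt template, and example message are hypothetical stand-ins; the thesis's actual domain-specific classes and templates are not reproduced here.

```python
# Minimal sketch of tuning-free (zero-shot) prompting for text classification.
# Assumes the Hugging Face transformers library; labels and template are
# hypothetical placeholders, not the thesis's actual retail-domain classes.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

MODEL_NAME = "google/flan-t5-large"  # ~770M parameters, as cited in the abstract
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)

# Hypothetical verbalizer: the label words the model is asked to emit.
LABELS = ["delivery", "refund", "product question"]

def classify_zero_shot(utterance: str) -> str:
    # Detailed prompt template: instruction + candidate labels + the input text.
    prompt = (
        "Classify the following customer message into one of these categories: "
        + ", ".join(LABELS) + ".\n"
        f"Message: {utterance}\n"
        "Category:"
    )
    inputs = tokenizer(prompt, return_tensors="pt")
    # No training data is involved; the pretrained model answers directly.
    outputs = model.generate(**inputs, max_new_tokens=8)
    return tokenizer.decode(outputs[0], skip_special_tokens=True).strip()

print(classify_zero_shot("Where is my package? It was supposed to arrive Monday."))
```

In this setting the pretrained instruction-tuned model is used as-is, so classification quality hinges entirely on the prompt template and the verbalizer wording rather than on any gradient updates.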
