
Turn of Phrase: Contrastive Pre-Training for Discourse-Aware Conversation Models

Understanding long conversations requires recognizing a discourse flow unique to conversation. Recent advances in unsupervised representation learning of text have been attained primarily through language modeling, which models discourse only implicitly and within a small window. These representations are in turn evaluated chiefly on sentence-pair or paragraph-question-pair benchmarks, which measure only local discourse coherence. To improve performance on discourse-reliant, long-conversation tasks, we propose Turn-of-Phrase pre-training, an objective designed to encode the discourse flow of long conversations. We leverage tree-structured Reddit conversations in English: relative to a chosen conversation path through the tree, we select alternative paths of varying degrees of relatedness. The final utterance of the chosen path is appended to each related path, and the model learns to identify the most coherent conversation path. We demonstrate that our pre-training objective encodes conversational discourse awareness by improving performance on a dialogue act classification task. We then demonstrate the value of transferring discourse awareness with a comprehensive array of conversation-level classification tasks evaluating persuasion, conflict, and deception.
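
The abstract describes the data construction at a high level only; the thesis itself is the authoritative source. As a rough Python sketch of how such contrastive examples might be built from a conversation tree, the following assumes a simple Node structure, a random root-to-leaf path sampler, and a fixed number of distractors per example; all of these names and choices are illustrative assumptions, not details taken from the abstract.

    import random
    from dataclasses import dataclass, field

    @dataclass
    class Node:
        # One utterance in a tree-structured Reddit conversation.
        text: str
        children: list["Node"] = field(default_factory=list)

    def sample_path(root, rng):
        # Walk from the root to a leaf, picking a random child at each step.
        node, path = root, [root.text]
        while node.children:
            node = rng.choice(node.children)
            path.append(node.text)
        return path

    def build_example(root, num_distractors=3, seed=0, max_tries=100):
        # Build one contrastive example: several candidate conversation
        # paths, each ending in the chosen path's final utterance. Only the
        # chosen path itself is a coherent conversation; the label is its
        # index after shuffling.
        rng = random.Random(seed)
        chosen = sample_path(root, rng)
        final = chosen[-1]
        candidates = [chosen]
        for _ in range(max_tries):
            if len(candidates) == num_distractors + 1:
                break
            # Distractors share the tree (and so vary in relatedness to the
            # chosen path) but did not actually lead to the final utterance.
            distractor = sample_path(root, rng)[:-1] + [final]
            if distractor not in candidates:
                candidates.append(distractor)
        rng.shuffle(candidates)
        return candidates, candidates.index(chosen)

    # Hypothetical usage on a toy conversation tree:
    tree = Node("OP post", [
        Node("reply A", [Node("reply A1"), Node("reply A2")]),
        Node("reply B", [Node("reply B1")]),
    ])
    paths, label = build_example(tree, num_distractors=2)

The returned label would then serve as the target for a standard cross-entropy contrastive loss over the model's coherence scores for the candidate paths.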

Identifier: oai:union.ndltd.org:BGMYU2/oai:scholarsarchive.byu.edu:etd-10672
Date: 16 August 2021
Creators: Laboulaye, Roland
Publisher: BYU ScholarsArchive
Source Sets: Brigham Young University
Detected Language: English
Type: text
Format: application/pdf
Source: Theses and Dissertations
Rights: https://lib.byu.edu/about/copyright/