Understanding long conversations requires recognizing a discourse flow unique to dialogue. Recent advances in unsupervised text representation learning have been attained primarily through language modeling, which models discourse only implicitly and within a small window. These representations are in turn evaluated chiefly on sentence-pair or paragraph-question-pair benchmarks, which measure only local discourse coherence. To improve performance on discourse-reliant, long conversation tasks, we propose Turn-of-Phrase pre-training, an objective designed to encode long conversation discourse flow. We leverage tree-structured English Reddit conversations: given a chosen conversation path through the tree, we select other paths of varying degrees of relatedness. The final utterance of the chosen path is appended to each related path, and the model learns to identify the most coherent conversation path. We demonstrate that our pre-training objective encodes conversational discourse awareness by improving performance on a dialogue act classification task. We then demonstrate the value of transferring discourse awareness with a comprehensive array of conversation-level classification tasks evaluating persuasion, conflict, and deception.
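A minimal sketch of the candidate-construction step described above, assuming a simple tree representation of a Reddit thread. All names here (ConversationNode, root_to_leaf_paths, build_candidates) are illustrative assumptions, not code from the thesis; relatedness is approximated by shared-prefix length with the anchor path.

```python
# Hypothetical illustration of Turn-of-Phrase candidate construction:
# pick an anchor root-to-leaf path, sample distractor paths spanning a
# range of relatedness, and append the anchor's final utterance to each.
import random
from dataclasses import dataclass, field

@dataclass
class ConversationNode:
    utterance: str
    children: list = field(default_factory=list)

def root_to_leaf_paths(node, prefix=()):
    """Enumerate every root-to-leaf utterance path in the tree."""
    path = prefix + (node.utterance,)
    if not node.children:
        yield path
    for child in node.children:
        yield from root_to_leaf_paths(child, path)

def build_candidates(root, num_distractors=3, seed=0):
    """Build one multiple-choice instance: the anchor path plus
    distractor paths with the anchor's final utterance appended."""
    rng = random.Random(seed)
    paths = list(root_to_leaf_paths(root))
    anchor = rng.choice(paths)
    others = [p for p in paths if p != anchor]

    def shared_prefix(p):
        # Number of leading utterances shared with the anchor;
        # a rough proxy for how related the two paths are.
        n = 0
        for a, b in zip(anchor, p):
            if a != b:
                break
            n += 1
        return n

    # Sort by relatedness, then take evenly spaced paths so the
    # distractors span varying degrees of relatedness.
    others.sort(key=shared_prefix, reverse=True)
    step = max(1, len(others) // max(1, num_distractors))
    distractors = others[::step][:num_distractors]

    final = anchor[-1]
    candidates = [anchor] + [p + (final,) for p in distractors]
    rng.shuffle(candidates)
    return candidates, candidates.index(anchor)

if __name__ == "__main__":
    leaves = [ConversationNode(f"reply {i}") for i in range(4)]
    branch = ConversationNode("follow-up", children=leaves[:2])
    root = ConversationNode("original post", children=[branch, *leaves[2:]])
    cands, gold = build_candidates(root)
    print(f"{len(cands)} candidates; coherent path at index {gold}")
```

The model would then score each candidate path and be trained, with a cross-entropy objective over the candidates, to assign the highest score to the coherent one.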
Identifier | oai:union.ndltd.org:BGMYU2/oai:scholarsarchive.byu.edu:etd-10672 |
Date | 16 August 2021 |
Creators | Laboulaye, Roland |
Publisher | BYU ScholarsArchive |
Source Sets | Brigham Young University |
Detected Language | English |
Type | text |
Format | application/pdf |
Source | Theses and Dissertations |
Rights | https://lib.byu.edu/about/copyright/ |