
Flexible Structured Prediction in Natural Language Processing with Partially Annotated Corpora

Structured prediction makes coherent decisions in the form of structured objects that capture the interrelations among the predicted variables. Such methods are widely used in areas such as bioinformatics, computer vision, speech recognition, and natural language processing. Machine learning with reduced supervision aims to alleviate the laborious and error-prone annotation effort and to benefit low-resource languages. In this dissertation, we study structured prediction with reduced supervision for two families of problems, sequence labeling and dependency parsing, both representative structured prediction problems in NLP. We investigate three different approaches.

The first approach is learning with a modular architecture through task decomposition. By decomposing each label into a location sub-label and a type sub-label, we design neural modules that tackle the sub-labels separately, with an additional module that fuses their information. Experiments on benchmark datasets show that the modular architecture outperforms existing models and can combine partially labeled data with fully labeled data to improve over using fully labeled data alone.

The second approach builds the neural CRF autoencoder (NCRFAE), a model that combines a discriminative component and a generative component for semi-supervised sequence labeling. The model has a unified structure with shared parameters, applying different loss functions to labeled and unlabeled data. We developed a variant of the EM algorithm to optimize the model with tractable inference. Experiments on POS tagging in several languages show that the model outperforms existing systems in both the supervised and semi-supervised setups.

The third approach builds two models for semi-supervised dependency parsing: the local autoencoding parser (LAP) and the global autoencoding parser (GAP). LAP assumes the chain-structured sentence carries a latent representation and uses this representation to construct the dependency tree, whereas GAP treats the dependency tree itself as a latent variable. Both models have unified structures for sentences with and without annotated parse trees. Experiments on several languages show that both parsers can use unlabeled sentences to improve over using labeled sentences alone; LAP is faster, while GAP outperforms existing models.
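To make the first approach's decomposition concrete, below is a minimal PyTorch sketch of splitting a sequence label into a location sub-label and a type sub-label, with separate prediction heads and a fusion module. The BIO-style tag scheme, the module names, and the linear fusion layer are illustrative assumptions, not the dissertation's exact architecture.

```python
# A minimal sketch of label decomposition for sequence labeling.
# Illustrative only: the BIO scheme, module names, and linear fusion
# are assumptions, not the dissertation's exact design.
import torch
import torch.nn as nn

def split_label(label: str):
    """Split a tag like 'B-PER' into a location sub-label ('B') and a
    type sub-label ('PER'); 'O' carries no entity type."""
    if label == "O":
        return "O", "NONE"
    loc, typ = label.split("-", 1)
    return loc, typ

class ModularTagger(nn.Module):
    def __init__(self, hidden_dim, n_loc, n_typ):
        super().__init__()
        self.loc_head = nn.Linear(hidden_dim, n_loc)   # predicts B/I/O
        self.typ_head = nn.Linear(hidden_dim, n_typ)   # predicts PER/LOC/...
        # Fusion module: combines the two sub-label distributions
        # into scores over the full composed label space.
        self.fuse = nn.Linear(n_loc + n_typ, n_loc * n_typ)

    def forward(self, h):                # h: (batch, seq_len, hidden_dim)
        loc_logits = self.loc_head(h)
        typ_logits = self.typ_head(h)
        fused = self.fuse(torch.cat([loc_logits, typ_logits], dim=-1))
        return loc_logits, typ_logits, fused

h = torch.randn(2, 7, 64)                      # toy encoder states
tagger = ModularTagger(hidden_dim=64, n_loc=3, n_typ=5)
loc, typ, full = tagger(h)                     # (2,7,3), (2,7,5), (2,7,15)
```

Because the two heads are supervised independently, a partially labeled example that annotates only boundaries or only entity types can still train the corresponding head, which is what allows partially and fully labeled data to be mixed.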
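For the second approach, the following sketch shows how shared linear-chain CRF parameters can support one loss for labeled data and another for unlabeled data. The per-position factorized decoder and the direct loss formulation are simplifying assumptions made here for brevity; the dissertation itself optimizes the NCRFAE with a variant of the EM algorithm.

```python
# A simplified sketch of a unified labeled/unlabeled objective in the
# spirit of the NCRFAE. Assumptions: the generative component factorizes
# per position, and we write the objective as a direct loss rather than
# the EM procedure the dissertation actually uses.
import torch

def forward_logz(emissions, transitions):
    """Log partition of a linear-chain CRF via the forward algorithm.
    emissions: (seq_len, n_tags); transitions: (n_tags, n_tags)."""
    alpha = emissions[0]
    for t in range(1, emissions.shape[0]):
        alpha = emissions[t] + torch.logsumexp(
            alpha.unsqueeze(1) + transitions, dim=0)
    return torch.logsumexp(alpha, dim=0)

def ncrfae_loss(emissions, transitions, decoder_logp, tags=None):
    """Shared parameters, two losses.
    decoder_logp[t, k] = log p_dec(x_t | y_t = k) from the generative
    component; tags is the gold sequence, or None for unlabeled data."""
    log_z = forward_logz(emissions, transitions)
    if tags is not None:
        # Labeled data: discriminative NLL of the gold tag sequence.
        pos = torch.arange(emissions.shape[0])
        gold = emissions[pos, tags].sum() + transitions[tags[:-1], tags[1:]].sum()
        return log_z - gold
    # Unlabeled data: -log sum_y p_enc(y|x) * prod_t p_dec(x_t|y_t).
    # Since the decoder factorizes over positions, its log-probs fold
    # into the emissions and the same forward recursion applies.
    return log_z - forward_logz(emissions + decoder_logp, transitions)

em = torch.randn(6, 4)                          # shared encoder scores
tr = torch.randn(4, 4)                          # shared transition scores
dec = torch.log_softmax(torch.randn(6, 4), -1)  # toy decoder log-probs
sup = ncrfae_loss(em, tr, dec, tags=torch.tensor([0, 1, 2, 3, 0, 1]))
unsup = ncrfae_loss(em, tr, dec)                # same parameters, other loss
```

The key point the sketch illustrates is that both branches reuse the same emissions and transitions, so gradients from labeled and unlabeled data update one set of parameters.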
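For the third approach, the skeleton below contrasts the two latent-variable treatments. All interfaces here are hypothetical placeholders rather than the actual LAP/GAP implementations; only the division of labor between the latent sentence representation (LAP) and the latent tree (GAP) reflects the abstract.

```python
# A skeleton contrasting the two latent-variable treatments.
# The callables passed in are hypothetical placeholders.

def lap_loss(encode, reconstruct, tree_nll, sentence, tree=None):
    """LAP: the chain-structured sentence has a latent representation z.
    The autoencoding term applies to every sentence; the supervised
    tree term applies only when a gold parse tree exists."""
    z = encode(sentence)                   # latent sentence representation
    loss = reconstruct(sentence, z)        # available for all sentences
    if tree is not None:
        loss = loss + tree_nll(z, tree)    # labeled sentences only
    return loss

def gap_loss(joint_nll, marginal_nll, sentence, tree=None):
    """GAP: the dependency tree itself is the latent variable."""
    if tree is not None:
        return joint_nll(sentence, tree)   # score the observed gold tree
    # Unlabeled: marginalize over all candidate trees, which can be made
    # tractable with dynamic programming over the tree space (e.g., the
    # inside algorithm for projective trees, or the Matrix-Tree theorem
    # for non-projective ones).
    return marginal_nll(sentence)
```

The unified structure in both cases is the same: one loss function that degrades gracefully from the labeled to the unlabeled case, so unlabeled sentences can supplement labeled ones during training.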

Identifier: 10.25394/pgs.12211055.v1
Identifier: oai:union.ndltd.org:purdue.edu/oai:figshare.com:article/12211055
Date: 29 April 2020
Creators: Xiao Zhang (8776265)
Source Sets: Purdue University
Detected Language: English
Type: Text, Thesis
Rights: CC BY-NC-SA 4.0
Relation: https://figshare.com/articles/Flexible_Structured_Prediction_in_Natural_Language_Processing_with_Partially_Annotated_Corpora/12211055
