
Large-scale semi-supervised learning for natural language processing

Natural Language Processing (NLP) develops computational approaches to processing language data. Supervised machine learning has become the dominant methodology of modern NLP. The performance of a supervised NLP system crucially depends on the amount of data available for training. In the standard supervised framework, if a sequence of words was not encountered in the training set, the system can only guess at its label at test time. The cost of producing labeled training examples is a bottleneck for current NLP technology. On the other hand, a vast quantity of unlabeled data is freely available.

This dissertation proposes effective, efficient, and versatile methodologies for (1) extracting useful information from very large (potentially web-scale) volumes of unlabeled data and (2) combining such information with standard supervised machine learning for NLP. We demonstrate novel ways to exploit unlabeled data, we scale these approaches to make use of all the text on the web, and we show improvements on a variety of challenging NLP tasks. This combination of learning from both labeled and unlabeled data is often referred to as semi-supervised learning.

Although individual patterns in unlabeled data lack manually provided labels, their aggregate statistics can often distinguish the correct label for an ambiguous test instance. In the first part of this dissertation, we propose to use the counts of unlabeled patterns as features in supervised classifiers, training these classifiers on varying amounts of labeled data. We propose a general approach for integrating information from multiple, overlapping sequences of context for lexical disambiguation problems. We also show how standard machine learning algorithms can be modified to incorporate a particular kind of prior knowledge: knowledge of effective weightings for count-based features. We then evaluate performance within and across domains for two generation and two analysis tasks, assessing the impact of combining web-scale counts with conventional features.

In the second part of this dissertation, rather than using the aggregate statistics as features, we propose to use them to generate labeled training examples. By automatically labeling a large number of examples, we can train powerful discriminative models that leverage fine-grained features of the input words.
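As a concrete illustration of the first approach, the following is a minimal sketch (not the dissertation's actual code) of how log-scaled N-gram counts from multiple overlapping context windows could serve as features for a standard discriminative classifier in a lexical disambiguation task. The lookup function get_ngram_count is a hypothetical interface to a precomputed web-scale N-gram count table, and the feature layout and classifier choice are illustrative assumptions.

import math
from sklearn.linear_model import LogisticRegression

def get_ngram_count(ngram):
    """Hypothetical lookup: frequency of the N-gram (a tuple of tokens)
    in a precomputed web-scale count table."""
    raise NotImplementedError

def count_features(tokens, position, candidates, max_order=5):
    """Build one feature vector for a disambiguation instance: for each
    candidate filler and each N-gram window (orders 2..max_order) that
    overlaps the target position, record the log of the web count."""
    feats = []
    for cand in candidates:
        filled = tokens[:position] + [cand] + tokens[position + 1:]
        for order in range(2, max_order + 1):
            for start in range(position - order + 1, position + 1):
                if start < 0 or start + order > len(filled):
                    feats.append(0.0)  # window extends past the sentence
                else:
                    ngram = tuple(filled[start:start + order])
                    feats.append(math.log(get_ngram_count(ngram) + 1.0))
    return feats

# With labeled instances (X = count-feature vectors, y = index of the
# correct candidate), any off-the-shelf discriminative learner applies:
#   clf = LogisticRegression(max_iter=1000).fit(X, y)
#   prediction = clf.predict([count_features(tokens, pos, candidates)])

Log-scaling the raw counts keeps features from different N-gram orders on a comparable scale; padding out-of-bounds windows with zero keeps the feature vector a fixed length across instances.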

Identifier: oai:union.ndltd.org:LACETR/oai:collectionscanada.gc.ca:AEU.10048/1522
Date: 11 1900
Creators: Bergsma, Shane A
Contributors: Goebel, Randy (Computing Science), Lin, Dekang (Computing Science), Kondrak, Greg (Computing Science), Schuurmans, Dale (Computing Science), Westbury, Chris (Psychology), Hovy, Eduard (Information Sciences Institute, University of Southern California)
Source Sets: Library and Archives Canada ETDs Repository / Centre d'archives des thèses électroniques de Bibliothèque et Archives Canada
Language: English
Detected Language: English
Type: Thesis
Format: 937353 bytes, application/pdf
Relation:
Shane Bergsma, Emily Pitler and Dekang Lin, Creating Robust Supervised Classifiers via Web-Scale N-gram Data, In ACL 2010, Uppsala, Sweden, July 2010.
Shane Bergsma, Dekang Lin and Dale Schuurmans, Improved Natural Language Learning via Variance-Regularization Support Vector Machines, In CoNLL 2010, Uppsala, Sweden, July 2010.
Shane Bergsma, Dekang Lin and Randy Goebel, Web-Scale N-gram Models for Lexical Disambiguation, In IJCAI 2009, Pasadena, California, July 2009.
Shane Bergsma, Dekang Lin and Randy Goebel, Distributional Identification of Non-Referential Pronouns, In ACL-HLT 2008, Columbus, Ohio, June 2008.
Shane Bergsma, Dekang Lin and Randy Goebel, Discriminative Learning of Selectional Preference from Unlabeled Text, In EMNLP 2008, Waikiki, Honolulu, Hawaii, October 2008.
Shane Bergsma and Grzegorz Kondrak, Alignment-Based Discriminative String Similarity, In ACL 2007, Prague, Czech Republic, June 2007.
