
Factorial Hidden Markov Models for full and weakly supervised supertagging

For many sequence prediction tasks in Natural Language Processing, modeling dependencies between individual predictions improves the accuracy of the predicted sequence as a whole. Supertagging involves assigning lexical entries to words based on a lexicalized grammatical theory such as Combinatory Categorial Grammar (CCG).
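To make the task concrete, here is a small illustrative Python snippet (not taken from the thesis) pairing each word of a short sentence with a plausible coarse POS tag and a CCG supertag; it shows how supertags are far richer lexical categories than POS tags.

```python
# Hypothetical example: CCG supertags encode much of the word's
# syntactic behavior (e.g., a transitive verb looks for an NP on
# each side), whereas POS tags are coarse labels.
sentence  = ["Vinken", "joined", "the", "board"]
pos_tags  = ["NNP", "VBD", "DT", "NN"]
supertags = ["NP", "(S\\NP)/NP", "NP/N", "N"]

for word, pos, stag in zip(sentence, pos_tags, supertags):
    print(f"{word}\t{pos}\t{stag}")
```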

Previous work has used Bayesian HMMs to learn taggers for POS tagging and supertagging separately. Modeling the two tasks jointly has the potential to produce more robust and accurate supertaggers trained with less supervision, and thereby to help create useful models for new languages and domains.

Factorial Hidden Markov Models (FHMMs) support joint inference for multiple sequence prediction tasks. Here, I use them to jointly predict part-of-speech tag and supertag sequences with varying levels of supervision. I show that supervised training of FHMMs improves performance compared to standard HMMs, especially when labeled training material is scarce. Second, FHMMs trained from tag dictionaries rather than labeled examples also outperform a standard HMM. Finally, I show that an FHMM and a maximum entropy Markov model can complement each other in a single-step co-training setup that improves the performance of both models when limited labeled training material is available.
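As a rough sketch of the joint model, the following Python function scores a sentence under a generic two-chain factorial HMM, with one hidden chain for POS tags and one for supertags, and each word emitted conditioned on both hidden states. This is an assumption-level illustration of the FHMM factorization, not the thesis's exact parameterization (for instance, the thesis may condition supertag transitions on POS tags or use Bayesian priors); the `params` object and its probability tables are hypothetical.

```python
import math

def joint_sequence_logprob(words, pos_seq, super_seq, params):
    """Log-probability of a sentence with its (POS, supertag) chains
    under a simple two-chain factorial HMM sketch.

    Assumed (hypothetical) structure of `params`:
      params.pos_trans[(prev_pos, pos)]      -> p(pos_t | pos_{t-1})
      params.super_trans[(prev_stag, stag)]  -> p(stag_t | stag_{t-1})
      params.emit[(pos, stag, word)]         -> p(word_t | pos_t, stag_t)
    All tables are assumed to be smoothed so lookups are positive.
    """
    logp = 0.0
    prev_pos, prev_stag = "<s>", "<s>"
    for word, pos, stag in zip(words, pos_seq, super_seq):
        logp += math.log(params.pos_trans[(prev_pos, pos)])      # POS chain transition
        logp += math.log(params.super_trans[(prev_stag, stag)])  # supertag chain transition
        logp += math.log(params.emit[(pos, stag, word)])         # word emitted from both states
        prev_pos, prev_stag = pos, stag
    return logp
```

Under this factorization, decoding amounts to searching over the cross-product of the two state spaces, which is why sharing information between the POS and supertag chains can help when labeled data is scarce.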

Identifier: oai:union.ndltd.org:UTEXAS/oai:repositories.lib.utexas.edu:2152/ETD-UT-2009-08-350
Date: 2009 August 1900
Creators: Ramanujam, Srivatsan
Source Sets: University of Texas
Language: English
Detected Language: English
Type: thesis
Format: application/pdf
