
Deep learning applications for transition-based dependency parsing

Dependency parsing builds dependency trees of binary head-dependent relations that describe the syntactic role of each word in a sentence. Recently, dependency parsing has seen large accuracy improvements from deep learning, which enables richer feature representations and more flexible architectures. In this thesis we focus on applying these methods to transition-based parsing, a faster variant of the task. We explore current architectures and examine ways to improve their representational capacity and final accuracy. Our first contribution improves the basic architecture at the heart of many current parsers: we show that Recurrent Neural Network hidden layers, initialised with pretrained weights from a feed-forward network, provide significant accuracy gains. Second, we examine which parser architecture works best, showing that separate classifiers for dependency parsing and labelling, sharing a common input layer, give the best accuracy, and that a parser and labeller can also be trained separately with success. Finally, we propose Recursive LSTM Trees, which represent an entire tree as a single dense vector and achieve competitive accuracy with minimal features. The parsers developed in this thesis cover many aspects of the task and are easy to integrate with current methods.
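The abstract assumes familiarity with how a transition-based parser operates. As a rough, minimal sketch (not code from the thesis), the following Python fragment implements the standard arc-standard transition system; the thesis's contribution lies in the neural classifiers that play the oracle role below, and the function names and toy oracle here are illustrative assumptions.

    SHIFT, LEFT_ARC, RIGHT_ARC = "shift", "left_arc", "right_arc"

    def parse(words, oracle):
        # Arc-standard configuration: a stack, a buffer of remaining token
        # indices, and the arcs built so far. Index 0 is an artificial ROOT
        # token; word i (1-based) is words[i - 1].
        stack = [0]
        buffer = list(range(1, len(words) + 1))
        arcs = []
        while buffer or len(stack) > 1:
            action = oracle(stack, buffer)
            if action == SHIFT and buffer:
                stack.append(buffer.pop(0))
            elif action == LEFT_ARC and len(stack) >= 2:
                dep = stack.pop(-2)            # second-from-top takes the top as its head
                arcs.append((stack[-1], dep))
            elif action == RIGHT_ARC and len(stack) >= 2:
                dep = stack.pop()              # top takes the item below as its head
                arcs.append((stack[-1], dep))
            else:
                raise ValueError("illegal action %r in this configuration" % action)
        return arcs

    # Toy demo with a scripted oracle; a neural parser instead predicts each
    # action from features of (stack, buffer), e.g. with the RNN hidden
    # layers discussed in the abstract.
    actions = iter([SHIFT, SHIFT, LEFT_ARC, RIGHT_ARC])
    print(parse(["the", "cat"], lambda stack, buffer: next(actions)))
    # -> [(2, 1), (0, 2)]: "the" attaches to "cat", "cat" attaches to ROOT.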

Identifier oai:union.ndltd.org:bl.uk/oai:ethos.bl.uk:760485
Date January 2018
Creators Elkaref, Mohab
Publisher University of Birmingham
Source Sets Ethos UK
Detected Language English
Type Electronic Thesis or Dissertation
Source http://etheses.bham.ac.uk//id/eprint/8620/
