
Chinese Zero Pronoun Resolution with Neural Networks

In this thesis, I explored several neural network-based models for zero pronoun resolution in Chinese-English translation tasks. I reviewed previous work that frames resolution as a classification task (for example, determining whether a candidate from a given set is the antecedent of a zero pronoun), which can be categorized into rule-based and supervised methods. Existing methods either did not take the relationships between potential zero pronoun candidates into consideration or did not fully utilize attention over zero pronoun representations. In my experiments, I investigated attention-based neural network models as well as their application in a reinforcement learning setting, building on an existing neural model. In particular, I integrated an LSTM-tree-based module into the attention network, which encodes syntactic information for zero pronoun resolution. In addition, I applied Bi-Attention layers between modules to interactively learn the alignment between syntax and semantics. Furthermore, I leveraged a reinforcement learning framework to fine-tune the proposed model and experimented with different encoding strategies, i.e., FastText, BERT, and a trained RNN-based embedding. I found that the attention-based model with the LSTM-tree-based module, fine-tuned under the reinforcement learning framework using FastText embeddings, achieves the best performance, outperforming the baseline models. I also evaluated model performance on different categories of text, where FastText showed great potential in encoding web blog text.
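The abstract does not give implementation details for the Bi-Attention layers, but the idea of interactively aligning a syntax encoding with a semantic encoding can be sketched as standard bidirectional attention. Below is a minimal NumPy sketch under assumed details: dot-product scoring and a shared hidden size `d` for both sequences; the function and variable names are illustrative, not taken from the thesis.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def bi_attention(syntax, semantic):
    """Bidirectional attention between two representation sequences.

    syntax:   (n, d) array, e.g. tree-LSTM states over parse nodes
    semantic: (m, d) array, e.g. contextual token embeddings
    Returns syntax-to-semantic and semantic-to-syntax alignments.
    """
    scores = syntax @ semantic.T                    # (n, m) similarity matrix
    syn2sem = softmax(scores, axis=1) @ semantic    # (n, d): each syntax node
                                                    # attends over semantic tokens
    sem2syn = softmax(scores.T, axis=1) @ syntax    # (m, d): each token attends
                                                    # over syntax nodes
    return syn2sem, sem2syn

rng = np.random.default_rng(0)
syn = rng.normal(size=(5, 8))   # 5 parse nodes, hidden size 8
sem = rng.normal(size=(7, 8))   # 7 tokens, hidden size 8
a, b = bi_attention(syn, sem)
print(a.shape, b.shape)         # (5, 8) (7, 8)
```

In a full model, the two aligned outputs would typically be concatenated with (or gated against) the original representations before the downstream antecedent classifier; how the thesis combines them is not specified here.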

Identifier oai:union.ndltd.org:UPSALLA1/oai:DiVA.org:uu-487047
Date January 2022
Creators Yang, Yifan
Publisher Uppsala universitet, Institutionen för lingvistik och filologi
Source Sets DiVA Archive at Upsalla University
Language English
Detected Language English
Type Student thesis, info:eu-repo/semantics/bachelorThesis, text
Format application/pdf
Rights info:eu-repo/semantics/openAccess
