Cross-lingual transfer has been shown to be effective for dependency parsing of some low-resource languages, but it typically requires a closely related high-resource language. Pre-trained deep language models significantly improve model performance in cross-lingual tasks. We evaluate cross-lingual model transfer for parsing Marathi, a low-resource language that does not have a closely related high-resource language, and we investigate monolingual modeling for comparison. We experiment with two state-of-the-art language models: mBERT and XLM-R. Our experimental results show that the cross-lingual model transfer approach still holds with distantly related source languages, and that models benefit most from XLM-R. We also evaluate the impact of multi-task learning by training all UD tasks simultaneously and find that it yields mixed results for dependency parsing and degrades transfer performance for the best-performing source language, Ancient Greek.
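The abstract does not spell out the model architecture, so the following is only a minimal sketch of the kind of setup it describes: a pre-trained multilingual encoder (XLM-R or mBERT) fine-tuned on a source-language UD treebank with a biaffine arc scorer, then evaluated zero-shot on Marathi. The biaffine head, its dimensions, the placeholder gold heads, and the class names are assumptions for illustration, not the thesis implementation.

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "xlm-roberta-base"  # swap for "bert-base-multilingual-cased" to use mBERT


class BiaffineArcScorer(nn.Module):
    """Scores every head-dependent arc from contextual token embeddings (assumed design)."""

    def __init__(self, hidden_size: int, arc_dim: int = 256):
        super().__init__()
        self.head_mlp = nn.Sequential(nn.Linear(hidden_size, arc_dim), nn.ReLU())
        self.dep_mlp = nn.Sequential(nn.Linear(hidden_size, arc_dim), nn.ReLU())
        # Biaffine weight; the extra row acts as a bias term on the head side.
        self.W = nn.Parameter(torch.randn(arc_dim + 1, arc_dim) * 0.01)

    def forward(self, hidden: torch.Tensor) -> torch.Tensor:
        heads = self.head_mlp(hidden)                       # (B, T, d)
        deps = self.dep_mlp(hidden)                         # (B, T, d)
        ones = torch.ones(*heads.shape[:2], 1, device=heads.device)
        heads = torch.cat([heads, ones], dim=-1)            # (B, T, d+1)
        # scores[b, i, j] = score of token j being the head of token i
        return torch.einsum("bjh,hd,bid->bij", heads, self.W, deps)


class CrossLingualParser(nn.Module):
    """Pre-trained multilingual encoder + biaffine arc scorer."""

    def __init__(self, model_name: str = MODEL_NAME):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        self.arc_scorer = BiaffineArcScorer(self.encoder.config.hidden_size)

    def forward(self, input_ids, attention_mask):
        hidden = self.encoder(input_ids=input_ids,
                              attention_mask=attention_mask).last_hidden_state
        return self.arc_scorer(hidden)                      # (B, T, T) arc scores


# Cross-lingual transfer: fine-tune on a source-language UD treebank (e.g. Ancient
# Greek, the best source in the experiments), then evaluate zero-shot on Marathi UD.
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
parser = CrossLingualParser()

batch = tokenizer(["An example source-language sentence ."], return_tensors="pt")
arc_scores = parser(batch["input_ids"], batch["attention_mask"])

seq_len = batch["input_ids"].shape[1]
gold_heads = torch.zeros(1, seq_len, dtype=torch.long)      # placeholder; real heads come from the CoNLL-U treebank
loss = nn.functional.cross_entropy(arc_scores.view(-1, seq_len), gold_heads.view(-1))
loss.backward()
```

For the multi-task variant mentioned above, additional heads (POS tagging, morphological features, lemmatization) would share the same encoder and their losses would be summed during training; subword-to-word alignment and decoding are omitted here for brevity.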
Identifier | oai:union.ndltd.org:UPSALLA1/oai:DiVA.org:uu-477572 |
Date | January 2022 |
Creators | Zhang, Wenwen |
Publisher | Uppsala universitet, Institutionen för lingvistik och filologi |
Source Sets | DiVA Archive at Uppsala University |
Language | English |
Detected Language | English |
Type | Student thesis, info:eu-repo/semantics/bachelorThesis, text |
Format | application/pdf |
Rights | info:eu-repo/semantics/openAccess |