Recently, word embeddings have become one of the most exciting areas of research in natural language processing, attracting growing attention as machine learning achieves breakthroughs on an increasing number of tasks. In this thesis, we discuss the two major learning approaches for word embeddings. One is traditional matrix factorization, such as singular value decomposition; the other is based on a neural network model (e.g., the Skip-gram model with negative sampling (Mikolov et al., 2013b)), which is an iterative algorithm. It is known that an iterative process is sensitive to its initial starting values. We present an approach for training the Skip-gram model with negative sampling from initial values obtained by singular value decomposition. Furthermore, we show that these refined starting points improve performance on the analogy task and succeed in capturing fine-grained semantic and syntactic regularities using vector arithmetic, yielding clear gains on some natural language processing tasks.
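To make the described approach concrete, the following is a minimal, hypothetical sketch and not the thesis code: it builds PPMI-SVD word vectors from a toy corpus in NumPy and uses them as the initial input vectors for a small Skip-gram-with-negative-sampling training loop. The toy corpus, embedding dimension, window size, learning rate, and all variable names are illustrative assumptions.

```python
# Illustrative sketch only (assumed details, not the thesis implementation):
# SVD of a positive-PMI co-occurrence matrix provides the initial word vectors,
# which are then refined by a tiny skip-gram negative-sampling (SGNS) loop.
import numpy as np

corpus = [["the", "cat", "sat", "on", "the", "mat"],
          ["the", "dog", "sat", "on", "the", "rug"]]
vocab = sorted({w for sent in corpus for w in sent})
idx = {w: i for i, w in enumerate(vocab)}
V, dim, window = len(vocab), 4, 2

# 1) Co-occurrence counts within a symmetric context window.
C = np.zeros((V, V))
for sent in corpus:
    for i, w in enumerate(sent):
        for j in range(max(0, i - window), min(len(sent), i + window + 1)):
            if i != j:
                C[idx[w], idx[sent[j]]] += 1.0

# 2) Positive PMI matrix and truncated SVD -> initial word vectors.
total = C.sum()
pw, pc = C.sum(1, keepdims=True) / total, C.sum(0, keepdims=True) / total
with np.errstate(divide="ignore", invalid="ignore"):
    pmi = np.log((C / total) / (pw * pc))
ppmi = np.where(np.isfinite(pmi) & (pmi > 0), pmi, 0.0)
U, S, _ = np.linalg.svd(ppmi)
W_in = U[:, :dim] * np.sqrt(S[:dim])   # SVD-based initial input vectors
W_out = np.zeros((V, dim))             # context (output) vectors start at zero

# 3) A few SGNS updates starting from the SVD initialization.
rng, lr, k = np.random.default_rng(0), 0.05, 3
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
for _ in range(50):
    for sent in corpus:
        for i, w in enumerate(sent):
            for j in range(max(0, i - window), min(len(sent), i + window + 1)):
                if i == j:
                    continue
                wi, ci = idx[w], idx[sent[j]]
                negs = rng.integers(0, V, size=k)  # uniform negatives (toy choice)
                for c, label in [(ci, 1.0)] + [(n, 0.0) for n in negs]:
                    g = sigmoid(W_in[wi] @ W_out[c]) - label
                    grad_in = g * W_out[c]
                    W_out[c] -= lr * g * W_in[wi]
                    W_in[wi] -= lr * grad_in

print(W_in[idx["cat"]])  # refined embedding for "cat"
```

In this sketch the design choice mirrors the abstract's idea: the SVD of the PPMI matrix replaces random initialization of the input vectors, and the iterative SGNS updates then start from that point rather than from noise.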
Identifier | oai:union.ndltd.org:CHENGCHI/G0103354027
Creators | 張文嘉, Jhang, Wun Jia |
Publisher | 國立政治大學 (National Chengchi University)
Source Sets | National Chengchi University Libraries |
Language | English
Detected Language | English |
Type | text |
Rights | Copyright © nccu library on behalf of the copyright holders |