1.
深度學習於中文句子之表示法學習 / Deep learning techniques for Chinese sentence representation learning (管芸辰, Kuan, Yun Chen)
This thesis investigates how deep learning techniques developed in recent years can be applied to learning distributed representations of Chinese sentences. Deep learning has recently attracted great attention, and related techniques have grown rapidly. However, most distributed representation methods are evaluated mainly on English and other Indo-European languages, and were developed around the properties of those languages. Beyond the Indo-European family, the Sino-Tibetan and Altaic families are also widely spoken, and there are language isolates such as Japanese and Korean, each with its own characteristics. Chinese belongs to the Sino-Tibetan family and has quite distinct properties: it is an isolating language with tones and measure words. Although many recent publications use multilingual datasets for evaluation, few discuss how performance differs across languages.

Through sentence-level sentiment classification experiments on a Chinese Weibo dataset, this thesis compares recently developed deep learning techniques with traditional word-vector representations. Taking TF-IDF as the baseline, we compare the performance of PV-DM, Siamese-CBOW, and FastText, and examine in depth how these models behave on Chinese sentence sentiment classification.
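The TF-IDF baseline named above can be sketched in a few lines of plain Python: term frequency normalized by sentence length, multiplied by log inverse document frequency, compared with cosine similarity. This is a minimal illustration under stated assumptions, not the thesis's implementation; the pre-segmented Weibo-style sentences below are invented for the example.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """TF-IDF vectors for pre-tokenized documents: TF is the term count
    normalized by document length, IDF is log(N / df)."""
    n = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))                     # document frequency per term
    idf = {t: math.log(n / df[t]) for t in df}
    return [
        {t: (c / len(doc)) * idf[t] for t, c in Counter(doc).items()}
        for doc in docs
    ]

def cosine(u, v):
    """Cosine similarity between two sparse vectors stored as dicts."""
    dot = sum(w * v[t] for t, w in u.items() if t in v)
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# Toy pre-segmented Weibo-style sentences (hypothetical data):
# two positive sentences and one negative one.
docs = [
    ["今天", "心情", "很", "好"],
    ["心情", "好", "開心"],
    ["今天", "心情", "很", "差"],
]
vecs = tfidf_vectors(docs)
```

Note that a term occurring in every sentence (here 心情) gets zero IDF weight, which is exactly why TF-IDF alone struggles to separate sentences that differ only in a single sentiment-bearing word.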
2.
利用機器學習技術找出眼動軌跡與情緒之間的關聯性 / Identifying associations between eye-movement trajectories and emotions using machine learning techniques (潘威翰)
Most current approaches to detecting emotion in ordinary people study either behavior, such as facial expressions, or physiological measurements, such as heart rate, body temperature, and respiration rate. These studies examine only how external behavior or physiological signals change under different emotions. The eye, however, reflects both external behavior and physiological signals, so this study investigates how the eyes respond under different emotions.

We first designed an experimental procedure in which images evoking different emotions were shown to participants as stimuli, their eye movements were recorded, and they reported their own emotional states. The recorded eye-movement responses were converted into sequence data, and a hidden Markov model (HMM) was built for the sequences observed under each emotion. With these emotion models, we aim to detect from eye-movement behavior which emotional state a stimulated subject is in.

We found that when viewing images, people produce meaningful eye-movement responses according to their liking or disliking of the content. Using these responses we built an emotion recognition system whose accuracy reaches about 60% when distinguishing three emotions.
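The per-emotion HMM classification step can be illustrated with the forward algorithm: score an eye-movement symbol sequence under one HMM per emotion and pick the model with the highest likelihood. All parameters, state names, and the fixation/saccade alphabet below are invented for illustration; the thesis's actual models are trained from recorded eye-movement data.

```python
def sequence_likelihood(obs, start, trans, emit):
    """P(obs) under a discrete HMM, computed with the forward algorithm.
    start[s] is the initial probability of state s, trans[p][s] the
    transition probability p -> s, emit[s][o] the emission probability
    of symbol o in state s."""
    states = list(start)
    alpha = {s: start[s] * emit[s][obs[0]] for s in states}
    for o in obs[1:]:
        alpha = {s: sum(alpha[p] * trans[p][s] for p in states) * emit[s][o]
                 for s in states}
    return sum(alpha.values())

def classify(obs, models):
    """Return the emotion whose HMM assigns the sequence highest likelihood."""
    return max(models, key=lambda m: sequence_likelihood(obs, *models[m]))

# Hypothetical two-state HMMs over a tiny eye-movement alphabet:
# 'F' = fixation, 'S' = saccade.  The positive model favors fixations,
# the negative model favors saccades.
start = {"calm": 0.6, "active": 0.4}
trans = {"calm": {"calm": 0.7, "active": 0.3},
         "active": {"calm": 0.4, "active": 0.6}}
models = {
    "positive": (start, trans, {"calm": {"F": 0.8, "S": 0.2},
                                "active": {"F": 0.5, "S": 0.5}}),
    "negative": (start, trans, {"calm": {"F": 0.3, "S": 0.7},
                                "active": {"F": 0.2, "S": 0.8}}),
}
```

For longer real sequences the recursion should be run in log space to avoid underflow; the probability-space version above is kept only for readability.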
3.
透過圖片標籤觀察情緒字詞與事物概念之關聯 / An analysis on association between emotion words and concept words based on image tags (彭聲揚, Peng, Sheng-Yang)
Starting from psychology, this study examines how emotional states are classified. To connect emotion with semantics, we treat images as stimuli for emotional states, sampling and observing user-contributed content from the Flickr community. Using basic emotion words from psychological research together with their part-of-speech variants, we retrieved 12,000 tagged photos and computed the co-occurrence of tag words with emotion-category words, as well as association rules. At the same time, based on the semantic differential scale, we propose a new coordinate classification scheme of polarity and intensity.

By filtering with frequency thresholds, annotating parts of speech, and merging words by stemming, we obtained 272 emotionally polarized concept words from 65,983 distinct text tags, together with association rules for positive and negative polarity. To verify whether these words relate to the emotional states that image content evokes in people, we selected 42 photos for comparison through three retrieval channels: single-word queries on Flickr, single-word queries on Google Image, and our own composite tag-based indicators (the proportion of emotion words and community filtering parameters). Using the semantic differential scale, we then measured whether the answers of 136 users on the three photo sets fit the intensity-polarity model proposed earlier.

Experimental results show that our method returns results similar to Google Image. The user survey supports our method's positive/negative polarity judgments, and our method separates strong from weak intensity better than Google does.
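The association-rule step described above can be sketched as support/confidence counting over per-photo tag sets, where a rule links a concept tag to an emotion word. This is a minimal sketch; the photo tag sets below are hypothetical, not data from the study.

```python
def rule_stats(transactions, antecedent, consequent):
    """Support and confidence of the association rule antecedent -> consequent,
    where each transaction is the set of tags on one photo."""
    n = len(transactions)
    covered = [t for t in transactions if antecedent <= t]   # rule applies
    both = sum(1 for t in covered if consequent <= t)        # rule holds
    support = both / n
    confidence = both / len(covered) if covered else 0.0
    return support, confidence

# Hypothetical photo tag sets mixing concept tags with emotion words.
photos = [
    {"sunset", "beach", "happy"},
    {"sunset", "sky", "happy"},
    {"sunset", "city"},
    {"rain", "street", "sad"},
    {"rain", "sad"},
]
```

A frequency threshold, as used in the study, would simply discard rules whose support falls below a minimum before ranking the survivors by confidence.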