This study starts from psychology and examines how descriptions of emotional states can be classified. To connect emotion with semantics, we treat images as stimuli for emotional states, sampling and observing content collaboratively created and shared by the Flickr community. Using basic emotion words from psychological research, together with their part-of-speech variants, we extracted 12,000 photos carrying text tags and computed the co-occurrence of tag words with emotion-category words, as well as association rules between them. In addition, building on the semantic differential scale, we propose a new coordinate-based classification of valence (polarity) and intensity.
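The co-occurrence counting and association-rule step described above can be sketched as follows. This is a minimal illustration, not the thesis's implementation: the photo tag sets, the seed emotion vocabulary, and the `rule` helper are all hypothetical, and real association-rule mining would use far more data and a principled support threshold.

```python
from collections import Counter

# Hypothetical sample: each photo is represented by its set of text tags.
photos = [
    {"sunset", "happy", "beach"},
    {"rain", "sad", "street"},
    {"sunset", "happy", "sky"},
    {"rain", "street"},
]
emotion_words = {"happy", "sad"}  # illustrative seed emotion vocabulary

# Count how often each tag co-occurs with each emotion word.
pair_counts = Counter()
tag_counts = Counter()
for tags in photos:
    tag_counts.update(tags)
    for t in tags:
        for e in emotion_words & tags:
            if t != e:
                pair_counts[(t, e)] += 1

def rule(t, e, min_conf=0.5):
    """Association rule t -> e as (support, confidence), or None if weak."""
    support = pair_counts[(t, e)] / len(photos)
    confidence = pair_counts[(t, e)] / tag_counts[t]
    return (support, confidence) if confidence >= min_conf else None

print(rule("sunset", "happy"))  # -> (0.5, 1.0)
```

Rules whose confidence clears the threshold link an object-concept tag (e.g. "sunset") to an emotion category, which is the basis for assigning valence to tags.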
Through frequency-threshold filtering, part-of-speech annotation, and merging terms by stem, we reduced 65,983 distinct text tags to 272 object-concept words carrying emotional valence, together with association rules for positive and negative valence. To verify whether these words are associated with the emotional states that image content evokes, we selected the final 42 photos for comparison through three query channels: single-word queries on Flickr, single-word queries on Google Image, and our own composite photo-tag index combining the proportion of emotion words with community-filtering parameters. Using the semantic differential scale, we then measured whether the answers of 136 users across the three photo sets fit the intensity-valence model proposed earlier.
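The tag-reduction pipeline above (frequency threshold plus stem merging) can be sketched in a few lines. Everything here is illustrative: the raw tag list, the toy suffix-stripping stemmer (a real pipeline would use something like NLTK's `PorterStemmer`), and the threshold value are assumptions, not the thesis's actual parameters.

```python
from collections import Counter

# Hypothetical raw tags harvested from photos.
raw_tags = ["smiling", "smiles", "rainy", "rain", "rains", "x1", "blurry"]

def stem(word):
    # Toy stemmer: strip a few common English suffixes (illustrative only).
    for suf in ("ing", "es", "s", "y"):
        if word.endswith(suf) and len(word) > len(suf) + 2:
            return word[: -len(suf)]
    return word

# Merge tags by stem, then keep only stems above a frequency threshold,
# discarding rare noise tags such as "x1".
counts = Counter(stem(t) for t in raw_tags)
MIN_FREQ = 2
concept_terms = {w for w, c in counts.items() if c >= MIN_FREQ}
print(sorted(concept_terms))  # -> ['rain', 'smil']
```

The same filter applied to 65,983 distinct tags is what yields the much smaller set of recurring concept terms.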
The experimental results show that our method returns results similar to Google Image; the user survey supports our method's judgments of positive and negative valence, and it separates strong from weak emotions better than Google does.
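A minimal sketch of how semantic-differential ratings could be mapped onto an intensity-valence coordinate, assuming a 7-point bipolar scale coded -3..+3; the sample ratings and the mean/mean-absolute aggregation are illustrative assumptions, not the thesis's exact scoring scheme.

```python
# Hypothetical semantic-differential ratings (-3..+3) for one photo.
ratings = [3, 2, 3, -1, 2]

# Valence: signed mean (polarity axis); intensity: mean magnitude (strength axis).
valence = sum(ratings) / len(ratings)
intensity = sum(abs(r) for r in ratings) / len(ratings)
print((valence, intensity))  # -> (1.8, 2.2)
```

Each photo then occupies one point in the valence-intensity plane, which is what lets the three photo sets be compared for polarity agreement and strength separation.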
Identifier | oai:union.ndltd.org:CHENGCHI/G0095753015 |
Creators | 彭聲揚, Peng, Sheng-Yang |
Publisher | 國立政治大學 (National Chengchi University) |
Source Sets | National Chengchi University Libraries |
Language | Chinese (中文) |
Detected Language | English |
Type | text |
Rights | Copyright © nccu library on behalf of the copyright holders |