About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.

Instagram相片之色彩分析及應用 / Color analysis of Instagram photos and its application

林儀婷 (Lin, Yi-Ting). Date unknown.
Recently, Instagram has become a very popular social media platform for sharing photos. People apply different types of filters to express their feelings when posting photos on social networking sites. Given a filtered image, it is difficult, if not impossible, to determine which filter has been applied to obtain the observed effects. This study attempts to address this reverse-engineering problem by defining ten image styles corresponding to some of the most frequently applied filters. The original question is thus cast as a classification problem that can be solved with machine learning approaches. To generate training data, we collected labeled results based on user votes; consensus among users was found to be high across the ten categories outlined in our investigation. We employ color features in the HSV space to characterize image styles, and a support vector machine (SVM) is then used for classification. The top-1 and top-3 accuracies on our dataset are 64% and 96%, respectively, showing that the performance of machine classification is comparable to that of human observers. Finally, works by several famous photographers are brought in as case studies to validate the style recognition and sentiment analysis results.
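The pipeline this abstract describes — HSV color features fed to an SVM classifier — can be sketched roughly as follows. This is a minimal illustration using NumPy and scikit-learn, not the thesis's actual method: the histogram feature layout, the synthetic "warm"/"cool" styles, and all training data are invented placeholders standing in for the ten filter styles and user-voted labels.

```python
import numpy as np
from sklearn.svm import SVC

def rgb_to_hsv(img):
    """Vectorized RGB -> HSV for a float array in [0, 1] (standard colorsys formulas)."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    maxc, minc = img.max(-1), img.min(-1)
    c = maxc - minc
    safe_c = np.where(c == 0, 1.0, c)
    rc, gc, bc = (maxc - r) / safe_c, (maxc - g) / safe_c, (maxc - b) / safe_c
    h = np.where(maxc == r, bc - gc,
        np.where(maxc == g, 2.0 + rc - bc, 4.0 + gc - rc))
    h = np.where(c == 0, 0.0, (h / 6.0) % 1.0)
    s = np.where(maxc == 0, 0.0, c / np.where(maxc == 0, 1.0, maxc))
    return np.stack([h, s, maxc], axis=-1)

def hsv_histogram(img, bins=8):
    """Concatenated, normalized H/S/V histograms: a 3*bins-dimensional style feature."""
    hsv = rgb_to_hsv(img)
    feats = [np.histogram(hsv[..., i], bins=bins, range=(0.0, 1.0))[0] for i in range(3)]
    return np.concatenate([f / f.sum() for f in feats])

def make_image(rng, style):
    """Synthetic stand-in for a filtered photo: 'warm' boosts red, 'cool' boosts blue."""
    img = rng.uniform(0.0, 0.3, (16, 16, 3))
    img[..., 0 if style == "warm" else 2] += 0.6
    return np.clip(img, 0.0, 1.0)

rng = np.random.default_rng(0)
styles = ("warm", "cool")
X = [hsv_histogram(make_image(rng, s)) for s in styles for _ in range(20)]
y = [s for s in styles for _ in range(20)]

clf = SVC(kernel="rbf").fit(X, y)             # SVM classifier over HSV features
acc = (clf.predict(X) == np.array(y)).mean()  # training accuracy on the toy data
```

In a real setting the synthetic images would be replaced by the labeled Instagram photos, and accuracy would of course be measured on held-out data rather than the training set.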

透過圖片標籤觀察情緒字詞與事物概念之關聯 / An analysis on association between emotion words and concept words based on image tags

彭聲揚 (Peng, Sheng-Yang). Date unknown.
Starting from psychology, this study examines how emotional states are classified. To connect emotion with semantics, we treat images as stimuli for emotional states and sample the content shared by the Flickr community. Using basic emotion words from psychological research together with their part-of-speech variants, we retrieved 12,000 tagged photos and computed co-occurrences and association rules between tag words and emotion-category words. We also propose a new coordinate-based classification of valence and intensity derived from the semantic differential scale. After filtering by frequency thresholds, annotating parts of speech, and merging words by stem, we obtained 272 emotionally oriented concept words from 65,983 unique text tags, along with association rules for positive and negative valence. To verify whether these words relate to the emotional states that image content evokes, we selected 42 photos for comparison through three query channels: single-word queries on Flickr, single-word queries on Google Image, and our own composite tag-based index, which combines the proportion of emotion words with community-filtering parameters. Using the semantic differential scale, we then measured whether the three photo sets, as judged by 136 users, fit the proposed intensity-valence model.
Experimental results show that our method returns results similar to Google Image's; the user survey supports our method's judgments of positive and negative valence, and our method separates strong from weak emotions better than Google does.
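The co-occurrence and association-rule step described above can be illustrated with a small sketch in pure Python. The photo tag sets and the two-word emotion lexicon below are invented placeholders, not the thesis's Flickr sample, and the support/confidence thresholds are arbitrary:

```python
from collections import Counter

# Toy tagged photos and a tiny emotion lexicon (placeholders for the 12,000
# Flickr photos and the psychology-derived emotion word list).
photos = [
    {"happy", "beach", "sunset"},
    {"happy", "beach", "dog"},
    {"sad", "rain", "street"},
    {"sad", "rain", "grave"},
    {"happy", "sunset", "sea"},
]
emotion_words = {"happy", "sad"}

def association_rules(photos, emotion_words, min_support=0.2, min_conf=0.5):
    """Mine concept-tag -> emotion-word rules from tag co-occurrence counts."""
    n = len(photos)
    tag_count = Counter(t for p in photos for t in p)
    pair_count = Counter()
    for p in photos:
        for concept in p - emotion_words:
            for emotion in p & emotion_words:
                pair_count[(concept, emotion)] += 1
    rules = {}
    for (concept, emotion), k in pair_count.items():
        support = k / n                      # fraction of photos carrying both tags
        confidence = k / tag_count[concept]  # P(emotion tag | concept tag)
        if support >= min_support and confidence >= min_conf:
            rules[(concept, emotion)] = (round(support, 2), round(confidence, 2))
    return rules

rules = association_rules(photos, emotion_words)
# e.g. the rule rain -> sad holds in 2 of 5 photos with confidence 1.0
```

Concept words whose rules survive the thresholds are the analogue of the 272 emotionally oriented concept words the study extracts, with the dominant emotion in each rule giving the word's positive or negative valence.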
