基於圖像資訊之音樂資訊檢索研究 / A study of image-based music information retrieval

夏致群. Date unknown.
Previous music information retrieval methods mostly use lyrics, genre, instrumentation, or an audio clip as the query medium. In some situations, however, users cannot clearly describe the songs they are looking for, as in scenario-based music retrieval. This thesis proposes an image-based, scenario-oriented music information retrieval method that finds corresponding music from an input image. In this method, a convolutional neural network (CNN) is used to process images and convert them into low-dimensional representations. To map heterogeneous multimedia information into the same vector space, network embedding is also applied, so that multimedia items related to the input image can be retrieved by distance computation. We believe this approach can narrow the heterogeneous gap, that is, the inability of different kinds of multimedia files to be converted into or interpreted in terms of one another. For the experiments and evaluation, keywords extracted from lyrics and song titles are first used to collect a large number of images as the training dataset; the proposed retrieval method is then implemented and the results are evaluated. Besides testing the effectiveness of the method, user feedback also indicates that this retrieval method is effective compared with other methods. We also implement a web prototype in which users can upload an image and receive the retrieved songs; actual use cases are demonstrated and described in this thesis.

/ Listening to music is indispensable for almost everyone, and music information retrieval systems help users find their favorite music. A common scenario is to search for songs based on a user's query. Most existing methods use descriptions (e.g., genre, instrument, and lyrics) or an audio signal as the query; songs related to the query are then retrieved. The limitation of this scenario is that users may find it difficult to describe what they really want to search for. In this thesis, we propose a novel method, called "image2song," which allows users to input an image to retrieve related songs. The proposed method consists of three modules: a convolutional neural network (CNN) module, a network embedding module, and a similarity calculation module. For image processing, the CNN is adopted to learn representations for images. To map each entity (e.g., image, song, and keyword) into the same embedding space, heterogeneous representations are learned from an information graph by a network embedding algorithm. This method is flexible because it is easy to add other types of multimedia data to the information graph. In the similarity calculation module, Euclidean distance and cosine distance are used as the criteria for comparing similarity, and the most relevant songs are retrieved according to this calculation. The experimental results show that the proposed method achieves good performance. Furthermore, we build an online image-based music information retrieval prototype system that showcases some examples from our experiments.
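The abstract outlines a three-module pipeline: a CNN turns the query image into a low-dimensional representation, network embedding places images, songs, and keywords in one shared vector space, and retrieval ranks songs by cosine or Euclidean distance. The snippet below is a minimal sketch of that flow under stated assumptions, not the thesis's actual implementation: a pretrained torchvision ResNet-18 stands in for the image encoder, the projection matrix and song embeddings are placeholders for what the network embedding module would learn, and names such as embed_image, retrieve_songs, and query.jpg are hypothetical.

```python
# Sketch of image-to-song retrieval: CNN image features are projected into a
# shared embedding space and songs are ranked by cosine or Euclidean distance.
import torch
import torch.nn.functional as F
from PIL import Image
from torchvision import models, transforms

# Image encoder: ResNet-18 with the classification head removed (an assumption;
# the thesis does not specify which CNN architecture was used).
weights = models.ResNet18_Weights.DEFAULT
cnn = torch.nn.Sequential(*list(models.resnet18(weights=weights).children())[:-1])
cnn.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def embed_image(path: str, projection: torch.Tensor) -> torch.Tensor:
    """Encode an image with the CNN and project it into the shared space.

    `projection` (shape 512 x dim) stands in for the mapping that the
    network embedding module would learn jointly with the song embeddings.
    """
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)  # (1, 3, 224, 224)
    with torch.no_grad():
        feat = cnn(img).flatten(1)  # (1, 512) for ResNet-18
    return feat @ projection        # (1, dim) in the shared space

def retrieve_songs(query: torch.Tensor, song_embs: torch.Tensor,
                   k: int = 5, metric: str = "cosine") -> torch.Tensor:
    """Rank songs in the shared space by cosine similarity or Euclidean distance."""
    if metric == "cosine":
        scores = F.cosine_similarity(query, song_embs, dim=1)  # higher is better
        return torch.topk(scores, k).indices
    dists = torch.cdist(query, song_embs).squeeze(0)            # lower is better
    return torch.topk(dists, k, largest=False).indices

# Usage with placeholder data: 1000 songs already embedded into a 128-d shared
# space (random numbers here purely for illustration).
song_embs = torch.randn(1000, 128)
projection = torch.randn(512, 128)
top_songs = retrieve_songs(embed_image("query.jpg", projection), song_embs, k=5)
print(top_songs)
```

The design choice mirrored here is that retrieval reduces to nearest-neighbor search once every entity lives in the same embedding space, which is why adding new media types only requires adding nodes to the information graph.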
