In this study, we extended the personal digital annotation application iPARIS with an image-tag annotation function, called iPARIS-Plus. It offers a new alternative to conventional text annotation, giving users another choice while also addressing the problem of annotating across multiple languages and effectively reducing the time spent on recording. iPARIS-Plus lets users keep the convenience of recording on a mobile device while preserving the completeness of their records, so that recording is no longer perceived as a burden. In addition, we use crowdsourcing to convert the image tags used for annotation into text stored in the database, resolving the data gaps that previously arose when multilingual annotation problems prevented users from entering text. In the evaluation, participants reported that image-tag annotation effectively solves the multilingual annotation problem and saves typing time on mobile devices, strengthening both the convenience and the completeness of records while also offering a sense of novelty. Through crowdsourcing we obtained a good resolution rate for converting image tags into text, and the usage logs showed that involving more expert contributors does not necessarily yield better results; relying on a small number of expert contributors instead reduces problems and produces better outcomes.
Identifier | oai:union.ndltd.org:CHENGCHI/G0100753019 |
Creators | 林睦叡, Lin, Mu Rui |
Publisher | 國立政治大學 |
Source Sets | National Chengchi University Libraries |
Language | Chinese |
Detected Language | English |
Type | text |
Rights | Copyright © nccu library on behalf of the copyright holders |