1.

雲端運算服務環境下運用文字探勘於語意註解網頁文件分析之研究 / Extraction of semantic annotation document using text mining techniques in cloud computing environment

黃孝文, Unknown Date
With the rapid growth of the Internet, the datasets that data mining and text mining must analyze keep growing. Running such analyses on a single machine is constrained by memory and computing power: computation time increases sharply and the size of the dataset that can be analyzed is limited. Semantic annotation extracts the important content of a document and highlights its topic, strengthening the effect of data mining and text mining. Since data mining, text mining, and semantic annotation all involve large-scale data processing, cloud computing can balance the load by distributing the work across every machine in a cluster, which speeds up computation and storage and lowers the overall risk.

This study implements a cloud text mining platform on Hadoop for distributed text mining and result analysis. The Reuters-21578 corpus of 21,578 news documents is used for the empirical study; following the Mod Apte split, it is divided into a training set and a test set for document classification. The classification workflow has four stages: data preprocessing, which converts the data format; semantic annotation, which attaches richer links and descriptions to the document content; the classifiers that build the prediction models (Naive Bayes and Complement Naive Bayes); and an evaluator that assesses the classification results. The Reuters dataset is processed by stopword removal, attachment of semantic annotations as metadata, and grouping by document length; Naive Bayes and Complement Naive Bayes classifiers are then trained, and their accuracy on the test set is compared as the empirical result.

Based on the experiments, the study examines how stopword removal, semantic annotation, the choice of classification algorithm, and document length affect classification accuracy: (1) removing stopwords keeps high-frequency stopwords from harming the predictions; (2) semantic annotation, as a source of metadata, improves classification; (3) Complement Naive Bayes reduces the misclassification caused by skewed class distributions; (4) longer documents tend to dominate the predictions, causing misclassification and lowering accuracy. Tuning these factors improves the classification results and yields better overall accuracy.

The proposed system runs the classification algorithms in a cloud computing environment, so results on large datasets are obtained much faster and without the out-of-memory problems of a single machine. Using semantic annotation as a metadata source gives the model-building process more information to work with and improves the accuracy of the machine's decisions; the documents can also be converted into Semantic Web documents for retrieval by Semantic Web search engines. Future work should ingest data from sites with large volumes of unstructured content, such as Twitter or Facebook, so the platform can analyze larger datasets; account for how the skew of the class distribution affects accuracy; and implement more effective classification algorithms to further improve the overall results.
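As a companion to the workflow described in this record, the following is a minimal single-machine sketch of the core comparison (stopword removal, then Naive Bayes versus Complement Naive Bayes on a pre-split corpus). It is not the thesis's distributed Hadoop implementation: scikit-learn stands in purely for illustration, the Reuters-21578 Mod Apte split is assumed to be loaded elsewhere, and the semantic annotation step is omitted.

```python
# Minimal single-machine sketch: stopword removal + Naive Bayes vs.
# Complement Naive Bayes. Illustrative only; the thesis runs this
# distributed on Hadoop and adds a semantic annotation step.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB, ComplementNB
from sklearn.metrics import accuracy_score

def compare_classifiers(train_docs, train_labels, test_docs, test_labels):
    """Return test-set accuracy for NB and Complement NB on raw text lists."""
    # Stopword removal is done inside the vectorizer, mirroring the
    # preprocessing pass described in the abstract.
    vectorizer = TfidfVectorizer(stop_words="english")
    X_train = vectorizer.fit_transform(train_docs)
    X_test = vectorizer.transform(test_docs)

    results = {}
    for name, clf in [("naive_bayes", MultinomialNB()),
                      ("complement_naive_bayes", ComplementNB())]:
        clf.fit(X_train, train_labels)
        # Complement NB is expected to be more robust on skewed class
        # distributions, which is the effect reported above.
        results[name] = accuracy_score(test_labels, clf.predict(X_test))
    return results
```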
2.

在Spark大數據平台上分析DBpedia開放式資料:以電影票房預測為例 / Analyzing DBpedia Linked Open Data (LOD) on Spark: Movie Box Office Prediction as an Example

劉文友 (Liu, Wen Yu), Unknown Date
In recent years, Linked Open Data (LOD) has been recognized as holding a large amount of potential value. How to collect and integrate diverse LOD and make it available to analysts for extraction and analysis has become an important research challenge. LOD is published in the RDF (Resource Description Framework) format and can be queried with SPARQL, but large volumes of RDF data still lack a high-performance, scalable, integrated storage and query/analysis system, and research on analytics pipelines for big RDF data remains incomplete. Taking movie box office prediction as an example, this study uses the DBpedia LOD dataset linked to an external movie database (e.g., IMDb) and performs large-scale graph analytics on the Apache Spark big data platform. Naive Bayes and Bayesian network algorithms are first used to build box office prediction models, with the Bayesian Information Criterion (BIC) used to select the best Bayesian network structure; ROC curves and AUC values for the multi-class predictions are then computed to evaluate the accuracy of the models.
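The pipeline of this second record can likewise be sketched, under the assumption that movie records are fetched from the public DBpedia SPARQL endpoint with SPARQLWrapper and then classified with Spark ML's Naive Bayes. The queried properties (dbo:budget, dbo:runtime, dbo:gross), the hit/flop revenue threshold, and the binary ROC/AUC evaluation are illustrative simplifications; the Bayesian network and BIC model selection described in the abstract are not shown.

```python
# Illustrative sketch: query DBpedia (RDF via SPARQL), load the results
# into Spark, train Naive Bayes, and report AUC. Feature choice and the
# box-office threshold are assumptions, not the thesis's exact setup.
from SPARQLWrapper import SPARQLWrapper, JSON
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import NaiveBayes
from pyspark.ml.evaluation import BinaryClassificationEvaluator

# 1. Pull candidate movie features from the DBpedia SPARQL endpoint.
endpoint = SPARQLWrapper("https://dbpedia.org/sparql")
endpoint.setQuery("""
    PREFIX dbo: <http://dbpedia.org/ontology/>
    SELECT ?film ?budget ?runtime ?gross WHERE {
      ?film a dbo:Film ;
            dbo:budget ?budget ;
            dbo:runtime ?runtime ;
            dbo:gross ?gross .
    } LIMIT 1000
""")
endpoint.setReturnFormat(JSON)
rows = endpoint.query().convert()["results"]["bindings"]

# 2. Build a Spark DataFrame; label a movie as a "hit" when its gross
#    exceeds an arbitrary USD 100M threshold (real data needs cleaning,
#    since some DBpedia literals are not plain numbers).
spark = SparkSession.builder.appName("dbpedia-box-office").getOrCreate()
data = [(float(r["budget"]["value"]),
         float(r["runtime"]["value"]),
         1.0 if float(r["gross"]["value"]) > 1e8 else 0.0)
        for r in rows]
df = spark.createDataFrame(data, ["budget", "runtime", "label"])
df = VectorAssembler(inputCols=["budget", "runtime"],
                     outputCol="features").transform(df)

# 3. Train Naive Bayes and evaluate with AUC, mirroring the ROC/AUC step.
train, test = df.randomSplit([0.8, 0.2], seed=42)
model = NaiveBayes(featuresCol="features", labelCol="label").fit(train)
auc = BinaryClassificationEvaluator(metricName="areaUnderROC").evaluate(
    model.transform(test))
print(f"AUC = {auc:.3f}")
```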
