181 |
應用資料探勘技術於食譜分享社群網站進行內容分群之研究 / A user-based content clustering system using data mining techniques on a recipe sharing website
林宜儒, Unknown Date (has links)
This study takes a recipe-sharing community website as its subject, builds an automatic clustering mechanism for the site's recipes using a kNN clustering algorithm, and uses the behavior of the site's users as a reference for characterizing the resulting clusters.
The study builds an information system for automatic clustering in the recipe domain in three stages. The first stage is data processing: although the recipe data obtained from the website is already in a relatively structured format that could be clustered directly, the user-entered content still contains typos, filler words, and text only loosely related to the recipe itself, so it must be cleaned first. The second stage is data clustering: text mining is used to extract content features, and data mining techniques then cluster the recipes; clustering quality is judged mainly by intra-cluster characteristics and inter-cluster similarity. The third stage is cluster characterization: drawing on the behavior of users who bookmark and categorize recipes on the site, statistical methods are applied to find a likely category name for each cluster.
A clustering experiment was run on 500 actual recipes. In the best run, 10 recipe clusters were obtained with an average intra-cluster similarity of 0.4482; each cluster exhibited clearly similar characteristics, and users' bookmarking behavior could be used to label the clusters with categories such as soups, desserts, bread, and Chinese cuisine.
Because the website marks up the content fields of every recipe according to the schema.org recipe format standard, the clustering mechanism implemented in this study can also be applied to other sites of the same type that adopt the schema.org standard.
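The 0.4482 figure above is an average intra-cluster similarity; a minimal sketch of how such a score can be computed over term-frequency vectors follows. The recipes, cluster assignments, and ingredient lists here are invented for illustration, not the thesis data.

```python
# Invented mini-corpus: recipe name -> ingredient text.
from collections import Counter
import math

recipes = {
    "miso soup": "miso tofu scallion dashi",
    "tofu soup": "tofu scallion broth",
    "baguette":  "flour yeast water salt",
    "focaccia":  "flour yeast water olive oil",
}
clusters = {"soups": ["miso soup", "tofu soup"],
            "bread": ["baguette", "focaccia"]}

def cosine(a, b):
    """Cosine similarity of two term-frequency Counters."""
    num = sum(a[t] * b.get(t, 0) for t in a)
    den = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

vecs = {name: Counter(text.split()) for name, text in recipes.items()}

def avg_intra_similarity(members):
    """Average pairwise cosine similarity inside one cluster."""
    pairs = [(a, b) for i, a in enumerate(members) for b in members[i + 1:]]
    return sum(cosine(vecs[a], vecs[b]) for a, b in pairs) / len(pairs)

for label, members in clusters.items():
    print(label, round(avg_intra_similarity(members), 4))
```

Averaging this score over all clusters gives a single quality figure comparable to the 0.4482 reported above.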
|
182 |
Cross-language Ontology Learning: Incorporating and Exploiting Cross-language Data in the Ontology Learning Process
Hjelm, Hans January 2009 (has links)
An ontology is a knowledge-representation structure, where words, terms or concepts are defined by their mutual hierarchical relations. Ontologies are becoming ever more prevalent in the world of natural language processing, where we currently see a tendency towards using semantics for solving a variety of tasks, particularly tasks related to information access. Ontologies, taxonomies and thesauri (all related notions) are also used in various forms by humans, to standardize business transactions or to find conceptual relations between terms in, e.g., the medical domain. The acquisition of machine-readable, domain-specific semantic knowledge is time-consuming and prone to inconsistencies. The field of ontology learning therefore provides tools for automating the construction of domain ontologies (ontologies describing the entities and relations within a particular field of interest) by analyzing large quantities of domain-specific texts. This thesis studies three main topics within the field of ontology learning. First, we examine which sources of information are useful within an ontology learning system and how these sources can be combined effectively. Second, we do this with a special focus on cross-language text collections, to see whether we can learn more from studying several languages at once than from a single-language text collection. Finally, we investigate new approaches to the formal, automatic evaluation of the quality of a learned ontology. We demonstrate how to combine information sources from different languages and use them to train automatic classifiers to recognize lexico-semantic relations. The cross-language data is shown to have a positive effect on the quality of the learned ontologies. We also give theoretical and experimental results showing that our ontology evaluation method is a good complement to, and in some respects improves on, the evaluation measures in use today.
To order the book, send an e-mail to exp@ling.su.se
|
183 |
Enhanced Web Search Engines with Query-Concept Bipartite Graphs
Chen, Yan 16 August 2010 (has links)
With the rapid growth of information on the Web, Web search engines have gained great momentum for exploiting valuable Web resources. Although keyword-based Web search engines return relevant results in response to users' queries, further enhancement is still needed. Three important issues stand out: (1) search results can be diverse, because ambiguous keywords in queries can be interpreted with different meanings; (2) identifying the salient keywords in long queries is difficult for search engines; and (3) query-specific Web page summaries are desirable as previews of Web search results. Based on clickthrough data, this thesis proposes a query-concept bipartite graph for representing relations between queries, and applies those relations to three applications: (1) personalized query suggestion, (2) Web search with long queries, and (3) query-specific Web page summarization. Experimental results show that query-concept bipartite graphs are useful for improving the performance of all three applications.
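A rough sketch of the query-concept bipartite graph idea: each clickthrough pair links a query to a concept of the clicked page, and queries that share concepts become suggestion candidates for one another. The queries and concept labels below are invented, and real systems derive concepts from click data rather than taking them as given.

```python
# Invented clickthrough log: (query, concept of the clicked page).
clickthrough = [
    ("jaguar speed", "animal"),
    ("jaguar speed", "car"),
    ("cheetah top speed", "animal"),
    ("jaguar xk price", "car"),
]

# Bipartite adjacency in both directions.
q2c, c2q = {}, {}
for q, c in clickthrough:
    q2c.setdefault(q, set()).add(c)
    c2q.setdefault(c, set()).add(q)

def related_queries(query):
    """Queries sharing at least one concept with `query` -- the basis
    for personalized suggestion in this sketch."""
    out = set()
    for c in q2c.get(query, ()):
        out |= c2q[c]
    out.discard(query)
    return sorted(out)

print(related_queries("jaguar speed"))
```

Note that the ambiguous query "jaguar speed" connects to both the animal and car concepts, which is exactly the ambiguity the bipartite representation lets a system detect and resolve.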
|
184 |
Graph Similarity, Parallel Texts, and Automatic Bilingual Lexicon Acquisition
Törnfeldt, Tobias January 2008 (links)
In this master's thesis report we present a graph-theoretical method for automatic bilingual lexicon acquisition from parallel texts. We analyze the concept of graph similarity and give an interpretation of the parallel texts connected to the vector space model. We represent the parallel texts by a directed, tripartite graph and use the corresponding adjacency matrix, A, to compute the similarity of the graph. By solving the eigenvalue problem ρS = ASA^T + A^TSA we obtain the self-similarity matrix S and the Perron root ρ. A rank-k approximation of the self-similarity matrix is computed with implementations of the singular value decomposition and the non-negative matrix factorization algorithm GD-CLS. We construct an algorithm to extract the bilingual lexicon from the self-similarity matrix and apply a statistical model to estimate the precision, i.e. the correctness, of the translations in the lexicon. The best result, a precision of about 80%, is achieved with an application of the vector space model; this compares well with the roughly 60% precision reported in the literature.
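The fixed point of ρS = ASA^T + A^TSA can be approached by normalized power iteration on the linear map S ↦ ASA^T + A^TSA. The sketch below uses an invented 3-node adjacency matrix, not the thesis data, and plain nested lists instead of a linear algebra library.

```python
# Normalized power iteration toward rho*S = A S A^T + A^T S A.
def matmul(X, Y):
    n, k, m = len(X), len(Y), len(Y[0])
    return [[sum(X[i][t] * Y[t][j] for t in range(k)) for j in range(m)] for i in range(n)]

def transpose(X):
    return [list(r) for r in zip(*X)]

def frob(X):
    """Frobenius norm."""
    return sum(v * v for row in X for v in row) ** 0.5

A = [[0, 1, 1],      # invented directed graph on 3 nodes
     [0, 0, 1],
     [0, 0, 0]]
At = transpose(A)
n = len(A)
S = [[1.0] * n for _ in range(n)]   # start from the all-ones matrix

rho = 0.0
for _ in range(50):
    M1 = matmul(matmul(A, S), At)    # A S A^T
    M2 = matmul(matmul(At, S), A)    # A^T S A
    T = [[M1[i][j] + M2[i][j] for j in range(n)] for i in range(n)]
    rho = frob(T)                    # Perron root estimate at convergence
    S = [[v / rho for v in row] for row in T]

print(round(rho, 4))
```

At convergence S has unit Frobenius norm and rho approximates the Perron root; the thesis then factorizes S (via SVD or GD-CLS) before extracting the lexicon.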
|
185 |
Text Mining Biomedical Literature for Genomic Knowledge Discovery
Liu, Ying 20 July 2005 (links)
The last decade has been marked by unprecedented growth in both the production of biomedical data and the amount of published literature discussing it. Almost every known or postulated piece of information pertaining to genes, proteins, and their role in biological processes is reported somewhere in the vast amount of published biomedical literature. We believe the ability to rapidly survey and analyze this literature and extract pertinent information constitutes a necessary step toward both the design and the interpretation of any large-scale experiment. Moreover, automated literature mining offers an as yet untapped opportunity to integrate many fragments of information gathered by researchers from multiple fields of expertise into a complete picture exposing the interrelated roles of various genes, proteins, and chemical reactions in cells and organisms. In this thesis, we show that functional keywords in biomedical literature, particularly Medline, represent very valuable information and can be used to discover new genomic knowledge. To validate our claim we present an investigation into text mining biomedical literature to assist microarray data analysis, yeast gene function classification, and biomedical literature categorization. We conduct the following studies:
1. We test sets of genes to discover common functional keywords among them and use these keywords to cluster the genes into groups.
2. We show that it is possible to link genes to diseases through expert human interpretation of the genes' functional keywords; none of these disease links is yet recorded in public databases.
3. By clustering genes on the commonality of their functional keywords, it is possible to form meaningful clusters that reveal more about the genes' functions, their links to diseases, and their roles in metabolic pathways.
4. Using the extracted functional keywords, we demonstrate that for yeast genes we can produce a better functional grouping than is available from public microarray and phylogenetic databases.
5. We show an application of our approach to literature classification: using functional keywords as features, we can extract epidemiological abstracts automatically from Medline with higher sensitivity and accuracy than a human expert.
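Study 3 above (clustering genes on keyword commonality) can be sketched roughly as follows. The gene names, keyword sets, similarity threshold, and greedy grouping rule are all invented for illustration; they are not the thesis' actual data or algorithm.

```python
# Invented gene -> functional-keyword sets.
genes = {
    "GENE_A": {"apoptosis", "kinase", "signal"},
    "GENE_B": {"apoptosis", "kinase", "tumor"},
    "GENE_C": {"ribosome", "translation"},
    "GENE_D": {"ribosome", "translation", "rna"},
}

def jaccard(s, t):
    """Keyword-set overlap as a similarity score."""
    return len(s & t) / len(s | t)

# Greedy grouping: join the first group whose representative member is
# similar enough, else start a new group (threshold is invented).
groups = []
for g, kws in genes.items():
    for grp in groups:
        if jaccard(kws, genes[grp[0]]) >= 0.5:
            grp.append(g)
            break
    else:
        groups.append([g])

for grp in groups:
    shared = set.intersection(*(genes[g] for g in grp))
    print(grp, sorted(shared))   # the shared keywords label the cluster
```

The shared-keyword labels are the analogue of the "common functional keywords" the thesis uses to interpret each gene cluster.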
|
186 |
An Ensemble Approach for Text Categorization with Positive and Unlabeled Examples
Chen, Hsueh-Ching 29 July 2005 (links)
Text categorization is the process of assigning new documents to predefined categories on the basis of a classification model induced from a set of pre-categorized training documents. In a typical dichotomous classification scenario, the set of training documents includes both positive and negative examples; that is, each of the two categories is associated with training documents. However, in many real-world text categorization applications, positive and unlabeled documents are readily available, whereas acquiring samples of negative documents is extremely expensive or even impossible. In this study, we propose and develop an ensemble approach, referred to as E2, to address the limitations of existing algorithms for learning from positive and unlabeled training documents. Using spam email filtering as the evaluation application, our empirical results suggest that the proposed E2 technique exhibits more stable and reliable performance than PNB and PEBL.
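The abstract does not detail E2's components, but a common baseline for learning from positive and unlabeled (PU) data, in the spirit of two-step methods such as PEBL, first mines "reliable negatives" from the unlabeled pool and then classifies against both sides. The documents, keyword sets, and similarity scoring below are invented, not the thesis' method.

```python
# Invented toy corpus: documents as keyword sets.
positive = [{"win", "prize", "free"}, {"free", "offer", "click"}]
unlabeled = [{"meeting", "agenda"}, {"free", "prize", "click"}, {"lunch", "agenda"}]

def score(doc, protos):
    """Mean Jaccard similarity of doc to a list of prototype documents."""
    return sum(len(doc & p) / len(doc | p) for p in protos) / len(protos)

# Step 1: find "reliable negatives" -- unlabeled documents with no
# similarity at all to the positive class (a deliberately crude rule).
reliable_neg = [d for d in unlabeled if score(d, positive) == 0.0]

# Step 2: classify each unlabeled document against both prototype sets.
labels = []
for d in unlabeled:
    pos_s, neg_s = score(d, positive), score(d, reliable_neg)
    labels.append("spam" if pos_s > neg_s else "ham")
print(labels)
```

An ensemble approach like E2 would combine several such learners rather than rely on one crude similarity rule.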
|
187 |
Preference-Anchored Document Clustering Technique for Supporting Effective Knowledge and Document Management
Wang, Shin 03 August 2005 (links)
Effective management of the proliferating volume of documents within a knowledge repository is vital to knowledge sharing, reuse, and assimilation. To facilitate access to documents in a knowledge repository, organizing them with a knowledge map is a prevailing approach, and document clustering techniques typically are employed to produce such maps. However, existing document clustering techniques are not tailored to individuals' preferences and therefore cannot generate knowledge maps from different preferential perspectives. In response, we propose the Preference-Anchored Document Clustering (PAC) technique, which takes a user's categorization preference (represented as a list of anchoring terms) into consideration to generate a knowledge map (or a set of document clusters) from that specific preferential perspective. Our empirical evaluation shows that the proposed technique outperforms the traditional content-based document clustering technique in the high-cluster-precision area. Furthermore, benchmarked against Oracle Categorizer, our technique also achieves better clustering effectiveness in the high-cluster-precision area. Overall, these results demonstrate the feasibility and potential superiority of the proposed PAC technique.
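One simple way to realize the anchoring-term idea is to boost those terms in the document vectors before computing similarities, so that clusters form along the user's preferred dimension. The documents, anchors, and boost factor below are invented and are not PAC's actual weighting scheme.

```python
# Invented corpus where "java" is ambiguous (language vs. coffee).
from collections import Counter
import math

docs = {
    "d1": "java programming language tutorial",
    "d2": "python programming tutorial",
    "d3": "coffee java beans roast",
    "d4": "espresso coffee roast",
}
anchors = {"programming", "roast"}   # the user's preferential lens
BOOST = 3.0                          # invented weighting factor

def vec(text):
    """Term-frequency vector with anchoring terms boosted."""
    v = Counter(text.split())
    return {t: (c * BOOST if t in anchors else c) for t, c in v.items()}

def cosine(a, b):
    num = sum(a[t] * b.get(t, 0) for t in a)
    den = math.sqrt(sum(x * x for x in a.values())) * math.sqrt(sum(x * x for x in b.values()))
    return num / den if den else 0.0

vs = {d: vec(t) for d, t in docs.items()}
# With the boost, d1 pairs with d2 (programming) and d3 with d4 (roast),
# even though d1 and d3 share the ambiguous term "java".
print(round(cosine(vs["d1"], vs["d2"]), 3), round(cosine(vs["d1"], vs["d3"]), 3))
```

Running any standard clustering over these boosted vectors would then yield groups aligned with the user's anchoring terms rather than with raw content overlap.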
|
188 |
Event Episode Discovery from Document Sequences: A Temporal-based Approach
Chiang, Yu-Sheng 07 September 2005 (links)
Recent advances in information and networking technologies have contributed significantly to global connectivity and greatly facilitated and fostered information creation, distribution, and access. The resultant ever-increasing volume of online textual documents creates an urgent need for new text mining techniques that can intelligently and automatically extract implicit and potentially useful knowledge from these documents for decision support. This research focuses on identifying and discovering event episodes together with their temporal relationships that occur frequently (referred to as evolution patterns in this study) in sequences of documents. The discovery of such evolution patterns can be applied in such domains as knowledge management and used to facilitate existing document management and retrieval techniques (e.g., event tracking). Specifically, we propose and design an evolution pattern (EP) discovery technique for mining evolution patterns from sequences of documents. We experimentally evaluate our proposed EP technique in the context of facilitating event tracking. Measured by miss and false alarm rates, the evolution-pattern supported event-tracking (EPET) technique exhibits better tracking effectiveness than a traditional event-tracking technique. The encouraging performance of the EPET technique demonstrates the potential usefulness of evolution patterns in supporting event tracking and suggests that the proposed EP technique could effectively discover event episodes and evolution patterns in sequences of documents.
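A minimal illustration of mining temporally ordered event pairs, the simplest form of an "evolution pattern," from document sequences. The event labels and support threshold are invented, and the thesis' EP technique handles richer episode structures than single pairs.

```python
# Invented event sequences, each derived from one stream of documents.
from itertools import combinations
from collections import Counter

sequences = [
    ["quake", "rescue", "donation"],
    ["quake", "rescue", "rebuild"],
    ["quake", "donation"],
]

pair_counts = Counter()
for seq in sequences:
    seen = set()
    # combinations() over positions preserves temporal order a -> b.
    for a, b in combinations(seq, 2):
        if (a, b) not in seen:       # count each pair once per sequence
            seen.add((a, b))
            pair_counts[(a, b)] += 1

min_support = 2                      # invented threshold
patterns = {p: c for p, c in pair_counts.items() if c >= min_support}
print(patterns)
```

Patterns surviving the support threshold are the frequently recurring "event A then event B" regularities that can seed event tracking.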
|
189 |
Clustering Multilingual Documents: A Latent Semantic Indexing Based Approach
Lin, Chia-min 09 February 2006 (links)
Document clustering automatically organizes a document collection into distinct groups of similar documents on the basis of their contents. Most existing document clustering techniques deal with monolingual documents (i.e., documents written in one language). However, with the trend of globalization and advances in Internet technology, an organization or individual often generates, acquires, and subsequently archives documents in different languages, creating the need for multilingual document clustering (MLDC). Motivated by this significance and need, this study designs a Latent Semantic Indexing (LSI) based MLDC technique. Our empirical evaluation shows that the proposed LSI-based multilingual document clustering technique achieves satisfactory clustering effectiveness, measured by both cluster recall and cluster precision.
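The LSI core can be illustrated with power iteration on the document-by-document matrix A^T A, where A is a term-document matrix. The tiny mixed English/French matrix below is invented, and real LSI keeps k latent dimensions via a truncated SVD rather than just the dominant one.

```python
import math

# Invented term-document matrix A: rows = terms (cat, chat, dog, chien),
# columns = documents d1..d4 from a mixed English/French collection.
A = [
    [1, 1, 0, 0],   # cat
    [1, 0, 0, 0],   # chat
    [0, 0, 1, 1],   # dog
    [0, 0, 1, 0],   # chien
]

n_docs = len(A[0])
# M = A^T A, the document-by-document co-occurrence matrix LSI factorizes.
M = [[sum(A[t][i] * A[t][j] for t in range(len(A))) for j in range(n_docs)]
     for i in range(n_docs)]

v = [1.0] * n_docs
for _ in range(100):                 # power iteration toward the top eigenvector
    w = [sum(M[i][j] * v[j] for j in range(n_docs)) for i in range(n_docs)]
    norm = math.sqrt(sum(x * x for x in w))
    v = [x / norm for x in w]

# Rayleigh quotient: dominant eigenvalue of A^T A = top squared singular value.
lam = sum(v[i] * sum(M[i][j] * v[j] for j in range(n_docs)) for i in range(n_docs))
print(round(lam, 4))
```

Projecting documents onto the top k such directions gives the reduced latent space in which an MLDC technique would then run a standard clustering algorithm.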
|
190 |
Development of Personalized Document Clustering Technique for Accommodating Hierarchical Categorization Preferences
Lee, Kuan-yi 27 July 2006 (links)
With the advances in information and networking technologies and the proliferation of e-commerce and knowledge management applications, individuals and organizations generate and acquire a tremendous amount of online information, typically available as textual documents. To manage this ever-increasing volume of documents, an individual or organization frequently organizes them into a set or hierarchy of categories to facilitate document management and subsequent information access and browsing. Furthermore, document grouping is an intentional act that reflects individual preferences with regard to the semantic coherency and relevant categorization of documents. Hence, effective document clustering must consider individual preferences to support personalization in document categorization, and should be capable of organizing documents into a category hierarchy. However, document clustering research has traditionally been anchored in analyses of document content. As a consequence, most existing document clustering techniques are not tailored to individuals' preferences and therefore cannot support personalization; moreover, they generally produce from a document collection a flat set of document clusters rather than a hierarchy of clusters. In response, we develop in this study a hierarchical personalized document clustering (HPEC) technique that takes into account an individual's folder hierarchy, representing that individual's categorization preferences, and produces document clusters in a hierarchical structure for the target individual. Our empirical evaluation suggests that the proposed HPEC technique outperformed its benchmark technique (i.e., HAC+P) in cluster recall while maintaining the same level of cluster precision and location discrepancy as the benchmark.
|