About: The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
31

Deterministic and Flexible Parallel Latent Feature Models Learning Framework for Probabilistic Knowledge Graph

Guan, Xiao January 2018 (has links)
Knowledge graphs are a rising topic in the field of Artificial Intelligence. As the current trend in knowledge representation, knowledge graph research makes use of the large knowledge bases freely available on the internet. Knowledge graphs also allow the inspection, analysis, and reasoning over all knowledge in reality. To enable the ambitious idea of modeling the knowledge of the world, different theories and implementations have emerged. Nowadays, we have the opportunity to use freely available information from Wikipedia and Wikidata. The thesis investigates and formulates a theory about learning from knowledge graphs. It focuses on probabilistic knowledge graphs, and specifically on a branch called latent feature models for learning them. These models aim to predict possible relationships between connected entities and relations. There are many models for such a task. The metrics and training process are described in detail and improved in the thesis work. The resulting efficiency and correctness enable us to build more complex models with confidence. The thesis also covers open problems and proposes future work.
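Latent feature models of the kind studied here score candidate triples from learned entity and relation embeddings. As an illustration only (the thesis does not commit to this particular model, and the entities and embedding values below are invented), a DistMult-style scorer can be sketched as:

```python
# Minimal sketch of a latent feature model (DistMult-style) for scoring a
# knowledge-graph triple (head, relation, tail). Embedding values are toy
# numbers, not learned parameters from the thesis.

def distmult_score(head, relation, tail, embeddings):
    """Score a triple as sum_i h_i * r_i * t_i (higher = more plausible)."""
    h, r, t = embeddings[head], embeddings[relation], embeddings[tail]
    return sum(hi * ri * ti for hi, ri, ti in zip(h, r, t))

embeddings = {
    "Stockholm":  [0.9, 0.1],
    "Sweden":     [0.8, 0.2],
    "Norway":     [0.1, 0.9],
    "capital_of": [1.0, 0.5],
}

plausible   = distmult_score("Stockholm", "capital_of", "Sweden", embeddings)
implausible = distmult_score("Stockholm", "capital_of", "Norway", embeddings)
```

In a real system the embeddings would be trained so that observed triples score higher than corrupted ones; here the numbers are simply chosen to make the plausible triple win.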
32

Text readability and summarisation for non-native reading comprehension

Xia, Menglin January 2019 (has links)
This thesis focuses on two important aspects of non-native reading comprehension: text readability assessment, which estimates the reading difficulty of a given text for L2 learners, and learner summarisation assessment, which evaluates the quality of learner summaries to assess their reading comprehension. We approach both tasks as supervised machine learning problems and present automated assessment systems that achieve state-of-the-art performance. We first address the task of text readability assessment for L2 learners. One of the major challenges for a data-driven approach to text readability assessment is the lack of significantly sized level-annotated data aimed at L2 learners. We present a dataset of CEFR-graded texts tailored for L2 learners and look into a range of linguistic features affecting text readability. We compare text readability measures for native and L2 learners and explore methods that make use of the more plentiful data aimed at native readers to help improve L2 readability assessment. We then present a summarisation task for evaluating non-native reading comprehension and demonstrate an automated summarisation assessment system aimed at evaluating the quality of learner summaries. We propose three novel machine learning approaches to assessing learner summaries. In the first approach, we examine using several NLP techniques to extract features that measure the content similarity between the reading passage and the summary. In the second approach, we calculate a similarity matrix and apply a convolutional neural network (CNN) model to assess the summary quality using the similarity matrix. In the third approach, we build an end-to-end summarisation assessment model using recurrent neural networks (RNNs). Further, we combine the three approaches into a single system using a parallel ensemble modelling technique.
We show that our models outperform traditional approaches that rely on exact word match on the task and that our best model produces quality assessments close to professional examiners.
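The content-similarity features of the first approach can be pictured with a minimal sketch. A plain bag-of-words cosine stands in here for the range of NLP techniques the thesis actually evaluates, and the passage and summaries are invented:

```python
# Crude content-overlap feature between a reading passage and a summary:
# cosine similarity of bag-of-words count vectors. Illustrative only.
from collections import Counter
import math

def cosine_word_overlap(text_a, text_b):
    a, b = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

passage      = "the cat sat on the mat and watched the birds"
good_summary = "a cat sat on a mat watching birds"
off_topic    = "stock prices fell sharply on monday"

sim_good = cosine_word_overlap(passage, good_summary)
sim_off  = cosine_word_overlap(passage, off_topic)
```

Exact word match is precisely the weakness the thesis's neural approaches address, so this feature is a baseline, not the proposed system.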
33

Análise de sentimentos baseada em aspectos e atribuições de polaridade / Aspect-based sentiment analysis and polarity assignment

Kauer, Anderson Uilian January 2016 (has links)
Com a crescente expansão da Web, cada vez mais usuários compartilham suas opiniões sobre experiências vividas. Essas opiniões estão, na maioria das vezes, representadas sob a forma de texto não estruturado. A Análise de Sentimentos (ou Mineração de Opinião) é a área dedicada ao estudo computacional das opiniões e sentimentos expressos em textos, tipicamente classificando-os de acordo com a sua polaridade (i.e., como positivos ou negativos). Ao mesmo tempo em que sites de vendas e redes sociais tornam-se grandes fontes de opiniões, cresce a busca por ferramentas que, de forma automática, classifiquem as opiniões e identifiquem a qual aspecto da entidade avaliada elas se referem. Neste trabalho, propomos métodos direcionados a dois pontos fundamentais para o tratamento dessas opiniões: (i) análise de sentimentos baseada em aspectos e (ii) atribuição de polaridade. Para a análise de sentimentos baseada em aspectos, desenvolvemos um método que identifica expressões que mencionem aspectos e entidades em um texto, utilizando ferramentas de processamento de linguagem natural combinadas com algoritmos de aprendizagem de máquina. Para a atribuição de polaridade, desenvolvemos um método que utiliza 24 atributos extraídos a partir do ranking gerado por um motor de busca para gerar modelos de aprendizagem de máquina. Além disso, o método não depende de recursos linguísticos e pode ser aplicado sobre dados com ruídos. Experimentos realizados sobre datasets reais demonstram que, em ambas as contribuições, conseguimos resultados próximos aos dos baselines mesmo com um número pequeno de atributos. Ainda, para a atribuição de polaridade, os resultados são comparáveis aos de métodos do estado da arte que utilizam técnicas mais complexas. / With the growing expansion of the Web, more and more users share their views on experiences they have had. These views are, in most cases, represented in the form of unstructured text.
Sentiment Analysis (or Opinion Mining) is a research area dedicated to the computational study of the opinions and feelings expressed in texts, typically categorizing them according to their polarity (i.e., as positive or negative). As online sales and social networking sites become great sources of opinions, there is a growing need for tools that classify opinions and identify which aspect of the evaluated entity they refer to. In this work, we propose methods aimed at two key points for the treatment of such opinions: (i) aspect-based sentiment analysis and (ii) polarity assignment. For aspect-based sentiment analysis, we developed a method that identifies expressions mentioning aspects and entities in text, using natural language processing tools combined with machine learning algorithms. For polarity assignment, we developed a method that uses 24 attributes extracted from the ranking generated by a search engine to build machine learning models. Furthermore, the method does not rely on linguistic resources and can be applied to noisy data. Experiments on real datasets show that, in both contributions, our results using a small number of attributes were close to the baselines. Moreover, for polarity assignment, the results are comparable to those of state-of-the-art methods that use more complex techniques.
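The polarity-assignment setup can be pictured with a deliberately simplified sketch: the 24 ranking-derived attributes are replaced by two invented numeric features, and a nearest-centroid rule stands in for the machine learning models trained in the thesis:

```python
# Toy polarity assignment from numeric feature vectors. Both the features
# and the nearest-centroid classifier are illustrative stand-ins.

def nearest_centroid_predict(x, centroids):
    def sq_dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(centroids, key=lambda label: sq_dist(x, centroids[label]))

# invented training feature vectors per polarity class
train = {
    "positive": [[0.9, 0.2], [0.8, 0.1]],
    "negative": [[0.1, 0.8], [0.2, 0.9]],
}
# centroid = per-dimension mean of the class's training vectors
centroids = {
    label: [sum(col) / len(col) for col in zip(*rows)]
    for label, rows in train.items()
}
pred = nearest_centroid_predict([0.85, 0.15], centroids)
```

The point of the sketch is only the pipeline shape: numeric attributes in, a learned decision rule out; the thesis's actual attributes come from search-engine rankings.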
34

Classificação automática de textos por meio de aprendizado de máquina baseado em redes / Text automatic classification through machine learning based on networks

Rossi, Rafael Geraldeli 26 October 2015 (has links)
Nos dias atuais há uma quantidade massiva de dados textuais sendo produzida e armazenada diariamente na forma de e-mails, relatórios, artigos e postagens em redes sociais ou blogs. Processar, organizar ou gerenciar essa grande quantidade de dados textuais manualmente exige um grande esforço humano, sendo muitas vezes impossível de ser realizado. Além disso, há conhecimento embutido nos dados textuais, e analisar e extrair conhecimento de forma manual também torna-se inviável devido à grande quantidade de textos. Com isso, técnicas computacionais que requerem pouca intervenção humana e que permitem a organização, gerenciamento e extração de conhecimento de grandes quantidades de textos têm ganhado destaque nos últimos anos e vêm sendo aplicadas tanto na academia quanto em empresas e organizações. Dentre as técnicas, destaca-se a classificação automática de textos, cujo objetivo é atribuir rótulos (identificadores de categorias pré-definidos) à documentos textuais ou porções de texto. Uma forma viável de realizar a classificação automática de textos é por meio de algoritmos de aprendizado de máquina, que são capazes de aprender, generalizar, ou ainda extrair padrões das classes das coleções com base no conteúdo e rótulos de documentos textuais. O aprendizado de máquina para a tarefa de classificação automática pode ser de 3 tipos: (i) indutivo supervisionado, que considera apenas documentos rotulados para induzir um modelo de classificação e classificar novos documentos; (ii) transdutivo semissupervisionado, que classifica documentos não rotulados de uma coleção com base em documentos rotulados; e (iii) indutivo semissupervisionado, que considera documentos rotulados e não rotulados para induzir um modelo de classificação e utiliza esse modelo para classificar novos documentos. Independente do tipo, é necessário que as coleções de documentos textuais estejam representadas em um formato estruturado para os algoritmos de aprendizado de máquina. 
Normalmente os documentos são representados em um modelo espaço-vetorial, no qual cada documento é representado por um vetor, e cada posição desse vetor corresponde a um termo ou atributo da coleção de documentos. Algoritmos baseados no modelo espaço-vetorial consideram que tanto os documentos quanto os termos ou atributos são independentes, o que pode degradar a qualidade da classificação. Uma alternativa à representação no modelo espaço-vetorial é a representação em redes, que permite modelar relações entre entidades de uma coleção de textos, como documento e termos. Esse tipo de representação permite extrair padrões das classes que dificilmente são extraídos por algoritmos baseados no modelo espaço-vetorial, permitindo assim aumentar a performance de classificação. Além disso, a representação em redes permite representar coleções de textos utilizando diferentes tipos de objetos bem como diferentes tipos de relações, o que permite capturar diferentes características das coleções. Entretanto, observa-se na literatura alguns desafios para que se possam combinar algoritmos de aprendizado de máquina e representações de coleções de textos em redes para realizar efetivamente a classificação automática de textos. Os principais desafios abordados neste projeto de doutorado são (i) o desenvolvimento de representações em redes que possam ser geradas eficientemente e que também permitam realizar um aprendizado de maneira eficiente; (ii) redes que considerem diferentes tipos de objetos e relações; (iii) representações em redes de coleções de textos de diferentes línguas e domínios; e (iv) algoritmos de aprendizado de máquina eficientes e que façam um melhor uso das representações em redes para aumentar a qualidade da classificação automática. Neste projeto de doutorado foram propostos e desenvolvidos métodos para gerar redes que representem coleções de textos, independente de domínio e idioma, considerando diferentes tipos de objetos e relações entre esses objetos. 
Também foram propostos e desenvolvidos algoritmos de aprendizado de máquina indutivo supervisionado, indutivo semissupervisionado e transdutivo semissupervisionado, uma vez que não foram encontrados na literatura algoritmos para lidar com determinados tipos de relações, além de sanar a deficiência dos algoritmos existentes em relação à performance e/ou tempo de classificação. É apresentado nesta tese (i) uma extensa avaliação empírica demonstrando o benefício do uso das representações em redes para a classificação de textos em relação ao modelo espaço-vetorial, (ii) o impacto da combinação de diferentes tipos de relações em uma única rede e (iii) que os algoritmos propostos baseados em redes são capazes de superar a performance de classificação de algoritmos tradicionais e estado da arte tanto considerando algoritmos de aprendizado supervisionado quanto semissupervisionado. As soluções propostas nesta tese demonstraram ser úteis e aconselháveis para serem utilizadas em diversas aplicações que envolvam classificação de textos de diferentes domínios, diferentes características ou para diferentes quantidades de documentos rotulados. / A massive amount of textual data, such as e-mails, reports, articles and posts in social networks or blogs, has been generated and stored on a daily basis. The manual processing, organization and management of this huge amount of texts require a considerable human effort and sometimes these tasks are impossible to carry out in practice. Besides, the manual extraction of knowledge embedded in textual data is also unfeasible due to the large amount of texts. Thus, computational techniques which require little human intervention and allow the organization, management and knowledge extraction from large amounts of texts have gained attention in the last years and have been applied in academia, companies and organizations. 
The tasks mentioned above can be carried out through automatic text classification, in which labels (identifiers of predefined categories) are assigned to texts or portions of texts. A viable way to perform automatic text classification is through machine learning algorithms, which are able to learn, generalize or extract patterns from the classes of text collections based on the content and labels of the texts. There are three types of machine learning algorithms for automatic classification: (i) inductive supervised, in which only labeled documents are considered to induce a classification model and this model is used to classify new documents; (ii) transductive semi-supervised, in which all known unlabeled documents are classified based on some labeled documents; and (iii) inductive semi-supervised, in which labeled and unlabeled documents are considered to induce a classification model in order to classify new documents. Regardless of the learning algorithm type, the texts of a collection must be represented in a structured format to be interpreted by the algorithms. Usually, the texts are represented in a vector space model, in which each text is represented by a vector and each dimension of the vector corresponds to a term or feature of the text collection. Algorithms based on the vector space model consider that texts, terms or features are independent, and this assumption can degrade the classification performance. Networks can be used as an alternative to vector space model representations. Networks allow the representation of relations among the entities of a text collection, such as documents and terms. This type of representation allows the extraction of patterns which are not extracted by algorithms based on the vector space model. Moreover, text collections can be represented by networks composed of different types of entities and relations, which provide the extraction of different patterns from the texts.
However, there are some challenges to be solved in order to combine machine learning algorithms and network-based representations to perform automatic text classification in an efficient way. The main challenges addressed in this doctoral project are (i) the development of network-based representations that can be generated efficiently and that also allow efficient learning; (ii) the development of networks which represent different types of entities and relations; (iii) the development of networks which can represent texts written in different languages and about different domains; and (iv) the development of efficient learning algorithms which make better use of the network-based representations and increase the classification performance. In this doctoral project we proposed and developed methods to represent text collections as networks considering different types of entities and relations, and also allowing the representation of texts written in any language or from any domain. We also proposed and developed supervised inductive, semi-supervised transductive and semi-supervised inductive learning algorithms to interpret and learn from the proposed network-based representations, since there were no algorithms to handle certain types of relations considered in this thesis. Besides, the proposed algorithms also attempt to obtain a higher classification performance and a faster classification than the existing network-based algorithms. In this doctoral thesis we present (i) an extensive empirical evaluation demonstrating the benefits of using network-based representations for text classification, (ii) the impact of combining different types of relations in a single network and (iii) that the proposed network-based algorithms are able to surpass the classification performance of traditional and state-of-the-art algorithms considering both supervised and semi-supervised learning.
The solutions proposed in this doctoral project have proved to be useful and advisable for many applications involving the classification of texts from different domains and areas, with different characteristics, or with different numbers of labeled documents.
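The transductive, network-based setting described above can be illustrated with a toy document-term bipartite graph, where labels propagate from labeled documents to unlabeled ones through shared terms. This is a drastic simplification of the algorithms proposed in the thesis, with invented documents:

```python
# Toy transductive classification on a document-term bipartite network:
# labels flow document -> term -> document. Illustrative only.
from collections import Counter

docs = {
    "d1": {"ball", "goal", "team"},       # labeled: sports
    "d2": {"election", "vote", "party"},  # labeled: politics
    "d3": {"goal", "team", "coach"},      # unlabeled
}
labels = {"d1": "sports", "d2": "politics"}

# for each term, count how often it appears in labeled docs of each class
term_votes = {}
for doc, label in labels.items():
    for term in docs[doc]:
        term_votes.setdefault(term, Counter())[label] += 1

def classify(doc):
    """Label an unlabeled document by summing its terms' class votes."""
    votes = Counter()
    for term in docs[doc]:
        votes.update(term_votes.get(term, {}))
    return votes.most_common(1)[0][0] if votes else None

pred = classify("d3")
```

Notice that "coach" never occurs in a labeled document, yet d3 is still classified through "goal" and "team" — the kind of relational pattern a pure vector space model with independent features does not exploit.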
35

Classificação automática de texto por meio de similaridade de palavras: um algoritmo mais eficiente. / Automatic text classification using word similarities: a more efficient algorithm.

Fabricio Shigueru Catae 08 January 2013 (has links)
A análise da semântica latente é uma técnica de processamento de linguagem natural, que busca simplificar a tarefa de encontrar palavras e sentenças por similaridade. Através da representação de texto em um espaço multidimensional, selecionam-se os valores mais significativos para sua reconstrução em uma dimensão reduzida. Essa simplificação lhe confere a capacidade de generalizar modelos, movendo as palavras e os textos para uma representação semântica. Dessa forma, essa técnica identifica um conjunto de significados ou conceitos ocultos sem a necessidade do conhecimento prévio da gramática. O objetivo desse trabalho foi determinar a dimensionalidade ideal do espaço semântico em uma tarefa de classificação de texto. A solução proposta corresponde a um algoritmo semi-supervisionado que, a partir de exemplos conhecidos, aplica o método de classificação pelo vizinho mais próximo e determina uma curva estimada da taxa de acerto. Como esse processamento é demorado, os vetores são projetados em um espaço no qual o cálculo se torna incremental. Devido à isometria dos espaços, a similaridade entre documentos se mantém equivalente. Esta proposta permite determinar a dimensão ideal do espaço semântico com pouco esforço além do tempo requerido pela análise da semântica latente tradicional. Os resultados mostraram ganhos significativos em adotar o número correto de dimensões. / Latent semantic analysis is a natural language processing technique that aims to simplify the task of finding similar words and sentences. Using a vector space model for text representation, it selects the most significant values for the reconstruction of the space in a reduced dimension. This simplification gives it the capacity to generalize models, moving words and texts towards a semantic representation. Thus, it identifies a set of underlying meanings or hidden concepts without prior knowledge of the grammar.
The goal of this study was to determine the optimal dimensionality of the semantic space in a text classification task. The proposed solution corresponds to a semi-supervised algorithm that applies the nearest neighbor classification method to known examples and determines an estimated accuracy curve. Because this is a very time-consuming process, the vectors are projected onto a space in which the calculation becomes incremental. Since the spaces are isometric, the similarity between documents remains equivalent. This proposal determines the optimal dimension of the semantic space with little effort beyond the time required by traditional latent semantic analysis. The results showed significant gains from adopting the correct number of dimensions.
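The dimension-selection loop described above can be sketched as follows, with toy 2-D vectors standing in for LSA-projected documents and leave-one-out 1-NN accuracy standing in for the estimated accuracy curve (the incremental projection trick is not reproduced here):

```python
# For each candidate dimensionality k, keep only the first k coordinates of
# each (latent) document vector and estimate leave-one-out 1-NN accuracy;
# the best k maximizes the curve. Data is invented for illustration.

def loo_1nn_accuracy(vectors, labels, k):
    correct = 0
    for i, v in enumerate(vectors):
        nearest = min(
            (j for j in range(len(vectors)) if j != i),
            key=lambda j: sum((v[d] - vectors[j][d]) ** 2 for d in range(k)),
        )
        correct += labels[nearest] == labels[i]
    return correct / len(vectors)

# first coordinate is deliberately uninformative; the second separates classes
vectors = [[0.5, 0.9], [0.4, 0.8], [0.5, 0.1], [0.6, 0.2]]
labels  = ["a", "a", "b", "b"]

curve = {k: loo_1nn_accuracy(vectors, labels, k) for k in (1, 2)}
best_k = max(curve, key=curve.get)
```

Here truncating too aggressively (k=1) discards the discriminative coordinate and hurts accuracy, which is the effect the thesis measures at scale.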
36

Classification de textes : de nouvelles pondérations adaptées aux petits volumes / Text Classification : new weights suitable for small dataset

Bouillot, Flavien 16 April 2015 (has links)
Au quotidien, le réflexe de classifier est omniprésent et inconscient. Par exemple dans le processus de prise de décision où face à un élément (un objet, un événement, une personne) nous allons instinctivement chercher à rapprocher cet élément d'autres similaires afin d'adapter nos choix et nos comportements. Ce rangement dans telle ou telle catégorie repose sur les expériences passées et les caractéristiques de l'élément. Plus les expériences seront nombreuses et les caractéristiques détaillées, plus fine et pertinente sera la décision. Il en est de même lorsqu'il nous faut catégoriser un document en fonction de son contenu. Par exemple détecter s'il s'agit d'un conte pour enfants ou d'un traité de philosophie. Ce traitement est bien sûr d'autant plus efficace si nous possédons un grand nombre d'ouvrages de ces deux catégories et que l'ouvrage à classifier possède un nombre important de mots.Dans ce manuscrit nous nous intéressons à la problématique de la prise de décision lorsque justement nous disposons de peu de documents d'apprentissage et que le document possède un nombre de mots limité. Nous proposons pour cela une nouvelle approche qui repose sur de nouvelles pondérations. Elle nous permet de déterminer avec précision l'importance à accorder aux mots composant le document.Afin d'optimiser les traitements, nous proposons une approche paramétrable. Cinq paramètres rendent notre approche adaptable, quel que soit le problème de classification donné. De très nombreuses expérimentations ont été menées sur différents types de documents, dans différentes langues et dans différentes configurations. Selon les corpus, elles mettent en évidence que notre proposition nous permet d'obtenir des résultats supérieurs en comparaison avec les meilleures approches de la littérature pour traiter les problématiques de petits volumes.L'utilisation de paramètres introduit bien sur une complexité supplémentaire puisqu'il faut alors déterminer les valeurs optimales. 
Détecter les meilleurs paramètres et les meilleurs algorithmes est une tâche compliquée dont la difficulté est théorisée au travers du théorème du No-Free-Lunch. Nous traitons cette seconde problématique en proposant une nouvelle approche de méta-classification reposant sur les notions de distances et de similarités sémantiques. Plus précisément nous proposons de nouveaux méta-descripteurs adaptés dans un contexte de classification de documents. Cette approche originale nous permet d'obtenir des résultats similaires aux meilleures approches de la littérature tout en offrant des qualités supplémentaires. Pour conclure, les travaux présentés dans ce manuscrit ont fait l'objet de diverses implémentations techniques, une dans le logiciel Weka, une dans un prototype industriel et enfin une troisième dans le logiciel de la société ayant financé ces travaux. / Every day, classification is omnipresent and unconscious. For example, in the process of decision making, when faced with an element (an object, an event, a person), we instinctively look for similar elements in order to adapt our choices and behaviors. This assignment to one category or another is based on past experiences and on the characteristics of the element. The more numerous the experiences and the more detailed the characteristics, the finer and more relevant the decision will be. The same holds when we need to categorize a document based on its content, for example to detect whether it is a children's story or a philosophical treatise. This processing is of course all the more effective if we have a large number of works of these two categories and if the document to classify contains a large number of words. In this thesis we address the problem of decision making precisely when few training documents are available and when the document has a limited number of words. For this we propose a new approach based on new term weights, which enables us to accurately determine the importance to be given to the words that compose the document. To optimize processing, we propose a configurable approach: five parameters make it adaptable, regardless of the given classification problem. Numerous experiments have been conducted on various types of documents, in different languages and in different configurations. Depending on the corpus, they show that our proposal achieves superior results in comparison with the best approaches in the literature for small-dataset problems. The use of parameters of course adds complexity, since their optimal values must then be determined. Detecting the best settings and the best algorithms is a complicated task, whose difficulty is theorized through the No-Free-Lunch theorem. We address this second problem by proposing a new meta-classification approach based on the notions of distance and semantic similarity. Specifically, we propose new meta-features suited to the context of document classification. This original approach allows us to achieve results similar to the best approaches in the literature while offering additional qualities. In conclusion, the work presented in this manuscript has led to various technical implementations: one in the Weka software, one in an industrial prototype, and a third in the software of the company that funded this work.
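One way to picture a class-aware term weight for small corpora is a weight that grows with a term's frequency inside the target class and shrinks when the term also appears in other classes. The formula and data below are illustrative only; the thesis's actual weighting schemes and their five parameters are not reproduced here:

```python
# Hedged sketch of a class-aware term weight: in-class term frequency scaled
# by an idf-like penalty over the other classes that also contain the term.
import math

def class_term_weight(term, target_class, docs_by_class):
    tf_in_class = sum(doc.count(term) for doc in docs_by_class[target_class])
    other = [c for c in docs_by_class if c != target_class]
    classes_with_term = sum(
        1 for c in other if any(term in doc for doc in docs_by_class[c])
    )
    return tf_in_class * math.log((1 + len(other)) / (1 + classes_with_term))

docs_by_class = {
    "sport":    [["match", "goal", "goal"], ["team", "match"]],
    "politics": [["vote", "law"], ["law", "debate"]],
}

w_goal = class_term_weight("goal", "sport", docs_by_class)  # frequent, exclusive
w_law  = class_term_weight("law", "sport", docs_by_class)   # absent from class
```

With few and short documents, such class-conditional statistics are all the signal available, which is why the choice of weighting matters so much in the small-volume setting.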
37

[pt] CLASSIFICAÇÃO DE SENTIMENTO PARA NOTÍCIAS SOBRE A PETROBRAS NO MERCADO FINANCEIRO / [en] SENTIMENT ANALYSIS FOR FINANCIAL NEWS ABOUT PETROBRAS COMPANY

PAULA DE CASTRO SONNENFELD VILELA 21 December 2011 (has links)
[pt] Hoje em dia, encontramos uma grande quantidade de informações na internet, em particular, notícias sobre o mercado financeiro. Diversas pesquisas mostram que notícias sobre o mercado financeiro possuem uma grande relação com variáveis de mercado como volume de transações, volatilidade e preço das ações. Nesse trabalho, investigamos o problema de Análise de Sentimentos de notícias jornalísticas do mercado financeiro. Nosso objetivo é classificar notícias como favoráveis ou não à Petrobras. Utilizamos técnicas de Processamento de Linguagem Natural para melhorar a acurácia do modelo clássico de saco-de-palavras. Filtramos frases sobre a Petrobras e inserimos novos atributos linguísticos, tanto sintáticos como estilísticos. Para a classificação do sentimento é utilizado o algoritmo de aprendizado Support Vector Machine, sendo aplicados ainda quatro seletores de atributos e um comitê dos melhores modelos. Apresentamos aqui o Petronews, um corpus com notícias em português sobre a Petrobras, anotado manualmente com a informação de sentimento. Esse corpus é composto de mil e cinquenta notícias online de 02/06/2006 a 29/01/2010. Nossos experimentos mostram uma melhora de 5.29 por cento com relação ao modelo saco-de-palavras, atingindo uma acurácia de 87.14 por cento. / [en] A huge amount of information is available online, in particular regarding financial news. Current research indicates that stock news have a strong correlation to market variables such as trade volumes, volatility, stock prices and firm earnings. Here, we investigate a Sentiment Analysis problem for financial news. Our goal is to classify financial news as favorable or unfavorable to Petrobras, an oil and gas company with stocks in the Stock Exchange market. We explore Natural Language Processing techniques to improve the sentiment classification accuracy of a classical bag-of-words approach.
We filter on-topic phrases for each Petrobras-related news item and build syntactic and stylistic input features. For sentiment classification, the Support Vector Machines algorithm is used. Moreover, we apply four feature selection methods and build a committee of SVM models. Additionally, we introduce Petronews, a Portuguese financial news corpus about Petrobras, manually annotated with sentiment information. It is composed of one thousand and fifty online financial news articles from 06/02/2006 to 01/29/2010. Our experiments indicate that our method is 5.29 per cent better than a standard bag-of-words approach, reaching an 87.14 per cent accuracy rate for this domain.
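The bag-of-words classification pipeline can be sketched with word-presence features. A simple perceptron stands in for the SVM so the example needs no external library; the vocabulary and the toy Portuguese-flavoured headlines are invented, not from the Petronews corpus:

```python
# Toy favorable/unfavorable classifier over bag-of-words presence features.
# A perceptron is used as a dependency-free stand-in for the SVM.

def featurize(text, vocab):
    words = set(text.lower().split())
    return [1.0 if w in words else 0.0 for w in vocab]

vocab = ["lucro", "alta", "queda", "prejuízo"]
train = [
    ("lucro em alta", 1),     # favorable
    ("alta recorde", 1),
    ("queda forte", -1),      # unfavorable
    ("prejuízo anual", -1),
]

weights, bias = [0.0] * len(vocab), 0.0
for _ in range(10):  # a few epochs over the toy data
    for text, y in train:
        x = featurize(text, vocab)
        pred = 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else -1
        if pred != y:  # mistake-driven update
            weights = [w + y * xi for w, xi in zip(weights, x)]
            bias += y

def predict(text):
    x = featurize(text, vocab)
    return 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else -1
```

The thesis's contribution sits on top of this baseline: filtering on-topic phrases and adding syntactic and stylistic features before the classifier ever sees the text.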
38

Improving Feature Selection Techniques for Machine Learning

Tan, Feng 27 November 2007 (has links)
As a commonly used technique in data preprocessing for machine learning, feature selection identifies important features and removes irrelevant, redundant or noisy features to reduce the dimensionality of the feature space. It improves the efficiency, accuracy and comprehensibility of the models built by learning algorithms. Feature selection techniques have been widely employed in a variety of applications, such as genomic analysis, information retrieval, and text categorization. Researchers have introduced many feature selection algorithms with different selection criteria. However, it has been discovered that no single criterion is best for all applications. We propose a hybrid feature selection framework based on genetic algorithms (GAs), a wrapper method that employs a target learning algorithm to evaluate features; we call it the hybrid genetic feature selection (HGFS) framework. The advantages of this approach include the ability to accommodate multiple feature selection criteria and to find small subsets of features that perform well for the target algorithm. The experiments on genomic data demonstrate that ours is a robust and effective approach that can find subsets of features with higher classification accuracy and/or smaller size compared to each individual feature selection algorithm. A common characteristic of text categorization tasks is multi-label classification with a great number of features, which makes wrapper methods time-consuming and impractical. We therefore propose a simple filter (non-wrapper) approach called the Relation Strength and Frequency Variance (RSFV) measure. The basic idea is that informative features are those that are highly correlated with the class and distributed most differently among all classes. The approach is compared with two well-known feature selection methods in experiments on two standard text corpora. The experiments show that RSFV generates equal or better performance than the others in many cases.
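A filter score in the spirit of RSFV can be sketched as the variance of a feature's per-class relative frequency: a feature scores high when it is common in one class and rare in the others. This is an illustrative simplification with invented data, not the exact RSFV formula:

```python
# Filter-style feature scoring: how differently does a feature distribute
# across classes? Variance of per-class document frequency, illustrative only.

def class_distribution_variance(feature, docs_by_class):
    freqs = [
        sum(1 for doc in docs if feature in doc) / len(docs)
        for docs in docs_by_class.values()
    ]
    mean = sum(freqs) / len(freqs)
    return sum((f - mean) ** 2 for f in freqs) / len(freqs)

docs_by_class = {
    "sports":   [{"game", "score", "the"}, {"score", "the"}],
    "politics": [{"vote", "the"}, {"vote", "law", "the"}],
}

informative   = class_distribution_variance("score", docs_by_class)  # class-specific
uninformative = class_distribution_variance("the", docs_by_class)    # uniform
```

Because such a filter only scans class-conditional counts, it sidesteps the repeated model training that makes wrapper methods impractical for large multi-label text collections.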
39

Feature Reduction and Multi-label Classification Approaches for Document Data

Jiang, Jung-Yi 08 August 2011 (has links)
This thesis proposes novel approaches for feature reduction and multi-label classification for text datasets. In text processing, the bag-of-words model is commonly used, with each document modeled as a vector in a high-dimensional space. This model is often called the vector space model. Usually, the dimensionality of the document vector is huge, and such high dimensionality can be a severe obstacle for text processing algorithms. To improve the performance of text processing algorithms, we propose a feature clustering approach to reduce the dimensionality of document vectors. We also propose an efficient algorithm for text classification. Feature clustering is a powerful method to reduce the dimensionality of feature vectors for text classification. We propose a fuzzy similarity-based self-constructing algorithm for feature clustering. The words in the feature vector of a document set are grouped into clusters based on a similarity test. Words that are similar to each other are grouped into the same cluster. Each cluster is characterized by a membership function with a statistical mean and deviation. When all the words have been fed in, a desired number of clusters is formed automatically. We then have one extracted feature for each cluster. The extracted feature corresponding to a cluster is a weighted combination of the words contained in the cluster. By this algorithm, the derived membership functions match closely with and properly describe the real distribution of the training data. Besides, the user need not specify the number of extracted features in advance, and trial-and-error for determining the appropriate number of extracted features can be avoided. Experimental results show that our method runs faster and obtains better extracted features than other methods. We also propose a fuzzy similarity clustering scheme for multi-label text categorization, in which a document can belong to one or more than one category.
Firstly, feature transformation is performed: an input document is transformed into a fuzzy-similarity vector. Next, the relevance degrees of the input document to a collection of clusters are calculated, which are then combined to obtain the relevance degree of the input document to each participating category. Finally, the input document is classified into a certain category if the associated relevance degree exceeds a threshold. In text categorization, the number of involved terms is usually huge, so an automatic classification system may suffer from large memory requirements and poor efficiency. Our scheme avoids these difficulties. Besides, we allow the region a category covers to be a combination of several sub-regions that are not necessarily connected. The effectiveness of our proposed scheme is demonstrated by the results of several experiments.
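The self-constructing grouping step can be sketched with words represented by invented class-distribution vectors and a hard cosine threshold standing in for the thesis's fuzzy membership functions (whose per-cluster mean and deviation are simplified away here):

```python
# Self-constructing word clustering sketch: a word joins the first cluster
# whose centroid it resembles closely enough, otherwise it seeds a new
# cluster. Vectors and threshold are illustrative.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def cluster_words(word_vectors, threshold=0.95):
    clusters = []  # each cluster: {"centroid": [...], "words": [...]}
    for word, vec in word_vectors.items():
        for cl in clusters:
            if cosine(vec, cl["centroid"]) >= threshold:
                cl["words"].append(word)
                break
        else:  # no cluster passed the similarity test: start a new one
            clusters.append({"centroid": list(vec), "words": [word]})
    return clusters

word_vectors = {
    "goal":  [0.9, 0.1],
    "match": [0.85, 0.15],
    "vote":  [0.1, 0.9],
}
clusters = cluster_words(word_vectors)
```

As in the thesis's algorithm, the number of clusters is not fixed in advance: it emerges from the similarity test as words are fed in one by one.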
40

Recommending Travel Threads Based on Information Need Model

Chen, Po-ling 29 July 2012 (has links)
Recommendation techniques are developed to discover a user's real information need among large amounts of information. Recommendation systems help users filter information and attempt to present items matching the user's tastes. In our work, we focus on discussion thread recommendation in the tourism domain. We assume that when users have a travel information need, they will try to search for related information on websites. In addition to browsing others' suggestions and opinions, users are allowed to express their need as a question. Hence, we focus on recommending to users previous discussion threads that may provide good answers to their questions, by considering the question input as well as their browsing records. We propose a model which consists of four perspectives: goal similarity, content similarity, freshness and quality. To validate the effectiveness of our model on recommendation performance, we collected 14348 threads from TripAdvisor.com, the largest travel website, and recruited ten volunteers with an interest in tourism to verify our approach. The four perspectives are utilized by two methods: the first is a Question-based method, which makes use of content similarity, freshness and quality; the second is a Session-based method, which involves goal similarity. We also integrate the two methods into a hybrid method. The experiment results show that the hybrid method generally performs better than the other two methods.
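A hybrid combination of the four perspectives can be sketched as a weighted sum used for ranking. The weights and the component scores below are invented for illustration; the thesis's actual goal/content similarity measures are not reproduced:

```python
# Rank candidate threads by a weighted sum of the four perspective scores
# (goal similarity, content similarity, freshness, quality). Illustrative.

def thread_score(thread, weights=(0.4, 0.3, 0.15, 0.15)):
    w_goal, w_content, w_fresh, w_quality = weights
    return (w_goal * thread["goal_sim"]
            + w_content * thread["content_sim"]
            + w_fresh * thread["freshness"]
            + w_quality * thread["quality"])

threads = [
    {"id": "t1", "goal_sim": 0.9, "content_sim": 0.8,
     "freshness": 0.5, "quality": 0.7},
    {"id": "t2", "goal_sim": 0.3, "content_sim": 0.4,
     "freshness": 0.9, "quality": 0.9},
]
ranked = sorted(threads, key=thread_score, reverse=True)
```

A thread that matches the user's goal and question content can outrank a fresher, higher-quality but off-topic one, which is the intuition behind combining the Question-based and Session-based methods.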
