21

An Ensemble Approach for Text Categorization with Positive and Unlabeled Examples

Chen, Hsueh-Ching 29 July 2005 (has links)
Text categorization is the process of assigning new documents to predefined document categories on the basis of one or more classification models induced from a set of pre-categorized training documents. In a typical dichotomous classification scenario, the set of training documents includes both positive and negative examples; that is, each of the two categories is associated with training documents. However, in many real-world text categorization applications, positive and unlabeled documents are readily available, whereas the acquisition of negative training documents is extremely expensive or even impossible. In this study, we propose and develop an ensemble approach, referred to as E2, to address the limitations of existing algorithms for learning from positive and unlabeled training documents. Using spam email filtering as the evaluation application, our empirical results suggest that the proposed E2 technique exhibits more stable and reliable performance than PNB and PEBL.
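As a rough illustration of the positive-and-unlabeled (PU) setting described above, the sketch below implements a common two-step PU baseline, not the E2 ensemble itself: unlabeled documents are first treated as provisional negatives, and only the least positive-looking of them are kept as "reliable negatives" for retraining. The tiny corpus, the Naive Bayes learner, and the 50% reliable-negative cutoff are all illustrative assumptions.

```python
# Minimal two-step PU baseline (illustrative; not the E2 ensemble): treat unlabeled
# documents as provisional negatives, then retrain using only "reliable negatives".
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
import numpy as np

positive_docs = ["cheap meds online buy now", "win a free prize click here"]
unlabeled_docs = ["meeting moved to 3pm", "free shipping on your order",
                  "quarterly report attached"]

vec = TfidfVectorizer()
X = vec.fit_transform(positive_docs + unlabeled_docs)
n_pos = len(positive_docs)

# Step 1: provisional model with every unlabeled document treated as negative.
y_step1 = np.array([1] * n_pos + [0] * len(unlabeled_docs))
step1 = MultinomialNB().fit(X, y_step1)
unlabeled_scores = step1.predict_proba(X[n_pos:])[:, 1]

# Step 2: keep only the least positive-looking unlabeled documents as reliable
# negatives (here, the lower-scoring half) and retrain the final classifier.
reliable = np.argsort(unlabeled_scores)[: max(1, len(unlabeled_docs) // 2)]
X_final = np.vstack([X[:n_pos].toarray(), X[n_pos:][reliable].toarray()])
y_final = np.array([1] * n_pos + [0] * len(reliable))
model = MultinomialNB().fit(X_final, y_final)

print(model.predict(vec.transform(["click here for a free prize"])))  # classify an unseen message
```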
22

Text Mining: A Burgeoning Quality Improvement Tool

J. Mohammad, Mohammad Alkin Cihad 01 November 2007 (has links) (PDF)
While the amount of textual data available to us is constantly increasing, managing the texts by human effort alone is clearly inadequate for the volume and complexity of the information involved. Consequently, the need for automated extraction of useful knowledge from huge amounts of textual data to assist human analysis is apparent. Text mining (TM) is a largely automated technique that aims to discover knowledge from textual data. In this thesis, the notion of text mining, its techniques, and its applications are presented. In particular, the study provides a definition and overview of concepts in text categorization, including document representation models, weighting schemes, feature selection methods, feature extraction, performance measures, and machine learning techniques. The thesis details the functionality of text mining as a quality improvement tool. It carries out an extensive survey of text mining applications within the service sector and manufacturing industry, and presents two broad experimental studies on the potential use of text mining in the hotel industry (a comment card analysis) and at an automobile manufacturer (a miles-per-gallon analysis). Keywords: Text Mining, Text Categorization, Quality Improvement, Service Sector, Manufacturing Industry.
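For readers unfamiliar with the components listed above, the sketch below wires them into a minimal, generic text-categorization pipeline (bag-of-words representation, tf-idf weighting, chi-squared feature selection, a linear SVM); it is a scikit-learn illustration, not the thesis's experimental setup, and the toy hotel comment-card data is invented.

```python
# A generic text-categorization pipeline: representation, weighting, feature
# selection, and a machine learning technique applied to invented comment-card text.
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.svm import LinearSVC

comments = ["room was dirty and noisy", "great breakfast and friendly staff",
            "air conditioning broken again", "lovely pool and quick check in"]
labels = ["complaint", "praise", "complaint", "praise"]

pipeline = Pipeline([
    ("tfidf", TfidfVectorizer()),        # document representation + weighting scheme
    ("select", SelectKBest(chi2, k=5)),  # feature selection
    ("clf", LinearSVC()),                # machine learning technique
])
pipeline.fit(comments, labels)
print(pipeline.predict(["staff were friendly", "the room was noisy"]))
```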
23

Investigations of Term Expansion on Text Mining Techniques

Yang, Chin-Sheng 02 August 2002 (has links)
Recent advances in computer and network technologies have contributed significantly to global connectivity and caused the volume of online textual documents to grow extremely rapidly. The rapid accumulation of textual documents on the Web or within an organization requires effective document management techniques, covering information retrieval, information filtering, and text mining. The word mismatch problem represents a challenging issue to be addressed by document management research. Word mismatch has been extensively investigated in information retrieval (IR) research through the use of term expansion (or, specifically, query expansion). However, a review of the text mining literature suggests that the word mismatch problem has seldom been addressed by text mining techniques. Thus, this thesis investigates the use of term expansion in three text mining techniques: text categorization, document clustering, and event detection. Accordingly, we developed term expansion extensions to these three techniques. The empirical evaluation results showed that term expansion increased categorization effectiveness when correlation-coefficient feature selection was employed. With respect to document clustering, techniques extended with term expansion achieved clustering effectiveness comparable to existing techniques and showed superiority in improving the clustering specificity measure. Finally, the use of term expansion for supporting event detection degraded detection effectiveness compared to the traditional event detection technique.
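A minimal sketch of the term expansion idea follows, under the simplifying assumption that within-document co-occurrence counts over a toy corpus stand in for a proper expansion resource: each document's terms are supplemented with the terms that co-occur with them most strongly before categorization or clustering.

```python
# Term expansion from corpus co-occurrence: a document's terms are supplemented with
# the terms that most often co-occur with them in other documents (toy corpus).
from collections import Counter
from itertools import combinations

corpus = [
    ["stock", "market", "shares", "trading"],
    ["market", "trading", "price", "shares"],
    ["football", "match", "goal", "league"],
]

cooc = Counter()                      # symmetric within-document co-occurrence counts
for doc in corpus:
    for a, b in combinations(set(doc), 2):
        cooc[(a, b)] += 1
        cooc[(b, a)] += 1

def expand(terms, top_k=2):
    """Return the original terms plus the top_k most strongly co-occurring new terms."""
    candidates = Counter()
    for t in terms:
        for (a, b), count in cooc.items():
            if a == t and b not in terms:
                candidates[b] += count
    return list(terms) + [t for t, _ in candidates.most_common(top_k)]

print(expand(["stock", "price"]))     # adds finance terms such as "market" and "shares"
```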
24

Algoritmos de seleção de características personalizados por classe para categorização de texto / Per-class customized feature selection algorithms for text categorization

FRAGOSO, Rogério César Peixoto 26 August 2016 (has links)
Text categorization is an important technique for organizing and retrieving information from digital documents. A common approach is to represent each word as a feature. However, most of the features in a textual document are irrelevant to its categorization. Thus, dimensionality reduction is a fundamental step to improve classification performance and diminish the high computational cost inherent to high-dimensional problems such as text categorization. The most commonly adopted strategy for dimensionality reduction in text categorization relies on feature selection methods based on filtering. This kind of method requires an effort to configure the size of the final feature vector. This work proposes filtering methods that aim to improve categorization performance compared to state-of-the-art methods and to make it possible to determine the size of the final feature set automatically. The first proposed method, Category-dependent Maximum f Features per Document-Reduced (cMFDR), sets a threshold for each category that determines which documents are considered in the feature selection process. The method uses a parameter to define how many features are selected per document. This approach presents some advantages, such as simplifying the choice of the most effective subset through a drastic reduction of the number of possible configurations. The second proposed method, Automatic Feature Subsets Analyzer (AFSA), introduces a procedure to determine, in a data-driven way, the most effective subset among a number of generated subsets. This method uses the same parameter used by cMFDR to define the size of the final feature vector, which keeps the computational cost of finding the most effective subset low. The performance of the proposed methods was assessed on the WebKB, Reuters, 20 Newsgroups, and TDT2 datasets, using the Bi-Normal Separation, Class Discriminating Measure, and Chi-Squared Statistics feature evaluation functions. The experimental results demonstrate that the proposed methods are more effective than state-of-the-art methods.
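The sketch below illustrates one reading of the "f features per document" idea behind this family of methods; it is not the exact cMFDR algorithm, and the corpus, labels, and scoring function are illustrative assumptions: terms get a global chi-squared relevance score, and the final vocabulary is the union, over documents, of each document's f best-scoring terms, so a single parameter f governs the final vector size.

```python
# Per-document feature selection in the spirit of "f features per document" methods
# (a sketch, not the exact cMFDR algorithm): score every term globally with chi-squared,
# then keep, for each document, its f best-scoring terms; the vocabulary is their union.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import chi2

docs = ["interest rates rise again", "central bank cuts rates",
        "team wins championship final", "coach praises team defence"]
labels = [0, 0, 1, 1]

vec = CountVectorizer()
X = vec.fit_transform(docs)
scores, _ = chi2(X, labels)                       # global relevance score per term
terms = np.array(vec.get_feature_names_out())

def select_f_per_document(f=2):
    selected = set()
    for row in X:                                  # one sparse row per document
        present = row.nonzero()[1]                 # indices of terms in this document
        best = present[np.argsort(scores[present])[::-1][:f]]
        selected.update(terms[best].tolist())
    return sorted(selected)

print(select_f_per_document(f=2))                  # final vocabulary size controlled by f
```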
25

Multi-Class Classification of Textual Data: Detection and Mitigation of Cheating in Massively Multiplayer Online Role Playing Games

Maguluri, Naga Sai Nikhil 10 May 2017 (has links)
No description available.
26

With or without context : Automatic text categorization using semantic kernels

Eklund, Johan January 2016 (has links)
In this thesis text categorization is investigated in four dimensions of analysis: theoretically as well as empirically, and as a manual as well as a machine-based process. In the first four chapters we look at the theoretical foundation of subject classification of text documents, with a certain focus on classification as a procedure for organizing documents in libraries. A working hypothesis used in the theoretical analysis is that classification of documents is a process that involves translations between statements in different languages, both natural and artificial. We further investigate the close relationships between structures in classification languages and the order relations and topological structures that arise from classification. A classification algorithm that gets a special focus in the subsequent chapters is the support vector machine (SVM), which in its original formulation is a binary classifier in linear vector spaces, but has been extended to handle classification problems for which the categories are not linearly separable. To this end the algorithm utilizes a category of functions called kernels, which induce feature spaces by means of high-dimensional and often non-linear maps. For the empirical part of this study we investigate the classification performance of semantic kernels generated by different measures of semantic similarity. One category of such measures is based on latent semantic analysis and random indexing, which generate term vectors from co-occurrence data in text collections. Another semantic measure used in this study is pointwise mutual information. In addition to the empirical study of semantic kernels we also investigate the performance of a term weighting scheme called divergence from randomness, which has hitherto received little attention within the area of automatic text categorization. The results of the empirical part of this study show that the semantic kernels generally outperform the “standard” (non-semantic) linear kernel, especially for small training sets. A conclusion that can be drawn with respect to the investigated datasets is therefore that semantic information in the kernel in general improves its classification performance, and that the difference between the standard kernel and the semantic kernels is particularly large for small training sets. Another clear trend in the result is that the divergence from randomness weighting scheme yields a classification performance surpassing that of the common tf-idf weighting scheme.
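As a concrete illustration of the semantic-kernel idea, the sketch below folds a term-term similarity matrix S into the document kernel K(d_i, d_j) = x_i S x_j^T and passes it to an SVM as a precomputed kernel; the cosine-of-co-occurrence similarity used here is a crude stand-in for the LSA, random indexing, or pointwise mutual information measures studied in the thesis, and the documents are invented.

```python
# Semantic kernel sketch: a term-term similarity matrix S (cosine similarity of term
# co-occurrence vectors, standing in for LSA / random indexing / PMI measures)
# is folded into the document kernel K(d_i, d_j) = x_i S x_j^T used by the SVM.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.svm import SVC

docs = ["car engine repair", "vehicle motor maintenance",
        "piano concert tonight", "orchestra plays symphony"]
labels = [0, 0, 1, 1]

vec = TfidfVectorizer()
X = vec.fit_transform(docs).toarray()

S = cosine_similarity(X.T)            # term-term similarity from corpus co-occurrence
K_train = X @ S @ X.T                 # semantic kernel between training documents
clf = SVC(kernel="precomputed").fit(K_train, labels)

X_new = vec.transform(["motor repair shop"]).toarray()
K_new = X_new @ S @ X.T               # kernel between the new document and training docs
print(clf.predict(K_new))             # classify a new document through the same kernel
```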
27

Objektivita médií / Media Bias in Czech Newspapers

Žiačiková, Zuzana January 2011 (has links)
The purpose of this thesis is to examine the media bias of selected Czech newspapers. The study uses a modified version of the method for measuring media bias introduced by Gentzkow and Shapiro (2010). Media bias here means the use of politically loaded language in the newspapers. The method is based on a comparison between the partisan language of the political parties represented in Parliament and the language of the selected newspapers. This is carried out by comparing phrase frequencies in parliamentary speeches attributed to the coalition or the opposition with phrase frequencies in newspaper content. The analysis of the statistical dependence between newspaper language and partisan speech provides evidence that Hospodářské noviny and Lidové noviny use language more similar to the coalition, while the content of Mladá fronta Dnes, Blesk, Právo, and Haló appears to be closer to the opposition parties.
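A minimal sketch of this kind of measurement follows, with invented counts and a simplified partisanship score rather than the exact Gentzkow and Shapiro (2010) procedure: each phrase is scored by how strongly its frequency differs between coalition and opposition speeches, and a newspaper's slant is summarized by the usage-weighted sum of those scores.

```python
# Toy slant measurement: phrases are scored for partisanship from coalition vs.
# opposition speech counts (a simplified chi-squared-style score), and a newspaper's
# slant is the usage-weighted sum of those scores. All counts are invented.
phrase_counts = {
    # phrase: (coalition speech count, opposition speech count)
    "fiscal responsibility": (120, 20),
    "budget cuts": (90, 35),
    "social justice": (15, 110),
    "workers rights": (25, 95),
}
newspaper_counts = {"fiscal responsibility": 40, "budget cuts": 30,
                    "social justice": 10, "workers rights": 12}

def partisanship(coal, opp):
    """Signed divergence from an even coalition/opposition split; positive = coalition."""
    expected = (coal + opp) / 2
    chi2 = ((coal - expected) ** 2 + (opp - expected) ** 2) / expected
    return chi2 if coal > opp else -chi2

total_mentions = sum(newspaper_counts.values())
slant = sum(count / total_mentions * partisanship(*phrase_counts[phrase])
            for phrase, count in newspaper_counts.items())
print(f"slant score: {slant:+.1f}  (positive = closer to coalition language)")
```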
28

Statistical pattern recognition approaches for retrieval-based machine translation systems

Mansjur, Dwi Sianto 01 November 2011 (has links)
This dissertation addresses the problem of Machine Translation (MT), which is defined as the automated translation of a document written in one language (the source language) to another (the target language) by a computer. The MT task requires various types of knowledge of both the source and target language, e.g., linguistic rules and linguistic exceptions. Traditional MT systems rely on an extensive parsing strategy to decode the linguistic rules and use a knowledge base to encode those linguistic exceptions. However, the construction of the knowledge base becomes an issue as the translation system grows. To overcome this difficulty, real translation examples are used instead of a manually-crafted knowledge base. This design strategy is known as the Example-Based Machine Translation (EBMT) principle. Traditional EBMT systems utilize a database of word or phrase translation pairs. The main challenge of this approach is the difficulty of combining the word or phrase translation units into a meaningful and fluent target text. A novel Retrieval-Based Machine Translation (RBMT) system, which uses a sentence-level translation unit, is proposed in this study. An advantage of using the sentence-level translation unit is that the boundary of a sentence is explicitly defined and the semantics, or meaning, are precise in both the source and the target language. The main challenge of using a sentential translation unit is the limited coverage, i.e., the difficulty of finding an exact match between a user query and sentences in the source database. Using an electronic dictionary and a topic modeling procedure, we develop a procedure to obtain clusters of sensible variations for each example in the source database. The coverage of our MT system improves because an input query text is matched against a cluster of sensible variations of translation examples instead of being matched against an original source example. In addition, pattern recognition techniques are used to improve the matching procedure, i.e., the design of optimal pattern classifiers and the incorporation of subjective judgments. A high-performance statistical pattern classifier is used to identify the target sentences from an input query sentence in our MT system. The proposed classifier is different from the conventional classifier in terms of the way it addresses the generalization capability. A conventional classifier addresses the generalization issue using the parsimony principle and may end up choosing an oversimplified statistical model. The proposed classifier directly addresses the generalization issue in terms of training (empirical) data. Our classifier is expected to generalize better than the conventional classifiers because it is less likely to use over-simplified statistical models based on the available training data. We further improve the matching procedure by the incorporation of subjective judgments. We formulate a novel cost function that combines subjective judgments and the degree of matching between translation examples and an input query. In addition, we provide an optimization strategy for the novel cost function so that the statistical model can be optimized according to the subjective judgments.
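The retrieval step at the core of the described RBMT idea can be illustrated in a few lines; the sketch below matches a query against stored source sentences by tf-idf cosine similarity and returns the target side of the best match, leaving out the variation clusters, the custom classifier, and the subjective-judgment cost function. The tiny English-Spanish memory is an invented placeholder.

```python
# Retrieval core of a sentence-level translation memory: match the query against stored
# source sentences by tf-idf cosine similarity and return the target side of the best hit.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

memory = [
    ("where is the train station", "donde esta la estacion de tren"),
    ("i would like a cup of coffee", "quisiera una taza de cafe"),
    ("how much does this cost", "cuanto cuesta esto"),
]
sources = [src for src, _ in memory]
vec = TfidfVectorizer().fit(sources)
S = vec.transform(sources)

def translate(query):
    sims = cosine_similarity(vec.transform([query]), S)[0]
    best = int(sims.argmax())
    return memory[best][1], float(sims[best])     # target sentence and its match score

print(translate("where is the station"))
```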
29

Medida de certeza na categorização multi-rótulo de texto e sua utilização como estratégia de poda do ranking de categorias / A certainty measure for multi-label text categorization and its use as a category-ranking pruning strategy

Souza, Caribe Zampirolli de 27 August 2010 (has links)
Given an input document, a multi-label text categorization system typically computes degrees of belief for the categories of a predefined set, ranks the categories by degree of belief, and assigns to the document the categories whose degree of belief exceeds a given pruning threshold. Ideally, the degree of belief would indicate the probability that the document actually belongs to the category. Unfortunately, existing categorizers do not compute such probabilities, and mapping degrees of belief to probabilities is still a little-explored problem in information retrieval (IR). In this work, we propose a method based on Bayes' rule to map degrees of belief to certainty measures for multi-label text categorization. We also propose a strategy for determining pruning thresholds based on this certainty measure, bayesian cut (BCut), and a variant of BCut, position-based bayesian cut (PBCut). We experimentally evaluate the impact of the proposed methods on the performance of two multi-label text categorization techniques, multi-label k-nearest neighbors (ML-kNN) and data-correlated VG-RAM weightless neural networks (VG-RAM WNNCOR), in the context of categorizing the economic-activity descriptions of Brazilian companies according to the Brazilian National Classification of Economic Activities (CNAE). We also investigate the impact on multi-label categorization performance of three pruning methods commonly used in the IR literature, RCut, PCut, and SCut, and of a variant of RCut, RTCut. In addition, we propose new variants of PCut and SCut, named PCut* and SCut*, respectively, to address problems in these approaches. Our experimental results show that, using our method for generating certainty measures, it is possible to predict how certain the categorizer is that the categories it predicts are in fact pertinent to a given document. Our results also show that the proposed pruning strategies BCut and PBCut achieve categorization performance superior to all the other strategies considered in terms of precision.
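A minimal sketch of the two ideas follows, under simplifying assumptions that differ from the thesis algorithms: belief scores are mapped to certainty estimates from binned validation statistics (an empirical application of Bayes' rule), and a BCut-style pruning keeps only categories whose estimated certainty reaches a threshold. The bin statistics and the category ranking are invented.

```python
# Mapping belief scores to certainty estimates and pruning with a BCut-style threshold.
# Bin statistics (how often predictions in a score range were correct on validation
# data) are invented; they give the empirical estimate of P(relevant | score).
import numpy as np

bins = np.array([0.0, 0.25, 0.5, 0.75, 1.01])       # score-bin edges
bin_stats = [(40, 4), (30, 12), (20, 14), (10, 9)]  # (predictions, correct) per bin

def certainty(score):
    """Estimated probability that a category with this belief score is relevant."""
    i = int(np.digitize(score, bins)) - 1
    n, correct = bin_stats[i]
    return correct / n

def bayesian_cut(ranked_categories, threshold=0.5):
    """Keep only categories whose mapped certainty reaches the threshold."""
    return [(cat, round(certainty(s), 2)) for cat, s in ranked_categories
            if certainty(s) >= threshold]

ranking = [("retail trade", 0.92), ("food services", 0.60), ("construction", 0.30)]
print(bayesian_cut(ranking, threshold=0.5))          # prunes the low-certainty category
```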
30

A Framework of Automatic Subject Term Assignment: An Indexing Conception-Based Approach

Chung, EunKyung 12 1900 (has links)
The purpose of this dissertation is to examine whether an understanding of the subject indexing processes conducted by human indexers has a positive impact on the effectiveness of automatic subject term assignment through text categorization (TC). More specifically, human indexers' subject indexing approaches, or conceptions, in conjunction with semantic sources were explored in the context of a typical scientific journal article data set. Based on the premise that indexing approaches or conceptions combined with semantic sources are important for automatic subject term assignment through TC, this study proposes an indexing conception-based framework. Three hypotheses were tested: 1) the effectiveness of semantic sources, 2) the effectiveness of an indexing conception-based framework, and 3) the effectiveness of each of three indexing conception-based approaches (the content-oriented, the document-oriented, and the domain-oriented approach). The experiments were conducted using a support vector machine implementation in WEKA (Witten & Frank, 2000). The experimental results indicated that cited works, source title, and title were as effective as the full text, while keywords were found to be more effective than the full text. In addition, the findings showed that the indexing conception-based framework was more effective than the full text; in particular, the content-oriented and document-oriented indexing approaches were more effective than the full text. Among the three indexing conception-based approaches, the content-oriented and document-oriented approaches were more effective than the domain-oriented approach. In other words, in the context of a typical scientific journal article data set, the objective contents and the authors' intentions received more focus than the possible users' needs. The findings of this study support the view that incorporating human indexers' indexing approaches or conceptions, in conjunction with semantic sources, has a positive impact on the effectiveness of automatic subject term assignment.
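The kind of semantic-source comparison reported above can be illustrated with a small experiment skeleton: the same linear SVM (scikit-learn here, standing in for the WEKA implementation used in the dissertation) is trained on different fields of the same records (title, keywords, full text) and scored with macro-averaged F1. The toy records are invented, and a proper train/test split is omitted for brevity.

```python
# Comparing semantic sources: train the same linear SVM on different fields of the
# same records and report macro-averaged F1 for each field. Records are invented and,
# for brevity, the model is scored on its own training data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.metrics import f1_score

records = [
    {"title": "protein folding dynamics", "keywords": "biophysics simulation",
     "fulltext": "we simulate protein folding pathways in solvent", "label": "biology"},
    {"title": "deep learning for vision", "keywords": "neural networks images",
     "fulltext": "convolutional networks classify natural images", "label": "computing"},
    {"title": "gene expression profiling", "keywords": "microarray genomics",
     "fulltext": "expression levels are compared across tissues", "label": "biology"},
    {"title": "graph algorithms at scale", "keywords": "distributed computing graphs",
     "fulltext": "we partition large graphs across machines", "label": "computing"},
]
labels = [r["label"] for r in records]

for source in ["title", "keywords", "fulltext"]:
    X = TfidfVectorizer().fit_transform([r[source] for r in records])
    pred = LinearSVC().fit(X, labels).predict(X)
    print(source, round(f1_score(labels, pred, average="macro"), 2))
```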
