51

Text Classification of Legitimate and Rogue online Privacy Policies : Manual Analysis and a Machine Learning Experimental Approach

Rekanar, Kaavya January 2016 (has links)
No description available.
52

Classificação de textos com redes complexas / Using complex networks to classify texts

Diego Raphael Amancio 29 October 2013 (has links)
The automatic classification of texts into pre-established categories is drawing increasing interest owing to the need to organize the ever-growing number of electronic documents. The prevailing approach to classification is based on the analysis of textual content. In this thesis, we investigate the applicability of attributes based on textual style using the complex network (CN) representation, in which nodes represent words and edges are adjacency relations. We studied the suitability of CN measurements for natural language processing tasks, with classification assisted by supervised and unsupervised machine learning methods. A detailed study of topological measurements in texts revealed that several of them are informative, in the sense that they are able to distinguish meaningful from shuffled texts. Moreover, most measurements depend on syntactic factors, while intermittency measurements are more sensitive to semantic factors. As for the use of the CN model in practical scenarios, there is a significant correlation between authors' style and network topology.
We achieved an accuracy rate of 65% in discriminating eight authors of novels with the use of network and intermittency measurements. During the stylistic analysis, we also found that books belonging to the same literary movement could be identified from their similar topological features. The network model also proved useful for disambiguating word senses: employing only topological information to characterize nodes representing polysemous words, we found a strong relationship between syntax and semantics. For several words, the CN approach performed surprisingly better than the method based on recurrence patterns of neighboring words. The studies carried out in this thesis confirm that stylistic and semantic aspects play a crucial role in the structural organization of word adjacency networks. The word adjacency model investigated here might be useful not only to provide insight into the underlying mechanisms of the language, but also to enhance the performance of real applications implementing both CN and traditional approaches.
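A minimal sketch of the word-adjacency modeling described above, assuming networkx is available; the particular measurements and the tiny example text are illustrative choices, not the exact configuration from the thesis:

```python
import re
import networkx as nx

def adjacency_network(text):
    """Undirected network whose nodes are words and whose edges link adjacent words."""
    tokens = re.findall(r"[a-z']+", text.lower())
    g = nx.Graph()
    g.add_edges_from(zip(tokens, tokens[1:]))
    return g

def topological_features(g):
    """A few measurements of the kind used for stylometry: mean degree,
    clustering, geodesic length and maximum betweenness."""
    if not nx.is_connected(g):                      # keep the giant component
        g = g.subgraph(max(nx.connected_components(g), key=len))
    degrees = [d for _, d in g.degree()]
    return [
        sum(degrees) / len(degrees),
        nx.average_clustering(g),
        nx.average_shortest_path_length(g),
        max(nx.betweenness_centrality(g).values()),
    ]

print(topological_features(adjacency_network(
    "the cat sat on the mat and the dog sat on the cat")))
```

In an authorship experiment such as the 8-author study, each book would yield one such feature vector and a standard supervised classifier would be trained on the labeled vectors.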
53

Automating Text Categorization with Machine Learning : Error Responsibility in a multi-layer hierarchy

Helén, Ludvig January 2017 (has links)
The company Ericsson is taking steps towards embracing automation techniques and applying them to its product development cycle. Ericsson wants to apply machine learning to automate a text categorization problem: the routing of error reports, or trouble reports (TRs), of which more than 100,000 are handled annually. This thesis presents two possible solutions to the routing problem. The first uses traditional classifiers (Multinomial Naive Bayes and Support Vector Machines) to decide the route through the company hierarchy to which a specific TR belongs. The second uses a Convolutional Neural Network that maps the TRs to low-dimensional word vectors, or word embeddings, in order to classify which group within the company should be responsible for handling the TR. The traditional classifiers achieve up to 83% accuracy and the Convolutional Neural Network achieves up to 71% accuracy in the task of predicting the correct class for a specific TR.
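As an illustration of the first, classifier-based route, here is a hedged sketch with scikit-learn; the report texts, group names and the TF-IDF bag-of-words features are placeholders rather than Ericsson's actual setup:

```python
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC

# Hypothetical trouble reports and the groups responsible for them.
reports = ["radio unit restarts after software upgrade",
           "billing export job fails with a null pointer"]
groups = ["radio_software", "charging"]

for clf in (MultinomialNB(), LinearSVC()):
    model = Pipeline([("tfidf", TfidfVectorizer(ngram_range=(1, 2))),
                      ("clf", clf)])
    model.fit(reports, groups)
    print(type(clf).__name__, model.predict(["upgrade causes the unit to restart"]))
```

In a multi-layer hierarchy, one such classifier could be trained per level, which would also make it possible to locate the level at which a routing error was introduced.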
54

Translationese and Swedish-English Statistical Machine Translation

Joelsson, Jakob January 2016 (has links)
This thesis investigates how well machine-learned classifiers can identify translated text, and the effect translationese may have on Statistical Machine Translation (SMT) -- all in a Swedish-to-English, and reverse, context. Translationese is a term used to describe the dialect of a target language that is produced when a source text is translated. The systems trained for this thesis are SVM-based classifiers for identifying translationese, as well as translation and language models for Statistical Machine Translation. The classifiers successfully identified translationese in relation to non-translated text and, to some extent, also what source language the texts were translated from. In the SMT experiments, varying the translation model affected the results the most in the BLEU evaluation. Systems configured with non-translated source text and translationese target text performed better than their reversed counterparts. The language model experiments showed that models trained on known translationese and on classified translationese performed better than those trained on known non-translated text, though classified translationese did not perform as well as known translationese. Ultimately, the thesis shows that translationese can be identified by machine-learned classifiers and may affect the results of SMT systems.
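A small sketch of the corpus-level BLEU scoring referred to above, using NLTK; the reference and hypothesis sentences are invented, not drawn from the thesis data:

```python
from nltk.translate.bleu_score import corpus_bleu

# One list of reference translations per hypothesis sentence (tokenized).
references = [[["the", "committee", "approved", "the", "new", "budget", "on", "monday"]]]
hypotheses = [["the", "committee", "approved", "the", "new", "budget", "yesterday"]]
print("BLEU:", corpus_bleu(references, hypotheses))
```

Pairs of SMT systems (for example, one trained on translationese target text and one on original target text) would be compared by scoring their outputs against the same references.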
55

Classifying receipts or invoices from images based on text extraction

Kaci, Iuliia January 2016 (has links)
Nowadays, most documents are stored in electronic form and there is a high demand to organize and categorize them efficiently. The field of automated text classification has therefore gained significant attention from both science and industry, with applications in information retrieval, information filtering, news classification, etc. The goal of this project is the automated classification of photos as invoices or receipts in Visma Mobile Scanner, based on previously extracted text. Firstly, several OCR tools available on the market were evaluated in order to find the most accurate one for text extraction, which turned out to be ABBYY FineReader. The machine learning tool WEKA was used for the text classification, with a focus on the Naïve Bayes classifier. Since the Naïve Bayes implementation provided by WEKA does not support some advances in the text classification field, such as N-grams and Laplace smoothing, an improved version of the Naïve Bayes classifier, specialized for text classification and for the invoice/receipt problem, was implemented. Improving the Naïve Bayes classifier, investigating how it can be adapted to the problem domain, and evaluating the obtained classification accuracy against the generic Naïve Bayes are the main parts of this research. Experimental results show that the specialized Naïve Bayes classifier has the highest accuracy. By applying the Fixed penalty feature, a best result of 95.6522% accuracy in cross-validation mode was achieved. With more accurate text extraction, the accuracy is even higher.
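The Laplace smoothing mentioned above can be sketched directly. This is a hedged, from-scratch illustration with uniform class priors and invented token counts, not the classifier implemented in the thesis:

```python
from collections import Counter
import math

def train(docs_by_class):
    """docs_by_class: {label: list of token lists} -> per-class smoothed log-likelihoods."""
    vocab = {w for docs in docs_by_class.values() for doc in docs for w in doc}
    model = {}
    for label, docs in docs_by_class.items():
        counts = Counter(w for doc in docs for w in doc)
        total = sum(counts.values())
        # Laplace (add-one) smoothing: words never seen in a class still get a
        # small, nonzero probability instead of zeroing out the whole product.
        model[label] = {w: math.log((counts[w] + 1) / (total + len(vocab)))
                        for w in vocab}
    return model

def predict(model, tokens):
    # Uniform class priors assumed; words outside the vocabulary are ignored.
    return max(model, key=lambda c: sum(model[c].get(w, 0.0) for w in tokens))

data = {"receipt": [["total", "cash", "change"], ["vat", "total", "card"]],
        "invoice": [["invoice", "due", "date"], ["payment", "terms", "invoice"]]}
print(predict(train(data), ["total", "card", "change"]))
```

Word N-grams, the other extension mentioned, can be added by treating each pair of adjacent tokens as an extra "word" before counting.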
56

A Multi-label Text Classification Framework: Using Supervised and Unsupervised Feature Selection Strategy

Ma, Long 08 August 2017 (has links)
Text classification, the task of assigning metadata to documents, requires significant time and effort when performed by humans. Moreover, with online-generated content growing explosively, manual annotation of large-scale, unstructured data becomes a challenge. Currently, many state-of-the-art text mining methods have been applied to the classification process, many of them based on keyword extraction. However, when these keywords are used as features in a classification task, the feature dimension is often huge. In addition, how to select keywords from large numbers of documents as features is itself a challenge, and when traditional machine learning algorithms are applied to large data sets the computational cost is high. Furthermore, almost 80% of real data is unstructured and unlabeled, so advanced supervised feature selection methods cannot be used directly to select entities from such massive data. Usually, statistical strategies are used to discover key features when extracting features from unlabeled data for classification tasks. We instead propose a novel method to extract important features effectively before feeding them into the classification assignment. Another challenge in text classification is the multi-label problem, the assignment of multiple non-exclusive labels to documents, which makes the task more complicated than single-label classification. Considering the above issues, we develop a framework for extracting features and reducing data dimensionality, and for solving the multi-label problem on labeled and unlabeled data sets. To reduce data dimension, we provide: 1) a hybrid feature selection method that extracts meaningful features according to the importance of each feature; 2) a Word2Vec-based representation that gives each document a lower feature dimension for document categorization on big data sets; and 3) an unsupervised approach to extract features from real online-generated data for text classification and prediction. To solve the multi-label classification task, we design a new Multi-Instance Multi-Label (MIML) algorithm within the proposed framework.
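A hedged sketch of the second dimensionality-reduction idea: each document is represented by the average of its Word2Vec word vectors, and one binary classifier is trained per label. It assumes gensim 4 or later (the vector_size keyword) and scikit-learn, and the documents and labels are invented:

```python
import numpy as np
from gensim.models import Word2Vec
from sklearn.multiclass import OneVsRestClassifier
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.linear_model import LogisticRegression

docs = [["stock", "market", "rally"], ["election", "poll", "results"],
        ["market", "reacts", "to", "election"]]
labels = [{"finance"}, {"politics"}, {"finance", "politics"}]

# Low-dimensional document vectors: the mean of the word embeddings.
w2v = Word2Vec(docs, vector_size=50, min_count=1, epochs=50)
X = np.array([np.mean([w2v.wv[w] for w in d], axis=0) for d in docs])

# Multi-label classification via one binary model per label.
mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(labels)
clf = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(X, Y)
print(mlb.inverse_transform(clf.predict(X[:1])))
```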
57

Stance Detection and Analysis in Social Media

Sobhani, Parinaz January 2017 (has links)
Computational approaches to opinion mining have mostly focused on polarity detection of product reviews, classifying a given text as positive, negative or neutral. There has been less effort in the direction of socio-political opinion mining, which determines favorability towards given targets of interest, particularly for social media data such as news comments and tweets. In this research, we explore the task of automatically determining from a text whether its author is in favor of, against, or neutral towards a proposition or target. The target may be a person, an organization, a government policy, a movement, a product, etc. Moreover, we are interested in detecting the reasons behind authors' positions. This thesis is organized into three main parts: the first part on Twitter stance detection and the interaction of stance and sentiment labels, the second part on detecting stance and the reasons behind it in online news comments, and the third part on multi-target stance classification. One may express favor (or disfavor) towards a target by using positive or negative language. Here, for the first time, we present a dataset of tweets annotated for whether the tweeter is in favor of or against pre-chosen targets, as well as for sentiment. These targets may or may not be referred to in the tweets, and they may or may not be the target of opinion in the tweets. We develop a simple stance detection system that outperforms all 19 teams that participated in a recent shared task competition on the same dataset (SemEval-2016 Task 6). Additionally, access to both stance and sentiment annotations allows us to conduct several experiments to tease out their interactions. Next, we propose a novel framework for the joint learning of stance and the reasons behind it. This framework relies on topic modeling. Unlike other machine learning approaches to argument tagging, which often require a large set of labeled data, our approach is minimally supervised. The extracted arguments are subsequently employed for stance classification. Furthermore, we create and make available the first dataset of online news comments manually annotated for stance and arguments. Experiments on this dataset demonstrate the benefits of using topic modeling, particularly Non-Negative Matrix Factorization, for argument detection. Previous models for stance classification often treat each target independently, ignoring the potential (sometimes very strong) dependency that may exist among targets. However, in many applications there are natural dependencies among targets. In this research, we relax such independence assumptions in order to jointly model the stance expressed towards multiple targets. We present a new dataset that we built for this task and make it publicly available. Finally, we show that an attention-based encoder-decoder framework is very effective for this problem, outperforming several alternatives that jointly learn dependent subjectivity through cascading classification or multi-task learning.
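A brief sketch of the NMF-based argument discovery idea with scikit-learn; the comments, number of topics and preprocessing are hypothetical stand-ins for the thesis setup, and a recent scikit-learn (get_feature_names_out) is assumed:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF

comments = [
    "higher taxes will hurt small businesses and jobs",
    "we need the revenue to fund public schools",
    "small businesses cannot absorb another tax increase",
    "schools are underfunded and teachers underpaid",
]
tfidf = TfidfVectorizer(stop_words="english")
X = tfidf.fit_transform(comments)

nmf = NMF(n_components=2, init="nndsvd", random_state=0)
W = nmf.fit_transform(X)                 # comment-to-topic weights
terms = tfidf.get_feature_names_out()
for k, row in enumerate(nmf.components_):
    top = [terms[i] for i in row.argsort()[-4:][::-1]]
    print(f"argument topic {k}:", top)   # recurring "argument" vocabulary
```

The per-comment topic weights in W could then serve as features for the downstream stance classifier.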
58

Effect of ontology hierarchy on a concept vector machine's ability to classify web documents

Graham, Jeffrey A. 01 January 2009 (has links)
As the quantity of text documents created on the web has grown, the ability of experts to classify them manually has decreased. Because people need to find and organize this information, interest has grown in developing automatic means of categorizing these documents. In this effort, ontologies have been developed that capture domain-specific knowledge in the form of a hierarchy of concepts. Support Vector Machines are machine learning methods that are widely used for automated document categorization. Recent studies suggest that the classification accuracy of a Support Vector Machine may be improved by using concepts defined by a domain ontology instead of the words that appear in the document. However, such studies have not taken into account the hierarchy inherent in the relationships between concepts. The goal of this dissertation was to investigate whether the hierarchical relationships among concepts in ontologies can be exploited to improve the classification accuracy of web documents by a Support Vector Machine. Concept vectors that capture the hierarchy of domain ontologies were created and used to train a Support Vector Machine. Tests conducted using the benchmark Reuters-21578 data set indicate that Support Vector Machines achieve higher classification accuracy when they make use of the hierarchical relationships among concepts in ontologies.
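One way to make a concept vector hierarchy-aware, sketched under assumptions (a toy ontology, a made-up decay factor, hand-matched concepts) rather than as the dissertation's actual construction:

```python
from sklearn.svm import LinearSVC

# Toy ontology: child concept -> parent concept (None = root).
parent = {"sports": None, "ball_games": "sports", "football": "ball_games",
          "tennis": "ball_games", "finance": None, "stocks": "finance"}
concepts = list(parent)

def concept_vector(matched, decay=0.5):
    """matched: concepts detected in a document; ancestors receive decayed weight."""
    vec = dict.fromkeys(concepts, 0.0)
    for c in matched:
        weight, node = 1.0, c
        while node is not None:
            vec[node] += weight
            weight *= decay
            node = parent[node]
    return [vec[c] for c in concepts]

X = [concept_vector(["football"]), concept_vector(["stocks"])]
y = ["sport_news", "finance_news"]
# "tennis" never appears in training, but its ancestors (ball_games, sports)
# carry weight, so the SVM can still place it near the sports class.
print(LinearSVC().fit(X, y).predict([concept_vector(["tennis"])]))
```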
59

Classificação automática de textos por meio de aprendizado de máquina baseado em redes / Text automatic classification through machine learning based on networks

Rafael Geraldeli Rossi 26 October 2015 (has links)
A massive amount of textual data, such as e-mails, reports, articles and posts in social networks or blogs, is generated and stored on a daily basis. The manual processing, organization and management of this huge amount of text require considerable human effort, and sometimes these tasks are impossible to carry out in practice. Besides, the manual extraction of knowledge embedded in textual data is also unfeasible due to the large amount of text. Thus, computational techniques which require little human intervention and allow the organization, management and extraction of knowledge from large amounts of text have gained attention in recent years and have been applied in academia, companies and organizations. The tasks mentioned above can be carried out through automatic text classification, in which labels (identifiers of predefined categories) are assigned to texts or portions of texts. A viable way to perform automatic text classification is through machine learning algorithms, which are able to learn, generalize or extract patterns from the classes of text collections based on the content and labels of the texts.
There are three types of machine learning algorithms for automatic classification: (i) inductive supervised, in which only labeled documents are considered to induce a classification model and this model is used to classify new documents; (ii) transductive semi-supervised, in which all known unlabeled documents are classified based on some labeled documents; and (iii) inductive semi-supervised, in which labeled and unlabeled documents are considered to induce a classification model in order to classify new documents. Regardless of the learning algorithm type, the texts of a collection must be represented in a structured format to be interpreted by the algorithms. Usually, the texts are represented in a vector space model, in which each text is represented by a vector and each dimension of the vector corresponds to a term or feature of the text collection. Algorithms based on the vector space model assume that texts, terms or features are independent, and this assumption can degrade classification performance. Networks can be used as an alternative to vector space model representations. Networks allow the representation of relations among the entities of a text collection, such as documents and terms. This type of representation allows the extraction of patterns that are not captured by algorithms based on the vector space model. Moreover, text collections can be represented by networks composed of different types of entities and relations, which provide the extraction of different patterns from the texts. However, some challenges must be solved in order to combine machine learning algorithms and network-based representations for efficient automatic text classification. The main challenges addressed in this doctoral project are (i) the development of network-based representations that can be generated efficiently and that also allow efficient learning; (ii) the development of networks which represent different types of entities and relations; (iii) the development of networks which can represent texts written in different languages and about different domains; and (iv) the development of efficient learning algorithms which make better use of the network-based representations and increase classification performance. In this doctoral project we proposed and developed methods to represent text collections as networks considering different types of entities and relations, allowing the representation of texts written in any language or from any domain. We also proposed and developed supervised inductive, semi-supervised transductive and semi-supervised inductive learning algorithms to interpret and learn from the proposed network-based representations, since there were no algorithms to handle certain types of relations considered in this thesis; the proposed algorithms also attempt to obtain higher classification performance and faster classification than the existing network-based algorithms. In this doctoral thesis we present (i) an extensive empirical evaluation demonstrating the benefits of using network-based representations for text classification over the vector space model, (ii) the impact of combining different types of relations in a single network, and (iii) that the proposed network-based algorithms are able to surpass the classification performance of traditional and state-of-the-art algorithms considering both supervised and semi-supervised learning.
The solutions proposed in this doctoral project have proved to be useful and advisable for many applications involving the classification of texts from different domains, with different characteristics, or with different numbers of labeled documents.
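A hedged sketch of the transductive, network-based idea: documents and terms form a bipartite network and the labels of a few documents spread to the unlabeled ones through shared terms. The documents, labels and iteration count are invented, and this simple propagation stands in for, rather than reproduces, the algorithms proposed in the thesis:

```python
import numpy as np

docs = ["dribble goal match", "keeper saves goal", "bank raises interest",
        "interest rates and the bank"]
labels = {0: "sports", 2: "finance"}          # documents 1 and 3 are unlabeled
classes = ["sports", "finance"]

terms = sorted({w for d in docs for w in d.split()})
A = np.array([[1.0 if t in d.split() else 0.0 for t in terms] for d in docs])

# Class scores on documents; labeled rows are clamped after each pass.
F = np.zeros((len(docs), len(classes)))
for i, c in labels.items():
    F[i, classes.index(c)] = 1.0
for _ in range(20):
    G = A.T @ F                                # propagate documents -> terms
    F = A @ G                                  # propagate terms -> documents
    F /= F.sum(axis=1, keepdims=True) + 1e-12  # normalize rows
    for i, c in labels.items():                # clamp the known labels
        F[i] = 0.0
        F[i, classes.index(c)] = 1.0
print({i: classes[j] for i, j in enumerate(F.argmax(axis=1))})
```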
60

Aspectos semânticos na representação de textos para classificação automática / Semantic aspects in the representation of texts for automatic classification

Roberta Akemi Sinoara 24 May 2018 (has links)
Text Mining applications are numerous and varied, since a huge amount of textual data is created daily. The quality of the final solution of a Text Mining process depends, among other factors, on the adopted text representation model. Despite the fact that syntactic and semantic relations influence natural language meaning, traditional text representation models are limited to words. The use of such models does not allow the differentiation of documents that use the same vocabulary but present different ideas about the same subject. The motivation of this work relies on the diversity of text classification applications, the potential of vector space model representations and the challenge of dealing with text semantics. With the general purpose of advancing the field of semantic representation of documents, we first conducted a systematic mapping study of semantics-concerned Text Mining studies and categorized classification problems according to their semantic complexity.
Then, we approached semantic aspects of texts through the proposal, analysis, and evaluation of seven text representation models: (i) gBoED, which incorporates text semantics by the use of domain expressions; (ii) Uni-based, which takes advantage of word sense disambiguation and hypernym relations; (iii) SR-based Terms and SR-based Sentences, which make use of semantic role labels; and (iv) NASARIdocs, Babel2Vec and NASARI+Babel2Vec, which take advantage of word sense disambiguation and embeddings of words and senses. We analyzed the expressiveness and interpretability of the proposed text representation models and evaluated their classification performance against different models from the literature. While the proposed models gBoED, Uni-based, SR-based Terms and SR-based Sentences have improved expressiveness, the proposals NASARIdocs, Babel2Vec and NASARI+Babel2Vec are latently enriched by the embeddings' semantics, obtained from a large training corpus. This property has a positive impact on text classification performance.
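A small sketch of the limitation and the remedy discussed above: two documents with identical vocabulary are indistinguishable under bag-of-words, while appending semantic-role style features (in the spirit of the SR-based models) separates them. The role annotations here are written by hand; a real pipeline would obtain them from a semantic role labeler:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity

a, b = "man bites dog", "dog bites man"
bow = CountVectorizer().fit_transform([a, b])
print("bag-of-words similarity:", cosine_similarity(bow)[0, 1])   # 1.0 — identical vectors

roles_a = "bites_AGENT_man bites_PATIENT_dog"                     # hand-written role features
roles_b = "bites_AGENT_dog bites_PATIENT_man"
sr = CountVectorizer().fit_transform([a + " " + roles_a, b + " " + roles_b])
print("with role features:     ", cosine_similarity(sr)[0, 1])    # < 1.0 — documents now differ
```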
