51

A comparative study of text classification models on invoices : The feasibility of different machine learning algorithms and their accuracy

Ekström, Linus, Augustsson, Andreas January 2018 (has links)
Text classification is becoming more important for companies as an increasing amount of digital data is made available. The aim is to research whether five different machine learning algorithms can be used to automate the classification of invoice data, and to see which achieves the highest accuracy. In a later stage, algorithms are combined in an attempt to achieve higher results. N-grams are used, and results are compared in the form of total classification accuracy for each algorithm. A Python library, scikit-learn, implementing the chosen algorithms, was used. Data is collected and generated to represent the fields extracted from a real invoice. Results from this thesis show that it is possible to use machine learning for this type of problem. The highest-scoring algorithm (LinearSVC from scikit-learn) classifies 86% of all samples correctly, a margin of 16% above the acceptable level of 70%.
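The setup the abstract describes (word n-grams fed to scikit-learn's LinearSVC) can be sketched roughly as follows. This is an illustrative reconstruction, not the thesis's code: the field labels and sample strings are hypothetical stand-ins for extracted invoice data.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Hypothetical stand-ins for text fields extracted from invoices.
train_texts = [
    "invoice number 10442", "invoice no 99871",
    "due date 2018-05-01", "payment due 2018-06-15",
    "total amount 1200.50 SEK", "amount due 431.00 SEK",
]
train_labels = [
    "invoice_id", "invoice_id",
    "due_date", "due_date",
    "amount", "amount",
]

# Word n-grams (unigrams + bigrams) feed a linear SVM, mirroring the
# n-gram + LinearSVC combination named in the abstract.
model = make_pipeline(
    CountVectorizer(ngram_range=(1, 2)),
    LinearSVC(),
)
model.fit(train_texts, train_labels)

print(model.predict(["invoice no 55210", "total amount 99.00 SEK"]))
```

Real accuracy depends entirely on the data; the 86% figure above comes from the thesis's own invoice dataset, not a toy setup like this.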
52

Análise de sentimentos baseada em aspectos e atribuições de polaridade / Aspect-based sentiment analysis and polarity assignment

Kauer, Anderson Uilian January 2016 (has links)
With the growing expansion of the Web, more and more users share their opinions about their experiences. These opinions are, in most cases, expressed as unstructured text. Sentiment Analysis (or Opinion Mining) is the research area dedicated to the computational study of the opinions and feelings expressed in texts, typically classifying them according to their polarity (i.e., as positive or negative). As online sales and social networking sites become major sources of opinions, there is a growing need for tools that automatically classify opinions and identify which aspect of the evaluated entity they refer to. In this work, we propose methods aimed at two key points for the treatment of such opinions: (i) aspect-based sentiment analysis and (ii) polarity assignment. For aspect-based sentiment analysis, we developed a method that identifies expressions mentioning aspects and entities in text, using natural language processing tools combined with machine learning algorithms. For polarity assignment, we developed a method that uses 24 attributes extracted from the ranking generated by a search engine to build machine learning models. Furthermore, the method does not rely on linguistic resources and can be applied to noisy data. Experiments on real datasets show that, in both contributions, our results are close to the baselines even with a small number of attributes. Moreover, for polarity assignment, the results are comparable to those of state-of-the-art methods that use more complex techniques.
53

Text Classification of Legitimate and Rogue online Privacy Policies : Manual Analysis and a Machine Learning Experimental Approach

Rekanar, Kaavya January 2016 (has links)
No description available.
54

Classificação de textos com redes complexas / Using complex networks to classify texts

Diego Raphael Amancio 29 October 2013 (has links)
The automatic classification of texts into pre-established categories has drawn increasing interest owing to the need to organize the ever-growing number of electronic documents. The prevailing approach to classification is based on the analysis of textual content. In this thesis, we investigate the applicability of style-based attributes in traditional classification tasks, modeling texts as complex networks (CN) in which nodes represent words and edges represent adjacency relations. We studied the suitability of CN measurements for natural language processing tasks, with classification assisted by supervised and unsupervised machine learning methods. A detailed study of topological measurements revealed that several of them are informative, in the sense that they can distinguish texts written in natural language from texts with randomly shuffled words. Moreover, most network measurements depend on syntactic factors, while intermittency measurements are more sensitive to semantics. As for the use of the CN model in practical scenarios, we show a significant correlation between authors' style and network topology. For the task of recognizing the authorship of 40 novels written by 8 authors, an accuracy of 65% was obtained using network and word-intermittency measurements. Still within the stylistic analysis, we found that books belonging to the same literary movement tend to have similar topological structures. Modeling texts as networks also proved useful for discriminating the senses of ambiguous words using only the topological information of nodes, revealing a non-trivial relationship between syntax and semantics. For several words, discrimination with complex networks performed even better than the strategy based on contextual recurrence patterns of polysemous words. The studies carried out in this thesis confirm that stylistic and semantic aspects influence the structural organization of concepts in texts modeled as networks. Thus, the word adjacency network model may be useful not only for understanding fundamental mechanisms of language, but also for enhancing real applications when combined with traditional text processing methods.
55

Automating Text Categorization with Machine Learning : Error Responsibility in a multi-layer hierarchy

Helén, Ludvig January 2017 (has links)
The company Ericsson is taking steps towards embracing automation techniques and applying them to its product development cycle. Ericsson wants to apply machine learning to automate the evaluation of a text categorization problem involving error reports, or trouble reports (TRs). In excess of 100,000 TRs are handled annually. This thesis presents two possible solutions to the routing problem. One uses traditional classifiers (Multinomial Naive Bayes and Support Vector Machines) to decide the route through the company hierarchy to which a specific TR belongs. The other utilizes a Convolutional Neural Network that translates the TRs into low-dimensional word vectors, or word embeddings, in order to classify which group within the company should be responsible for handling the TR. The traditional classifiers achieve up to 83% accuracy and the Convolutional Neural Network achieves up to 71% accuracy in predicting the correct class for a specific TR.
56

Translationese and Swedish-English Statistical Machine Translation

Joelsson, Jakob January 2016 (has links)
This thesis investigates how well machine-learned classifiers can identify translated text, and the effect translationese may have on Statistical Machine Translation -- all in a Swedish-to-English, and reverse, context. Translationese is a term used to describe the dialect of a target language that is produced when a source text is translated. The systems trained for this thesis are SVM-based classifiers for identifying translationese, as well as translation and language models for Statistical Machine Translation. The classifiers successfully identified translationese in relation to non-translated text and, to some extent, also which source language the texts were translated from. In the SMT experiments, variation of the translation model was what affected the results the most in the BLEU evaluation. Systems configured with non-translated source text and translationese target text performed better than their reversed counterparts. The language model experiments showed that models trained on known translationese and on classified translationese performed better than those trained on known non-translated text, though classified translationese did not perform as well as known translationese. Ultimately, the thesis shows that translationese can be identified by machine-learned classifiers and may affect the results of SMT systems.
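An SVM-based translationese classifier of the kind described can be sketched as below. The four sentences and their labels are invented for illustration, and character n-grams are one common choice of surface feature for this task; real experiments would train on a parallel corpus rather than sentences made up here.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Hypothetical toy examples: two "original" and two "translationese"-like
# sentences, invented purely to show the pipeline shape.
texts = [
    "he said that he would come to the meeting",
    "it was decided that the proposal be adopted",
    "one can consider that the question is open",
    "it is the case that the decision was taken",
]
labels = ["original", "original", "translated", "translated"]

# Character n-grams capture surface regularities without any
# language-specific resources.
clf = make_pipeline(
    TfidfVectorizer(analyzer="char", ngram_range=(2, 4)),
    LinearSVC(),
)
clf.fit(texts, labels)
print(clf.score(texts, labels))  # accuracy on the training sentences
```

On data this small the classifier simply memorizes the training set; the point is only the feature/model shape, not the score.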
57

Classifying receipts or invoices from images based on text extraction

Kaci, Iuliia January 2016 (has links)
Nowadays, most documents are stored in electronic form and there is high demand to organize and categorize them efficiently. Therefore, the field of automated text classification has gained significant attention from both science and industry. This technology has been applied to information retrieval, information filtering, news classification, etc. The goal of this project is the automated classification of photos as invoices or receipts in Visma Mobile Scanner, based on previously extracted text. Firstly, several OCR tools available on the market were evaluated in order to find the most accurate one for text extraction, which turned out to be ABBYY FineReader. The machine learning tool WEKA was used for the text classification, with a focus on the Naïve Bayes classifier. Since the Naïve Bayes implementation provided by WEKA does not support some advances in the text classification field, such as N-grams and Laplace smoothing, an improved version of the Naïve Bayes classifier, specialized for text classification and the invoice/receipt task, was implemented. Improving the Naïve Bayes classifier for the problem domain and evaluating the resulting classification accuracy against the generic Naïve Bayes are the main parts of this research. Experimental results show that the specialized Naïve Bayes classifier has the highest accuracy. By applying the Fixed penalty feature, the best result of 95.6522% accuracy in cross-validation mode was achieved. With more accurate text extraction, the accuracy is even higher.
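The Laplace-smoothing improvement described above can be illustrated with a minimal from-scratch multinomial Naïve Bayes. This is a sketch, not the thesis's implementation, and the invoice/receipt token strings are hypothetical:

```python
import math
from collections import Counter, defaultdict

def train_nb(docs, labels, alpha=1.0):
    """Multinomial Naive Bayes training with add-alpha (Laplace) smoothing."""
    vocab = set()
    word_counts = defaultdict(Counter)   # class -> token counts
    class_counts = Counter(labels)
    for doc, y in zip(docs, labels):
        toks = doc.lower().split()
        vocab.update(toks)
        word_counts[y].update(toks)
    return vocab, word_counts, class_counts, alpha

def predict_nb(model, doc):
    vocab, word_counts, class_counts, alpha = model
    n_docs = sum(class_counts.values())
    best, best_lp = None, float("-inf")
    for y in class_counts:
        lp = math.log(class_counts[y] / n_docs)          # log prior
        total = sum(word_counts[y].values())
        for tok in doc.lower().split():
            # Laplace smoothing keeps unseen tokens from zeroing out
            # the whole class probability.
            lp += math.log((word_counts[y][tok] + alpha) /
                           (total + alpha * len(vocab)))
        if lp > best_lp:
            best, best_lp = y, lp
    return best

# Hypothetical extracted text from two document types.
docs = ["total vat amount due", "receipt cash register total",
        "invoice payment terms net", "receipt change tendered"]
labels = ["invoice", "receipt", "invoice", "receipt"]
model = train_nb(docs, labels)
print(predict_nb(model, "invoice payment due"))
```

Without smoothing (alpha=0), the test document above would hit a zero count for "invoice" under the receipt class and the log would be undefined; that failure mode is exactly what Laplace smoothing fixes.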
58

A Multi-label Text Classification Framework: Using Supervised and Unsupervised Feature Selection Strategy

Ma, Long 08 August 2017 (has links)
Text classification, the task of assigning metadata to documents, requires significant time and effort when performed by humans. Moreover, with online-generated content growing explosively, manually annotating large-scale, unstructured data becomes a challenge. Many state-of-the-art text mining methods have been applied to the classification process, several of them based on keyword extraction. However, when these keywords are used as features in a classification task, the feature dimension is commonly huge, and selecting keywords from vast numbers of documents is itself a challenge; applying traditional machine learning algorithms to such large datasets also incurs a high computational cost. In addition, almost 80% of real-world data is unstructured and unlabeled, so advanced supervised feature selection methods cannot be used directly to select entities from massive amounts of data. Usually, statistical strategies are utilized to discover key features when extracting features from unlabeled data. Here, we propose a novel method to extract important features effectively before feeding them into the classification task. Another challenge in text classification is the multi-label problem: the assignment of multiple non-exclusive labels to a document, which makes classification more complicated than the single-label case. Considering the above issues, we develop a framework for extracting features and reducing data dimensionality, and for solving the multi-label problem on both labeled and unlabeled datasets. To reduce data dimension, we provide: 1) a hybrid feature selection method that extracts meaningful features according to the importance of each feature; 2) a Word2Vec-based representation of each document with a lower feature dimension for document categorization on big datasets; and 3) an unsupervised approach to extract features from real online-generated data for text classification and prediction. To solve the multi-label classification task, we design a new Multi-Instance Multi-Label (MIML) algorithm within the proposed framework.
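The Word2Vec-based document representation mentioned above commonly amounts to averaging the word vectors of a document's tokens. A minimal sketch with a toy embedding table follows; in practice the vectors would come from a trained Word2Vec model (e.g. via gensim), and the two-dimensional vectors here are invented for illustration:

```python
import numpy as np

# Toy embedding table standing in for trained Word2Vec vectors
# (assumption: real vectors would be learned, and much higher-dimensional).
embeddings = {
    "cat": np.array([1.0, 0.0]),
    "dog": np.array([0.9, 0.1]),
    "stock": np.array([0.0, 1.0]),
    "market": np.array([0.1, 0.9]),
}

def doc_vector(doc, dim=2):
    """Represent a document as the mean of its known word vectors."""
    vecs = [embeddings[t] for t in doc.lower().split() if t in embeddings]
    if not vecs:                 # no known words: fall back to zero vector
        return np.zeros(dim)
    return np.mean(vecs, axis=0)

print(doc_vector("cat dog"))     # mean of the two animal vectors
print(doc_vector("stock market"))
```

The resulting fixed-length vectors replace a sparse bag-of-words representation whose dimension equals the vocabulary size, which is the dimensionality reduction the abstract refers to.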
59

Stance Detection and Analysis in Social Media

Sobhani, Parinaz January 2017 (has links)
Computational approaches to opinion mining have mostly focused on polarity detection of product reviews, classifying a given text as positive, negative, or neutral. In contrast, there has been less effort in the direction of socio-political opinion mining to determine favorability towards given targets of interest, particularly for social media data such as news comments and tweets. In this research, we explore the task of automatically determining from a text whether its author is in favor of, against, or neutral towards a proposition or target. The target may be a person, an organization, a government policy, a movement, a product, etc. Moreover, we are interested in detecting the reasons behind authors' positions. This thesis is organized into three main parts: the first on Twitter stance detection and the interaction of stance and sentiment labels, the second on detecting stance and the reasons behind it in online news comments, and the third on multi-target stance classification. One may express favor (or disfavor) towards a target by using positive or negative language. Here, for the first time, we present a dataset of tweets annotated for whether the tweeter is in favor of or against pre-chosen targets, as well as for sentiment. These targets may or may not be referred to in the tweets, and they may or may not be the target of opinion in the tweets. We develop a simple stance detection system that outperforms all 19 teams that participated in a recent shared task competition on the same dataset (SemEval-2016 Task #6). Additionally, access to both stance and sentiment annotations allows us to conduct several experiments to tease out their interactions. Next, we propose a novel framework for joint learning of stance and the reasons behind it. This framework relies on topic modeling. Unlike other machine learning approaches to argument tagging, which often require a large set of labeled data, our approach is minimally supervised. The extracted arguments are subsequently employed for stance classification. Furthermore, we create and make available the first dataset of online news comments manually annotated for stance and arguments. Experiments on this dataset demonstrate the benefits of using topic modeling, particularly Non-Negative Matrix Factorization, for argument detection. Previous models for stance classification often treat each target independently, ignoring the potential (sometimes very strong) dependency that could exist among targets. However, in many applications there are natural dependencies among targets. In this research, we relax such independence assumptions in order to jointly model the stance expressed towards multiple targets. We present a new dataset that we built for this task and make it publicly available. Finally, we show that an attention-based encoder-decoder framework is very effective for this problem, outperforming several alternatives that jointly learn dependent subjectivity through cascading classification or multi-task learning.
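The use of Non-Negative Matrix Factorization to group comments by latent argument can be sketched with scikit-learn. The comments below are invented, and treating the two latent components as "arguments" is an assumption of this toy setup, not the thesis's pipeline:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF

# Hypothetical news comments on two distinct argument themes.
comments = [
    "the tax increase hurts small business owners",
    "small business cannot afford this new tax",
    "public transit funding improves city traffic",
    "better transit means less traffic downtown",
]

tfidf = TfidfVectorizer(stop_words="english")
X = tfidf.fit_transform(comments)

# Factorize TF-IDF into 2 latent topics; W holds per-comment topic weights.
nmf = NMF(n_components=2, init="nndsvda", random_state=0)
W = nmf.fit_transform(X)

# Each comment's dominant topic serves as its (unsupervised) argument label.
dominant = W.argmax(axis=1)
print(dominant)
```

In a full system these topic assignments would feed a downstream stance classifier, as the abstract describes; choosing the number of components is itself a modeling decision.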
60

Effect of ontology hierarchy on a concept vector machine's ability to classify web documents

Graham, Jeffrey A. 01 January 2009 (has links)
As the quantity of text documents created on the web has grown, the ability of experts to classify them manually has decreased. Because people need to find and organize this information, interest has grown in developing automatic means of categorizing these documents. To this end, ontologies have been developed that capture domain-specific knowledge in the form of a hierarchy of concepts. Support Vector Machines are machine learning methods that are widely used for automated document categorization. Recent studies suggest that the classification accuracy of a Support Vector Machine may be improved by using concepts defined by a domain ontology instead of the words that appear in the document. However, such studies have not taken into account the hierarchy inherent in the relationships between concepts. The goal of this dissertation was to investigate whether the hierarchical relationships among concepts in ontologies can be exploited to improve the classification accuracy of web documents by a Support Vector Machine. Concept vectors that capture the hierarchy of domain ontologies were created and used to train a Support Vector Machine. Tests conducted using the benchmark Reuters-21578 dataset indicate that Support Vector Machines achieve higher classification accuracy when they make use of the hierarchical relationships among concepts in ontologies.
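The idea of concept vectors that exploit an ontology hierarchy can be illustrated by propagating concept counts up a parent map, so that evidence for a specific concept also counts toward its ancestors. The mini-ontology below is hypothetical, and this is only a sketch of the general idea, not the dissertation's method:

```python
# Hypothetical mini-ontology: child concept -> parent concept.
parents = {
    "terrier": "dog",
    "dog": "mammal",
    "sparrow": "bird",
    "bird": "animal",
    "mammal": "animal",
}

def concept_vector(concept_counts):
    """Propagate each concept's count up the hierarchy, so a match on
    'terrier' also contributes evidence for 'dog', 'mammal', and 'animal'."""
    vec = dict(concept_counts)
    for concept, count in concept_counts.items():
        node = concept
        while node in parents:          # walk up to the root
            node = parents[node]
            vec[node] = vec.get(node, 0) + count
    return vec

v = concept_vector({"terrier": 2, "sparrow": 1})
print(v)
```

A flat concept representation would keep only the directly matched concepts; the hierarchical version lets documents about terriers and sparrows still share mass on the common ancestor, which is the kind of signal the dissertation investigates.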
