61

Classificação automática de textos por meio de aprendizado de máquina baseado em redes / Text automatic classification through machine learning based on networks

Rafael Geraldeli Rossi 26 October 2015
A massive amount of textual data, such as e-mails, reports, articles and posts in social networks or blogs, is generated and stored every day. Manually processing, organizing and managing this huge amount of text requires considerable human effort and is sometimes impossible in practice. Likewise, manually extracting the knowledge embedded in textual data is unfeasible due to the sheer volume of texts. Computational techniques that require little human intervention and allow the organization, management and extraction of knowledge from large amounts of text have therefore gained attention in recent years and have been applied in academia, companies and organizations. These tasks can be carried out through automatic text classification, in which labels (identifiers of predefined categories) are assigned to texts or portions of texts. A viable way to perform automatic text classification is through machine learning algorithms, which are able to learn, generalize or extract class patterns from text collections based on the content and labels of the texts.
There are three types of machine learning for automatic classification: (i) supervised inductive, in which only labeled documents are used to induce a classification model that is then applied to new documents; (ii) semi-supervised transductive, in which the unlabeled documents of a collection are classified based on a set of labeled documents; and (iii) semi-supervised inductive, in which both labeled and unlabeled documents are used to induce a classification model that classifies new documents. Regardless of the learning type, the texts of a collection must be represented in a structured format that the algorithms can interpret. Texts are usually represented in a vector space model, in which each text is a vector and each dimension corresponds to a term or feature of the collection. Algorithms based on the vector space model assume that documents, terms and features are independent, and this assumption can degrade classification quality. Networks are an alternative to vector space representations: they model relations among the entities of a text collection, such as documents and terms, and allow the extraction of class patterns that algorithms based on the vector space model rarely capture, thereby improving classification performance. Moreover, text collections can be represented by networks composed of different types of entities and relations, which captures different characteristics of the collections. However, several challenges must be addressed before machine learning algorithms and network-based representations can be combined to perform automatic text classification effectively.
The main challenges addressed in this doctoral project are (i) network-based representations that can be generated efficiently and also support efficient learning; (ii) networks that consider different types of entities and relations; (iii) network representations of text collections in different languages and domains; and (iv) efficient machine learning algorithms that make better use of network-based representations to improve classification quality. In this project we proposed and developed methods to represent text collections as networks, independent of domain and language, considering different types of entities and the relations among them. We also proposed and developed supervised inductive, semi-supervised transductive and semi-supervised inductive learning algorithms, since no algorithms were found in the literature to handle certain types of relations, and existing algorithms fall short in classification performance and/or classification time. This thesis presents (i) an extensive empirical evaluation demonstrating the benefit of network-based representations for text classification compared with the vector space model, (ii) the impact of combining different types of relations in a single network, and (iii) evidence that the proposed network-based algorithms surpass the classification performance of traditional and state-of-the-art algorithms in both supervised and semi-supervised learning. The proposed solutions have proved useful and advisable for many applications involving the classification of texts from different domains, with different characteristics, or with different numbers of labeled documents.
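To make the network-based, transductive setting above concrete, here is a minimal Python sketch of label propagation over a bipartite document-term network: a few labeled documents spread their class scores to unlabeled ones through shared terms. It is an illustrative toy under invented documents and labels, not the algorithms proposed in the thesis.

```python
# Minimal sketch: transductive classification over a bipartite document-term
# network. Class scores are propagated from a few labeled documents to the
# unlabeled ones through the terms they share. Illustrative only.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer

docs = [
    "the team won the football match",   # sports, labeled
    "the bank raised interest rates",    # finance, labeled
    "the team lost the match",           # unlabeled
    "the bank cut rates",                # unlabeled
]
labels = np.array([0, 1, -1, -1])         # -1 marks unlabeled documents
n_classes = 2

# Document-term weight matrix: rows are documents, columns are terms.
W = CountVectorizer().fit_transform(docs).toarray().astype(float)

def row_normalize(m):
    s = m.sum(axis=1, keepdims=True)
    s[s == 0] = 1.0
    return m / s

P_dt = row_normalize(W)      # document -> term transition weights
P_td = row_normalize(W.T)    # term -> document transition weights

# Class-score matrix for documents; labeled documents start one-hot.
F_docs = np.zeros((len(docs), n_classes))
for i, y in enumerate(labels):
    if y >= 0:
        F_docs[i, y] = 1.0

for _ in range(20):
    F_terms = P_td @ F_docs   # terms inherit scores from the documents using them
    F_docs = P_dt @ F_terms   # documents inherit scores from their terms
    for i, y in enumerate(labels):   # clamp the labeled documents
        if y >= 0:
            F_docs[i] = 0.0
            F_docs[i, y] = 1.0

print(F_docs.argmax(axis=1))  # predicted class for every document
```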
62

Aspectos semânticos na representação de textos para classificação automática / Semantic aspects in the representation of texts for automatic classification

Roberta Akemi Sinoara 24 May 2018
Text Mining applications are numerous and varied, since a huge amount of textual data is created daily. The quality of the final solution of a Text Mining process depends, among other factors, on the adopted text representation model. Although syntactic and semantic relations influence the meaning of natural language, traditional text representation models are limited to words, so they cannot differentiate documents that use the same vocabulary but present different views on the same subject. The motivation for this work lies in the diversity of text classification applications, the potential of vector space model representations and the open challenge of dealing with the semantics inherent in natural language data. Its general purpose was to advance Text Mining research on the incorporation of semantic aspects into the representation of document collections. We first conducted a systematic mapping of the literature and categorized classification problems according to the semantic complexity involved.
We then addressed semantic aspects through the proposal, development and evaluation of seven text representation models: (i) gBoED, which incorporates semantics obtained from domain knowledge; (ii) Uni-based, which incorporates semantics through word sense disambiguation and concept hypernyms; (iii) SR-based Terms and SR-based Sentences, which incorporate semantics through semantic role annotations; and (iv) NASARIdocs, Babel2Vec and NASARI+Babel2Vec, which incorporate semantics through word sense disambiguation and embeddings of words and concepts. Representations generated with the proposed models and with models from the literature were analyzed and evaluated on automatic text classification, considering datasets of different levels of semantic complexity. The gBoED, Uni-based, SR-based Terms and SR-based Sentences proposals produce more expressive features and allow a better interpretation of the document representation, while NASARIdocs, Babel2Vec and NASARI+Babel2Vec latently incorporate semantics obtained from embeddings trained on a large external corpus, a property that has a positive impact on classification performance.
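As a rough illustration of the embedding-based representations mentioned above, the sketch below builds document vectors by averaging pre-trained word embeddings and feeds them to a linear classifier. It omits the sense disambiguation step that the thesis's NASARIdocs/Babel2Vec models rely on; the embedding name and example texts are assumptions.

```python
# Minimal sketch: representing documents by averaging pre-trained word
# embeddings, a simplified stand-in for the embedding-based models above.
import numpy as np
import gensim.downloader as api
from sklearn.linear_model import LogisticRegression

wv = api.load("glove-wiki-gigaword-50")  # small pre-trained embeddings

def doc_vector(text):
    # Average the vectors of the in-vocabulary tokens; zero vector if none.
    tokens = [t for t in text.lower().split() if t in wv]
    return np.mean([wv[t] for t in tokens], axis=0) if tokens else np.zeros(wv.vector_size)

train_texts = ["great food and service", "terrible waste of money",
               "loved every minute", "boring and overpriced"]
train_labels = [1, 0, 1, 0]

X = np.vstack([doc_vector(t) for t in train_texts])
clf = LogisticRegression().fit(X, train_labels)
print(clf.predict([doc_vector("really great experience")]))  # expected: [1]
```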
63

Busca guiada de patentes de Bioinformática / Guided Search of Bioinformatics Patents

Marcio Branquinho Dutra 17 October 2013
Patents are temporary public licenses granted by the State that guarantee inventors and assignees the right to economically exploit their inventions. Trademark and patent offices recommend performing wide searches in different databases, using classic patent search systems and specific tools, before filing a patent application. The goal of these searches is to ensure that the invention has not been published yet, either in its original field or in other fields. Research has shown that using classification information improves the efficiency of patent searches. The objective of the research reported here is to explore linguistic artifacts, Information Retrieval techniques and Automatic Classification techniques to guide searches for Bioinformatics patents. The result of this investigation is the Bioinformatics Patent Search System (BPS), which uses an automatic classifier to guide searches for Bioinformatics patents. The utility of BPS is illustrated by a comparison with current patent search tools on a specific collection of Bioinformatics patents. In the future, BPS should be evaluated on different and more robust collections.
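A minimal sketch of the classifier-guided search idea: candidates are ranked by blending query similarity with a classifier's estimate that a document is a Bioinformatics patent. This is not the BPS implementation; the toy patents, labels and blending weight are assumptions.

```python
# Minimal sketch of classifier-guided retrieval: candidate patents are ranked
# by combining a query-similarity score with the probability, given by a text
# classifier, that the document belongs to the target domain. Illustrative only.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics.pairwise import cosine_similarity

patents = [
    "method for aligning dna sequences using dynamic programming",
    "gene expression analysis pipeline for microarray data",
    "combustion engine with improved fuel injection",
    "database system for storing protein structures",
]
is_bioinformatics = [1, 1, 0, 1]   # training labels for the guiding classifier

vec = TfidfVectorizer()
X = vec.fit_transform(patents)
clf = LogisticRegression().fit(X, is_bioinformatics)

def guided_search(query, alpha=0.5):
    q = vec.transform([query])
    sim = cosine_similarity(q, X).ravel()            # query relevance
    domain = clf.predict_proba(X)[:, 1]              # P(bioinformatics)
    score = alpha * sim + (1 - alpha) * domain       # blended ranking score
    return sorted(zip(score, patents), reverse=True)

for s, p in guided_search("sequence alignment method"):
    print(f"{s:.3f}  {p}")
```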
64

Predikce povahy spamových krátkých textů textovým klasifikátorem / Machine Learning Text Classifier for Short Texts Category Prediction

Drápela, Karel January 2018
This thesis deals with the categorization of short spam texts from SMS messages. The first part summarizes current methods for text classification and is followed by a description of several commonly used classifiers. The following chapters describe the test data analysis, the program implementation and the results. The program is able to predict text categories based on a predefined set of classes and also to estimate classification accuracy on training data. For the two category types that I designed, the classifier reached accuracies of 82% and 92%. Both preprocessing and feature selection had a positive impact on the resulting accuracy. It is possible to improve this accuracy further by removing the portion of samples which are difficult to classify. With 80% recall, accuracy can be increased by 8-10%.
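A minimal sketch of the kind of pipeline the abstract describes: simple preprocessing, bag-of-words features, chi-squared feature selection and a Naive Bayes classifier. The messages and category names are invented; the reported accuracies refer to the thesis's own SMS dataset.

```python
# Minimal sketch: preprocessing, TF-IDF features, chi-squared feature selection
# and a Naive Bayes classifier for SMS spam category prediction. Illustrative only.
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.naive_bayes import MultinomialNB

messages = [
    "WIN a free holiday, reply YES now",
    "Your parcel could not be delivered, click the link",
    "Congratulations, you won a cash prize",
    "Your account was suspended, verify your details",
]
categories = ["prize", "delivery", "prize", "phishing"]

pipeline = Pipeline([
    ("tfidf", TfidfVectorizer(lowercase=True, stop_words="english")),
    ("select", SelectKBest(chi2, k=10)),      # keep the 10 most informative features
    ("clf", MultinomialNB()),
])
pipeline.fit(messages, categories)
print(pipeline.predict(["you have won a prize, reply now"]))
```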
65

Extrakce sémantických vztahů z textu / Extraction of Semantic Relations from Text

Pospíšil, Milan January 2010
Many semi-structured documents exist today that we would like to convert into a structured form. The goal of this work is to create a system that makes this task more automated. This can be a difficult problem, because most of these documents are not generated by computers, so the system has to tolerate variation. Some semantic understanding is also needed, which is why we restrict ourselves to the domain of meeting minutes.
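As an illustration of converting semi-structured meeting minutes into a structured record, the sketch below uses tolerant, pattern-based extraction. The field names and regular expressions are assumptions for illustration; the system described in the thesis is more involved and semantically informed.

```python
# Minimal sketch: turning a semi-structured meeting-minutes text into a
# structured record with tolerant, pattern-based extraction. Illustrative only.
import re

minutes = """Meeting minutes - Project Alpha
Date: 2010-03-15
Present: Alice, Bob, Carol
Action items:
- Bob: prepare the budget draft
- Carol: contact the vendor
"""

def extract(text):
    record = {}
    date = re.search(r"Date[:\s]+(\d{4}-\d{2}-\d{2}|\d{1,2}\.\s?\d{1,2}\.\s?\d{4})", text)
    attendees = re.search(r"(?:Present|Attendees)[:\s]+(.+)", text)
    record["date"] = date.group(1) if date else None
    record["attendees"] = [a.strip() for a in attendees.group(1).split(",")] if attendees else []
    record["action_items"] = re.findall(r"^- (.+)$", text, flags=re.MULTILINE)
    return record

print(extract(minutes))
```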
66

Klasifikace textu s omezeným množstvím dat / Low-resource Text Classification

Szabó, Adam January 2021
The aim of the thesis is to evaluate Czech text classification tasks in low-resource settings. We introduce three datasets, two of which were publicly available and one of which was created partly by us. This dataset is based on contracts provided by the web platform Hlídač Státu. Most of its data is annotated automatically and only a small part manually, and its distinctive feature is that it contains long contracts in the Czech language. We achieve outstanding results with the proposed model on the publicly available datasets, which confirms its sufficient performance. In addition, we performed experimental measurements with noisy data and with various amounts of training data on these publicly available datasets. On the contracts dataset, we focused on selecting the right part of each contract and studied which part yields the best result. We found that for a dataset containing some systematic errors due to automatic annotation, it is more advantageous to use a shorter but more relevant part of the contract for classification than to take a longer text from the contract and rely on BERT to learn correctly.
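A minimal sketch of the "select the relevant part of a long contract" idea: instead of truncating from the start, pick a window around a domain keyword before tokenizing for a BERT-style classifier. The model name, keyword and window size are assumptions, not taken from the thesis.

```python
# Minimal sketch: select a keyword-anchored window from a long contract and
# tokenize it for a BERT-style classifier. Illustrative only; model name,
# keyword and window size are assumptions.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
MAX_TOKENS = 512

def select_segment(contract: str, keyword: str, window_chars: int = 1000) -> str:
    """Return a character window centred on the first keyword occurrence,
    falling back to the beginning of the contract if the keyword is absent."""
    pos = contract.lower().find(keyword.lower())
    if pos == -1:
        return contract[:window_chars]
    start = max(0, pos - window_chars // 2)
    return contract[start:start + window_chars]

contract = ("Smlouva o dílo uzavřená mezi objednatelem a zhotovitelem. "
            "Předmět smlouvy: dodávka a údržba softwaru pro evidenci dokumentů.")
segment = select_segment(contract, keyword="předmět smlouvy")
encoded = tokenizer(segment, truncation=True, max_length=MAX_TOKENS)
# `encoded` can now be fed to a fine-tuned sequence-classification model.
print(len(encoded["input_ids"]))
```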
67

Méthodes d’apprentissage interactif pour la classification des messages courts / Interactive learning methods for short text classification

Bouaziz, Ameni 19 June 2017
Automatic short text classification is increasingly used nowadays in various applications such as sentiment analysis or spam detection. Short texts like tweets or SMS are more challenging than traditional texts: their classification is more difficult owing to their shortness, sparsity and lack of contextual information. We present two new approaches to improve short text classification. Our first approach is the "Semantic Forest". Its first step proposes a new enrichment method that uses an external source of enrichment built in advance; the idea is to transform a short text of a few words into a larger text containing more information, in order to improve its quality before building the classification model. Contrary to the methods proposed in the literature, the second step of our approach does not use a traditional learning algorithm but proposes a new one that exploits the semantic links among words within the Random Forest classifier. Our second contribution is IGLM (Interactive Generic Learning Method), a new interactive approach that recursively updates the classification model by taking into account new data arriving over time and by leveraging user intervention to correct misclassified data. An abstraction method is then combined with the update mechanism to improve short text quality. The experiments performed on these two methods show their efficiency and how they outperform traditional algorithms in short text classification. Finally, the last part of the thesis presents a complete and argued comparative study of the two proposed methods, taking into account various criteria such as accuracy, speed, etc.
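A minimal sketch of the enrichment step described above: each short message is expanded with related words from an external resource (WordNet here stands in for the pre-built enrichment source) before training a Random Forest. The thesis's semantic-aware forest induction and the IGLM update mechanism are not reproduced.

```python
# Minimal sketch: enrich short texts with related words from WordNet before
# training a Random Forest classifier. Illustrative only.
import nltk
from nltk.corpus import wordnet as wn
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline

nltk.download("wordnet", quiet=True)

def enrich(text: str) -> str:
    extra = []
    for token in text.lower().split():
        for syn in wn.synsets(token)[:2]:              # a couple of senses per word
            extra.extend(l.name() for l in syn.lemmas()[:2])
    return text + " " + " ".join(extra)

texts = ["great movie loved it", "awful film waste of time",
         "fantastic acting", "boring plot terrible"]
labels = [1, 0, 1, 0]

model = make_pipeline(TfidfVectorizer(), RandomForestClassifier(n_estimators=100, random_state=0))
model.fit([enrich(t) for t in texts], labels)
print(model.predict([enrich("loved the acting")]))
```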
68

Textová klasifikace s limitovanými trénovacími daty / Text classification with limited training data

Laitoch, Petr January 2021
The aim of this thesis is to minimize the manual work needed to create training data for text classification tasks. Various research areas, including weak supervision, interactive learning and transfer learning, explore how to minimize the effort of creating training data. We combine ideas from the available literature to design a comprehensive text classification framework that employs keyword-based labeling instead of traditional text annotation. Keyword-based labeling labels texts based on keywords contained in them that are highly correlated with individual classification labels. As noted repeatedly in previous work, coming up with many new keywords is challenging for humans. To address this issue, we propose an interactive keyword labeler that uses word similarity to guide a user in keyword labeling. To verify the effectiveness of our approach, we implement a minimum viable prototype of the designed framework and use it to perform a user study on a restaurant review multi-label classification problem.
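A minimal sketch of keyword-based labeling with similarity-guided keyword suggestions, in the spirit of the framework described above. The labels, keywords and example texts are illustrative assumptions, not the thesis's data.

```python
# Minimal sketch: label texts by keyword matching and suggest new keywords
# via embedding similarity for a human to accept or reject. Illustrative only.
import gensim.downloader as api

wv = api.load("glove-wiki-gigaword-50")

keywords = {
    "food":    {"delicious", "tasty", "menu"},
    "service": {"waiter", "friendly", "rude"},
}

def label_by_keywords(text):
    tokens = set(text.lower().split())
    return [label for label, kws in keywords.items() if tokens & kws]

def suggest_keywords(label, topn=5):
    # Propose keywords similar to the ones already chosen for this label.
    seeds = [k for k in keywords[label] if k in wv]
    return [w for w, _ in wv.most_similar(positive=seeds, topn=topn)]

print(label_by_keywords("the waiter was rude but the food was tasty"))
print(suggest_keywords("food"))
```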
69

Performance comparison of different machine learning models in detecting fake news

Wan, Zhibin, Xu, Huatai January 2021
The phenomenon of fake news has a significant impact on our social life, especially in the political world, and fake news detection is an emerging area of research. The sharing of information on the Web, primarily through Web-based online media, is increasing, and the ability to identify, evaluate and process this information is of great importance. Disinformation is being generated on the Internet, either intentionally or unintentionally, and it affects a significant segment of society. This paper illustrates models and methods for detecting fake news in news articles with the help of machine learning and natural language processing. We study and compare three different feature extraction techniques and seven different machine learning classification techniques. Feature engineering methods such as TF, TF-IDF and Word2Vec are used to generate feature vectors, and the classification algorithms are trained to classify news as false or true. The best algorithm, selected by comparing accuracy, F1 score and other metrics, is used to build the final classification model. We perform two different sets of experiments and finally obtain the combination of fake news detection models that performs best in different situations.
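A minimal sketch of the comparison set-up: a grid of feature extraction methods and classifiers scored by cross-validated accuracy. Word2Vec features and several of the seven classifiers are omitted for brevity, and the tiny inline dataset merely stands in for a real fake-news corpus.

```python
# Minimal sketch: compare feature extraction methods and classifiers by
# cross-validated accuracy. Illustrative only.
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

texts = ["scientists confirm vaccine is safe", "celebrity secretly replaced by clone",
         "government passes new budget bill", "aliens control the stock market",
         "city opens new public library", "miracle cure hidden by doctors"]
labels = [1, 0, 1, 0, 1, 0]   # 1 = true, 0 = fake

features = {"TF": CountVectorizer(), "TF-IDF": TfidfVectorizer()}
classifiers = {"NB": MultinomialNB(), "LogReg": LogisticRegression(), "SVM": LinearSVC()}

for f_name, f in features.items():
    for c_name, c in classifiers.items():
        acc = cross_val_score(make_pipeline(f, c), texts, labels, cv=3).mean()
        print(f"{f_name:7s} + {c_name:7s} accuracy = {acc:.2f}")
```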
70

Taskfinder: Comparison of NLP techniques for text classification within FMCG stores

Jensen, Julius January 2022
Natural language processing has many important applications today, such as translation, spam filtering and other useful products, and supervised and unsupervised machine learning models have proven successful for these applications. The most important aspect of these models is what they can achieve with different datasets. This article examines how RNN models compare with Naive Bayes in text classification. The chosen RNN models are long short-term memory (LSTM) and gated recurrent unit (GRU); both are trained using the Flair framework. The models are trained on three separate datasets with different compositions, and the trend within each model is examined and compared with the other models. The results show that Naive Bayes performed better than the RNN models at classifying short sentences, but worse on longer ones. When trained on a small dataset, LSTM and GRU obtained better results than Naive Bayes. The best performing model overall was Naive Bayes, which had the highest accuracy score on two out of the three datasets.
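A minimal sketch of one slice of the comparison above: a Naive Bayes baseline evaluated separately on short and long texts, mirroring the finding that it is strongest on short sentences. The Flair-based LSTM/GRU training is omitted, and the store-task examples are invented.

```python
# Minimal sketch: evaluate a Naive Bayes baseline separately on short and long
# texts. Illustrative data only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.metrics import accuracy_score

train_texts = ["restock shelf three", "clean the spill in aisle five",
               "price check on dairy", "prepare the weekly delivery report for the manager"]
train_labels = ["restock", "cleaning", "pricing", "reporting"]

model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(train_texts, train_labels)

test = [("restock aisle two", "restock"),
        ("please clean up the mess near the entrance before opening", "cleaning")]
short = [(t, y) for t, y in test if len(t.split()) <= 4]
long_ = [(t, y) for t, y in test if len(t.split()) > 4]

for name, subset in [("short", short), ("long", long_)]:
    if subset:
        texts, gold = zip(*subset)
        print(name, accuracy_score(gold, model.predict(list(texts))))
```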
