31 |
[en] A STUDY OF MULTILABEL TEXT CLASSIFICATION ALGORITHMS USING NAIVE-BAYES / [pt] UM ESTUDO DE ALGORITMOS PARA CLASSIFICAÇÃO AUTOMÁTICA DE TEXTOS UTILIZANDO NAIVE-BAYES DAVID STEINBRUCH 12 March 2007 (has links)
[pt] A quantidade de informação eletrônica vem crescendo de forma acelerada, motivada principalmente pela facilidade de publicação e divulgação que a Internet proporciona. Desta forma, é necessária a organização da informação de forma a facilitar a sua aquisição. Muitos trabalhos propuseram resolver este problema através da classificação automática de textos, associando a eles vários rótulos (classificação multirótulo). No entanto, estes trabalhos transformam este problema em subproblemas de classificação binária, considerando que existe independência entre as categorias. Além disso, utilizam limiares (thresholds), que são muito específicos para o conjunto de treinamento utilizado, não possuindo grande capacidade de generalização na aprendizagem. Esta dissertação propõe dois algoritmos de classificação automática de textos baseados no algoritmo multinomial naive Bayes e sua utilização em um ambiente on-line de classificação automática de textos com realimentação de relevância pelo usuário. Para testar a eficiência dos algoritmos propostos, foram realizados experimentos na base de notícias Reuters 21578 e na base de documentos médicos Ohsumed. / [en] The amount of electronic information has been growing fast, mainly due to the ease of publication and spreading that the Internet provides. Therefore, organising information is necessary to facilitate its retrieval. Many works have addressed this problem through automatic text classification, associating several labels with each document (multilabel classification). However, those works have transformed this problem into binary classification subproblems, assuming that there is no dependence among categories. Moreover, they have used thresholds that are very specific to the training document base and so do not have great generalization capacity in the learning process. This thesis proposes two text classifiers based on the multinomial naive Bayes algorithm and their usage in an on-line text classification environment with user relevance feedback. In order to test the efficiency of the proposed algorithms, experiments have been performed on the Reuters 21578 news base and on the Ohsumed medical document base.
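As a concrete illustration of the task this abstract describes, the following minimal Python sketch uses scikit-learn and invented toy data to run multilabel categorization with multinomial naive Bayes. It shows the standard one-vs-rest binary decomposition that the dissertation argues against as its baseline, not the author's proposed algorithms.

```python
# Hedged sketch, not the dissertation's code: multilabel text classification
# with multinomial naive Bayes via a one-vs-rest decomposition.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.multiclass import OneVsRestClassifier
from sklearn.naive_bayes import MultinomialNB
from sklearn.preprocessing import MultiLabelBinarizer

docs = ["interest rates rise again", "wheat exports fall sharply",
        "rates hit wheat trade"]                         # toy stand-ins for Reuters texts
labels = [["interest"], ["wheat"], ["interest", "wheat"]]  # multilabel targets

Y = MultiLabelBinarizer().fit_transform(labels)      # one indicator column per label
X = TfidfVectorizer().fit_transform(docs)            # bag-of-words features
clf = OneVsRestClassifier(MultinomialNB()).fit(X, Y) # one binary NB per label
print(clf.predict(X))                                # 0/1 matrix, one row per document
```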
|
32 |
Modelos de tópicos na classificação automática de resenhas de usuários. / Topic models in user review automatic classification. Mauá, Denis Deratani 14 August 2009 (has links)
Existe um grande número de resenhas de usuário na internet contendo valiosas informações sobre serviços, produtos, política e tendências. A compreensão automática dessas opiniões é não somente cientificamente interessante, mas potencialmente lucrativa. A tarefa de classificação de sentimentos visa à extração automática das opiniões expressas em documentos de texto. Diferentemente da tarefa mais tradicional de categorização de textos, na qual documentos são classificados em assuntos como esportes, economia e turismo, a classificação de sentimentos consiste em anotar documentos com os sentimentos expressos no texto. Se comparados aos classificadores tradicionais, os classificadores de sentimentos possuem um desempenho insatisfatório. Uma das possíveis causas do baixo desempenho é a ausência de representações adequadas que permitam a discriminação das opiniões expressas de uma forma concisa e própria para o processamento de máquina. Modelos de tópicos são modelos estatísticos que buscam extrair informações semânticas ocultas na grande quantidade de dados presente em coleções de texto. Eles representam um documento como uma mistura de tópicos, onde cada tópico é uma distribuição de probabilidades sobre palavras. Cada distribuição representa um conceito semântico implícito nos dados. Na representação por modelos de tópicos, as palavras são substituídas por tópicos que representam seu significado de forma sucinta. De fato, os modelos de tópicos realizam uma redução de dimensionalidade nos dados que pode levar a um aumento do desempenho das técnicas de categorização de texto e recuperação de informação. Na classificação de sentimentos, eles podem fornecer a representação necessária através da extração de tópicos que representem os sentimentos expressos no texto. Este trabalho dedica-se ao estudo da aplicação de modelos de tópicos na representação e classificação de sentimentos de resenhas de usuário. Em particular, o modelo Latent Dirichlet Allocation (LDA) e quatro extensões (duas delas desenvolvidas pelo autor) são avaliados na tarefa de classificação de sentimentos baseada em múltiplos aspectos. As extensões ao modelo LDA permitem uma investigação dos efeitos da incorporação de informações adicionais como contexto, avaliações de aspecto e avaliações de múltiplos aspectos no modelo original. / There is a large number of user reviews on the internet with valuable information on services, products, politics and trends. There is both scientific and economic interest in the automatic understanding of such data. Sentiment classification is concerned with the automatic extraction of opinions expressed in user reviews. Unlike standard text categorization tasks that deal with the classification of documents into subjects such as sports, economics and tourism, sentiment classification attempts to tag documents with respect to the feelings they express. Compared to the accuracy of standard methods, sentiment classifiers have shown poor performance. One possible cause of such poor performance is the lack of adequate representations that allow opinions to be discriminated in a concise and machine-readable form. Topic Models are statistical models concerned with the extraction of semantic information hidden in the large amount of data available in text collections. They represent a document as a mixture of topics, where each topic is a probability distribution over words that represents a semantic concept. In the Topic Model representation, words can be substituted by topics that represent their meaning concisely. Indeed, Topic Models perform a dimensionality reduction of the data that can improve the performance of text classification and information retrieval techniques. In sentiment classification, they can provide the necessary representation by extracting topics that represent the general feelings expressed in the text. This work presents a study of the use of Topic Models for representing and classifying user reviews with respect to the feelings they express. In particular, the Latent Dirichlet Allocation (LDA) model and four extensions (two of them developed by the author) are evaluated on the task of aspect-based sentiment classification. The extensions to the LDA model enable us to investigate the effects of incorporating additional information such as context, aspect ratings and multiple aspect ratings into the original model.
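To make the representation idea concrete, here is a small hedged sketch in Python with scikit-learn and invented review data: documents are mapped to LDA topic mixtures, and a classifier is trained in that reduced space. It illustrates plain LDA only, not the four extensions evaluated in the thesis.

```python
# Illustrative only: plain LDA as dimensionality reduction before a
# sentiment classifier; reviews and labels are invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.linear_model import LogisticRegression

reviews = ["great food friendly staff", "slow service cold food",
           "lovely room awful noise", "clean room quick checkin"]
sentiment = [1, 0, 0, 1]                          # toy positive/negative labels

counts = CountVectorizer().fit_transform(reviews)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
theta = lda.fit_transform(counts)                 # each document as a topic mixture
clf = LogisticRegression().fit(theta, sentiment)  # classify in topic space
print(clf.predict(theta))
```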
|
33 |
Προσωποποιημένη προβολή περιεχομένου του Διαδικτύου με τεχνικές προ-επεξεργασίας, αυτόματης κατηγοριοποίησης και αυτόματης εξαγωγής περίληψης Πουλόπουλος, Βασίλειος 22 November 2007 (has links)
Σκοπός της Μεταπτυχιακής Εργασίας είναι η επέκταση και αναβάθμιση του μηχανισμού που είχε δημιουργηθεί στα πλαίσια της Διπλωματικής Εργασίας που εκπόνησα με τίτλο «Δημιουργία Πύλης Προσωποποιημένης Πρόσβασης σε Περιεχόμενο του WWW».
Η παραπάνω Διπλωματική εργασία περιλάμβανε τη δημιουργία ενός μηχανισμού που ξεκινούσε με ανάκτηση πληροφορίας από το Διαδίκτυο (HTML σελίδες από news portals), εξαγωγή χρήσιμου κειμένου και προεπεξεργασία της πληροφορίας, αυτόματη κατηγοριοποίηση της πληροφορίας και τέλος παρουσίαση στον τελικό χρήστη με προσωποποίηση με στοιχεία που εντοπίζονταν στις επιλογές του χρήστη.
Στην παραπάνω εργασία εξετάστηκαν διεξοδικά θέματα που είχαν να κάνουν με τον τρόπο προεπεξεργασίας της πληροφορίας καθώς και με τον τρόπο αυτόματης κατηγοριοποίησης ενώ υλοποιήθηκαν αλγόριθμοι προεπεξεργασίας πληροφορίας τεσσάρων σταδίων και αλγόριθμος αυτόματης κατηγοριοποίησης βασισμένος σε πρότυπες κατηγορίες.
Τέλος υλοποιήθηκε portal το οποίο εκμεταλλευόμενο την επεξεργασία που έχει πραγματοποιηθεί στην πληροφορία παρουσιάζει το περιεχόμενο στους χρήστες προσωποποιημένο βάσει των επιλογών που αυτοί πραγματοποιούν.
Σκοπός της μεταπτυχιακής εργασίας είναι η εξέταση περισσοτέρων αλγορίθμων για την πραγματοποίηση της παραπάνω διαδικασίας αλλά και η υλοποίησή τους προκειμένου να γίνει σύγκριση αλγορίθμων και παραγωγή ποιοτικότερου αποτελέσματος.
Πιο συγκεκριμένα αναβαθμίζονται όλα τα στάδια λειτουργίας του μηχανισμού. Έτσι, το στάδιο λήψης πληροφορίας βασίζεται σε έναν απλό crawler λήψης HTML σελίδων από αγγλόφωνα news portals. Η διαδικασία βασίζεται στο γεγονός πως για κάθε σελίδα υπάρχουν RSS feeds. Διαβάζοντας τα τελευταία νέα που προκύπτουν από τις εγγραφές στα RSS feeds μπορούμε να εντοπίσουμε όλα τα URL που περιέχουν HTML σελίδες με τα άρθρα. Οι HTML σελίδες φιλτράρονται προκειμένου από αυτές να γίνει εξαγωγή μόνο του κειμένου και πιο αναλυτικά του χρήσιμου κειμένου ούτως ώστε το κείμενο που εξάγεται να αφορά αποκλειστικά άρθρα. Η τεχνική εξαγωγής χρήσιμου κειμένου βασίζεται στην τεχνική web clipping. Ένας parser, ελέγχει την HTML δομή προκειμένου να εντοπίσει τους κόμβους που περιέχουν μεγάλη ποσότητα κειμένου και βρίσκονται κοντά σε άλλους κόμβους που επίσης περιέχουν μεγάλες ποσότητες κειμένου.
Στα εξαγόμενα άρθρα πραγματοποιείται προεπεξεργασία πέντε σταδίων με σκοπό να προκύψουν οι λέξεις κλειδιά που είναι αντιπροσωπευτικές του άρθρου. Πιο αναλυτικά, αφαιρούνται όλα τα σημεία στίξης, όλοι οι αριθμοί, μετατρέπονται όλα τα γράμματα σε πεζά, αφαιρούνται όλες οι λέξεις που έχουν λιγότερους από 4 χαρακτήρες, αφαιρούνται όλες οι κοινότυπες λέξεις και τέλος εφαρμόζονται αλγόριθμοι εύρεσης της ρίζας μίας λέξης. Οι λέξεις κλειδιά που απομένουν είναι stemmed, το οποίο σημαίνει πως από τις λέξεις διατηρείται μόνο η ρίζα.
Από τις λέξεις κλειδιά ο μηχανισμός οδηγείται σε δύο διαφορετικά στάδια ανάλυσης. Στο πρώτο στάδιο υπάρχει μηχανισμός ο οποίος αναλαμβάνει να δημιουργήσει μία αντιπροσωπευτική περίληψη του κειμένου ενώ στο δεύτερο στάδιο πραγματοποιείται αυτόματη κατηγοριοποίηση του κειμένου βασισμένη σε πρότυπες κατηγορίες που έχουν δημιουργηθεί από επιλεγμένα άρθρα που συλλέγονται καθ’ όλη τη διάρκεια υλοποίησης του μηχανισμού. Η εξαγωγή περίληψης βασίζεται σε ευρετικούς αλγορίθμους. Πιο συγκεκριμένα, προσπαθούμε χρησιμοποιώντας λεξικολογική ανάλυση του κειμένου αλλά και γεγονότα για τις λέξεις του κειμένου να δημιουργήσουμε βάρη για τις προτάσεις του κειμένου. Οι προτάσεις με τα μεγαλύτερα βάρη μετά το πέρας της διαδικασίας είναι αυτές που επιλέγονται για να διαμορφώσουν την περίληψη. Όπως θα δούμε και στη συνέχεια, για κάθε άρθρο υπάρχει μία γενική περίληψη αλλά το σύστημα είναι σε θέση να δημιουργήσει προσωποποιημένες περιλήψεις για κάθε χρήστη. Η διαδικασία κατηγοριοποίησης βασίζεται στη συσχέτιση συνημίτονου συγκριτικά με τις πρότυπες κατηγορίες. Η κατηγοριοποίηση δεν τοποθετεί μία ταμπέλα σε κάθε άρθρο αλλά μας δίνει τα αποτελέσματα συσχέτισης του άρθρου με κάθε κατηγορία.
Ο συνδυασμός των δύο παραπάνω σταδίων δίνει την πληροφορία που εμφανίζεται σε πρώτη φάση στο χρήστη που επισκέπτεται το προσωποποιημένο portal. Η προσωποποίηση στο portal βασίζεται στις επιλογές που κάνουν οι χρήστες, στο χρόνο που παραμένουν σε μία σελίδα αλλά και στις επιλογές που δεν πραγματοποιούν, προκειμένου να δημιουργηθεί προφίλ χρήστη και να είναι εφικτό με την πάροδο του χρόνου να παρουσιάζεται στους χρήστες μόνο πληροφορία που μπορεί να τους ενδιαφέρει. / The scope of this MSc thesis is the extension and upgrade of the mechanism that was constructed during my undergraduate studies for my undergraduate thesis entitled “Construction of a Web Portal with Personalized Access to WWW content”.
The aforementioned thesis included the construction of a mechanism that would begin with information retrieval from the WWW and would conclude with the representation of information through a portal, after applying useful text extraction, text pre-processing and text categorization techniques.
The scope of the MSc thesis is to locate the problematic parts of the system and correct them with better algorithms, and also to include more modules in the complete mechanism.
More precisely, all the modules are upgraded and more of them are constructed in every aspect of the mechanism. The information retrieval module is based on a simple crawler. The procedure relies on the fact that all the major news portals provide RSS feeds. By locating the latest articles that are added to the RSS feeds we are able to locate all the URLs of the HTML pages that include articles. The crawler then visits every URL and downloads the HTML page. These pages are filtered by the useful text extraction mechanism in order to extract only the body of the article from the HTML page. This procedure is based on the web-clipping technique. An HTML parser analyzes the DOM model of the HTML and locates the nodes (leaves) that include large amounts of text and are close to other nodes with large amounts of text. These nodes are considered to include the useful text.
To the extracted useful text we apply a five-stage preprocessing technique in order to extract the keywords of the article. More analytically, we remove the punctuation, the numbers, the words that are shorter than 4 letters and the stopwords, and finally we apply a stemming algorithm in order to reduce each word to its root.
The keywords are utilized in two different interconnected stages. The first is the categorization subsystem and the second is the summarization subsystem. During the summarization stage the system constructs a summary of the article, while the categorization stage tries to label the article. The label is not unique; the categorization applies multi-labeling techniques in order to detect the relation of the article with each of the standard categories of the system. The summarization technique is based on heuristics. More specifically, by utilizing language processing and facts that concern the keywords, we try to create a score for each of the sentences of the article. The higher the score of a sentence, the higher the probability that it is included in the summary, which consists of sentences of the text.
The combination of the categorization and the summarization provides the information that is shown on our web portal, called PeRSSonal. The personalization in the portal is based on the selections of the user, on the non-selections of the user, on the time that the user remains on an article, and on the time spent reading similar or identical articles. After a short period of time, the system is able to adapt to the user's needs and to present only articles that match the preferences of the user.
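A hedged Python sketch of the two core steps described above, the staged keyword pipeline and the cosine-based multi-label categorization, assuming NLTK's Porter stemmer, scikit-learn, and a toy stopword list and category set (PeRSSonal's actual implementation is not reproduced here):

```python
# Sketch under stated assumptions: staged keyword extraction followed by
# cosine relatedness against prototype categories, as the abstract outlines.
import re
from nltk.stem import PorterStemmer                  # assumes NLTK is installed
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

STOPWORDS = {"this", "that", "with", "from", "have", "been"}  # toy list

def keywords(text):
    text = re.sub(r"[^\w\s]|\d", " ", text.lower())  # drop punctuation and numbers
    kept = [w for w in text.split() if len(w) >= 4 and w not in STOPWORDS]
    stem = PorterStemmer().stem
    return " ".join(stem(w) for w in kept)           # keep only word roots

prototypes = {"sports": "team match player score goal",
              "economy": "market stock bank rate trade"}  # hypothetical categories
article = "The player scored twice and the team won the match."

vecs = TfidfVectorizer().fit_transform(
    [keywords(t) for t in prototypes.values()] + [keywords(article)])
# Relatedness of the article to every category, rather than one hard label:
print(dict(zip(prototypes, cosine_similarity(vecs[-1], vecs[:-1])[0])))
```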
|
35 |
Análise de abordagens automáticas de anotação semântica para textos ruidosos e seus impactos na similaridade entre vídeos Dias, Laura Lima 31 August 2017 (has links)
CAPES - Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / Com o acúmulo de informações digitais armazenadas ao longo do tempo, alguns esforços precisam ser aplicados para facilitar a busca e a indexação de conteúdos. Recursos como vídeos e áudios, por sua vez, são mais difíceis de serem tratados por mecanismos de busca. A anotação de vídeos é um recurso importante para resumo, busca e classificação de vídeos. A parcela de vídeos que possui anotações atribuídas pelo próprio autor na maioria das vezes é muito pequena e pouco significativa, e anotar vídeos manualmente é bastante trabalhoso quando se trata de bases legadas. Por esse motivo, automatizar esse processo tem sido desejado no campo da Recuperação de Informação. Em repositórios de videoaulas, onde a maior parte da informação se concentra na fala do professor, esse processo pode ser realizado através de anotações automáticas de transcritos gerados por sistemas de Reconhecimento Automático de Fala. Contudo, essa técnica produz textos ruidosos, dificultando a tarefa de anotação semântica automática. Entre muitas técnicas de Processamento de Linguagem Natural utilizadas para anotação, não é trivial a escolha da técnica mais adequada a um determinado cenário, principalmente quando se trata de anotar textos com ruídos. Essa pesquisa propõe analisar um conjunto de diferentes técnicas utilizadas para anotação automática e verificar o seu impacto em um mesmo cenário, o cenário de similaridade entre vídeos. / With the accumulation of digital information stored over time, some effort needs to be applied to facilitate the search and indexing of content. Resources such as videos and audios, in turn, are more difficult for search engines to handle. Video annotation is a valuable resource for video summarization, search and classification. The share of videos that have annotations provided by the author themselves is most often very small and not very significant, and annotating videos manually is very laborious when dealing with legacy bases. For this reason, automating this process has been desired in the field of Information Retrieval. In video lecture repositories, where most of the information is concentrated in the teacher's speech, this process can be performed through automatic annotation of transcripts generated by Automatic Speech Recognition systems. However, this technique produces noisy texts, making the task of automatic semantic annotation difficult. Among the many Natural Language Processing techniques used for annotation, it is not trivial to choose the most appropriate technique for a given scenario, especially when annotating noisy texts. This research proposes to analyze a set of different techniques used for automatic annotation and to verify their impact in one same scenario, the scenario of similarity between videos.
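As a small illustration of the scenario studied, the hedged Python sketch below (scikit-learn, invented transcripts) computes pairwise video similarity from transcript text; the misspellings mimic the ASR noise whose impact the research measures.

```python
# Toy illustration: video-to-video similarity from (noisy) ASR transcripts.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

transcripts = ["today we prove the derivative of sine",
               "the derivativ of sign is cosine",   # ASR noise kept on purpose
               "binary trees and graph traversal"]
tfidf = TfidfVectorizer().fit_transform(transcripts)
# Noisy spellings break exact term matches, lowering the similarity that
# better (semantic) annotation approaches try to recover:
print(cosine_similarity(tfidf).round(2))
```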
|
36 |
Normalização textual e indexação semântica aplicadas à filtragem de SMS spam / Text normalization and semantic indexing to enhance SMS spam filtering Silva, Tiago Pasqualini da 01 July 2016 (has links)
Não recebi financiamento / The rapid popularization of smartphones has contributed to the growth of SMS usage as an alternative way of communication. The increasing number of users, along with the trust they inherently have in their devices, makes SMS messages a propitious environment for spammers. In fact, reports clearly indicate that the volume of mobile phone spam is dramatically increasing year by year. SMS spam represents a challenging problem for traditional filtering methods nowadays, since such messages are usually fairly short and normally rife with slang, idioms, symbols and acronyms that make even tokenization a difficult task. In this scenario, this thesis proposes and then evaluates a method to normalize and expand original short and messy SMS text messages in order to acquire better attributes and enhance classification performance. The proposed text processing approach is based on lexicographic and semantic dictionaries along with state-of-the-art techniques for semantic analysis and context detection. This technique is used to normalize terms and create new attributes in order to change and expand the original text samples, aiming to alleviate factors that can degrade the algorithms' performance, such as redundancies and inconsistencies. The approach was validated with a public, real and non-encoded dataset along with several established machine learning methods. The experiments were diligently designed to ensure statistically sound results, which indicate that the proposed text processing techniques can in fact enhance SMS spam filtering. / A popularização dos smartphones contribuiu para o crescimento do uso de mensagens SMS como forma alternativa de comunicação. O crescente número de usuários, aliado à confiança que eles possuem nos seus dispositivos, torna as mensagens SMS um ambiente propício aos spammers. Relatórios recentes indicam que o volume de spam enviado via SMS está aumentando vertiginosamente nos últimos anos. SMS spam representa um problema desafiador para os métodos tradicionais de detecção de spam, uma vez que essas mensagens são curtas e geralmente repletas de gírias, símbolos, abreviações e emoticons, o que torna até mesmo a tokenização uma tarefa difícil. Diante desse cenário, esta dissertação propõe e avalia um método para normalizar e expandir amostras curtas e ruidosas de mensagens SMS de forma a obter atributos mais representativos e, com isso, melhorar o desempenho geral na tarefa de classificação. O método proposto é baseado em dicionários lexicográficos e semânticos e utiliza técnicas modernas de análise semântica e detecção de contexto. Ele é empregado para normalizar os termos que compõem as mensagens e criar novos atributos para alterar e expandir as amostras originais de texto, com o objetivo de mitigar fatores que podem degradar o desempenho dos métodos de classificação, tais como redundâncias e inconsistências. A proposta foi avaliada usando uma base de dados real, pública e não codificada, além de vários métodos consagrados de aprendizado de máquina. Os experimentos foram conduzidos de forma a garantir resultados estatisticamente corretos e indicaram que o método proposto pode de fato melhorar a detecção de spam em SMS.
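A minimal hedged sketch of the normalization idea, assuming a hand-made slang dictionary and scikit-learn (the thesis's actual lexicographic and semantic resources are far richer than this toy mapping):

```python
# Sketch: lexical normalization of short, messy SMS text before classification.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB

SLANG = {"u": "you", "2": "to", "4": "for", "txt": "text", "gr8": "great"}  # toy map

def normalize(msg):
    return " ".join(SLANG.get(tok, tok) for tok in msg.lower().split())

messages = ["free txt 4 u gr8 prize", "are u coming 2 dinner", "u won call now"]
is_spam = [1, 0, 1]                                   # invented labels

X = TfidfVectorizer().fit_transform(normalize(m) for m in messages)
model = MultinomialNB().fit(X, is_spam)
print(model.predict(X))
```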
|
37 |
Categorization of Customer Reviews Using Natural Language Processing / Kategorisering av kundrecensioner med naturlig språkbehandling Liliemark, Adam, Enghed, Viktor January 2021 (has links)
Databases of user-generated data can quickly become unmanageable. Klarna faced this issue, with a database of around 700,000 customer reviews. Ideally, the database would be cleaned of uninteresting reviews and the remaining reviews categorized. Without knowing what categories might emerge, the idea was to use an unsupervised clustering algorithm to find categories. This thesis describes the work carried out to solve this problem, and proposes a solution for Klarna that involves artificial neural networks rather than unsupervised clustering. Our implementation is able to categorize reviews as either interesting or uninteresting. We propose a workflow that would create the means to categorize reviews not only into these two categories, but into several. The method revolved around experimentation with clustering algorithms and neural networks. Previous research shows that texts can be clustered; however, the datasets used seem to be vastly different from the Klarna dataset. The Klarna dataset consists of short reviews and contains a large share of uninteresting ones. Unsupervised clustering yielded unsatisfactory results, as no discernible categories could be found. In some cases, the technique created clusters of uninteresting reviews. These clusters were used as training data for an artificial neural network, together with manually labeled interesting reviews. The results from this artificial neural network were satisfactory; it can say with an accuracy of around 86% whether a review is interesting or not. This was achieved using the aforementioned clusters and five feedback loops, in which the reviews from an evaluation dataset that the model predicted wrongly were fed back to it as training data. We argue that the main reason unsupervised clustering failed is that the reviews are too short. In comparison, other researchers have successfully clustered text data with an average length in the hundreds of words. Such items pack many more features than the short reviews in the Klarna dataset. We show that an artificial neural network is able to detect these features despite the short length, through its intrinsic design. Further research into feature extraction from short text strings could provide the means to cluster this kind of data. If features can be extracted, the clustering can be done on the features rather than on the actual words. Our artificial neural network shows that the arbitrary features interesting and uninteresting can be extracted, so we are hopeful that future researchers will find ways of extracting more features from short text strings. In theory, this should mean that text of all lengths can be clustered without supervision. / Databaser med användargenererad data kan snabbt bli ohanterbara. Klarna stod inför detta problem, med en databas innehållande cirka 700 000 recensioner från kunder. De såg helst att databasen skulle rensas från ointressanta recensioner och att de kvarvarande kategoriseras. Eftersom kategorierna var okända initialt, var tanken att använda en oövervakad grupperingsalgoritm. Denna rapport beskriver det arbete som utfördes för att lösa detta problem, och föreslår en lösning till Klarna som involverar artificiella neurala nätverk istället för oövervakad gruppering. Implementationen skapad av oss är kapabel att kategorisera recensioner som intressanta eller ointressanta. Vi föreslår ett arbetsflöde som skulle skapa möjlighet att kategorisera recensioner inte bara i dessa två kategorier, utan i flera.
Metoden kretsar kring experimentering med grupperingsalgoritmer och artificiella neurala nätverk. Tidigare forskning visar att texter kan grupperas oövervakat, dock med ingångsdata som väsentligt skiljer sig från Klarnas data. Recensionerna i Klarnas data är generellt sett korta och en stor andel av dem kan ses som ointressanta. Den oövervakade grupperingen gav otillräckliga resultat, då inga skönjbara kategorier stod att finna. I vissa fall skapades grupperingar av ointressanta recensioner. Dessa användes som träningsdata för ett artificiellt neuralt nätverk. Till träningsdatan lades intressanta recensioner som tagits fram manuellt. Resultaten från detta var positiva; med en träffsäkerhet om cirka 86% avgörs om en recension är intressant eller inte. Detta uppnåddes genom den tidigare skapade träningsdatan samt fem återkopplingsprocesser, där modellens felaktiga prediktioner av evalueringsdata matades in som träningsdata. Vår uppfattning är att den korta längden på recensionerna gör att den oövervakade grupperingen inte fungerar. Andra forskare har lyckats gruppera textdata med snittlängder om hundratals ord per text. Dessa texter rymmer fler meningsfulla enheter än de korta recensionerna i Klarnas data. Artificiella neurala nätverk, å andra sidan, kan upptäcka dessa meningsfulla enheter tack vare sin grundläggande utformning. Vårt arbete visar att ett artificiellt neuralt nätverk kan upptäcka dessa meningsfulla enheter, trots den korta längden per recension. Extrahering av meningsfulla enheter ur korta texter är ett ämne som behöver mer forskning för att underlätta problem som detta. Om meningsfulla enheter kan extraheras ur texter, kan grupperingen göras på dessa enheter istället för orden i sig. Vårt artificiella neurala nätverk visar att de arbiträra enheterna intressant och ointressant kan extraheras, vilket gör oss hoppfulla om att framtida forskare kan finna sätt att extrahera fler enheter ur korta texter. I teorin innebär detta att texter av alla längder kan grupperas oövervakat.
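A hedged sketch of the classify-and-feed-back loop described above, using a small scikit-learn feed-forward network and invented reviews (the thesis's network, data, and five feedback rounds are only approximated here with one round):

```python
# Sketch: binary interesting/uninteresting review classifier with one
# feedback round that returns misclassified held-out reviews as training data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier

train = ["app crashed twice", "ok", "refund took weeks", "good", "nice app"]
y_train = [1, 0, 1, 0, 0]                        # 1 = interesting (toy labels)
held = ["payment failed at checkout", "fine"]
y_held = [1, 0]

vec = TfidfVectorizer().fit(train + held)
net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
net.fit(vec.transform(train), y_train)

wrong = [i for i, p in enumerate(net.predict(vec.transform(held)))
         if p != y_held[i]]
train += [held[i] for i in wrong]                # feed errors back, as described
y_train += [y_held[i] for i in wrong]
net.fit(vec.transform(train), y_train)           # retrain on the enlarged set
```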
|
38 |
High-Dimensional Data Representations and Metrics for Machine Learning and Data Mining / Reprezentacije i metrike za mašinsko učenje i analizu podataka velikih dimenzija Radovanović Miloš 11 February 2011 (has links)
In the current information age, massive amounts of data are gathered, at a rate prohibiting their effective structuring, analysis, and conversion into useful knowledge. This information overload is manifested both in large numbers of data objects recorded in data sets, and large numbers of attributes, also known as high dimensionality. This dissertation deals with problems originating from high dimensionality of data representation, referred to as the “curse of dimensionality,” in the context of machine learning, data mining, and information retrieval. The described research follows two angles: studying the behavior of (dis)similarity metrics with increasing dimensionality, and exploring feature-selection methods, primarily with regard to document representation schemes for text classification. The main results of the dissertation, relevant to the first research angle, include theoretical insights into the concentration behavior of cosine similarity, and a detailed analysis of the phenomenon of hubness, which refers to the tendency of some points in a data set to become hubs by being included in unexpectedly many k-nearest neighbor lists of other points. The mechanisms behind the phenomenon are studied in detail, both from a theoretical and empirical perspective, linking hubness with the (intrinsic) dimensionality of data, describing its interaction with the cluster structure of data and the information provided by class labels, and demonstrating the interplay of the phenomenon and well known algorithms for classification, semi-supervised learning, clustering, and outlier detection, with special consideration being given to time-series classification and information retrieval. Results pertaining to the second research angle include quantification of the interaction between various transformations of high-dimensional document representations, and feature selection, in the context of text classification. / U tekućem „informatičkom dobu“, masivne količine podataka se sakupljaju brzinom koja ne dozvoljava njihovo efektivno strukturiranje, analizu, i pretvaranje u korisno znanje. Ovo zasićenje informacijama se manifestuje kako kroz veliki broj objekata uključenih u skupove podataka, tako i kroz veliki broj atributa, takođe poznat kao velika dimenzionalnost. Disertacija se bavi problemima koji proizilaze iz velike dimenzionalnosti reprezentacije podataka, često nazivanim „prokletstvom dimenzionalnosti“, u kontekstu mašinskog učenja, data mining-a i information retrieval-a. Opisana istraživanja prate dva pravca: izučavanje ponašanja metrika (ne)sličnosti u odnosu na rastuću dimenzionalnost, i proučavanje metoda odabira atributa, prvenstveno u interakciji sa tehnikama reprezentacije dokumenata za klasifikaciju teksta. Centralni rezultati disertacije, relevantni za prvi pravac istraživanja, uključuju teorijske uvide u fenomen koncentracije kosinusne mere sličnosti, i detaljnu analizu fenomena habovitosti koji se odnosi na tendenciju nekih tačaka u skupu podataka da postanu habovi tako što bivaju uvrštene u neočekivano mnogo lista k najbližih suseda ostalih tačaka. Mehanizmi koji pokreću fenomen detaljno su proučeni, kako iz teorijske tako i iz empirijske perspektive. Habovitost je povezana sa (latentnom) dimenzionalnošću podataka, opisana je njena interakcija sa strukturom klastera u podacima i informacijama koje pružaju oznake klasa, i demonstriran je njen efekat na poznate algoritme za klasifikaciju, semi-supervizirano učenje, klastering i detekciju outlier-a, sa posebnim osvrtom na klasifikaciju vremenskih serija i information retrieval. Rezultati koji se odnose na drugi pravac istraživanja uključuju kvantifikaciju interakcije između različitih transformacija višedimenzionalnih reprezentacija dokumenata i odabira atributa, u kontekstu klasifikacije teksta.
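The hubness phenomenon is easy to observe numerically. The sketch below (NumPy and scikit-learn, synthetic i.i.d. Gaussian data) counts each point's k-occurrences N_k, that is, how many k-nearest-neighbor lists it appears in; in high dimensions the maximum far exceeds the mean, which is always exactly k.

```python
# Illustration of hubness: skewed k-occurrence counts in high dimensions.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 100))                   # 500 points, 100 dimensions
k = 10
nn = NearestNeighbors(n_neighbors=k + 1).fit(X)   # +1: each point finds itself
_, idx = nn.kneighbors(X)
N_k = np.bincount(idx[:, 1:].ravel(), minlength=len(X))  # k-occurrences per point
print(N_k.mean(), N_k.max())                      # mean is k; hubs sit far above it
```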
|
39 |
[en] TEXT CATEGORIZATION: CASE STUDY: PATENT'S APPLICATION DOCUMENTS IN PORTUGUESE / [pt] CATEGORIZAÇÃO DE TEXTOS: ESTUDO DE CASO: DOCUMENTOS DE PEDIDOS DE PATENTE NO IDIOMA PORTUGUÊS NEIDE DE OLIVEIRA GOMES 08 January 2015 (has links)
[pt] Atualmente os categorizadores de textos construídos por técnicas de aprendizagem de máquina têm alcançado bons resultados, tornando viável a categorização automática de textos. A proposição desse estudo foi a definição de vários modelos direcionados à categorização de pedidos de patente, no idioma português. Para esse ambiente foi proposto um comitê composto de 6 (seis) modelos, onde foram usadas várias técnicas. A base de dados foi constituída de 1157 (hum mil cento e cinquenta e sete) resumos de pedidos de patente, depositados no INPI por depositantes nacionais, distribuídos em várias categorias. Dentre os vários modelos propostos para a etapa de processamento da categorização de textos, destacamos o desenvolvido para o Método 01, ou seja, o k-Nearest-Neighbor (k-NN), modelo também usado no ambiente de patentes para o idioma inglês. Para os outros modelos, foram selecionados métodos que não os tradicionais para ambiente de patentes. Para quatro modelos, optou-se por algoritmos onde as categorias são representadas por vetores centróides. Para um dos modelos, foi explorada a técnica do High Order Bit junto com o algoritmo k-NN, sendo o k todos os documentos de treinamento. Para a etapa de pré-processamento foram implementadas duas técnicas: o algoritmo de stemização de Porter e o StemmerPortuguese, ambos com modificações do original. Foram também utilizados na etapa do pré-processamento a retirada de stopwords e o tratamento dos termos compostos. Para a etapa de indexação foi utilizada principalmente a técnica de pesagem dos termos intitulada frequência de termos modificada versus frequência de documentos inversa (TF-IDF). Para as medidas de similaridade ou medidas de distância destacamos: cosseno, Jaccard, DICE, Medida de Similaridade e HOB. Para a obtenção dos resultados foram usadas as técnicas de predição da relevância e do rank. Dos métodos implementados nesse trabalho, destacamos o k-NN tradicional, o qual apresentou bons resultados embora demande muito tempo computacional. / [en] Nowadays, text categorizers constructed with machine learning techniques have obtained good results, making automatic text categorization viable. The purpose of this study was the definition of various models directed at the categorization of patent applications in the Portuguese language. For this environment a committee composed of 6 (six) models was proposed, in which various techniques were used. The text base was constituted of 1157 (one thousand one hundred and fifty-seven) abstracts of patent applications, deposited at INPI by national applicants, distributed over various categories. Among the various models proposed for the text categorization processing step, we emphasize the one developed for Method 01, the k-Nearest-Neighbor (k-NN), a model also used in the English-language patent categorization environment. For the other models, methods were selected that are not traditional in the patent environment. For four of the models, algorithms were chosen in which the categories are represented by centroid vectors. For one of the models, the High Order Bit technique was explored together with the k-NN algorithm, with k being all the training documents. For the pre-processing step, two techniques were implemented: the Porter stemming algorithm and the StemmerPortuguese algorithm, both with modifications to the original. Also used in the pre-processing step were the removal of stopwords and the treatment of compound terms. For the indexing step, the term weighting technique called modified term frequency versus inverse document frequency (TF-IDF) was mainly used. For the similarity or distance measures we emphasize: cosine, Jaccard, DICE, Similarity Measure and HOB. For the results, the relevance prediction and rank techniques were used. Among the methods implemented in this work, we emphasize the traditional k-NN, which obtained good results although it demands much computational time.
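For illustration, a hedged Python sketch of the Method 01 baseline named above, TF-IDF indexing with a cosine-distance k-NN, on invented Portuguese-like abstracts with hypothetical category labels (the dissertation's committee of six models is not reproduced):

```python
# Sketch: k-NN over TF-IDF vectors with cosine distance, the Method 01 idea.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import KNeighborsClassifier

abstracts = ["processo de moldagem de plastico",
             "circuito eletronico de controle",
             "molde para pecas plasticas",
             "controlador de sinal eletrico"]          # invented abstracts
classes = ["B29", "H03", "B29", "H03"]                 # hypothetical labels

vec = TfidfVectorizer()
X = vec.fit_transform(abstracts)
knn = KNeighborsClassifier(n_neighbors=3, metric="cosine").fit(X, classes)
print(knn.predict(vec.transform(["molde de injecao de plastico"])))
```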
|
Page generated in 0.1897 seconds