101 |
A data mining approach to ontology learning for automatic content-related question-answering in MOOCs Shatnawi, Safwan January 2016 (has links)
The advent of Massive Open Online Courses (MOOCs) allows massive numbers of registrants to enrol in these courses. This research aims to offer MOOC registrants automatic content-related feedback to fulfil their cognitive needs. A framework is proposed which consists of three modules: a subject ontology learning module, a short-text classification module, and a question-answering module. Unlike previous research, a regular-expression parser approach is used to identify relevant concepts for ontology learning, and the relevant concepts are extracted from unstructured documents. To build the concept hierarchy, a frequent pattern mining approach is used, guided by a heuristic function that ensures sibling concepts appear at the same level in the hierarchy. As this process does not require specific lexical or syntactic information, it can be applied to any subject. To validate the approach, the resulting ontology is used in a question-answering system which analyses students' content-related questions and generates answers for them. Textbook end-of-chapter questions and answers are used to validate the question-answering system. The resulting ontology is compared against Text2Onto within the question-answering system, and it achieves favourable results. Finally, different indexing approaches based on a subject's ontology are investigated for classifying short texts in MOOC forum discussion data; the investigated indexing approaches are unigram-based, concept-based, and hierarchical concept indexing. The experimental results show that the ontology-based feature indexing approaches outperform the unigram-based indexing approach. Experiments are done in both binary and multi-label classification settings. The results are consistent and show that hierarchical concept indexing outperforms both concept-based and unigram-based indexing. The bagging and random forest classifiers achieved the best results among the tested classifiers.
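A minimal Python sketch of the three indexing schemes compared above: unigram, concept-based, and hierarchical concept indexing. The toy concept hierarchy and the example question are assumptions for illustration; the thesis's actual hierarchy is learned by frequent pattern mining, which is not reproduced here.

```python
# Toy child -> parent concept hierarchy (assumed for illustration only).
parent = {"binary search tree": "tree", "tree": "data structure"}
concepts = set(parent) | set(parent.values())

def concept_index(text, hierarchical=False):
    """Concept-based indexing replaces known concept phrases with single
    tokens; hierarchical indexing also adds every ancestor concept, so
    sibling concepts share features at higher levels of the hierarchy."""
    text, tokens = text.lower(), []
    for c in sorted(concepts, key=len, reverse=True):  # longest match first
        if c in text:
            text = text.replace(c, " ")  # consume the matched span
            node = c
            tokens.append(node.replace(" ", "_"))
            while hierarchical and node in parent:
                node = parent[node]
                tokens.append(node.replace(" ", "_"))
    return tokens

question = "How do I delete a node from a binary search tree?"
print(question.lower().split())                    # unigram indexing
print(concept_index(question))                     # ['binary_search_tree']
print(concept_index(question, hierarchical=True))  # adds 'tree', 'data_structure'
```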
|
102 |
[en] USING MACHINE LEARNING TO BUILD A TOOL THAT HELPS COMMENTS MODERATION / [pt] UTILIZANDO APRENDIZADO DE MÁQUINA PARA CONSTRUÇÃO DE UMA FERRAMENTA DE APOIO A MODERAÇÃO DE COMENTÁRIOS SILVANO NOGUEIRA BUBACK 05 March 2012 (links)
[en] One of the main changes brought by Web 2.0 is the increase of user
participation in content generation mainly in social networks and comments in
news and service sites. These comments are valuable to the sites because they
bring feedback and motivate other people to participate and to spread the content.
On the other hand, these comments also bring some kinds of abuse, such as bad words and spam. While for some sites their own community moderation is enough, for others this inappropriate content may compromise the service. In order to help these sites, a tool that uses machine learning techniques was built to assist comment moderation. As a test to compare results, two datasets captured from Globo.com were used: the first with 657,405 comments posted directly on the site and the second with 451,209 messages captured from Twitter. Our experiments show that the best result is achieved when classifiers are trained separately according to the subject being commented on.
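A minimal sketch of the per-subject training idea reported above: one moderation classifier per content topic instead of a single global model. The example comments, topics, and labels are illustrative assumptions, not Globo.com data.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# (topic, comment text, rejected by moderators?)
data = [
    ("sports", "great match, the team played well", 0),
    ("sports", "the referee is a ****", 1),
    ("news", "thoughtful article, thanks", 0),
    ("news", "click here to win a free phone", 1),
]

models = {}
for topic in {t for t, _, _ in data}:
    texts = [c for t, c, _ in data if t == topic]
    labels = [y for t, _, y in data if t == topic]
    # One independent text classifier per topic.
    models[topic] = make_pipeline(
        TfidfVectorizer(), LogisticRegression()
    ).fit(texts, labels)

# Route each new comment to its topic's classifier.
print(models["sports"].predict(["what a goal, amazing play"]))
```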
|
103 |
An investigation of aspects of topic classification for short texts / Uma investigação de aspectos da classificação de tópicos para textos curtos Oliveira, Ewerton Lopes Silva de 23 February 2015 (has links)
In recent years, a large body of scientific research has stimulated the use of web data
as inputs for epidemiological surveillance and knowledge discovery/mining related to public health in general. In order to make use of social media content, especially tweets, previously proposed approaches transform the content identification problem into a text classification problem, following the supervised learning scenario. However, during this process, some limitations arise, attributed to the representation of messages as well as to the extraction of attributes. From this, the present research aims to investigate the impact on performance in the short social message classification task of continuously expanding the training set, supported by a measure of confidence in the predictions made. At the same time, the research also aimed to evaluate alternatives for the weighting and extraction of the terms used for classification, in order to reduce dependence on term-frequency-based metrics. Restricted to the binary classification of tweets related to health events and written in English, the results showed a 9% improvement in F1 over the baseline used, demonstrating that expanding the classifier increases performance, even for the task of classifying short messages about health concerns. Regarding term weighting, the main contribution is the ability to automatically identify highly discriminative terms in the dataset without suffering limitations related to term frequency. This may, for example, help build more robust and dynamic classification processes that use lists of specific terms for indexing content in external databases (background knowledge). Overall, by refining the hypotheses discussed, the results can benefit the emergence of more robust applications in the field of surveillance, control and response to real health events (epidemiology, health campaigns, etc.), through the task of classifying short social messages.
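A minimal sketch of the continuous training-set expansion described above: classify unlabeled tweets, keep only predictions above a confidence threshold, fold them into the training set, and retrain. The example tweets, threshold, and classifier are illustrative assumptions.

```python
import numpy as np
from scipy.sparse import vstack
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

labeled = ["i caught the flu this week", "great football match today"]
labels = np.array([1, 0])  # 1 = health-related
unlabeled = ["fever and a bad cough again", "new album drops tomorrow"]

vec = TfidfVectorizer()
X = vec.fit_transform(labeled + unlabeled)
X_lab, X_unl = X[: len(labeled)], X[len(labeled):]

THRESHOLD = 0.6  # confidence required before trusting a prediction
for _ in range(3):  # a few expansion rounds
    clf = LogisticRegression().fit(X_lab, labels)
    proba = clf.predict_proba(X_unl)
    conf, pred = proba.max(axis=1), proba.argmax(axis=1)
    keep = conf >= THRESHOLD
    if not keep.any():
        break
    # Promote confident predictions to (pseudo-)labeled training data.
    X_lab = vstack([X_lab, X_unl[keep]])
    labels = np.concatenate([labels, pred[keep]])
    X_unl = X_unl[~keep]

print(labels)  # original labels plus any confidently pseudo-labeled tweets
```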
|
104 |
Supervised classification with probabilistic programming / Classificação supervisionada com programação probabilística Lucena, Danilo Carlos Gouveia de 10 February 2014 (has links)
Probabilistic inference mechanisms are at the intersection of three main areas: statistics,
programming languages and probability. These mechanisms are used to create probabilistic
models and assist in treating uncertainties. Probabilistic programming languages assist
in high-level description of these models. These languages facilitate the development
of the models because they abstract the inference mechanisms at the lower levels, allow
reuse of code, and assist in results analysis. This study proposes the analysis of inference
engines implemented by probabilistic programming languages and presents a case study of
a supervised text classifier using probabilistic programming.
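A minimal sketch of the kind of model a probabilistic programming language lets one declare: a Dirichlet-multinomial naive Bayes text classifier. Here the conjugate posterior update is computed by hand with NumPy to make the inference explicit; a PPL would accept the same model declaration and supply the inference engine itself. The example documents and hyperparameters are assumptions.

```python
import numpy as np

docs = ["buy cheap pills now", "meeting agenda for monday",
        "cheap pills cheap", "project status meeting"]
y = np.array([1, 0, 1, 0])  # 1 = spam

vocab = sorted({w for d in docs for w in d.split()})
counts = np.array([[d.split().count(w) for w in vocab] for d in docs])

alpha = 1.0  # Dirichlet prior over each class's word distribution
# Posterior mean of each class's word distribution (conjugate update).
theta = np.array([counts[y == c].sum(axis=0) + alpha for c in (0, 1)])
theta = theta / theta.sum(axis=1, keepdims=True)
prior = np.bincount(y) / len(y)  # class prior from label frequencies

def classify(doc):
    """Pick the class with the highest log posterior for the word counts."""
    x = np.array([doc.split().count(w) for w in vocab])
    log_post = np.log(prior) + (x * np.log(theta)).sum(axis=1)
    return int(log_post.argmax())

print(classify("cheap pills for monday"))  # -> 1 (spam)
```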
|
105 |
A novel approach to text classification Zechner, Niklas January 2017 (has links)
This thesis explores the foundations of text classification, using both empirical and deductive methods, with a focus on author identification and syntactic methods. We strive for a thorough theoretical understanding of what affects the effectiveness of classification in general. To begin with, we systematically investigate the effects of some parameters on the accuracy of author identification. How is the accuracy affected by the number of candidate authors, and by the amount of data per candidate? Do methods differ in how they react to changes in these parameters? Using the same techniques, we see indications that methods previously thought to be topic-independent might not be, but that syntactic methods may be the best option for avoiding topic dependence. This means that previous studies may have overestimated the power of lexical methods. We also briefly look for ways of spotting which particular features might be the most effective for classification. Apart from author identification, we apply similar methods to identifying properties of the author, including age and gender, and attempt to estimate the number of distinct authors in a text sample. In all cases, the techniques prove viable if not overwhelmingly accurate, and we see that lexical and syntactic methods give very similar results. In the final parts, we present some results from automata theory that can be of use for syntactic analysis and classification. First, we generalise a known algorithm for finding a list of the best-ranked strings according to a weighted automaton to do the same with trees and a tree automaton. This result can be of use for speeding up parsing, which often runs in several steps, where each step needs several trees from the previous step as input. Second, we use a compressed version of deterministic finite automata, known as failure automata, and prove that finding the optimal compression is NP-complete, but that there are efficient algorithms for finding good approximations. Third, we find and prove the derivatives of regular expressions with cuts. Derivatives are an operation on expressions that calculates the remaining expression after reading a given symbol, and cuts are an extension to regular expressions found in many programming languages. Together, these findings may improve the syntactic analysis which we have seen is a valuable tool for text classification.
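As a worked illustration of derivatives, here is a minimal Python (3.10+) sketch of Brzozowski derivatives for ordinary regular expressions; the thesis's extension to expressions with cuts is not reproduced here.

```python
from dataclasses import dataclass

class Regex: pass

@dataclass(frozen=True)
class Empty(Regex): pass            # matches no string
@dataclass(frozen=True)
class Eps(Regex): pass              # matches only the empty string
@dataclass(frozen=True)
class Sym(Regex): c: str            # matches one symbol
@dataclass(frozen=True)
class Alt(Regex): l: Regex; r: Regex
@dataclass(frozen=True)
class Cat(Regex): l: Regex; r: Regex
@dataclass(frozen=True)
class Star(Regex): r: Regex

def nullable(r: Regex) -> bool:
    """Does r match the empty string?"""
    match r:
        case Eps() | Star(_): return True
        case Alt(a, b): return nullable(a) or nullable(b)
        case Cat(a, b): return nullable(a) and nullable(b)
        case _: return False

def deriv(r: Regex, x: str) -> Regex:
    """The expression matching the remainders of r's matches after reading x."""
    match r:
        case Sym(c): return Eps() if c == x else Empty()
        case Alt(a, b): return Alt(deriv(a, x), deriv(b, x))
        case Star(a): return Cat(deriv(a, x), Star(a))
        case Cat(a, b):
            d = Cat(deriv(a, x), b)
            return Alt(d, deriv(b, x)) if nullable(a) else d
        case _: return Empty()  # Empty and Eps have no remainders

def matches(r: Regex, s: str) -> bool:
    for ch in s:  # read the string symbol by symbol
        r = deriv(r, ch)
    return nullable(r)

ab_star = Star(Cat(Sym("a"), Sym("b")))  # (ab)*
print(matches(ab_star, "abab"))  # True
print(matches(ab_star, "aba"))   # False
```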
|
106 |
DETECTION OF EMERGING DISRUPTIVE FIELDS USING ABSTRACTS OF SCIENTIFIC ARTICLES Vorgianitis, Georgios January 2017 (has links)
With the significant advancements taking place in the last three decades in the field of Information Technology (IT), we are witnesses of an era unprecedented by the standards that mankind was used to for centuries. Having access to a huge amount of data almost instantly entails certain advantages. One of these is the ability to observe in which segments of their expertise scientists focus their research. That kind of knowledge, if properly appraised, could hold the key to explaining what the new directions of the applied sciences will be, and thus could help construct a "map" of future developments from the Research and Development labs of industries worldwide. Though the above statement may be considered too "futuristic", there have already been documented attempts in the literature that have been fruitful in using vast amounts of scientific data to outline future scientific trends and thus scientific discoveries. The purpose of this research is to try to use a pioneering method of modeling text corpora that has already been used to map the history of scientific discovery, Latent Dirichlet Allocation (LDA), and to evaluate its usability for detecting emerging research trends by the mere use of only the "Abstracts" from a collection of scientific articles. To do that, an experimental set is utilized and the process is repeated over three experimental runs. The results, although not the ones that would validate the hypothesis, show that with certain improvements in the processing the hypothesis could be confirmed.
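A minimal sketch of topic modeling on article abstracts with LDA along the lines described above. The abstracts, topic count, and vocabulary settings are illustrative assumptions.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

abstracts = [
    "deep neural networks for image classification",
    "convolutional networks improve object detection",
    "gene expression profiling of tumor samples",
    "sequencing reveals mutations in cancer genomes",
]

# LDA works on raw term counts, not tf-idf.
vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(abstracts)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(counts)  # per-document topic mixtures

# Top words per topic; a topic whose weight rises over time would
# signal an emerging research trend.
terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[::-1][:4]]
    print(f"topic {k}: {top}")
```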
|
107 |
Trajectory-based methods to predict user churn in online health communities Joshi, Apoorva 01 May 2018 (has links)
Online Health Communities (OHCs) have positively disrupted the modern global healthcare system, as patients and caregivers interact online with similar peers to improve their quality of life. Social support is the pillar of OHCs and, hence, analyzing the different types of social support activities contributes to a better understanding and prediction of future user engagement in OHCs.
This thesis used data from a popular OHC, called Breastcancer.org, to first classify user posts in the community into different categories of social support, using Word2Vec for language processing; six different classifiers were explored, leading to the conclusion that Random Forest was the best approach for classifying the user posts. This exercise helped identify the different types of social support activities that users participate in and detect the most common type of social support activity among users in the community.
Thereafter, three trajectory-based methods were proposed and implemented to predict user churn (attrition) from the OHC. Comparison of the proposed trajectory-based methods with two non-trajectory-based benchmark methods helped establish that user trajectories, which represent the month-to-month change in the type of social support activity of users, are effective pointers for user churn from the community.
The results and findings from this thesis could help OHC managers better understand the needs of users in the community and take necessary steps to improve user retention and community management.
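A minimal sketch of one plausible trajectory-based churn predictor: each user's recent months of social-support activity (as classified above) become a categorical sequence, and a classifier learns which sequences precede churn. The activity labels, window length, and model choice are illustrative assumptions, not the thesis's exact formulation.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.preprocessing import OneHotEncoder

ACTIVITIES = ["seeking", "providing", "companionship", "none"]

# (last 3 months of dominant activity, churned next month?)
users = [
    (["seeking", "none", "none"], 1),
    (["seeking", "providing", "providing"], 0),
    (["providing", "companionship", "providing"], 0),
    (["companionship", "none", "none"], 1),
]

X = [traj for traj, _ in users]
y = [label for _, label in users]

# One-hot encode each month's activity so month-to-month changes are
# visible to the model.
enc = OneHotEncoder(categories=[ACTIVITIES] * 3)
X_enc = enc.fit_transform(X)

clf = RandomForestClassifier(random_state=0).fit(X_enc, y)
print(clf.predict(enc.transform([["providing", "none", "none"]])))
```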
|
108 |
DECEPTIVE REVIEW IDENTIFICATION VIA REVIEWER NETWORK REPRESENTATION LEARNING Shih-Feng Yang (11502553) 19 December 2021 (has links)
With the growth of the popularity of e-commerce and mobile apps during the past decade, people rely on online reviews more than ever before for purchasing products, booking hotels, and choosing all kinds of services. Users share their opinions by posting product reviews on merchant sites or online review websites (e.g., Yelp, Amazon, TripAdvisor). Although online reviews are valuable information for people who are interested in products and services, many reviews are manipulated by spammers to provide untruthful information for business competition. Since deceptive reviews can damage the reputation of brands and mislead customers' buying behaviors, the identification of fake reviews has become an important topic for online merchants. Among the computational approaches proposed for fake review identification, network-based fake review analysis jointly considers the information from review text, reviewer behaviors, and product information. Researchers have proposed network-based methods (e.g., metapath) on heterogeneous networks, which have shown promising results.

However, we've identified two research gaps in this study: 1) We argue the previous network-based reviewer representations are not sufficient to preserve the relationships of reviewers in networks. To be specific, previous studies only considered first-order proximity, which indicates the observable connection between reviewers, but not second-order proximity, which captures the neighborhood structures where two vertices overlap. Moreover, although previous network-based fake review studies (e.g., metapath) connect reviewers through feature nodes across heterogeneous networks, they ignored the multi-view nature of reviewers. A view is derived from a single type of proximity or relationship between the nodes, which can be characterized by a set of edges; in other words, reviewers can form different networks with regard to different relationships. 2) The text embeddings of reviews in previous network-based fake review studies were not considered together with reviewer embeddings.

To tackle the first gap, we generated reviewer embeddings via MVE (Qu et al., 2017), a framework for multi-view network representation learning, and conducted spammer classification experiments to examine the effectiveness of the learned embeddings for distinguishing spammers and non-spammers. In addition, we performed unsupervised hierarchical clustering to observe the clusters of the reviewer embeddings. Our results show the clusters generated based on reviewer embeddings capture the difference between spammers and non-spammers better than those generated based on reviewers' features.

To fill the second gap, we proposed hybrid embeddings that combine review text embeddings with reviewer embeddings (i.e., the vector that represents a reviewer's characteristics, such as writing or behavioral patterns). We conducted fake review classification experiments to compare the performance between using hybrid embeddings (i.e., text+reviewer) as features and using text-only embeddings as features. Our results suggest that hybrid embedding is more effective than text-only embedding for fake review identification. Moreover, we compared the prediction performance of the hybrid embeddings with baselines and showed our approach outperformed others in fake review identification experiments.

The contributions of this study are four-fold: 1) We adopted a multi-view representation learning approach for reviewer embedding learning and analyzed the efficacy of the embeddings used for spammer classification and fake review classification. 2) We proposed a hybrid embedding that considers the characteristics of both review text and the reviewer. Our results are promising and suggest hybrid embedding is very effective for fake review identification. 3) We proposed a heuristic network construction approach that builds a user network based on user features. 4) We evaluated how different spammer thresholds impact the performance of fake review classification. Several studies have used the same datasets as we used in this study, but most of them followed the spammer definition mentioned by Jindal and Liu (2008). We argue that the spammer definition should be configurable based on different datasets. Our findings showed that by carefully choosing the spammer thresholds for the target datasets, hybrid embeddings have higher efficacy for fake review classification.
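A minimal sketch of the hybrid-embedding idea: concatenate a review's text embedding with its author's reviewer embedding and classify the pair. The embedding dimensions, data, and classifier are illustrative assumptions; the thesis learns reviewer embeddings with multi-view network representation learning (MVE), which is not reproduced here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_reviews, text_dim, reviewer_dim = 200, 64, 32

text_emb = rng.normal(size=(n_reviews, text_dim))          # stand-in for text embeddings
reviewer_emb = rng.normal(size=(n_reviews, reviewer_dim))  # stand-in for per-author MVE output
labels = rng.integers(0, 2, size=n_reviews)                # 1 = fake review

# Hybrid embedding: text features plus reviewer (writing/behavior) features.
hybrid = np.concatenate([text_emb, reviewer_emb], axis=1)

# Compare text-only vs. hybrid features (training-set accuracy, sketch only).
clf_text = LogisticRegression(max_iter=1000).fit(text_emb, labels)
clf_hybrid = LogisticRegression(max_iter=1000).fit(hybrid, labels)
print(clf_text.score(text_emb, labels), clf_hybrid.score(hybrid, labels))
```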
|
109 |
Automatic Dispatching of Issues using Machine Learning / Automatisk fördelning av ärenden genom maskininlärning Bengtsson, Fredrik, Combler, Adam January 2019 (has links)
Many software companies use issue tracking systems to organize their work. However, when working on large projects across multiple teams, the problem of finding the correct team to solve a certain issue arises. One team might detect a problem which must be solved by another team. This can take time from employees tasked with finding the correct team, so automating the dispatching of these issues can bring large benefits to the company. In this thesis, machine learning methods, mainly convolutional neural networks (CNNs) for text classification, have been applied to this problem. For natural language processing, both word- and character-level representations are commonly used. The results in this thesis suggest that the CNN learns different information depending on whether a word- or character-level representation is used. Furthermore, it was concluded that the CNN models performed at a level similar to the classical support vector machine for this task. When compared to a human expert working with dispatching issues, the best CNN model performed at a similar level when given the same information. The high throughput of a computer model therefore suggests that automation of this task is very much possible.
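A minimal Keras sketch of a word-level CNN issue classifier of the kind evaluated in the thesis; the vocabulary size, sequence length, number of teams, and layer sizes are illustrative assumptions.

```python
import tensorflow as tf

VOCAB, SEQ_LEN, N_TEAMS = 20_000, 200, 12  # all sizes are assumptions

model = tf.keras.Sequential([
    tf.keras.Input(shape=(SEQ_LEN,)),       # issues as integer token ids
    tf.keras.layers.Embedding(VOCAB, 128),
    tf.keras.layers.Conv1D(128, 5, activation="relu"),  # 5-token windows
    tf.keras.layers.GlobalMaxPooling1D(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(N_TEAMS, activation="softmax"),  # one unit per team
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# model.fit(token_ids, team_labels, ...) once issue texts are tokenized
# into integer sequences of length SEQ_LEN; a character-level variant
# would index characters instead of words.
```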
|
110 |
Optimizing Deep Neural Networks for Classification of Short Texts Pettersson, Fredrik January 2019 (has links)
This master's thesis investigates how a state-of-the-art (SOTA) deep neural network (NN) model can be created for a specific natural language processing (NLP) dataset, the effects of using different dimensionality reduction techniques on common pre-trained word embeddings, and how well such a model generalizes to a secondary dataset. The research is motivated by two factors. One is that the construction of a machine learning (ML) text classification (TC) model is typically done around a specific dataset and often requires a lot of manual intervention, so it is hard to know exactly which procedures to implement for a specific dataset and how the result will be affected. The other is that if the dimensionality of pre-trained embedding vectors can be lowered without losing accuracy, and thus saving execution time, other techniques can be used during the time saved to achieve even higher accuracy. A handful of deep neural network architectures are used, namely convolutional neural network (CNN), long short-term memory (LSTM) and bidirectional LSTM (Bi-LSTM) architectures. These architectures are combined with four different word embeddings: GoogleNews-vectors-negative300, glove.840B.300d, paragram_300_sl999 and wiki-news-300d-1M. Three main experiments are conducted in this thesis. In the first experiment, a top-performing TC model is created for a recent NLP competition held at Kaggle.com, and each implemented procedure is benchmarked by how it affects the model's accuracy and execution time. In the second experiment, principal component analysis (PCA) and random projection (RP) are applied to the pre-trained word embeddings used in the top-performing model to investigate how accuracy and execution time are affected when creating lower-dimensional embedding vectors. In the third experiment, the same model is benchmarked on a separate dataset (Sentiment140) to investigate how well it generalizes to other data and how each implemented procedure affects the accuracy compared to on the original dataset. The first experiment results in a bidirectional LSTM model and a combination of three embeddings concatenated together: glove, paragram and wiki-news. The model gives predictions with an F1 score of 71%, good enough to reach 9th place out of 1,401 participating teams in the competition. In the second experiment, using PCA improves the execution time by 13% while lowering the dimensionality of the embeddings by 66% and losing only half a percentage point of F1. RP gives a constant accuracy of 66-67% regardless of the projected dimensions, compared to over 70% when using PCA. In the third experiment, the model gains around 12% accuracy from the initial to the final benchmarks, compared to 19% on the competition dataset. The best accuracy achieved on the Sentiment140 dataset is 86%, higher than the 71% achieved on the Quora dataset.
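A minimal sketch of the second experiment's idea: reducing pre-trained 300-dimensional word vectors with PCA (a 66% reduction maps 300 to roughly 100 dimensions), with random projection as the alternative. The random matrix stands in for real GloVe/paragram vectors, which would be loaded from their published files.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.random_projection import GaussianRandomProjection

vocab_size, dim = 10_000, 300
embeddings = np.random.default_rng(0).normal(size=(vocab_size, dim))

pca = PCA(n_components=100)               # keep 100 of 300 dimensions
reduced_pca = pca.fit_transform(embeddings)
print(reduced_pca.shape)                    # (10000, 100)
print(pca.explained_variance_ratio_.sum())  # fraction of variance retained

# Random projection alternative evaluated in the thesis.
rp = GaussianRandomProjection(n_components=100, random_state=0)
reduced_rp = rp.fit_transform(embeddings)
print(reduced_rp.shape)
```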
|