281 |
Essays on Data Driven Insights from Crowd Sourcing, Social Media and Social Networks. Velichety, Srikar, January 2016.
The beginning of this decade has seen a phenomenal rise in the amount of data generated in the world. While this increase provides us with opportunities to understand various aspects of human behavior and the mechanisms behind new phenomena, the technologies, statistical techniques and theories required to gain an in-depth and comprehensive understanding have not progressed at an equal pace. As little as five years ago, we dealt with problems where there is insufficient prior social science or economic theory and the interest is only in predicting the outcome, or where there is an appropriate social science or economic theory and the interest is in explaining a given phenomenon. Today, we deal with problems where there is insufficient social science or economic theory but the interest is in explaining a given phenomenon. This creates a major challenge whose solution is of equal interest to academics and practitioners. In my research, I contribute toward addressing these challenges by building exploratory frameworks that leverage a variety of techniques, including social network analysis, text and data mining, econometrics, statistical computing and visualization. My three-essay dissertation focuses on understanding the antecedents of the quality of user-generated content and on the subscription and unsubscription behavior of users of lists on social media. Using a data science approach on population-sized samples from Wikipedia and Twitter, I demonstrate the power of customized exploratory analyses in uncovering facts that social science or economic theory does not dictate, and I show how metrics from these analyses can be used to build prediction models with higher accuracy. I also demonstrate a method for combining exploration, prediction and explanatory modeling, and propose to extend this methodology to provide causal inference. This dissertation has general implications for building better predictive and explanatory models and for mining text efficiently in social media.
|
282 |
Organização flexível de documentos / Flexible Organization of Documents. Tatiane Nogueira Rios, 25 March 2013.
Several methods have been developed to organize the growing number of textual documents. Such methods frequently use clustering algorithms to group documents on the same topic into the same cluster, assuming that documents within a cluster have similar content. However, there are situations in which documents belonging to different clusters also share characteristics. To overcome this drawback, it is necessary to develop methods that permit a soft organization of documents, i.e., methods that allow documents to be assigned to different clusters with different compatibility degrees. Fuzzy clustering of textual documents is well suited to this kind of organization, since fuzzy clustering algorithms allow the same document to be compatible with more than one cluster. Although fuzzy clustering algorithms enabling flexible document organization have been developed, such organization is usually evaluated only in terms of clustering performance. Yet clusters should have descriptors that adequately identify the topics they represent; in general, cluster descriptors have been extracted with some heuristic over a small set of documents, yielding only a superficial assessment of cluster meaning. The adequate extraction and evaluation of cluster descriptors is important because these descriptors are the terms that represent the collection and identify the topics covered by the documents. Therefore, in applications where fuzzy clustering is used for the soft organization of documents, an appropriate description of the obtained clusters is as important as a good clustering, since in this setting the same descriptor may indicate the content of more than one cluster. This need motivated this thesis, whose objective was to investigate and develop methods to extract descriptors from fuzzy clusters for the soft organization of documents. To this end, the following were developed: i) the SoftO-FDCL (Soft Organization - Fuzzy Description Comes Last) method, in which descriptors of flat fuzzy clusters are extracted after the fuzzy clustering process, identifying topics of the flexible document organization regardless of the fuzzy clustering algorithm adopted; ii) the SoftO-wFDCL (Soft Organization - weighted Fuzzy Description Comes Last) method, in which descriptors of flat fuzzy clusters are also extracted after the fuzzy clustering process, using the membership degrees of the documents in each cluster, obtained from the fuzzy clustering, as a weighting factor for candidate descriptor terms; iii) the HSoftO-FDCL (Hierarchical Soft Organization - Fuzzy Description Comes Last) method, in which descriptors of hierarchical fuzzy clusters are extracted after the hierarchical fuzzy clustering process, identifying topics of a soft hierarchical organization of documents. Additionally, this thesis presents an application of the SoftO-FDCL method to documents produced by the Canadian continuing medical education program, reinforcing the utility and applicability of the soft organization of documents in a real-world scenario.
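To make the "description comes last" idea concrete, the sketch below clusters a toy TF-IDF corpus with a plain fuzzy c-means loop and then, after clustering, scores candidate descriptors per cluster by membership-weighted TF-IDF, in the spirit of SoftO-wFDCL. The corpus, cluster count and the hand-rolled FCM implementation are illustrative assumptions, not the thesis's code.

```python
import numpy as np
from scipy.spatial.distance import cdist
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "fuzzy clustering groups documents with soft membership degrees",
    "cluster descriptors identify the topics covered by a document group",
    "search engines rank web pages for a keyword query",
    "relevance feedback lets users mark retrieved pages as relevant",
]

vec = TfidfVectorizer(stop_words="english")
X = vec.fit_transform(docs).toarray()          # documents x terms
terms = np.array(vec.get_feature_names_out())

def fuzzy_cmeans(X, c, m=2.0, iters=100, seed=0):
    """Plain fuzzy c-means; returns the (clusters x documents) membership matrix."""
    rng = np.random.default_rng(seed)
    U = rng.random((c, X.shape[0]))
    U /= U.sum(axis=0)                         # columns sum to 1
    for _ in range(iters):
        W = U ** m
        centers = (W @ X) / W.sum(axis=1, keepdims=True)
        d = cdist(centers, X) + 1e-9           # avoid division by zero
        U = d ** (-2.0 / (m - 1.0))
        U /= U.sum(axis=0)
    return U

U = fuzzy_cmeans(X, c=2)

# wFDCL-style step: extract descriptors only AFTER clustering, weighting
# each document's terms by its membership degree in the cluster.
for k in range(U.shape[0]):
    scores = U[k] @ X
    top = terms[np.argsort(scores)[::-1][:3]]
    print(f"cluster {k}: {', '.join(top)}")
```

Because the descriptor step only consumes the membership matrix, it stays independent of which fuzzy clustering algorithm produced it, which is the property the FDCL family relies on.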
|
283 |
Aprendizado de máquina parcialmente supervisionado multidescrição para realimentação de relevância em recuperação de informação na WEB / Partially Supervised Multi-View Machine Learning for Relevance Feedback in WEB Information Retrieval. Matheus Victor Brum Soares, 28 May 2009.
As the WEB is nowadays the most common source of information, it is very important to find reliable and efficient methods to retrieve that information. However, the WEB is a highly volatile and heterogeneous information source, so keyword-based querying may not be the best approach when little information is given: different users with different needs may want distinct, though related, information for the same keyword query. The process of relevance feedback makes it possible for the user to interact actively with the search engine. The main idea is that, after performing an initial WEB search, the user indicates, among the retrieved sites, a small number considered relevant or irrelevant to his or her information need. The user's preferences can then be used to rearrange the sites returned by the initial search, so that relevant sites are ranked first. Since in most cases a query returns a very large number of WEB sites, of which the user labels only a few as relevant or irrelevant, this is the ideal setting for partially supervised machine learning, a class of algorithms that requires a small number of labeled examples and a large number of unlabeled examples. Thus, based on the hypothesis that partially supervised learning is appropriate to induce a classifier that can serve as a relevance-feedback filter for WEB searches, the aim of this work is to explore partially supervised learning algorithms, specifically those that use multiple descriptions (views) of the data, to assist WEB information retrieval. To evaluate this hypothesis, a computational tool called C-SEARCH, which reorders the retrieved sites based on the user's feedback, was designed and implemented. Experimental results show that in cases where the query is generic and there is a clear distinction between relevant and irrelevant sites, recognizable by the user, the system achieves good results.
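A minimal sketch of the multi-view, partially supervised setup the abstract describes, in the style of co-training: two feature views of the same sites, a small labeled set, and unlabeled examples that each view's classifier labels for the other, starting with the one it is most confident about. The data, the choice of views and the Naive Bayes base learners are illustrative assumptions; C-SEARCH's actual algorithm may differ.

```python
import numpy as np
from sklearn.naive_bayes import MultinomialNB
from sklearn.feature_extraction.text import CountVectorizer

# Hypothetical data: each site has two "views" (e.g., title and body text).
titles = ["python tutorial", "buy cheap pills", "machine learning course",
          "win money now", "data mining lecture", "free prize click"]
bodies = ["learn programming step by step", "best online pharmacy offers",
          "supervised and unsupervised methods", "claim your reward today",
          "clustering and classification notes", "limited time lottery offer"]
y = np.array([1, 0, 1, -1, -1, -1])   # 1 relevant, 0 irrelevant, -1 unlabeled

v1, v2 = CountVectorizer(), CountVectorizer()
X1, X2 = v1.fit_transform(titles), v2.fit_transform(bodies)

labeled = y != -1
for _ in range(3):                     # a few co-training rounds
    c1 = MultinomialNB().fit(X1[labeled], y[labeled])
    c2 = MultinomialNB().fit(X2[labeled], y[labeled])
    if labeled.all():
        break
    # Each view labels the unlabeled example it is most confident about.
    for clf, X in ((c1, X1), (c2, X2)):
        unl = np.flatnonzero(~labeled)
        if len(unl) == 0:
            break
        proba = clf.predict_proba(X[unl])
        idx = proba.max(axis=1).argmax()
        y[unl[idx]] = clf.classes_[proba[idx].argmax()]
        labeled[unl[idx]] = True

print(y)   # all sites now carry a relevance label for re-ranking
```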
|
284 |
Aspectos semânticos na representação de textos para classificação automática / Semantic Aspects in the Representation of Texts for Automatic Classification. Roberta Akemi Sinoara, 24 May 2018.
Text mining applications are numerous and varied, since a huge amount of textual data is created daily. The quality of the final solution of a text mining process depends, among other factors, on the adopted text representation model. Although syntactic and semantic relations influence natural language meaning, traditional text representation models are limited to words; such models cannot differentiate documents that use the same vocabulary but present different views of the same subject. The motivation of this work lies in the diversity of text classification applications, the potential of vector space model representations, and the challenge of dealing with text semantics. With the general purpose of advancing the semantic representation of documents, we first conducted a systematic mapping study of semantics-concerned text mining research and categorized classification problems according to their semantic complexity. Then, we approached semantic aspects of texts through the proposal, development and evaluation of seven text representation models: (i) gBoED, which incorporates text semantics through domain expressions; (ii) Uni-based, which takes advantage of word sense disambiguation and hypernym relations; (iii) SR-based Terms and SR-based Sentences, which make use of semantic role labels; (iv) NASARIdocs, Babel2Vec and NASARI+Babel2Vec, which take advantage of word sense disambiguation and embeddings of words and senses. Representations generated with the proposed models and with models from the literature were analyzed and evaluated in automatic text classification on datasets of varying semantic complexity. While gBoED, Uni-based, SR-based Terms and SR-based Sentences provide more expressive attributes and allow a better interpretation of the document representation, NASARIdocs, Babel2Vec and NASARI+Babel2Vec are latently enriched by embedding semantics obtained from a large external corpus. This property has a positive impact on text classification performance.
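As a toy illustration of the latent, embedding-based representations named above, the sketch below builds a document vector by averaging word vectors, which yields a dense representation that captures similarity beyond shared vocabulary. The tiny hand-made vectors are placeholders standing in for pre-trained sense/word embeddings of the NASARI or Babel2Vec kind; they are assumptions so the example stays self-contained.

```python
import numpy as np

# Placeholder word vectors; real models would load large pre-trained
# embeddings instead of this hypothetical 3-dimensional toy space.
emb = {
    "bank":  np.array([0.9, 0.1, 0.0]),
    "river": np.array([0.8, 0.2, 0.1]),
    "loan":  np.array([0.1, 0.9, 0.3]),
    "money": np.array([0.0, 1.0, 0.2]),
}

def doc_vector(tokens, emb):
    """Average the embeddings of known tokens: a dense, latent document
    representation, unlike a sparse bag-of-words."""
    vecs = [emb[t] for t in tokens if t in emb]
    dim = len(next(iter(emb.values())))
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

d1 = doc_vector("the bank approved the loan".split(), emb)
d2 = doc_vector("money deposited at the bank".split(), emb)
cos = d1 @ d2 / (np.linalg.norm(d1) * np.linalg.norm(d2))
print(f"cosine similarity: {cos:.3f}")   # nonzero despite limited word overlap
```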
|
285 |
A Conditional Random Field (CRF) Based Machine Learning Framework for Product Review Mining. Ming, Yue, January 2019.
The task of opinion mining from product reviews has typically been addressed with rule-based approaches or generative learning models such as hidden Markov models (HMMs). This work introduces a discriminative model using linear-chain Conditional Random Fields (CRFs), which can naturally incorporate arbitrary, non-independent features of the input without requiring conditional independence among features or distributional assumptions about the inputs. The framework first performs part-of-speech (POS) tagging over each word in the sentences of the review text; performance is evaluated on three criteria: precision, recall and F-score. The results show that the approach is effective for this type of natural language processing (NLP) task. The framework then extracts the keywords associated with each product feature and summarizes them into concise lists that are simple and intuitive to read.
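A minimal sketch of a linear-chain CRF over review tokens, using exactly the kind of overlapping, non-independent context features the abstract highlights. The toy data, the BIO feature/opinion label scheme and the sklearn-crfsuite backend are assumptions for illustration, not the paper's corpus or setup.

```python
import sklearn_crfsuite  # pip install sklearn-crfsuite

# Toy review tokens with POS tags; labels mark product features and
# opinion words in a BIO scheme (illustrative, not the paper's data).
sents = [
    [("battery", "NN"), ("life", "NN"), ("is", "VBZ"), ("great", "JJ")],
    [("the", "DT"), ("screen", "NN"), ("looks", "VBZ"), ("dim", "JJ")],
]
labels = [["B-FEAT", "I-FEAT", "O", "B-OPIN"],
          ["O", "B-FEAT", "O", "B-OPIN"]]

def word_features(sent, i):
    word, pos = sent[i]
    feats = {"word.lower": word.lower(), "pos": pos,
             "is_first": i == 0, "is_last": i == len(sent) - 1}
    if i > 0:                      # overlapping, non-independent context
        feats["prev.pos"] = sent[i - 1][1]
    if i < len(sent) - 1:
        feats["next.pos"] = sent[i + 1][1]
    return feats

X = [[word_features(s, i) for i in range(len(s))] for s in sents]
crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1,
                           max_iterations=50)
crf.fit(X, labels)
print(crf.predict(X))
```

Unlike an HMM, nothing here forces the features to be independent given the label, which is why context windows and word-shape features can be thrown in freely.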
|
286 |
Ukhetho: A Text Mining Study of the South African General Elections. Moodley, Avashlin, January 2019.
The elections in South Africa are contested by multiple political parties appealing to a diverse population from a variety of socioeconomic backgrounds. As a result, a rich source of discourse is created to inform voters about election-related content. Two common sources of information that help voters with their decision are news articles and tweets; this study aims to understand the discourse in these two sources using natural language processing. Topic modelling techniques, Latent Dirichlet Allocation and Non-negative Matrix Factorization, are applied to digest the breadth of information collected about the elections into topics. The topics produced are subjected to further analysis that uncovers similarities between topics, links topics to dates and events, and provides a summary of the discourse that existed prior to the South African general elections. The primary focus is on the 2019 elections; however, election-related articles from 2014 and 2019 were also compared to understand how the discourse has changed.
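A compact sketch of the two topic-modelling techniques named above, run on a stand-in corpus: LDA over raw term counts and NMF over TF-IDF weights, each reduced to its top words per topic. The corpus, topic count and vectorizer settings are illustrative assumptions.

```python
from sklearn.decomposition import LatentDirichletAllocation, NMF
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

# Stand-in documents; the study used news articles and tweets.
docs = [
    "party leaders debate land reform before the election",
    "voters queue at polling stations across the province",
    "load shedding and energy policy dominate the campaign",
    "election observers report peaceful voting in most districts",
]

# LDA is usually fit on raw counts; NMF on TF-IDF weights.
counts = CountVectorizer(stop_words="english").fit(docs)
tfidf = TfidfVectorizer(stop_words="english").fit(docs)

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(
    counts.transform(docs))
nmf = NMF(n_components=2, random_state=0).fit(tfidf.transform(docs))

for name, model, vec in (("LDA", lda, counts), ("NMF", nmf, tfidf)):
    terms = vec.get_feature_names_out()
    for k, comp in enumerate(model.components_):
        top = [terms[i] for i in comp.argsort()[::-1][:4]]
        print(f"{name} topic {k}: {', '.join(top)}")
```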
|
287 |
Zlepšení předpovědi sociálních značek využitím Data Mining / Improved Prediction of Social Tags Using Data Mining. Harár, Pavol, January 2015.
This master's thesis uses text mining to predict tags for articles. It describes an iterative way of handling big data files: parsing the data, cleaning the data, and scoring the terms in each article using TF-IDF. It details the flow of a program written in Python 3.4.3. The result of processing more than 1 million articles from the Wikipedia database is a dictionary of English terms; using this dictionary, one can determine the most important terms of any article in a corpus. The relevance of the resulting tags validates the method used.
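A condensed sketch of the scoring step: compute TF-IDF over a corpus and take each article's top-scoring terms as candidate tags. The thesis processes Wikipedia iteratively; this one-pass scikit-learn version on a tiny stand-in corpus only illustrates the idea.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

# Miniature corpus standing in for the ~1M Wikipedia articles.
articles = [
    "the guitar is a fretted string instrument played with fingers",
    "python is a programming language with dynamic typing",
    "the piano is a keyboard instrument with strings struck by hammers",
]

vec = TfidfVectorizer(stop_words="english")
X = vec.fit_transform(articles)
terms = np.array(vec.get_feature_names_out())

for row in range(X.shape[0]):
    scores = X[row].toarray().ravel()
    tags = terms[np.argsort(scores)[::-1][:3]]   # top-scoring terms as tags
    print(f"article {row}: {', '.join(tags)}")
```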
|
288 |
Klasifikace textu pomocí metody SVM / Text Classification with the SVM Method. Synek, Radovan, January 2010.
This thesis deals with text mining. It focuses on document classification and related techniques, mainly data preprocessing. The project also introduces the SVM method, which was chosen for classification, and covers the design and testing of the implemented application.
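A minimal sketch of the pipeline the abstract outlines, with preprocessing (lowercasing, stop-word removal, TF-IDF weighting) folded into the vectorizer and a linear SVM as the classifier; the dataset and settings are illustrative, not the thesis's application.

```python
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

# Tiny illustrative training set.
train_docs = ["cheap pills buy now", "meeting agenda for monday",
              "win a free prize today", "quarterly report attached"]
train_y = ["spam", "ham", "spam", "ham"]

clf = make_pipeline(TfidfVectorizer(lowercase=True, stop_words="english"),
                    LinearSVC())
clf.fit(train_docs, train_y)
print(clf.predict(["free meeting prize"]))
```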
|
289 |
Algoritmus pro detekci pozitívního a negatívního textu / An Algorithm for the Detection of Positive and Negative Text. Musil, David, January 2016.
As information and communication technology develops swiftly, the amount of information produced by various sources grows as well. Sorting this data and extracting knowledge from it requires significant effort that cannot easily be supplied by humans, so machine processing takes its place. Detecting emotion in text data is an interesting research area that is expanding considerably and is widely applied. The purpose of this thesis is to create a system for detecting positive and negative emotion in text, along with an evaluation of its performance. The system was written in Java and supports training on large amounts of data (Big Data) by exploiting the Spark library. The thesis describes the structure and handling of the text database used as the source of input data. The classifier model was built with Support Vector Machines and optimized using the n-gram method.
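A Python analogue of the described system, keeping the character of the approach (n-gram features plus an SVM classifier) but not the Java/Spark stack; the data and parameters are illustrative assumptions.

```python
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

# Toy sentiment data standing in for the thesis's review database.
texts = ["i love this product", "terrible, waste of money",
         "works great and fast", "broke after one day"]
labels = ["positive", "negative", "positive", "negative"]

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),   # unigram and bigram features
    LinearSVC())
model.fit(texts, labels)
print(model.predict(["great value, love it"]))
```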
|
290 |
Modélisation automatique des conversations en tant que processus d'intentions de discours interdépendantes / Automatically Modeling Conversations as Processes of Interrelated Speech Intentions. Epure, Elena Viorica, 14 December 2018.
The proliferation of digital data has enabled scientific and practitioner communities to create new data-driven technologies to learn about user behaviors, in order to deliver better services and support to people in their digital experience. The majority of these technologies derive value from data logs passively generated during human-computer interaction. A particularity of these behavioral traces is that they are recorded and stored in a clearly defined structure. By contrast, text generated pro-actively across the Internet is highly unstructured and represents the overwhelming majority of digital behavioral traces. To date, despite its prevalence and the relevance of behavioral knowledge to many domains, such as recommender systems, cyber-security and social network analysis, digital text is still insufficiently exploited as a trace of human behavior from which extensive behavioral insights could be revealed automatically. The main objective of this thesis is to propose a corpus-independent method to automatically exploit asynchronous communication as pro-actively generated behavioral traces, in order to discover process models of conversations centered on comprehensive speech intentions and relations. The solution is built in three iterations, following a design science approach, and makes multiple original contributions. The only systematic study to date on the automatic modeling of asynchronous communication with speech intentions is conducted. A speech intention taxonomy is derived from linguistics to model asynchronous communication; compared with all taxonomies from related work, it is corpus-independent and comprehensive, being both finer-grained and exhaustive in the given context, and its application by non-experts is proven feasible through extensive experiments. A corpus-independent, automatic method to annotate utterances of asynchronous communication with the proposed taxonomy is designed, based on supervised machine learning; for this, validated ground-truth corpora are created and three groups of features (discourse, content and conversation-related) are engineered for the classifiers. In particular, some of the discourse features are novel, defined from linguistic means of expressing speech intentions without relying on the corpus's explicit content, its domain, or the specificities of the asynchronous communication types. Then, an automatic method based on process mining is designed to generate process models of interrelated speech intentions from conversation turns annotated with multiple speech intentions per sentence. As process mining relies on well-defined, structured event logs, an algorithm to produce such logs from conversations is proposed. Additionally, an extensive design rationale is released to support future research on how conversations annotated with multiple labels per sentence can be transformed into event logs, and on the impact of different decisions on the output behavioral models. Experiments and qualitative validations in medicine and conversation analysis show that the proposed solution yields reliable and relevant results; limitations are also identified, to be addressed in future work.
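A small sketch of the log-construction step the abstract describes: conversation turns annotated with multiple speech intentions per sentence are flattened into a structured event log, here followed by a rudimentary directly-follows count standing in for a full process-discovery algorithm (which would consume the same log, for example via pm4py). The conversations and intention labels are invented for illustration, not drawn from the thesis's taxonomy or corpora.

```python
import pandas as pd
from collections import Counter

# Turns annotated with (possibly multiple) speech intentions:
# (conversation id, turn order, intentions).
turns = [
    ("conv1", 1, ["Question"]),
    ("conv1", 2, ["Answer", "Suggestion"]),
    ("conv1", 3, ["Thanking"]),
    ("conv2", 1, ["Question"]),
    ("conv2", 2, ["Clarification-Request"]),
    ("conv2", 3, ["Answer"]),
]

# Flatten into an event log with one event per intention, ordered within
# each conversation: the structured input process mining expects.
events = [{"case_id": c, "activity": a, "order": t}
          for c, t, intents in turns for a in intents]
log = pd.DataFrame(events).sort_values(["case_id", "order"])

# Minimal process-mining step: count directly-follows relations
# between speech intentions across conversations.
df_counts = Counter()
for _, trace in log.groupby("case_id"):
    acts = trace["activity"].tolist()
    df_counts.update(zip(acts, acts[1:]))
print(df_counts)
```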
|