391 |
Concept Based Knowledge Discovery from Biomedical Literature. Radovanovic, Aleksandar. January 2009 (has links)
Philosophiae Doctor - PhD / This thesis introduces novel methods for knowledge discovery and presents a software system that extracts information from biomedical literature, reviews interesting connections between various biomedical concepts and, in so doing, generates new hypotheses. The experimental results obtained using the methods described in this thesis are compared to currently published results obtained by other methods, and a number of case studies are described. This thesis shows how the technology presented can be integrated with the researcher's own knowledge, experimentation and observations for optimal progression of scientific research. / South Africa
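The abstract does not name a specific discovery algorithm, but the pattern it describes (connecting biomedical concepts across the literature to generate hypotheses) is classically illustrated by Swanson-style open discovery. The sketch below is a minimal, hypothetical illustration of that idea; the concept names and co-mention data are invented, not taken from the thesis.

```python
from collections import defaultdict

# Toy corpus: each "abstract" is reduced to the biomedical concepts it mentions.
# All concept names here are illustrative only.
abstracts = [
    {"fish oil", "blood viscosity"},
    {"blood viscosity", "raynaud's disease"},
    {"fish oil", "platelet aggregation"},
    {"platelet aggregation", "raynaud's disease"},
]

# Build co-mention links between concepts appearing in the same abstract.
links = defaultdict(set)
for concepts in abstracts:
    for a in concepts:
        links[a] |= concepts - {a}

def open_discovery(source, target):
    """Return intermediate concepts B connecting source (A) to target (C)
    when A and C are never mentioned together (Swanson-style ABC pattern)."""
    if target in links[source]:
        return set()  # already directly linked, nothing to hypothesise
    return links[source] & links[target]

print(open_discovery("fish oil", "raynaud's disease"))
# e.g. {'blood viscosity', 'platelet aggregation'}
```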
|
392 |
The development of a single nucleotide polymorphism database for forensic identification of specified physical traits. Naidu, Alecia Geraldine. January 2009 (has links)
Magister Scientiae - MSc / Many Single Nucleotide Polymorphisms (SNPs) found in coding or regulatory regions within the human genome lead to phenotypic differences that make prediction of physical appearance, based on genetic analysis, potentially useful in forensic investigations. Complex traits such as pigmentation can be predicted from the genome sequence, provided that genes with strong effects on the trait exist and are known. Phenotypic traits may also be associated with variations in gene expression due to the presence of SNPs in promoter regions. In this project, genes associated with physical traits of potential forensic relevance were collated from the literature using a text mining platform and hand curation. The SNPs associated with these genes were acquired from public SNP repositories such as the International HapMap project, dbSNP and Ensembl. Characterization of different population groups based on the SNPs was performed, and the results and data were stored in a MySQL database. This database contains SNP genotyping data with respect to physical phenotypic differences of forensic interest. The potential forensic relevance of the SNP information contained in this database has been verified through in silico SNP analysis aimed at establishing possible relationships between SNP occurrence and phenotype. The software used for this analysis is MATCH™. Data management and access have been enhanced by a functional web-based front-end which enables users to extract and display SNP information without running complex Structured Query Language (SQL) statements from the command line. This Forensic SNP Phenotype resource can be accessed at http://forensic.sanbi.ac.za/alecia_forensics/Index.html / South Africa
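As a rough illustration of the kind of query such a front-end might issue on the user's behalf, here is a minimal sketch using Python's built-in sqlite3 as a stand-in for the MySQL backend. The schema, table and column names, and the frequency values are all hypothetical, not taken from the actual resource.

```python
import sqlite3

# Hypothetical, minimal schema standing in for the thesis's MySQL database.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE snp (rsid TEXT, gene TEXT, trait TEXT);
CREATE TABLE genotype_freq (rsid TEXT, population TEXT, allele TEXT, freq REAL);
""")
con.executemany("INSERT INTO snp VALUES (?, ?, ?)", [
    ("rs1042602", "TYR", "pigmentation"),
    ("rs1800407", "OCA2", "eye colour"),
])
con.executemany("INSERT INTO genotype_freq VALUES (?, ?, ?, ?)", [
    ("rs1042602", "CEU", "A", 0.38),   # frequencies invented for illustration
    ("rs1042602", "YRI", "A", 0.02),
])

# The kind of join a web front-end could run so users never write SQL themselves:
rows = con.execute("""
    SELECT s.rsid, s.gene, g.population, g.allele, g.freq
    FROM snp s JOIN genotype_freq g USING (rsid)
    WHERE s.trait = ?
""", ("pigmentation",)).fetchall()
for row in rows:
    print(row)
```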
|
393 |
Development of a hepatitis C virus knowledgebase with computational prediction of functional hypothesis of therapeutic relevance. Samuel, Kojo Kwofie. January 2011 (has links)
Philosophiae Doctor - PhD / Ameliorating Hepatitis C Virus (HCV) therapeutic and diagnostic challenges requires robust intervention strategies, including approaches that leverage the wealth of data published in the biomedical literature to gain greater understanding of HCV pathobiological mechanisms. The multitude of metadata originating from HCV clinical trials, as well as from low- and high-throughput experiments embedded in text corpora, can be mined as data sources for the implementation of HCV-specific resources. HCV-customized resources may support the generation of worthy and testable hypotheses and reveal potential research clues to augment the pursuit of efficient diagnostic biomarkers and therapeutic targets. This thesis reports the development of two freely available HCV-specific web-based resources: (i) the Dragon Exploratory System on Hepatitis C Virus (DESHCV), accessible via http://apps.sanbi.ac.za/DESHCV/ or http://cbrc.kaust.edu.sa/deshcv/, and (ii) the Hepatitis C Virus Protein Interaction Database (HCVpro), accessible via http://apps.sanbi.ac.za/hcvpro/ or http://cbrc.kaust.edu.sa/hcvpro/. DESHCV is a text mining system implemented using named concept recognition and co-occurrence based approaches to computationally analyze about 32,000 HCV-related abstracts obtained from PubMed. As part of DESHCV development, the pre-constructed dictionaries of the Dragon Exploratory System (DES) were enriched with HCV biomedical concepts, including HCV proteins, name variants and symbols, to enable HCV-specific knowledge exploration. DESHCV query inputs consist of user-defined keywords, phrases and concepts. DESHCV is therefore an information extraction tool that enables users to computationally generate associations between concepts and supports the prediction of potential hypotheses with diagnostic and therapeutic relevance. Additionally, users can retrieve lists of abstracts containing tagged concepts, which can be used to overcome the herculean task of manual biocuration. DESHCV has been used to simulate the previously reported thalidomide-chronic hepatitis C hypothesis and also to model a potentially novel thalidomide-amantadine hypothesis. HCVpro is a relational knowledgebase dedicated to housing experimentally detected HCV-HCV and HCV-human protein interaction information obtained from other databases and curated from biomedical journal articles. Additionally, the database contains consolidated biological information consisting of hepatocellular carcinoma (HCC) related genes, comprehensive reviews on HCV biology and drug development, functional genomics and molecular biology data, and cross-referenced links to canonical pathways and other essential biomedical databases. Users can retrieve enriched information, including interaction metadata, from HCVpro by using protein identifiers, gene chromosomal locations, experiment types used in detecting the interactions, PubMed IDs of journal articles reporting the interactions, annotated protein interaction IDs from external databases, and via string searches. The utility of HCVpro has been demonstrated by harnessing integrated data to suggest putative baseline clues that seem to support current diagnostic exploratory efforts directed towards vimentin. Furthermore, eight genes, comprising ACLY, AZGP1, DDX3X, FGG, H19, SIAH1, SERPING1 and THBS1, have been recommended for possible investigation to evaluate their diagnostic potential. The data archived in HCVpro can be utilized to support protein-protein interaction network-based candidate HCC gene prioritization for possible validation by experimental biologists.
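The abstract names dictionary-based concept recognition and co-occurrence analysis as DESHCV's core techniques. A minimal sketch of that combination follows; the dictionary entries and abstract texts are invented for illustration, and the real system's dictionaries and scoring are far richer.

```python
from collections import Counter
from itertools import combinations

# Illustrative mini-dictionary (name variant -> canonical concept);
# the real DESHCV dictionaries are far larger.
dictionary = {"ns5b": "NS5B", "core protein": "Core",
              "thalidomide": "thalidomide", "amantadine": "amantadine"}

abstracts = [
    "Thalidomide modulated cytokine response in chronic hepatitis C.",
    "Amantadine combined with thalidomide was tolerated in this cohort.",
    "The NS5B polymerase is a target; the core protein was also assayed.",
]

def tag(text):
    """Naive named-concept recognition: dictionary lookup over lowercased text."""
    low = text.lower()
    return {canon for variant, canon in dictionary.items() if variant in low}

# Count pairwise co-occurrence of tagged concepts within each abstract.
pairs = Counter()
for doc in abstracts:
    pairs.update(combinations(sorted(tag(doc)), 2))

for (a, b), n in pairs.most_common():
    print(f"{a} -- {b}: {n} co-mention(s)")
```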
|
394 |
DETECTION OF EMERGING DISRUPTIVE FIELDS USING ABSTRACTS OF SCIENTIFIC ARTICLES. Vorgianitis, Georgios. January 2017 (has links)
With the significant advancements taking place over the last three decades in the field of Information Technology (IT), we are witnesses to an era unprecedented by the standards mankind was used to for centuries. Having access to a huge amount of data almost instantly entails certain advantages. One of these is the ability to observe on which segments of their expertise scientists focus their research. That kind of knowledge, if properly appraised, could hold the key to explaining what the new directions of the applied sciences will be, and thus could help to construct a “map” of future developments from the Research and Development labs of industries worldwide. Though the above statement may be considered too “futuristic”, there have already been documented attempts in the literature that were fruitful in using vast amounts of scientific data to outline future scientific trends and thus scientific discoveries. The purpose of this research is to apply a pioneering method of modeling text corpora that has previously been used to map the history of scientific discovery, Latent Dirichlet Allocation (LDA), and to evaluate its usability in detecting emerging research trends using only the “Abstracts” of a collection of scientific articles. To do that, an experimental set is utilized and the process is repeated over three experimental runs. The results, although not the ones that would validate the hypothesis, show that with certain improvements in the processing the hypothesis could be confirmed.
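As a concrete illustration of the LDA approach the abstract describes, here is a minimal sketch using scikit-learn on a toy corpus of abstract-like snippets. The corpus, topic count and preprocessing are illustrative assumptions, not the thesis's actual setup.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Toy stand-in for a collection of scientific abstracts.
abstracts = [
    "quantum error correction codes for qubit decoherence",
    "topological qubits and fault tolerant quantum gates",
    "deep neural networks for image classification benchmarks",
    "convolutional networks improve image recognition accuracy",
]

# Bag-of-words counts, then LDA; n_components is illustrative.
counts = CountVectorizer(stop_words="english").fit(abstracts)
X = counts.transform(abstracts)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

terms = counts.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[-4:][::-1]]
    print(f"topic {k}: {top}")
# Tracking topic prevalence over publication years would be one way to
# flag an emerging field, as the thesis attempts with article abstracts.
```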
|
395 |
Elicitation of Protein-Protein Interactions from Biomedical Literature Using Association Rule Discovery. Samuel, Jarvie John. 08 1900 (has links)
Extracting information from a stack of data is a tedious task, and the scenario is no different in proteomics. Volumes of research papers are published about the study of various proteins in several species, their interactions with other proteins, and the identification of proteins as possible biomarkers in causing diseases. It is a challenging task for biologists to keep track of these developments manually by reading through the literature. Several tools have been developed by computational linguists to assist in the identification and extraction of proteins and protein-protein interactions from biomedical publications and protein databases, and in hypothesis generation. However, they are confronted with the challenges of term variation, term ambiguity, access only to abstracts, and inconsistencies in time-consuming manual curation of protein and protein-protein interaction repositories. This work attempts to attenuate these challenges by extracting protein-protein interactions in humans and eliciting possible interactions using association rule mining on full text, abstracts and figure captions available from publicly accessible biomedical literature databases. Two such databases are used in this study: the Directory of Open Access Journals (DOAJ) and PubMed Central (PMC). A corpus is built from articles retrieved using search terms. A dataset of more than 38,000 protein-protein interactions from the Human Protein Reference Database (HPRD) is cross-referenced to validate discovered interacting pairs. A set of possible binary protein-protein interactions of optimal size is generated for clinical or biological validation. A significant change in the number of new associations was found by altering the thresholds for the support and confidence metrics. This study narrows the gap for biologists between keeping pace with the discovery of protein-protein interactions by manually reading the literature and the need to validate each and every possible interaction.
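The abstract's observation that varying the support and confidence thresholds strongly changes the number of discovered associations is easy to illustrate. Below is a minimal, hypothetical Python sketch of support/confidence rule mining over toy protein co-mention "transactions"; the protein sets and thresholds are invented.

```python
from collections import Counter
from itertools import combinations

# Each "transaction" is the set of proteins mentioned together in one
# article passage (full text, abstract, or figure caption). Toy data.
transactions = [
    {"TP53", "MDM2"}, {"TP53", "MDM2", "EP300"},
    {"BRCA1", "BARD1"}, {"BRCA1", "BARD1", "TP53"}, {"TP53", "EP300"},
]

MIN_SUPPORT, MIN_CONFIDENCE = 0.3, 0.6  # thresholds are illustrative
n = len(transactions)

item_count = Counter(p for t in transactions for p in t)
pair_count = Counter(pair for t in transactions
                     for pair in combinations(sorted(t), 2))

# Rule A -> B is kept when support(A,B) and confidence(A -> B) clear
# the thresholds; raising or lowering them changes the rule count.
for (a, b), c in pair_count.items():
    support = c / n
    if support < MIN_SUPPORT:
        continue
    for x, y in ((a, b), (b, a)):
        confidence = c / item_count[x]
        if confidence >= MIN_CONFIDENCE:
            print(f"{x} -> {y}  support={support:.2f} confidence={confidence:.2f}")
```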
|
396 |
Avaliação de métodos não-supervisionados de seleção de atributos para mineração de textos / Evaluation of unsupervised feature selection methods for Text Mining. Bruno Magalhães Nogueira. 27 March 2009 (has links)
Feature selection is an activity sometimes necessary to obtain good results in machine learning tasks. In Text Mining, reducing the number of features in a text base is essential for the effectiveness of the process and the comprehensibility of the extracted knowledge, since it deals with high-dimensional and sparse spaces. When dealing with contexts in which the text collection is not labeled, unsupervised methods for feature reduction have to be used. However, there are no general predefined feature quality measures for unsupervised methods, demanding a greater effort for their execution. This work therefore addresses unsupervised feature selection through an exploratory study of methods of this kind, comparing their efficacy in reducing the number of features in the Text Mining process. 
Ten methods are compared - Ranking by Term Frequency, Ranking by Document Frequency, Term Frequency-Inverse Document Frequency, Term Contribution, Term Variance, Term Variance Quality, Luhn's Method, LuhnDF Method, Salton's Method and Zone-Scored Term Frequency - two of which are proposed in this work: the LuhnDF Method and Zone-Scored Term Frequency. The evaluation is carried out in two ways: supervised, through the accuracy of four classifiers (C4.5, SVM, KNN and Naïve Bayes), and unsupervised, using the Expected Mutual Information Measure. The evaluation results are submitted to the Kruskal-Wallis statistical test to determine the statistical significance of performance differences among the compared feature selection methods. Six text bases are used in the experimental evaluation, each related to one broad domain and containing subdomains, which correspond to the classes used for supervised evaluation. Through this study, this work aims to contribute to a Text Mining application that extracts topic taxonomies from unlabeled text collections by selecting the most representative features in a text collection. The evaluation results show no statistically significant difference between the compared unsupervised feature selection methods. Moreover, comparisons of these unsupervised methods with supervised ones (Gain Ratio and Information Gain) indicate that it is possible to use unsupervised methods in supervised Text Mining activities, obtaining efficiency compatible with the supervised methods, since the statistical test detected no difference in these comparisons, and at a lower computational cost.
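Three of the ten rankings the abstract lists (Term Frequency, Document Frequency and a collection-level TF-IDF) are simple enough to sketch directly. The toy corpus below is invented, and the TF-IDF variant shown is one of several in the literature, not necessarily the exact formulation used in the thesis.

```python
import math
from collections import Counter

docs = [
    "gene expression microarray analysis",
    "gene regulation and expression profiles",
    "stock market price prediction model",
]
tokenized = [d.split() for d in docs]

tf = Counter(t for doc in tokenized for t in doc)       # total term frequency
df = Counter(t for doc in tokenized for t in set(doc))  # document frequency
n = len(docs)

# Collection-level TF-IDF (TF * IDF) as one possible unsupervised score;
# terms are then ranked and the top ones kept as features.
tfidf = {t: tf[t] * math.log(n / df[t]) for t in tf}

for name, scores in [("TF", tf), ("DF", df), ("TF-IDF", tfidf)]:
    ranking = sorted(scores, key=scores.get, reverse=True)[:3]
    print(f"{name:6} top terms: {ranking}")
```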
|
397 |
Extração de informação contextual utilizando mineração de textos para sistemas de recomendação sensíveis ao contexto / Contextual information extraction using text mining for context-aware recommender systems. Camila Vaccari Sundermann. 20 March 2015 (links)
With the wide variety of products and services available on the Web, it is difficult for users to choose the option that best meets their needs. Recommender systems emerged to reduce or even eliminate this difficulty; they are used in various fields to recommend items of interest to users. Most recommender approaches focus only on users and items to make recommendations. However, in many applications it is also important to incorporate contextual information into the recommendation process. For example, a user may want to watch a movie with his girlfriend on Saturday night or with his friends during a weekday, and a video store on the Web can recommend different types of movies for this user depending on his context. Although the use of contextual information by recommender systems has received great attention in recent years, there is a lack of automatic methods to obtain such information for context-aware recommender systems. For this reason, the acquisition of contextual information is a research area that needs to be better explored. In this scenario, this work proposes a method to extract contextual information from Web page content. 
This method builds topic hierarchies of the pages' textual content considering, besides the traditional bag-of-words, valuable information from the texts such as named entities and domain terms (privileged information). The topics extracted from the hierarchies are used as contextual information in context-aware recommender systems. Using two databases, experiments were conducted to evaluate the contextual information extracted by the proposed method. Two baselines were considered: a recommender system that does not use contextual information (IBCF) and a method proposed in the literature to extract contextual information (the "methodological" baseline), adapted for this research. The results are, in general, very good, showing significant gains over the baseline without context. Regarding the "methodological" baseline, the proposed method is equivalent to or better than it.
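The thesis's own hierarchy-construction algorithm is not detailed in the abstract; the sketch below only gestures at the idea, using agglomerative clustering of plain TF-IDF vectors to derive one level of topic labels that could serve as context features. The pages, the cluster count, and the clustering choice itself are illustrative assumptions, and the privileged information (named entities, domain terms) is omitted.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import AgglomerativeClustering

# Toy Web-page texts standing in for crawled page content.
pages = [
    "romantic drama movie for a saturday evening",
    "weekend family comedy film night",
    "action thriller for a weekday evening with friends",
    "fast paced action movie marathon with friends",
]

X = TfidfVectorizer().fit_transform(pages).toarray()

# A flat cut of the agglomerative tree stands in for one level of the
# topic hierarchy; each cluster id is then usable as a context feature
# in a context-aware recommender.
labels = AgglomerativeClustering(n_clusters=2).fit_predict(X)
for page, ctx in zip(pages, labels):
    print(f"context={ctx}: {page}")
```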
|
398 |
Élaboration d'une méthode semi-automatique pour l'identification et le traitement des signaux d'émergence pour la veille internationale sur les maladies animales infectieuses / Elaboration of a Semi-Automatic Method for Identification and Analysis of Signals of Emergence of Animal Infectious Diseases at International Level. Arsevska, Elena. 31 January 2017 (has links)
Monitoring animal health worldwide, especially the early detection of outbreaks of emerging and exotic pathogens, is one means of preventing the introduction of infectious diseases into France. Recently, there has been increasing awareness among health authorities of the value of unstructured information published on the Web for epidemic intelligence purposes. In this manuscript we present a semi-automatic text mining approach that detects, collects, classifies and extracts information from unstructured textual data available in media reports on the Web. Our approach is generic; however, it was elaborated using five exotic animal infectious diseases: African swine fever, foot-and-mouth disease, bluetongue, Schmallenberg, and avian influenza. We show that text mining techniques, supplemented by the knowledge of domain experts, are the foundation of an efficient and reactive system for monitoring emerging animal health events on the Web. Our tool will be used by the French epidemic intelligence team for international monitoring of animal health and will facilitate the early detection of events related to emerging health hazards identified from media reports on the Web.
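Of the four pipeline steps named (detect, collect, classify, extract), the extraction step is the easiest to sketch. The fragment below is a deliberately naive, hypothetical illustration of pulling a disease, country and date out of a media snippet; the vocabularies and report texts are invented, and the real method is far more sophisticated.

```python
import re

# Tiny illustrative vocabularies; real systems use curated gazetteers.
DISEASES = ["african swine fever", "foot-and-mouth disease", "avian influenza"]
COUNTRIES = ["France", "Bulgaria", "China"]

reports = [
    "Authorities in Bulgaria confirmed African swine fever on 12 August 2018.",
    "A new avian influenza outbreak was reported in China last week.",
]

date_re = re.compile(r"\b\d{1,2} \w+ \d{4}\b")  # e.g. "12 August 2018"

for text in reports:
    low = text.lower()
    disease = next((d for d in DISEASES if d in low), None)
    country = next((c for c in COUNTRIES if c in text), None)
    date = date_re.search(text)
    print({"disease": disease, "country": country,
           "date": date.group() if date else None})
```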
|
399 |
Recognising Moral Foundations in Online Extremist Discourse: A Cross-Domain Classification Study. van Luenen, Anne Fleur. January 2020 (has links)
So far, studies seeking to recognise moral foundations in texts have been relatively successful (Araque et al., 2019; Lin et al., 2018; Mooijman et al., 2017; Rezapour et al., 2019). There are, however, two issues with these studies: firstly, gathering and annotating sufficient material for training is an extensive process; secondly, models are only trained and tested within the same domain. How these models for moral foundation prediction perform when tested in other domains is as yet unexplored, but from their experience with annotation, Hoover et al. (2017) describe how moral sentiments on one topic (e.g. Black Lives Matter) might be completely different from moral sentiments on another (e.g. presidential elections). This study explores to what extent such models generalise to other domains. More specifically, we focus on training on Twitter data from non-extremist sources and testing on data from an extremist (white nationalist) forum. We conducted two experiments. In the first, we test whether cross-domain classification of moral foundations is possible at all. Additionally, we compare the performance of a model using the Word2Vec embeddings of previous studies to a model using the newer BERT embeddings. We find that although performance drops significantly on the extremist out-domain test sets, out-domain classification is not impossible. Furthermore, we find that the BERT model generalises marginally better to the out-domain test set than the Word2Vec model. In the second experiment, we attempt to improve generalisation to extremist test data by providing contextual knowledge. Although this does not improve the model, it does show the model's robustness against noise. Finally, we suggest an alternative approach for accounting for contextual knowledge.
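A bare-bones version of the in-domain vs. out-domain evaluation can be sketched with a shallow classifier; the study's Word2Vec and BERT models are replaced here by a TF-IDF pipeline purely for self-containedness, and all texts and labels are invented.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy stand-ins: train on one domain (Twitter-like), test on another
# (forum-like). Labels are two moral foundations; texts are invented.
train_texts = ["we must protect the innocent", "this policy harms children",
               "that ruling was unjust", "everyone deserves equal treatment"]
train_labels = ["care", "care", "fairness", "fairness"]

out_domain_texts = ["they cheat honest people out of what is owed",
                    "defend the vulnerable from cruelty"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

# Out-domain accuracy is what drops in the study; here we just inspect predictions.
for text, pred in zip(out_domain_texts, model.predict(out_domain_texts)):
    print(f"{pred:8} <- {text}")
```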
|
400 |
Dolovanie znalostí z textových dát použitím metód umelej inteligencie / Text Mining Based on Artificial Intelligence Methods. Povoda, Lukáš. January 2018 (has links)
This work deals with the problem of text mining, which is becoming more popular due to the exponential growth of data in electronic form. The work explores contemporary methods and their improvement using optimization techniques, as well as the problem of text data understanding in general. It addresses the problem in three ways: using traditional methods and their optimizations; using Big Data in the training phase and abstraction through minimization of language-dependent parts; and introducing a new method based on deep learning, which is closer to how humans read and understand text data. The main aim of the dissertation was to propose a method for machine understanding of unstructured text data. The method was experimentally verified by classification of text data in 5 different languages - Czech, English, German, Spanish and Chinese - demonstrating possible application across different language families. Validation on the Yelp review database achieved accuracy 0.5% higher than current methods.
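The abstract does not specify how language dependence was minimised; one common device with that property is character n-gram features, which need no tokeniser, stemmer or stop list. The sketch below uses them with a shallow classifier as a hypothetical stand-in for the thesis's deep-learning model; the sentences and labels are invented.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Character n-grams work across languages without language-specific
# preprocessing; this shallow pipeline only illustrates that property.
texts = ["skvělý produkt, doporučuji",       # Czech, positive
         "hrozná kvalita, nekupujte",         # Czech, negative
         "great product, highly recommend",   # English, positive
         "terrible quality, do not buy"]      # English, negative
labels = ["pos", "neg", "pos", "neg"]

model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(),
)
model.fit(texts, labels)
print(model.predict(["doporučuji tento produkt", "do not buy this"]))
```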
|