1

Data mining in large audio collections of dolphin signals

Kohlsdorf, Daniel 21 September 2015
The study of dolphin cognition involves intensive research of animal vocalizations recorded in the field. In this dissertation I address the automated analysis of audible dolphin communication. I propose a system called the signal imager that automatically discovers patterns in dolphin signals. These patterns are invariant to frequency shifts and time warping transformations. The discovery algorithm is based on feature learning and unsupervised time series segmentation using hidden Markov models. Researchers can inspect the patterns visually and interactively run comparative statistics between the distribution of dolphin signals in different behavioral contexts. The required statistics for the comparison describe dolphin communication as a combination of the following models: a bag-of-words model, an n-gram model and an algorithm to learn a set of regular expressions. Furthermore, the system can use the patterns to automatically tag dolphin signals with behavior annotations. My results indicate that the signal imager provides meaningful patterns to the marine biologist and that the comparative statistics are aligned with the biologists’ domain knowledge.
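
As a rough illustration of the comparative bag-of-words statistics described above, the sketch below contrasts pattern counts from two behavioral contexts with a chi-square test. The pattern labels, counts, and the use of SciPy are illustrative assumptions, not the dissertation's implementation.

```python
# Sketch: compare pattern (bag-of-words) distributions between two behavioral
# contexts. Pattern labels and counts are hypothetical placeholders.
from collections import Counter
from scipy.stats import chi2_contingency

foraging = Counter({"P1": 42, "P2": 7, "P3": 19})      # pattern counts in context A
socializing = Counter({"P1": 11, "P2": 25, "P3": 16})  # pattern counts in context B

patterns = sorted(set(foraging) | set(socializing))
table = [[foraging[p] for p in patterns],
         [socializing[p] for p in patterns]]

chi2, p_value, dof, _ = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p_value:.4f}")  # a small p suggests context-dependent pattern usage
```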
2

Microarray analysis using pattern discovery

Bainbridge, Matthew Neil 10 December 2004
Analysis of gene expression microarray data has traditionally been conducted using hierarchical clustering. However, such analysis has many known disadvantages, and pattern discovery (PD) has been proposed as an alternative technique. In this work, three similar but distinct PD algorithms (Teiresias, Splash and Genes@Work) were benchmarked for time and memory efficiency on a small yeast cell-cycle data set. Teiresias was found to be the fastest and best overall program; however, Splash was more memory efficient. This work also investigated the performance of four methods of discretizing microarray data: sign-of-the-derivative, K-means, pre-set value, and Genes@Work stratification. The first three methods were evaluated on their predisposition to group together biologically related genes. On a yeast cell-cycle data set, the sign-of-the-derivative method yielded the most biologically significant patterns, followed by the pre-set value and K-means methods. K-means, pre-set value, and Genes@Work were also compared on their ability to classify tissue samples from diffuse large B-cell lymphoma (DLBCL) into two subtypes determined by standard techniques. The Genes@Work stratification method produced the best patterns for discriminating between the two subtypes of lymphoma. However, the results from the second-best method, K-means, call into question the accuracy of the classification by the standard technique. Finally, a number of recommendations for improving pattern discovery algorithms and discretization techniques are made.
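
The sign-of-the-derivative discretization evaluated above can be sketched as follows; the tolerance eps and the toy expression profile are assumptions for illustration, not values from the thesis.

```python
# Sketch of sign-of-the-derivative discretization: an expression profile is
# reduced to up (+1), down (-1), or unchanged (0) steps between time points.
import numpy as np

def sign_of_derivative(profile, eps=0.05):
    """Discretize a time-ordered expression profile by the sign of its changes."""
    diffs = np.diff(np.asarray(profile, dtype=float))
    return np.where(diffs > eps, 1, np.where(diffs < -eps, -1, 0))

profile = [0.10, 0.55, 0.60, 0.20, 0.18]  # hypothetical cell-cycle expression values
print(sign_of_derivative(profile))        # [ 1  0 -1  0]
```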
3

Pattern Discovery in DNA Sequences

Yan, Rui 20 March 2014
A pattern is a relatively short sequence that represents a phenomenon in a set of sequences. Not all short sequences are patterns; only those that are statistically significant are referred to as patterns or motifs. Pattern discovery methods analyze sequences and attempt to identify and characterize meaningful patterns. This thesis extends the application of pattern discovery algorithms to a new problem domain: Single Nucleotide Polymorphism (SNP) classification. SNPs are single base-pair (bp) variations in the genome and are probably the most common form of genetic variation; on average, one in every thousand bp may be an SNP. The function of most SNPs, especially those not associated with protein sequence changes, remains unclear. However, genome-wide linkage analyses have associated many SNPs with disorders ranging from Crohn’s disease to cancer, as well as with quantitative traits such as height or hair color. As a result, many groups are working to predict the functional effects of individual SNPs. In contrast, very little research has examined the causes of SNPs: why do SNPs occur where they do? This thesis addresses this problem by using pattern discovery algorithms to study DNA non-coding sequences. The hypothesis is that short DNA patterns can be used to predict SNPs; for example, such patterns found in the SNP sequence might block the DNA repair mechanism for the SNP, thus causing SNP occurrence. In order to test the hypothesis, a model is developed to predict SNPs by using pattern discovery methods. The results show that SNP prediction with pattern discovery methods is weak (50 ± 2%), whereas machine learning classification algorithms can achieve prediction accuracy as high as 68%. To determine whether the poor performance of pattern discovery is due to data characteristics (such as sequence length or pattern length) or to the specific biological problem (SNP prediction), a survey was conducted by profiling eight representative pattern discovery methods at multiple parameter settings on 6,754 real biological datasets. This is the first systematic review of pattern discovery methods with assessments of prediction accuracy, CPU usage and memory consumption. It was found that current pattern discovery methods do not consider positional information and do not handle short sequences well (<150 bp), including SNP sequences. Therefore, this thesis proposes a new supervised pattern discovery classification algorithm, referred to as Weighted-Position Pattern Discovery and Classification (WPPDC). The WPPDC is able to exploit positional information to identify positionally-enriched motifs and to select motifs with a high information content for further classification. A tree structure is applied to WPPDC (yielding T-WPPDC) in order to reduce algorithmic complexity. Compared to pattern discovery methods, T-WPPDC not only showed consistently superior prediction accuracy but also generated patterns with positional information. Machine-learning classification methods (such as Random Forests) showed comparable prediction accuracy; however, unlike T-WPPDC, they are purely classification methods and are unable to generate SNP-associated patterns.
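
The positional-enrichment idea behind WPPDC can be illustrated with a small sketch that counts (position, k-mer) pairs in hypothetical SNP-flanking windows and compares them against a background set. This is only a conceptual illustration, not the WPPDC/T-WPPDC algorithm itself.

```python
# Sketch: flag (position, k-mer) pairs that occur more often around SNPs than
# in background windows. Sequences and the enrichment cutoff are hypothetical.
from collections import Counter

def positional_kmer_counts(seqs, k=3):
    counts = Counter()
    for seq in seqs:
        for pos in range(len(seq) - k + 1):
            counts[(pos, seq[pos:pos + k])] += 1
    return counts

snp_flanks = ["ACGTACGT", "ACGTTCGT", "ACGAACGT"]   # hypothetical SNP-centered windows
background = ["TTGACCAA", "ACGGGCAT", "CCGTACAA"]   # hypothetical non-SNP windows

snp_counts = positional_kmer_counts(snp_flanks)
bg_counts = positional_kmer_counts(background)
enriched = {key: c for key, c in snp_counts.items() if c - bg_counts.get(key, 0) >= 2}
print(enriched)  # (position, k-mer) pairs over-represented around SNPs
```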
4

Etude bioinformatique de l’évolution de la régulation transcriptionnelle chez les bactéries / Bioinformatic study of the evolution of transcriptional regulation in bacteria

Janky, Rekin's 17 December 2007
The purpose of my thesis is to study the evolution of regulation within bacterial genomes by using a cross-genomic comparative approach. Nowadays, numerous genomes have been sequenced, facilitating in silico analyses to detect groups of functionally related genes and to predict the mechanisms of their regulation. In this project, we combined prediction of operons and regulons in order to reconstruct the transcriptional regulatory network for a bacterial genome.
We have implemented three methods to predict operons from a bacterial genome and evaluated them on hundreds of annotated operons of Escherichia coli and Bacillus subtilis. It turns out that a simple distance-based threshold method gives good results, with about 80% accuracy (78% and 79% on E. coli and B. subtilis, respectively). The principle of this method is to classify pairs of adjacent genes as “within operon” or “transcription unit border” by using a threshold on their intergenic distance: two adjacent genes are predicted to be within an operon if their intergenic distance is smaller than 55 bp. In the second part of my thesis, I evaluated the performance of a phylogenetic footprinting approach based on the detection of over-represented spaced motifs (dyads). This method is particularly suitable for (but not restricted to) Bacteria, since such motifs are typically bound by transcription factors containing a Helix-Turn-Helix domain. We evaluated footprint discovery in 368 E. coli K12 genes with annotated sites, under 40 different combinations of parameters (taxonomical level, background model, organism-specific filtering, operon inference, significance threshold). Motifs are assessed at the level of both correctness and significance. The footprint discovery method proposed here shows excellent results with E. coli and can readily be extended to predict cis-acting regulatory signals and propose testable hypotheses in bacterial genomes for which nothing is known about regulation. Moreover, the predictive power of the strategy, and its capability to track the evolutionary divergence of cis-regulatory motifs, was illustrated with the example of LexA auto-regulation, for which our predictions are remarkably consistent with the binding sites characterized in different taxonomical groups. A further challenge was to identify groups of co-regulated genes (regulons) by regrouping genes with similar motifs, in order to address the challenging problem of the evolution of transcriptional regulatory networks. We tested different metrics to detect putative pairs of co-regulated genes. The comparison between predicted and annotated co-regulation networks shows a high positive predictive value, since a good fraction of the predicted associations correspond to annotated co-regulations, and a low sensitivity, which may be a consequence of highly connected transcription factors (global regulators). A regulon-per-regulon analysis indeed shows that the sensitivity is very weak for these global regulators but can be quite good for specific transcription factors. The originality of this global strategy is its ability to infer a potential network from the sole analysis of genome sequences, without any prior knowledge about regulation in the considered organism.
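
A minimal sketch of the distance-threshold operon predictor summarized above, using the 55 bp cutoff reported in the abstract; the gene names and coordinates are hypothetical.

```python
# Sketch: two adjacent genes on the same strand are predicted to lie within one
# operon when their intergenic distance is below 55 bp.
OPERON_DISTANCE_BP = 55

def predict_operon_pairs(genes):
    """genes: list of (name, start, end, strand) tuples sorted by start coordinate."""
    pairs = []
    for (n1, s1, e1, st1), (n2, s2, e2, st2) in zip(genes, genes[1:]):
        within = st1 == st2 and (s2 - e1) < OPERON_DISTANCE_BP
        pairs.append((n1, n2, "within operon" if within else "transcription unit border"))
    return pairs

genes = [("geneA", 100, 3172, "+"), ("geneB", 3224, 4477, "+"), ("geneC", 5200, 6100, "-")]
for pair in predict_operon_pairs(genes):
    print(pair)  # geneA-geneB: 52 bp apart, same strand -> within operon
```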
5

Event-Level Pattern Discovery for Large Mixed-Mode Database

Wu, Bin January 2010
For a large mixed-mode database, discretizing its continuous data into interval events remains a practical approach. If there are no class labels for the database, we have no helpful correlation reference for this task. In practice, a large relational database may contain various correlated attribute clusters. To handle these kinds of problems, we first have to partition the database into sub-groups of attributes containing some sort of correlated relationship. This process is known as attribute clustering, and it is an important way to reduce the search space when looking for or discovering patterns. Furthermore, once correlated attribute groups are obtained, we can find within each of them the most representative attribute, the one with the strongest interdependence with all other attributes in that cluster, and use it as a candidate class label for that group. That sets up a correlation attribute to drive the discretization of the other continuous data in each attribute cluster. This thesis provides the theoretical framework, the methodology and the computational system to achieve that goal.
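
One way to sketch the selection of a cluster's most representative attribute is to pick the attribute with the largest summed mutual information against the other attributes in the cluster, which can then serve as a surrogate class label for driving discretization. The column names, toy data, and use of scikit-learn are assumptions for illustration only.

```python
# Sketch: the attribute with the highest total interdependence (mutual
# information) with the rest of its cluster is chosen as the cluster's
# candidate class label. Data are hypothetical discrete values.
import pandas as pd
from sklearn.metrics import mutual_info_score

def representative_attribute(df):
    scores = {
        col: sum(mutual_info_score(df[col], df[other])
                 for other in df.columns if other != col)
        for col in df.columns
    }
    return max(scores, key=scores.get)

cluster = pd.DataFrame({
    "A": [0, 0, 1, 1, 2, 2],
    "B": [0, 0, 1, 1, 2, 2],   # strongly interdependent with A
    "C": [1, 0, 1, 0, 1, 0],
})
print(representative_attribute(cluster))  # "A" (tied with "B")
```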
6

Modeling Time Series Data for Supervised Learning

January 2012
Temporal data are increasingly prevalent and important in analytics. Time series (TS) data are chronological sequences of observations and an important class of temporal data. Fields such as medicine, finance, learning science and multimedia naturally generate TS data. Each series provides a high-dimensional data vector that challenges the learning of the relevant patterns. This dissertation proposes TS representations and methods for supervised TS analysis. The approaches combine new representations that handle translations and dilations of patterns with bag-of-features strategies and tree-based ensemble learning. This provides flexibility in handling time-warped patterns in a computationally efficient way. The ensemble learners provide a classification framework that can handle high-dimensional feature spaces, multiple classes and interactions between features. The proposed representations are useful for classification and interpretation of TS data of varying complexity. The first contribution handles the problem of time warping with a feature-based approach. An interval selection and local feature extraction strategy is proposed to learn a bag-of-features representation. This is distinctly different from common similarity-based time warping and allows additional features (such as pattern location) to be easily integrated into the models. The learners can account for temporal information through the recursive partitioning method. The second contribution focuses on the comprehensibility of the models. A new representation is integrated with local feature importance measures from tree-based ensembles to diagnose and interpret the time intervals that are important to the model. Multivariate time series (MTS) are especially challenging because the input consists of a collection of TS, and both features within a TS and interactions between TS can be important to models. Another contribution uses a different representation to produce computationally efficient strategies that learn a symbolic representation for MTS. Relationships between the multiple TS, nominal values and missing values are handled with tree-based learners. Applications such as speech recognition, medical diagnosis and gesture recognition are used to illustrate the methods. Experimental results show that the TS representations and methods provide better results than competitive methods on a comprehensive collection of benchmark datasets. Moreover, the proposed approaches naturally provide solutions to similarity analysis, predictive pattern discovery and feature selection. / Ph.D. dissertation, Industrial Engineering, 2012
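
A rough sketch of the interval-based bag-of-features idea combined with a tree ensemble; the interval scheme, toy sine/cosine series, and use of scikit-learn are assumptions for illustration, not the dissertation's representation.

```python
# Sketch: summarize each series over fixed intervals (mean, std, slope), then
# classify the resulting feature vectors with a random forest.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def interval_features(series, n_intervals=4):
    feats = []
    for chunk in np.array_split(np.asarray(series, dtype=float), n_intervals):
        slope = np.polyfit(np.arange(len(chunk)), chunk, 1)[0] if len(chunk) > 1 else 0.0
        feats.extend([chunk.mean(), chunk.std(), slope])
    return feats

rng = np.random.default_rng(0)
X_raw = [np.sin(np.linspace(0, 4, 64)) + rng.normal(0, 0.1, 64) for _ in range(20)] + \
        [np.cos(np.linspace(0, 4, 64)) + rng.normal(0, 0.1, 64) for _ in range(20)]
y = [0] * 20 + [1] * 20

X = np.array([interval_features(s) for s in X_raw])
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.score(X, y))  # training accuracy on the toy data
```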
7

Discovery of temporal association rules in multivariate time series

Zhao, Yi January 2017
This thesis focuses on mining association rules on multivariate time series. Common association rule mining algorithms can usually only be applied to transactional data, with market basket analysis as a typical application. If we want to mine temporal association rules on time series data, changes need to be made: the temporal ordering of the data and the temporal interval between the left-hand and right-hand patterns of a rule must be taken into account. This thesis reviews existing methods for temporal association rule mining and proposes two similar algorithms for mining frequent patterns in single and multivariate time series. Both algorithms are scalable and efficient. In addition, temporal association rules are generated from the patterns found. Finally, the usability and efficiency of the algorithms are demonstrated by evaluating the results.
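
The kind of lagged rule described above, a symbol A at time t implying a symbol B at time t+lag, can be sketched as follows; the bin edges, lag, minimum confidence, and toy series are purely illustrative assumptions.

```python
# Sketch: symbolize a series, then score rules "A at t -> B at t+lag" by
# support and confidence over all lagged symbol pairs.
import numpy as np

def symbolize(series, bins=(-0.5, 0.5)):
    return np.digitize(series, bins)  # 0 = low, 1 = mid, 2 = high

def lagged_rules(symbols, lag=1, min_conf=0.6):
    counts = {}
    for a, b in zip(symbols[:-lag], symbols[lag:]):
        key = (int(a), int(b))
        counts[key] = counts.get(key, 0) + 1
    total = len(symbols) - lag
    antecedent = {a: sum(c for (x, _), c in counts.items() if x == a) for a, _ in counts}
    return [(a, b, c / total, c / antecedent[a])   # (A, B, support, confidence)
            for (a, b), c in counts.items() if c / antecedent[a] >= min_conf]

series = np.sin(np.linspace(0, 12, 200)) + np.random.default_rng(1).normal(0, 0.05, 200)
for rule in lagged_rules(symbolize(series), lag=5):
    print(rule)
```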
