About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Sequential Data Mining and its Applications to Pharmacovigilance

Qin, Xiao 02 April 2019 (has links)
With the phenomenal growth of digital devices, coupled with their ever-increasing capabilities for data generation and storage, sequential data is becoming ubiquitous across a wide spectrum of application scenarios. Sequential data takes various forms, such as temporal databases, time series, and text (word sequences), where the first is synchronous over time and the latter two are often generated asynchronously. To derive valuable insights, it is critical to learn and understand the behavioral dynamics as well as the causality relationships across sequences. Pharmacovigilance is defined as the science and activities relating to the detection, assessment, understanding, and prevention of adverse drug reactions (ADRs) or other drug-related problems. In the post-marketing phase, the effectiveness and safety of drugs are monitored by regulatory agencies, a process known as post-marketing surveillance. A Spontaneous Reporting System (SRS), e.g., the U.S. Food and Drug Administration Adverse Event Reporting System (FAERS), collects drug safety complaints over time, providing key evidence to support regulatory actions against the reported products. With the rapid growth of reporting volume and velocity, data mining techniques promise to be effective in helping drug safety reviewers perform supervision tasks in a timely fashion. My dissertation studies the problem of exploring, analyzing, and modeling various types of sequential data within a typical SRS.
Temporal Correlations Discovery and Exploration. An SRS can be seen as a temporal database where each transaction encodes the co-occurrence of some reported drugs and observed ADRs in a time frame. Temporal association rule learning (TARL) has proven to be a prime candidate for deriving associations among objects in such a temporal database. However, TARL is parameterized and computationally expensive, making it difficult to use for discovering interesting associations among drugs and ADRs in a timely fashion. Worse yet, existing interestingness measures fail to capture the significance of certain types of associations in the context of pharmacovigilance, e.g., drug-drug interaction (DDI) related ADRs. To discover DDI-related ADRs using TARL, we propose an interestingness measure that aligns with the DDI semantics. We also propose an interactive temporal association analytics framework that supports real-time temporal association derivation and exploration.
Anomaly Detection in Time Series. Abnormal reports may reveal meaningful ADR cases that are overlooked by frequency-based data mining approaches such as association rule learning, where patterns are derived from frequently occurring events. In addition, the sense of abnormality or rareness may vary with context. For example, an ADR that normally occurs in the adult population may rarely happen in the youth population, but with life-threatening outcomes. The local outlier factor (LOF) is a suitable approach for capturing such local abnormal phenomena. However, existing LOF algorithms and their variations fail to cope with high-velocity data streams due to their high algorithmic complexity. We propose new local outlier semantics that leverage kernel density estimation (KDE) to effectively detect local outliers in streaming data. We also design a strategy, called KELOS, to continuously detect the top-N KDE-based local outliers over streams; it is the first streaming local outlier detection approach with linear time complexity.
Text Modeling. Language modeling (LM) is a fundamental problem in many natural language processing (NLP) tasks. LM is the development of probabilistic models that predict the next word in a sequence given the words that precede it. Recently, LM has been advanced by the success of recurrent neural networks (RNNs), which overcome the Markov assumption made in traditional statistical language models. In theory, RNNs such as Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) networks can "remember" arbitrarily long spans of history if provided with enough capacity. In practice, however, they do not perform well on very long sequences, as the gradient computation for RNNs becomes increasingly ill-behaved as the expected dependency grows longer. One way of tackling this problem is to feed succinct information that encodes the semantic structure of the entire document, such as latent topics, as context to guide the modeling process. Clinical narratives that describe complex medical events are often accompanied by meta-information such as a patient's demographics, diagnoses, and medications. This structured information implicitly relates to the logical and semantic structure of the entire narrative, and thus affects vocabulary choices in the narrative's composition. To leverage this meta-information, we propose a supervised topic-compositional neural language model, called MeTRNN, that integrates the strength of supervised topic modeling in capturing global semantics with the capacity of RNNs to model local word dependencies.
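The KDE-based local outlier idea above can be illustrated with a toy sketch. This is not the KELOS algorithm itself: the function names, the Gaussian kernel choice, the fixed bandwidth, and the density-ratio score are all illustrative assumptions.

```python
import math

def gaussian_kde_density(x, data, bandwidth=1.0):
    """Estimate the density at x with a Gaussian kernel over all data points."""
    s = sum(math.exp(-((x - d) ** 2) / (2 * bandwidth ** 2)) for d in data)
    return s / (len(data) * bandwidth * math.sqrt(2 * math.pi))

def kde_outlier_scores(data, bandwidth=1.0):
    """Score each point as (mean density) / (own density):
    scores well above 1 mark points lying in locally sparse regions."""
    dens = [gaussian_kde_density(x, data, bandwidth) for x in data]
    mean_dens = sum(dens) / len(dens)
    return [mean_dens / d for d in dens]

data = [1.0, 1.1, 0.9, 1.05, 8.0]          # 8.0 is the obvious outlier
scores = kde_outlier_scores(data)
print(max(range(len(data)), key=lambda i: scores[i]))  # 4, the outlier's index
```

A streaming variant such as KELOS would maintain these densities incrementally over a sliding window rather than recomputing them from scratch, which is where the linear-time claim comes from.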
2

Hybrid Algorithms of Finding Features for Clustering Sequential Data

Chang, Hsi-mei 08 July 2010 (has links)
Proteins are the structural components of living cells and tissues, and thus an important building block in all living organisms. Patterns in protein sequences are subsequences that appear frequently. Patterns often denote important functional regions in proteins and can be used to characterize a protein family or discover the function of proteins. Moreover, they provide valuable information about the evolution of species. Grouping protein sequences that share similar structure helps in identifying sequences with similar functionality. Many algorithms have been proposed for clustering proteins according to their similarity, i.e., according to sequential patterns in protein databases; for example, feature-based clustering algorithms of the global approach and the local approach. These algorithms use sequential pattern mining to solve the no-gap-limit sequential pattern problem in a protein sequence database, and then find global features and local features separately for clustering. Feature-based clustering algorithms are an entirely different approach to protein clustering: they do not require an all-against-all analysis, and they use a near-linear complexity K-means based clustering algorithm. Although feature-based clustering algorithms are scalable and lead to reasonably good clusters, they spend time performing the global approach and the local approach separately. Therefore, in this thesis, we propose hybrid algorithms to find and mark features for feature-based clustering algorithms. We observe an interesting result concerning the relation between the local features and the closed frequent sequential patterns: some features in the closed frequent sequential patterns can be split into several of the locally selected features, and the total support of these local features equals the support of the corresponding feature in the closed frequent sequential patterns.
There are two phases, find-feature and mark-feature, in both the global approach and the local approach after mining sequential patterns. In our first hybrid algorithm, Method 1 (LocalG), we first find and mark the local features. Then, we find the global features. Finally, we mark the bit vectors of the global features efficiently from the bit vectors of the local features. In our second hybrid algorithm, Method 2 (CLoseLG), we first find the closed frequent sequential patterns directly. Next, we find local candidate features efficiently from the closed frequent sequential patterns and then mark the local features. Finally, we find and mark the global features. Our performance study, based on biological and synthetic data, shows that the proposed hybrid algorithms are more efficient than the feature-based algorithm.
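The feature-based clustering idea the thesis builds on can be sketched roughly: map each sequence to a bit vector recording which mined patterns it contains, then cluster the vectors. The containment test, the toy patterns, and the single nearest-centroid step below are illustrative assumptions, not the thesis's LocalG/CLoseLG algorithms.

```python
def contains(seq, pattern):
    """True if pattern occurs in seq as an ordered (possibly gapped) subsequence."""
    it = iter(seq)
    return all(ch in it for ch in pattern)

def feature_vectors(seqs, patterns):
    """Mark each sequence with one 0/1 bit per pattern, in the spirit of the
    find-feature and mark-feature phases."""
    return [[1 if contains(s, p) else 0 for p in patterns] for s in seqs]

def assign(vectors, centroids):
    """One nearest-centroid assignment step (squared Euclidean), as in K-means."""
    def d2(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v))
    return [min(range(len(centroids)), key=lambda c: d2(v, centroids[c]))
            for v in vectors]

seqs = ["ACDEF", "ACDFG", "KLMNP", "KLMQR"]   # two toy "protein families"
patterns = ["ACD", "KLM"]                     # stand-ins for mined patterns
vecs = feature_vectors(seqs, patterns)
labels = assign(vecs, [vecs[0], vecs[2]])
print(labels)  # [0, 0, 1, 1]: the two families separate
```

The bit vectors here play the role of the thesis's marked feature vectors; a full implementation would iterate the centroid update until convergence.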
3

On Modeling Dependency Dynamics of Sequential Data: Methods and Applications

Ji, Taoran 04 February 2022 (has links)
Information mining and knowledge learning from sequential data is an area of growing importance in both industry and academia. Sequential data, the natural representation format of the information flow in many applications, usually carries enormous amounts of information and can help researchers gain insights for tasks such as airport threat detection, cyber-attack detection, recommender systems, point-of-interest (POI) prediction, and citation forecasting. This dissertation focuses on developing methods for sequential data-driven applications and for characterizing evolutionary dynamics across topics such as transit service disruption detection, early event detection on social media, technology opportunity discovery, and traffic incident impact analysis. In particular, four specific applications are studied with four proposed novel methods: a spatiotemporal feature learning framework for transit service disruption detection, a multi-task learning framework for cybersecurity event detection, citation dynamics modeling via multi-context attentional recurrent neural networks, and traffic incident impact forecasting via hierarchical spatiotemporal graph neural networks. For the first of these methods, existing transit service disruption detection approaches usually suffer from two significant shortcomings: 1) failing to modulate the sparsity of the social media feature domain, i.e., only a few important "particles" among the massive volume of data generated every day are actually related to service disruptions, and 2) ignoring the real-world geographical connections of transit networks as well as the semantic consistency existing in the problem space.
This work makes three contributions: 1) developing a spatiotemporal learning framework for metro disruption detection using open-source data, 2) modeling semantic similarity and spatial connectivity among metro lines in feature space, and 3) developing an optimization algorithm for efficiently solving the multi-convex, non-smooth objective function. For the second of these methods, conventional studies in cybersecurity detection suffer from the following shortcomings: 1) inability to capture weak signals generated by cyber-attacks on small organizations or individual accounts, 2) lack of generalization across distinct types of security incidents, and 3) failure to consider the relatedness of different types of cyber-attacks in the feature domain. Three contributions are made in this work: 1) formulating social media-based cyber-attack detection as a multi-task learning problem, 2) modeling multi-type task relatedness in feature space, and 3) developing an efficient algorithm to solve the non-smooth model with inequality constraints. For the third of these methods, conventional citation forecasting methods use the traditional temporal point process, which suffers from several drawbacks: 1) inability to predict the technological categories of citing documents, making technological diversity assessment impossible, and 2) reliance on prior domain knowledge, making them hard to extend to different research areas. Two contributions are made in this work: 1) formulating a novel framework that provides long-term citation predictions in an end-to-end fashion by integrating the process of learning intensity function representations with the process of predicting future citations, and 2) designing two novel temporal attention mechanisms that improve the model's ability to modulate complicated temporal dependencies and allow the model to dynamically combine the observation and prediction sides during the learning process.
For the fourth of these methods, the previous work treats the traffic sensor readings as the features and views the incident duration prediction as a feature-driven regression, which typically suffers from three drawbacks: 1) ignoring the existence of the road-sensor hierarchical structure in the real-world traffic network, 2) unable to learn and modulate the hidden temporal patterns in the sensor readings, and 3) lack of consideration of the spatial connectivity between arterial roads and traffic sensors. This work makes three significant contributions: 1) designing a hierarchical graph convolutional network architecture for modeling the road-sensor hierarchy, 2) proposing novel spatiotemporal attention mechanism on the sensor- and road-level features for representation learning, and 3) presenting a graph convolutional network-based method for incident representation learning via spatial connectivity modeling and traffic characteristics modulation. / Doctor of Philosophy
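The citation-forecasting work above contrasts its neural models with traditional temporal point processes. As background, a minimal Hawkes (self-exciting) intensity, the classic point-process baseline, can be sketched as follows; the parameter values are arbitrary, and this is not the dissertation's model.

```python
import math

def hawkes_intensity(t, events, mu=0.2, alpha=0.5, beta=1.0):
    """Conditional intensity of a Hawkes process: a constant base rate mu plus
    an exponentially decaying boost alpha*exp(-beta*(t - s)) for each past
    event at time s (each earlier citation excites future ones)."""
    return mu + alpha * sum(math.exp(-beta * (t - s)) for s in events if s < t)

print(hawkes_intensity(2.0, [0.5, 1.5]))  # base rate plus two decaying boosts
```

Neural intensity models of the kind the dissertation proposes replace this hand-picked exponential kernel with a learned function of the event history.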
4

Signatures: detecting and characterizing complex recurrent behavior in sequential data

Gautrais, Clément 16 October 2018 (has links)
This thesis introduces a new type of pattern called the signature. A signature segments a sequence of itemsets so as to maximize the size of the set of items that appears in every segment. The signature was initially introduced to identify a supermarket customer's favorite products from their purchase history. The originality of the signature lies in the fact that it identifies recurrent items that 1) can appear at different temporal scales, 2) can have irregular occurrences, and 3) can be quickly understood by analysts. Since existing pattern mining approaches do not have these three properties, we introduced the signature. Comparing the signature with state-of-the-art methods, we showed that the signature is able to identify new regularities in the data, while also identifying the regularities detected by existing methods. Although initially tied to the field of pattern mining, we also linked the signature mining problem to the field of sequence segmentation. We then defined several algorithms, using methods related to both pattern mining and sequence segmentation. Signatures were used to analyze a large dataset from a French supermarket. A qualitative analysis of the signatures computed on these real customers showed that signatures are able to identify a customer's favorite products. Signatures were also able to detect and characterize customer churn. This thesis also defines two extensions of the signature. The first extension is called the sky-signature. The sky-signature presents the recurrent items of a sequence at different time scales, and can be seen as an efficient way to summarize the signatures computed at every possible time scale. Sky-signatures were used to analyze the campaign speeches of the candidates in the 2016 U.S. presidential election. The sky-signatures identified each candidate's main campaign themes, as well as their campaign rhythm. This analysis also showed that signatures can be used on other types of datasets. This thesis also introduces a second extension of the signature, which computes the signature that best fits the data. This extension uses a model selection technique based on the minimum description length principle, commonly used in pattern mining. This extension was also used to analyze supermarket customers.
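The signature definition (segment a sequence of itemsets to maximize the set of items present in every segment) admits a tiny brute-force sketch. The thesis's actual algorithms are far more efficient; this exponential enumeration over cut points is for illustration only, and the basket data is invented.

```python
from itertools import combinations

def signature(seq, k):
    """Brute-force signature: split seq (a list of itemsets) into k consecutive
    segments so that the set of items occurring in every segment is as large
    as possible. Exponential in len(seq); for illustration only."""
    n = len(seq)
    best = set()
    for cuts in combinations(range(1, n), k - 1):
        bounds = [0, *cuts, n]
        common = set.intersection(
            *(set().union(*seq[a:b]) for a, b in zip(bounds, bounds[1:])))
        if len(common) > len(best):
            best = common
    return best

baskets = [{"milk", "bread"}, {"milk"}, {"beer", "milk"}, {"milk", "eggs"}]
print(signature(baskets, 2))  # {'milk'}: the one item bought in every segment
```

The recovered item set plays the role of the customer's "favorite products"; varying k corresponds to looking at different temporal scales, which is what the sky-signature extension summarizes.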
5

Mining heterogeneous multidimensional sequential data: An application to the analysis of patient healthcare trajectories

Egho, Elias 02 July 2014 (has links)
All domains of science and technology produce large volumes of heterogeneous data, and mining such data remains a challenge. Little previous work targets the mining of heterogeneous multidimensional sequential data. This thesis proposes a contribution to knowledge discovery in heterogeneous sequential data. We study three different research directions: (i) extraction of sequential patterns, (ii) classification, and (iii) clustering of sequential data. First, we generalize the notion of a multidimensional sequence by considering complex and heterogeneous sequential structure. We present a new approach, called MMISP, to extract sequential patterns from heterogeneous sequential data. MMISP generates a large number of sequential patterns, as is usually the case for pattern enumeration algorithms. To overcome this problem, we propose a novel way of considering heterogeneous multidimensional sequences by mapping them into pattern structures, and we develop a framework for enumerating only the patterns satisfying given constraints. The second research direction concerns the classification of heterogeneous multidimensional sequences. We use Formal Concept Analysis (FCA) as a classification method, and show how concept lattices and the stability index can be used to classify sequences into a concept lattice and to select interesting groups of sequences. The third research direction concerns the clustering of heterogeneous multidimensional sequential data. We focus on the notion of common subsequences to define a similarity measure between a pair of sequences composed of a list of itemsets. We use this similarity measure to build a similarity matrix between sequences and to separate them into different groups. We present theoretical results and an efficient dynamic programming algorithm that counts the number of common subsequences between two sequences without enumerating all subsequences. The system resulting from this research was applied to analyze and mine patient healthcare trajectories in oncology. The data come from a medico-administrative database including all information about the hospitalizations of patients in the Lorraine region of France. The system makes it possible to identify and characterize episodes of care for specific sets of patients. The results were discussed and validated with domain experts.
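The dynamic programming idea mentioned above, counting common subsequences without enumerating them, can be sketched for the simplified case of plain item sequences (the thesis handles sequences of itemsets; the recurrence below is the classic pairwise-matching one, shown as an assumed simplification).

```python
def count_common_subsequences(s, t):
    """Number of matching pairs (subsequence of s, subsequence of t),
    including the empty pair, via the classic O(|s|*|t|) recurrence:
    f[i][j] = f[i-1][j] + f[i][j-1] - f[i-1][j-1] (+ f[i-1][j-1] on a match)."""
    n, m = len(s), len(t)
    f = [[1] * (m + 1) for _ in range(n + 1)]  # empty prefixes match once
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            f[i][j] = f[i - 1][j] + f[i][j - 1] - f[i - 1][j - 1]
            if s[i - 1] == t[j - 1]:
                f[i][j] += f[i - 1][j - 1]
    return f[n][m]

print(count_common_subsequences("ab", "ab"))  # 4: "", "a", "b", "ab"
```

A similarity measure can then normalize this count, which is the role it plays in building the thesis's similarity matrix between patient trajectories.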
6

Learning from Structured Data: Scalability, Stability and Temporal Awareness

Pavlovski, Martin, 0000-0003-1495-2128 January 2021 (has links)
A plethora of high-impact applications involve predictive modeling of structured data. In various domains, from hospital readmission prediction in the medical realm, through weather forecasting and event detection in power systems, up to conversion prediction in online businesses, the data holds a certain underlying structure. Building predictive models from such data calls for leveraging the structure as an additional source of information. Thus, a broad range of structure-aware approaches have been introduced, yet certain common challenges in many structured learning scenarios remain unresolved. This dissertation revolves around addressing the challenges of scalability, algorithmic stability and temporal awareness in several scenarios of learning from either graphically or sequentially structured data. Initially, the first two challenges are discussed from a structured regression standpoint. The studies addressing these challenges aim at designing scalable and algorithmically stable models for structured data, without compromising their prediction performance. It is further inspected whether such models can be applied to both static and dynamic (time-varying) graph data. To that end, a structured ensemble model is proposed to scale with the size of temporal graphs, while making stable and reliable yet accurate predictions on a real-world application involving gene expression prediction. In the case of static graphs, a theoretical insight is provided on the relationship between algorithmic stability and generalization in a structured regression setting. A stability-based objective function is designed to indirectly control the stability of a collaborative ensemble regressor, yielding generalization performance improvements on structured regression applications as diverse as predicting housing prices based on real-estate transactions and readmission prediction from hospital records. 
Modeling data that holds a sequential rather than a graphical structure requires addressing temporal awareness as one of the major challenges. In that regard, a model is proposed to generate time-aware representations of user activity sequences, intended to be seamlessly applicable across different user-related tasks, while sidestepping the burden of task-driven feature engineering. The quality and effectiveness of the time-aware user representations led to predictive performance improvements over state-of-the-art models on multiple large-scale conversion prediction tasks. Sequential data is also analyzed from the perspective of a high-impact application in the realm of power systems. Namely, detecting and further classifying disturbance events, as an important aspect of risk mitigation in power systems, is typically centered on the challenges of capturing structural characteristics in sequential synchrophasor recordings. Therefore, a thorough comparative analysis was conducted by assessing various traditional as well as more sophisticated event classification models under different domain-expert-assisted labeling scenarios. The experimental findings provide evidence that hierarchical convolutional neural networks (HCNNs), capable of automatically learning time-invariant feature transformations that preserve the structural characteristics of the synchrophasor signals, consistently outperform traditional model variants. Their performance is observed to further improve as more data are inspected by a domain expert, while smaller fractions of solely expert-inspected signals are already sufficient for HCNNs to achieve satisfactory event classification accuracy. Finally, insights into the impact of the domain expertise on the downstream classification performance are also discussed. / Computer and Information Science
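Algorithmic stability, one of the dissertation's central notions, can be illustrated conceptually: measure how much a learner's prediction moves when a single training point is removed. The mean predictor and the leave-one-out estimate below are a deliberately trivial stand-in, not the dissertation's collaborative ensemble.

```python
def loo_stability(train, predictor):
    """Leave-one-out stability estimate: the largest change in the learned
    prediction caused by deleting one training point. Smaller is more stable."""
    full = predictor(train)
    return max(abs(full - predictor(train[:i] + train[i + 1:]))
               for i in range(len(train)))

mean = lambda xs: sum(xs) / len(xs)
print(loo_stability([1.0, 2.0, 3.0, 10.0], mean))  # 2.0: the extreme point moves it most
```

Stability bounds of this flavor are what connect an algorithm's sensitivity to its generalization performance, which is the relationship the dissertation's stability-based objective exploits.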
7

Assisted Annotation of Sequential Image Data With CNN and Pixel Tracking

Chan, Jenny January 2021 (has links)
In this master thesis, different neural networks were investigated for annotating objects in video streams with partially annotated data as input. Annotation here refers to bounding boxes around the targeted objects. Two different methods were used, ROLO and GOTURN: object detection with tracking, and object tracking with pixels, respectively. The data set used for validation consists of surveillance footage of varying image resolution, image size and sequence length. Modifications of the original models were made to fit the test data. Promising results were shown for the modified GOTURN, where the partially annotated data was used to assist tracking. The model is robust and provides sufficiently accurate object detections for practical use. With the new model, human resources for image annotation can be reduced by at least half.
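A standard way to score how well a tracker's predicted boxes match the human annotations (an evaluation detail not spelled out in the abstract) is intersection-over-union. A minimal sketch, assuming boxes in (x1, y1, x2, y2) corner format:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes in (x1, y1, x2, y2) corner format."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# A tracker's predicted box versus a human-annotated ground-truth box:
predicted = (10, 10, 50, 50)
annotated = (20, 20, 60, 60)
score = iou(predicted, annotated)   # 900 / 2300, roughly 0.39
```

Frames whose IoU against the partial annotations falls below a threshold are the natural candidates to send back to a human annotator, which is how assisted annotation halves the manual workload.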
8

Order in the random forest

Karlsson, Isak January 2017 (has links)
In many domains, repeated measurements are systematically collected to capture the characteristics of objects or situations that evolve over time or other logical orderings. Although the classification of such data series shares many similarities with traditional multidimensional classification, inducing accurate machine learning models with traditional algorithms is typically infeasible, since the order of the values must be taken into account. In this thesis, the challenges of inducing predictive models from data series using a class of algorithms known as random forests are studied, with the aim of efficiently and effectively classifying (i) univariate, (ii) multivariate and (iii) heterogeneous data series, either directly in their sequential form or indirectly after transformation to sparse, high-dimensional representations. Methods are developed to address the challenges of (a) handling sparse and high-dimensional data, (b) data series classification and (c) early time series classification using random forests. The proposed algorithms are empirically evaluated in large-scale experiments and practically evaluated in the context of detecting adverse drug events. In the first part of the thesis, it is demonstrated that minor modifications to the random forest algorithm and the use of a random projection technique can improve the effectiveness of random forests on discrete data series projected to sparse and high-dimensional representations. In the second part, an algorithm for inducing random forests directly from univariate, multivariate and heterogeneous data series using phase-independent patterns is introduced and shown to be highly effective in terms of both computational and predictive performance. Leveraging the notion of phase-independent patterns, the random forest is then extended to allow early classification of time series and is shown to perform favorably compared to alternatives. 
The conclusions of the thesis not only reaffirm the empirical effectiveness of random forests for traditional multidimensional data but also indicate that the random forest framework can be successfully extended to sequential data representations.
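The phase-independent patterns mentioned above resemble shapelet-style features: a candidate pattern is scored by its smallest distance to any window of a series, so a match counts regardless of where in the series it occurs. A heavily simplified sketch of such a distance feature (illustrative only, not the dissertation's algorithm):

```python
import math

def pattern_distance(series, pattern):
    """Smallest Euclidean distance between a short pattern and any
    equal-length window of the series; phase-independent because the
    pattern may match at any position."""
    m = len(pattern)
    return min(
        math.sqrt(sum((series[i + j] - pattern[j]) ** 2 for j in range(m)))
        for i in range(len(series) - m + 1)
    )

# The same bump occurs at different phases in two series; its distance
# feature is zero for both, so a tree node splitting on this feature
# groups the two series together.
bump = [0.0, 1.0, 0.0]
s1 = [0.0, 1.0, 0.0, 0.0, 0.0, 0.0]
s2 = [0.0, 0.0, 0.0, 0.0, 1.0, 0.0]
d1, d2 = pattern_distance(s1, bump), pattern_distance(s2, bump)
```

In a random forest over such features, each tree node would threshold the distance to a randomly sampled pattern, which is what makes the induction both computationally cheap and phase-independent.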
9

Extraction of synthetic information from sequential data: application to river quality assessment

Fabregue, Mickael 26 November 2014 (has links)
Exploring temporal databases with suitable data mining methods has been the subject of several studies. However, it often leads to an excessive volume of extracted information, making analysis difficult for the user. We addressed this issue, focusing specifically on methods that synthesize and filter the extracted information. The objective is to provide results that are interpretable by humans. To this end, we relied on the notion of partially ordered sequences and proposed (1) an algorithm that extracts the set of closed partially ordered patterns; (2) a post-processing step to filter patterns of interest to the user; and (3) an approach that extracts a partially ordered consensus as an alternative to pattern extraction. The proposed methods were validated on hydrobiological data from the Fresqueau ANR project and implemented in a visualization tool designed for hydrobiologists to analyze watercourse quality.
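To make the pattern-mining vocabulary concrete, here is a heavily simplified sketch of support counting for totally ordered sequential patterns (the thesis works with the more general partially ordered case, and the taxa names below are hypothetical examples, not data from the Fresqueau project):

```python
def is_subsequence(pattern, sequence):
    """True if the pattern's items occur in the sequence in the same
    relative order, not necessarily contiguously."""
    it = iter(sequence)
    return all(item in it for item in pattern)   # each membership test advances the iterator

def support(pattern, database):
    """Fraction of sequences in the database that contain the pattern."""
    return sum(is_subsequence(pattern, seq) for seq in database) / len(database)

# Hypothetical sequences of taxa observed across sampling campaigns:
database = [
    ["mayfly", "caddisfly", "midge"],
    ["mayfly", "midge"],
    ["caddisfly", "midge"],
]
s = support(["mayfly", "midge"], database)   # present in 2 of 3 sequences
```

Pattern mining enumerates all patterns whose support exceeds a threshold; the "closed" restriction used in the thesis then keeps only patterns that cannot be extended without losing support, which is one way the extracted set is kept interpretable.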
10

Optimal adjustment of atmospheric forcing parameters for long-term simulations of the global ocean circulation

Meinvielle, Marion 17 January 2012 (has links)
Sea surface temperature (SST) is more accurately observed from space than near-surface atmospheric variables and air-sea fluxes. Yet ocean general circulation models used for operational forecasting or for simulations of the recent ocean variability rely, as surface boundary conditions, on bulk formulae that do not directly involve the observed SST. In short, models do not explicitly use one of the best-observed ocean surface variables in their forcing, except when it is assimilated to correct the model state. This classical approach, however, introduces an inconsistency between the "assimilated" and "forced" solutions of the model. The objective of this research is to develop, in a realistic context, a new assimilation scheme based on statistical methods that uses SST satellite observations to constrain, within observation-based air-sea flux uncertainties, the surface forcing function (surface atmospheric input variables) of ocean circulation simulations. The idea is to estimate a set of corrections for the atmospheric input data from the ERA-Interim reanalysis covering the period 1989 to 2007. We use a sequential method based on the SEEK filter, with ensemble experiments to evaluate parameter uncertainties. The control vector is extended to correct the forcing parameters (air temperature, air humidity, downward longwave and shortwave radiation, precipitation, wind velocity). 
Over experiments of one month duration, we assimilate observed monthly SST products (Hurrel, 2008) and a seasonal sea surface salinity climatology (Levitus, 1994) to obtain monthly parameter corrections that can be used in a free-running model. This study shows that it is thus possible to produce, in a realistic case, on a global scale and over a long period, an optimal flux correction set that improves the forcing function of an ocean model using sea surface observations.
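The augmented-state idea, appending forcing parameters to the control vector so that observation updates correct them alongside the model state, can be illustrated with a scalar toy analogue. The sketch below uses a plain Kalman filter (not the SEEK filter or the thesis configuration): the state T is augmented with an unknown constant forcing bias b, and observations of T alone are enough to recover b. The noise values q and r are arbitrary illustration choices.

```python
def kalman_step(x, P, z, q=1e-6, r=0.01):
    """One predict-update step for the augmented state x = [T, b]:
    dynamics T_{k+1} = T_k + b (b is the forcing parameter to estimate),
    observation z = T + noise. P is the 2x2 error covariance."""
    # Predict with F = [[1, 1], [0, 1]]: the bias is carried forward unchanged.
    xp = [x[0] + x[1], x[1]]
    Pp = [[P[0][0] + P[0][1] + P[1][0] + P[1][1] + q, P[0][1] + P[1][1]],
          [P[1][0] + P[1][1], P[1][1] + q]]
    # Update with H = [1, 0]: only T is observed, but the T-b covariance
    # in Pp lets the innovation correct the bias estimate too.
    s = Pp[0][0] + r
    k = [Pp[0][0] / s, Pp[1][0] / s]                  # Kalman gain
    innov = z - xp[0]
    x_new = [xp[0] + k[0] * innov, xp[1] + k[1] * innov]
    P_new = [[(1 - k[0]) * Pp[0][0], (1 - k[0]) * Pp[0][1]],
             [Pp[1][0] - k[1] * Pp[0][0], Pp[1][1] - k[1] * Pp[0][1]]]
    return x_new, P_new

# Synthetic run: true forcing bias 0.5, noiseless observations for clarity.
true_bias, temp = 0.5, 10.0
x, P = [10.0, 0.0], [[1.0, 0.0], [0.0, 1.0]]
for _ in range(50):
    temp += true_bias
    x, P = kalman_step(x, P, temp)
estimated_bias = x[1]   # converges toward 0.5
```

The thesis applies the same principle at scale: the "state" is the ocean model, the appended parameters are the ERA-Interim forcing variables, and the observations are SST and SSS products instead of a scalar temperature.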
