51

Platforms for Real-time Moving Object Location Stream Processing

Gadhoumi, Shérazade January 2017 (has links)
Border security is usually based on observing and analyzing the movement of Moving Point Objects (MPOs): vehicles, boats, pedestrians or aircraft, for example. This movement analysis can be made directly by an operator observing the MPOs in real time, but the process is time-consuming and approximate. This is why the states of each MPO (ID, location, speed, direction) are sensed in real time using Global Navigation Satellite System (GNSS), Automatic Identification System (AIS) and radar sensing, thus creating a stream of MPO states. This research work proposes and carries out (1) a method for detecting four different moving point patterns based on this input stream and (2) a comparison between three possible implementations of the moving point pattern detectors based on three different Data Stream Management Systems (DSMS). Moving point patterns can be divided into two groups: (1) individual location patterns are based on the analysis of the successive states of one MPO; (2) set-based relative motion patterns are based on the analysis of the relative motion of groups of MPOs within a set. This research focuses on detecting four moving point patterns: (1) the geofence pattern consists of one MPO entering or exiting one of the predefined areas called geofences; (2) the track pattern consists of one MPO following the same direction for a given number of time steps and satisfying a given spatial constraint; (3) the flock pattern consists of a group of geographically close MPOs following the same direction; (4) the leadership pattern consists of a track pattern whose MPO anticipates the direction of geographically close MPOs at the last time step. The first two patterns are individual location patterns, while the others are set-based relative motion patterns. This research work proposes a method for detecting geofence patterns based on the update of a table storing the last sensed state of each MPO. The approach used for detecting track, flock and leadership patterns is based on the update of a REMO matrix (RElative MOtion matrix) whose rows correspond to MPOs, whose columns correspond to time steps, and whose cells record the direction of movement. For the detection of flock patterns, a simple but effective probabilistic grid-based approach is proposed to detect clusters among the MPOs following the same direction: (1) the Filtering phase partitions the study area into square-shaped cells (sized according to the dimension of the spatial constraint) and selects spatially contiguous grid cells, called candidate areas, that potentially contain flock patterns; (2) for each candidate area, the Refinement phase generates disks of the size of the spatial constraint within the selected area until one disk contains enough MPOs, in which case the corresponding MPOs are considered to form a flock pattern. The pattern detectors are implemented on three DSMSs with different characteristics: Esri ArcGIS GeoEvent Extension for Server (GeoEvent Ext.), a workflow-based technology that ingests each MPO state separately; Apache Spark Streaming (Spark), a MapReduce-based technology that processes the input stream in batches in a highly parallel processing framework; and Apache Flink (Flink), a hybrid technology that ingests the states separately but offers several MapReduce semantics. GeoEvent Ext. only lends itself to a native implementation of the geofence detector, while the other DSMSs accommodate the implementation of all detectors.
Therefore, the geofence, track, flock and leadership pattern detectors are implemented on Spark and Flink, and empirically evaluated in terms of scalability in time and space under variation of the parameters characterizing the patterns and/or the input stream. The results of the empirical evaluation show that the implementation on Flink globally uses fewer computer resources than the one on Spark. Moreover, the program based on Flink is less sensitive to the variability of the parameters describing either the input stream or the patterns to be detected.
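To make the table-based geofence detector concrete, here is a minimal single-process sketch; it is not taken from the thesis, and none of the three DSMS implementations is shown. Geofences are simplified to axis-aligned rectangles, and all names (GEOFENCES, on_state) are illustrative.

```python
# Hypothetical sketch of the table-based geofence detector: keep the last
# sensed state of each MPO in a table and emit ENTER/EXIT events whenever
# the set of containing geofences changes between consecutive states.
# Geofences are simplified to axis-aligned rectangles (xmin, ymin, xmax, ymax).

GEOFENCES = {
    "harbor": (0.0, 0.0, 10.0, 10.0),
    "airfield": (20.0, 20.0, 30.0, 40.0),
}

def inside(fence, x, y):
    xmin, ymin, xmax, ymax = fence
    return xmin <= x <= xmax and ymin <= y <= ymax

last_state = {}  # MPO id -> (x, y) last sensed position

def on_state(mpo_id, x, y):
    """Process one MPO state from the stream; return geofence events."""
    events = []
    prev = last_state.get(mpo_id)
    for name, fence in GEOFENCES.items():
        was_in = prev is not None and inside(fence, *prev)
        is_in = inside(fence, x, y)
        if is_in and not was_in:
            events.append(("ENTER", mpo_id, name))
        elif was_in and not is_in:
            events.append(("EXIT", mpo_id, name))
    last_state[mpo_id] = (x, y)  # update the state table
    return events

print(on_state("boat-1", 5.0, 5.0))   # [('ENTER', 'boat-1', 'harbor')]
print(on_state("boat-1", 15.0, 5.0))  # [('EXIT', 'boat-1', 'harbor')]
```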
52

Efficient Algorithms for Mining Data Streams

Boedihardjo, Arnold Priguna 06 September 2010 (has links)
Data streams are ordered sets of values that are fast, continuous, mutable, and potentially unbounded. Examples of data streams include the pervasive time series, which span domains such as finance, medicine, and transportation. Mining data streams requires approaches that are efficient, adaptive, and scalable. For several stream mining tasks, knowledge of the data's probability density function (PDF) is essential to deriving usable results. Providing an accurate model for the PDF benefits a variety of stream mining applications, and its successful development can have far-reaching impact on the general discipline of stream analysis. Therefore, this research focuses on the construction of efficient and effective approaches for estimating the PDF of data streams. In this work, kernel density estimators (KDEs) are developed that satisfy the stringent computational stipulations of data streams, model unknown and dynamic distributions, and enhance the estimation quality of complex structures. Contributions of this work include: (1) theoretical development of the local region based KDE; (2) construction of a local region based estimation algorithm; (3) design of a generalized local region approach that can be applied to any global bandwidth KDE to enhance estimation accuracy; and (4) application extension of the local region based KDE to multi-scale outlier detection. Theoretical development includes the formulation of the local region concept to effectively approximate the computationally intensive adaptive KDE. This work also analyzes key theoretical properties of the local region based approach, which include (among others) its expected performance, an alternative local region construction criterion, and its robustness under evolving distributions. Algorithmic design includes the development of a specific estimation technique that reduces the time/space complexities of the adaptive KDE. In order to accelerate mining tasks such as outlier detection, an integrated set of optimizations is proposed for estimating multiple density queries. Additionally, the local region concept is extended to an efficient algorithmic framework which can be applied to any global bandwidth KDE. The combined solution can significantly improve estimation accuracy while retaining overall linear time/space costs. As an application extension, an outlier detection framework is designed which can effectively detect outliers within multiple data scale representations. / Ph. D.
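As a point of reference for what the local region based KDE accelerates, the following is a plain global-bandwidth Gaussian KDE over a sliding window of stream values; it is a baseline sketch only and reproduces none of the thesis's local-region algorithms. The window size and bandwidth are arbitrary illustrative choices.

```python
import math
from collections import deque

# Baseline sketch: global-bandwidth Gaussian KDE over a sliding window of a
# one-dimensional stream. The thesis's local region based KDE refines exactly
# this kind of estimator; its specific algorithms are not shown here.

window = deque(maxlen=1000)  # recent stream values

def gaussian_kernel(u):
    return math.exp(-0.5 * u * u) / math.sqrt(2.0 * math.pi)

def pdf_estimate(x, bandwidth=0.5):
    """Estimate the stream's PDF at x from the current window."""
    if not window:
        return 0.0
    s = sum(gaussian_kernel((x - xi) / bandwidth) for xi in window)
    return s / (len(window) * bandwidth)

for value in [1.0, 1.2, 0.9, 5.0, 1.1]:  # toy stream
    window.append(value)

print(round(pdf_estimate(1.0), 4))  # higher density near the cluster at ~1.0
print(round(pdf_estimate(5.0), 4))  # lower density near the lone value 5.0
```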
53

Classificação de fluxos de dados com mudança de conceito e latência de verificação / Data stream classification with concept drift and verification latency

Reis, Denis Moreira dos 27 September 2016 (has links)
Apesar do grau relativamente alto de maturidade existente na área de pesquisa de aprendizado supervisionado em lote, na qual são utilizados dados originários de problemas estacionários, muitas aplicações reais lidam com fluxos de dados cujas distribuições de probabilidade se alteram com o tempo, ocasionando mudanças de conceito. Diversas pesquisas vêm sendo realizadas nos últimos anos com o objetivo de criar modelos precisos mesmo na presença de mudanças de conceito. A maioria delas, no entanto, assume que tão logo um evento seja classificado pelo algoritmo de aprendizado, seu rótulo verdadeiro se torna conhecido. Este trabalho explora as situações complementares, com revisão dos trabalhos mais importantes publicados e análise do impacto de atraso na disponibilidade dos rótulos verdadeiros ou sua não disponibilização. Ainda, propõe um novo algoritmo que reduz drasticamente a complexidade de aplicação do teste de hipótese não-paramétrico Kolmogorov-Smirnov, tornando eficiente seu uso em algoritmos que analisem fluxos de dados. A exemplo, mostramos sua potencial aplicação em um método de detecção de mudança de conceito não-supervisionado que, em conjunto com técnicas de Aprendizado Ativo e Aprendizado por Transferência, reduz a necessidade de rótulos verdadeiros para manter boa performance de um classificador ao longo do tempo, mesmo com a ocorrência de mudanças de conceito. / Despite the relative maturity of batch-mode supervised learning research, in which the data typifies stationary problems, many real-world applications deal with data streams whose statistical distribution changes over time, causing what is known as concept drift. A large body of research has been produced in recent years with the objective of creating new models that are accurate even in the presence of concept drifts. However, most of it assumes that, once the classification algorithm labels an event, its actual label becomes readily available. This work explores the complementary situations, with a review of the most important published works and an analysis of the impact of delayed true labeling, including no true label availability at all. Furthermore, this work proposes a new algorithm that heavily reduces the complexity of applying the Kolmogorov-Smirnov non-parametric hypothesis test, turning it into a useful tool for analysis of data streams. As an instance of its usefulness, we present an unsupervised drift-detection method that, along with Active Learning and Transfer Learning approaches, decreases the number of true labels required to keep good classification performance over time, even in the presence of concept drifts.
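The windowed usage pattern that the proposed fast Kolmogorov-Smirnov algorithm makes cheap can be sketched with SciPy's batch implementation (an assumed dependency); this is not the thesis's incremental algorithm, and the window size and significance level below are illustrative.

```python
from collections import deque
from scipy.stats import ks_2samp

# Sketch of an unsupervised drift detector built on the two-sample
# Kolmogorov-Smirnov test. The thesis contributes a fast *incremental* KS
# algorithm; here we simply call SciPy's batch implementation on a fixed
# reference window versus a sliding recent window, which is the usage
# pattern the incremental algorithm makes efficient.

WINDOW = 200
reference = deque(maxlen=WINDOW)  # sample of the current concept
recent = deque(maxlen=WINDOW)     # most recent stream values

def observe(x, alpha=0.01):
    """Feed one value; return True if a concept drift is signalled."""
    if len(reference) < WINDOW:
        reference.append(x)       # still collecting the reference sample
        return False
    recent.append(x)
    if len(recent) < WINDOW:
        return False
    stat, p_value = ks_2samp(list(reference), list(recent))
    if p_value < alpha:           # distributions differ: signal drift
        reference.clear()
        reference.extend(recent)  # adopt the new concept as reference
        recent.clear()
        return True
    return False
```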
54

Classification et apprentissage actif à partir d'un flux de données évolutif en présence d'étiquetage incertain / Classification and active learning from evolving data streams in the presence of uncertain labeling

Bouguelia, Mohamed-Rafik 25 March 2015 (has links)
Cette thèse traite de l’apprentissage automatique pour la classification de données. Afin de réduire le coût de l’étiquetage, l’apprentissage actif permet de formuler des requêtes pour demander à un opérateur d’étiqueter seulement quelques données choisies selon un critère d’importance. Nous proposons une nouvelle mesure d’incertitude qui permet de caractériser l’importance des données et qui améliore les performances de l’apprentissage actif par rapport aux mesures existantes. Cette mesure détermine le plus petit poids nécessaire à associer à une nouvelle donnée pour que le classifieur change sa prédiction concernant cette donnée. Nous intégrons ensuite le fait que les données à traiter arrivent en continu dans un flux de longueur infinie. Nous proposons alors un seuil d’incertitude adaptatif qui convient pour un apprentissage actif à partir d’un flux de données et qui réalise un compromis entre le nombre d’erreurs de classification et le nombre d’étiquettes de classes demandées. Les méthodes existantes d’apprentissage actif à partir de flux de données sont initialisées avec quelques données étiquetées qui couvrent toutes les classes possibles. Cependant, dans de nombreuses applications, la nature évolutive du flux fait que de nouvelles classes peuvent apparaître à tout moment. Nous proposons une méthode efficace de détection active de nouvelles classes dans un flux de données multi-classes. Cette méthode détermine de façon incrémentale une zone couverte par les classes connues, et détecte les données qui sont extérieures à cette zone et proches entre elles, comme étant de nouvelles classes. Enfin, il est souvent difficile d’obtenir un étiquetage totalement fiable car l’opérateur humain est sujet à des erreurs d’étiquetage qui réduisent les performances du classifieur appris. Cette problématique a été résolue par l’introduction d’une mesure qui reflète le degré de désaccord entre la classe donnée manuellement et la classe prédite et une nouvelle mesure d’"informativité" permettant d’exprimer la nécessité pour une donnée mal étiquetée d’être réétiquetée par un opérateur alternatif. / This thesis focuses on machine learning for data classification. To reduce the labelling cost, active learning allows querying a human labeller for the class labels of only some important instances. We propose a new uncertainty measure that characterizes the importance of data and improves the performance of active learning compared to the existing uncertainty measures. This measure determines the smallest instance weight to associate with new data so that the classifier changes its prediction concerning this data. We then consider a setting where the data arrives continuously from an infinite-length stream. We propose an adaptive uncertainty threshold that is suitable for active learning in the streaming setting and achieves a compromise between the number of classification errors and the number of required labels. The existing stream-based active learning methods are initialized with some labelled instances that cover all possible classes. However, in many applications, the evolving nature of the stream implies that new classes can appear at any time. We propose an effective method for the active detection of novel classes in a multi-class data stream. This method incrementally maintains a feature-space area covered by the known classes, and detects as novel classes those instances that are external to that area and close to one another. Finally, it is often difficult to obtain completely reliable labelling because the human labeller is subject to labelling errors that reduce the performance of the learned classifier. This problem is addressed by introducing a measure that reflects the degree of disagreement between the manually given class and the predicted class, and a new informativeness measure that expresses the necessity for a mislabelled instance to be re-labelled by an alternative labeller.
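A minimal sketch of stream-based active learning with an adaptive uncertainty threshold, in the spirit of the approach above; the thesis's specific uncertainty measure (the smallest instance weight that flips the prediction) is not reproduced, and plain margin uncertainty stands in for it. All constants are illustrative.

```python
# Sketch: adaptive-threshold active learning on a stream. When an instance's
# uncertainty exceeds the threshold we query the human labeller and tighten
# the threshold; otherwise we relax it, balancing errors against label cost.

threshold = 0.5   # query when uncertainty exceeds this (illustrative)
step = 0.01       # adaptation rate (illustrative)
budget_used = 0

def margin_uncertainty(probs):
    """1 - (top1 - top2): high when the two best classes are close."""
    top = sorted(probs, reverse=True)
    return 1.0 - (top[0] - top[1])

def process(probs):
    """Decide whether to ask the labeller for this instance's true class."""
    global threshold, budget_used
    u = margin_uncertainty(probs)
    if u >= threshold:
        budget_used += 1
        threshold += step   # just queried: become more selective
        return "QUERY_LABEL"
    threshold -= step       # no query: slowly relax the threshold
    return "SKIP"

# Toy classifier outputs for five stream instances:
for probs in [(0.9, 0.1), (0.55, 0.45), (0.8, 0.2), (0.51, 0.49), (0.7, 0.3)]:
    print(process(probs), round(threshold, 2))
```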
55

Classificação de fluxos de dados não estacionários com algoritmos incrementais baseados no modelo de misturas gaussianas / Non-stationary data streams classification with incremental algorithms based on Gaussian mixture models

Oliveira, Luan Soares 18 August 2015 (has links)
Aprender conceitos provenientes de fluxos de dados é uma tarefa significativamente diferente do aprendizado tradicional em lote. No aprendizado em lote, existe uma premissa implícita de que os conceitos a serem aprendidos são estáticos e não evoluem significativamente com o tempo. Por outro lado, em fluxos de dados os conceitos a serem aprendidos podem evoluir ao longo do tempo. Esta evolução é chamada de mudança de conceito, e torna a criação de um conjunto fixo de treinamento inaplicável neste cenário. O aprendizado incremental é uma abordagem promissora para trabalhar com fluxos de dados. Contudo, na presença de mudanças de conceito, conceitos desatualizados podem causar erros na classificação de eventos. Apesar de alguns métodos incrementais baseados no modelo de misturas gaussianas terem sido propostos na literatura, nota-se que tais algoritmos não possuem uma política explícita de descarte de conceitos obsoletos. Neste trabalho, um novo algoritmo incremental para fluxos de dados com mudanças de conceito baseado no modelo de misturas gaussianas é proposto. O método proposto é comparado com vários algoritmos amplamente utilizados na literatura, e os resultados mostram que o algoritmo proposto é competitivo com os demais em vários cenários, superando-os em alguns casos. / Learning concepts from data streams differs significantly from traditional batch learning. In batch learning there is an implicit assumption that the concept to be learned is static and does not evolve significantly over time. On the other hand, in data stream learning the concepts to be learned may evolve over time. This evolution is called concept drift, and it makes a fixed training set no longer applicable. The incremental learning paradigm is a promising approach for learning in a data stream setting. However, in the presence of concept drift, outdated concepts can cause misclassifications. Several incremental Gaussian mixture model methods have been proposed in the literature, but these algorithms lack an explicit policy to discard outdated concepts. In this work, a new incremental algorithm for data streams with concept drift, based on Gaussian mixture models, is proposed. The proposed method is compared to various algorithms widely used in the literature, and the results show that it is competitive with them in various scenarios, outperforming them in some cases.
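For illustration, here is a one-dimensional sketch of incremental Gaussian mixture maintenance with an explicit discard policy for outdated components; the thesis's actual update and discard rules are not reproduced, and the thresholds are hypothetical.

```python
import math

# Hypothetical sketch: incrementally maintain Gaussian components over a
# stream and discard components that stop receiving data, echoing the
# "explicit discard policy" motivation above. One-dimensional, one mixture.

class Component:
    def __init__(self, x, t):
        self.mean, self.var, self.n, self.last_seen = x, 1.0, 1, t

    def update(self, x, t):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n                           # running mean
        self.var += (delta * (x - self.mean) - self.var) / self.n
        self.var = max(self.var, 1e-6)
        self.last_seen = t

components = []
MAX_AGE = 500        # discard components unseen for this many instances
NOVELTY_DIST = 3.0   # standard deviations before spawning a new component

def learn(x, t):
    # Find the nearest component in (1-D) Mahalanobis distance.
    best, best_d = None, float("inf")
    for c in components:
        d = abs(x - c.mean) / math.sqrt(c.var)
        if d < best_d:
            best, best_d = c, d
    if best is None or best_d > NOVELTY_DIST:
        components.append(Component(x, t))     # new concept region
    else:
        best.update(x, t)
    # Discard policy: drop components that stopped receiving data.
    components[:] = [c for c in components if t - c.last_seen <= MAX_AGE]

for t, x in enumerate([1.0, 1.1, 0.9, 8.0, 8.2]):
    learn(x, t)
print([(round(c.mean, 2), c.n) for c in components])  # two components emerge
```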
56

Projeções multidimensionais para a análise de fluxos de dados / Multidimensional projections for data stream analysis

Neves, Tácito Trindade de Araújo Tiburtino 17 November 2016 (has links)
As técnicas de projeção multidimensional tornaram-se uma ferramenta de análise importante. Elas buscam mapear dados de um espaço multidimensional para um espaço visual, de menor dimensão, preservando as estruturas de distância ou de vizinhança no mapa visual produzido. Apesar dos recentes avanços, as técnicas existentes ainda apresentam deficiências que prejudicam a sua utilização como ferramentas exploratórias em certos domínios. Um exemplo está nos cenários streaming, nos quais os dados são produzidos e/ou coletados de forma contínua. Como a maioria das técnicas de projeção necessitam percorrer os dados mais de uma vez para produzir um layout final, e fluxos normalmente não podem ser carregados por completo em memória principal, a aplicação direta ou mesmo a adaptação das técnicas existentes em tais cenários é inviável. Nessa tese de doutorado é apresentado um novo modelo de projeção, chamado de Xtreaming, no qual as instâncias de dados são visitadas apenas uma vez durante o processo de projeção. Esse modelo é capaz de se adaptar a mudanças nos dados conforme eles são recebidos, atualizando o mapa visual para refletir as novas estruturas que surgem ao longo do tempo. Os resultados dos testes mostram que o Xtreaming é muito competitivo em termos de preservação de distâncias e tempo de execução se comparado com técnicas do estado-da-arte. Também é apresentada uma nova técnica de projeção multidimensional, chamada de User-assisted Projection Technique for Distance Information (UPDis), que foi projetada para permitir a intervenção do usuário exigindo apenas informações de distância entre as instâncias, e que é utilizada como parte do Xtreaming. Os resultados também mostram que a UPDis é tão rápida, precisa e flexível quanto as técnicas do estado-da-arte. / Multidimensional Projection techniques have become an important analytics tool. They map data from a multidimensional space into a visual space preserving the distance or neighborhood structures on the produced layout. Despite the recent advances, existing techniques still present drawbacks that impair their use as exploratory tools in certain domains. An example is the streaming scenario, in which data are captured or produced continuously. Since most projection techniques need to traverse the data more than once to produce a final layout, and streaming data typically cannot be completely loaded into the main memory, the direct use or even adaptation of the existing techniques in such scenarios is infeasible. In this dissertation, we present a novel projection model, called Xtreaming, wherein the data instances are visited only once during the projection process. This model is able to adapt itself to the changes in data as data is received, updating the visual layout to reflect the new structures that emerge over time. The tests show that Xtreaming is very competitive regarding distance preservation and running time when compared with state-of-the-art projection techniques. We also present a new multidimensional projection technique, called User-assisted Projection Technique for Distance Information (UPDis), that was designed to allow user intervention requiring only distance information between data instances. UPDis is used as part of the Xtreaming model. The results show that UPDis is as fast, accurate and flexible as state-of-the-art techniques.
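To illustrate the single-visit constraint Xtreaming operates under, here is a toy one-pass scheme that places each arriving instance in 2-D so that distances to a few previously projected neighbours are roughly preserved; it is emphatically not the Xtreaming or UPDis algorithm, and every constant is an arbitrary choice.

```python
import math, random

# Toy one-pass projection sketch: each arriving multidimensional instance is
# positioned in 2-D by a few stress-reducing gradient steps against its
# nearest already-projected neighbours, then never revisited.

random.seed(1)
originals, projected = [], []   # high-D points and their 2-D positions

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def project(p, k=5, steps=50, lr=0.3):
    if not originals:
        pos = [0.0, 0.0]
    else:
        neigh = sorted(range(len(originals)),
                       key=lambda i: dist(p, originals[i]))[:k]
        # Start near the centroid of the neighbours' positions (jittered so
        # the gradient steps are never stuck exactly on top of a neighbour)...
        pos = [sum(projected[i][d] for i in neigh) / len(neigh)
               + random.uniform(0.01, 0.05) for d in (0, 1)]
        # ...then nudge the position to match original-space distances.
        for _ in range(steps):
            for i in neigh:
                target = dist(p, originals[i])
                cur = dist(pos, projected[i]) or 1e-9
                g = lr * (cur - target) / cur
                pos[0] -= g * (pos[0] - projected[i][0])
                pos[1] -= g * (pos[1] - projected[i][1])
    originals.append(p)
    projected.append(pos)
    return pos

for point in [[0, 0, 0], [0, 0, 1], [5, 5, 5], [5, 5, 6]]:
    print([round(c, 2) for c in project(point)])  # two distant 2-D pairs
```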
57

D-CAPE: A Self-Tuning Continuous Query Plan Distribution Architecture

Sutherland, Timothy Michael 05 May 2004 (has links)
The study of systems for querying data streams, coined Data Stream Management Systems (DSMS), has gained in popularity over the last several years. This new area of research for the database community includes studies in areas such as Sensor Networks and Network Intrusion, and monitoring data such as Medicine, Stock, or Weather feeds. With this new popularity come increased performance expectations, with increased data sizes and speeds, larger and more complex query plans, as well as high volumes of possibly small queries. Due to the finite resources on a single query processor, future Data Stream Management Systems must distribute their workload to multiple query processors, working together in a synchronized manner. This thesis discusses a new Distributed Continuous Query System (D-CAPE) developed here at WPI that has the ability to distribute query plans over a large cluster of machines. We describe the architecture of the new system, policies for query plan distribution to improve overall performance, as well as techniques for self-tuning query plan re-distribution. D-CAPE is designed to be as flexible as possible for future research. We include a multi-tiered architecture that scales to a large number of query processors. D-CAPE has also been designed to minimize the cost of the communications network by bundling synchronization messages, thus minimizing packets sent between query processors. These messages are also incremental at run-time to aid in minimizing the communication cost of D-CAPE. The architecture allows for the flexible incorporation of different distribution algorithms and operator reallocation policies. D-CAPE provides an operator reallocation algorithm that is able to seamlessly move one or more operators across any query processors in our computing cluster. We do so by creating "pipes" between query processors to allow the data streams to flow, and then filling these pipes with data streams once execution begins. Operator redistribution is accomplished by systematically reconnecting these pipes so as not to interrupt the data flow. Experimental evaluation using our real prototype system (not just simulation) shows that executing a query plan distributed over multiple machines causes no more overhead than processing it on a single centralized query processor, even for rather lightly loaded machines. Further, we find that distributing a query plan among a cluster of query processors can boost performance up to twice that of a centralized DSMS. We conclude that the limitation of each query processor within the distributed network of cooperating processors lies not primarily in the volume of the data nor the number of query operators, but rather in the number of data connections per processor and the allocation of the stateful, and thus most costly, operators. We also find that the overhead of distributing query operators is very low, allowing for a potentially frequent dynamic redistribution of query plans during execution.
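A hypothetical sketch of a cost-based distribution policy in the spirit of D-CAPE's findings: assign the most expensive (stateful) operators first, always to the currently least-loaded query processor. The real system's distribution algorithms and reallocation protocol are considerably richer than this.

```python
import heapq

# Greedy operator-to-processor allocation: sort operators by descending cost
# (stateful operators like joins dominate), then place each on the processor
# with the smallest accumulated cost so far. Names and costs are invented.

def distribute(operators, processors):
    """operators: list of (name, cost); processors: list of names."""
    load = [(0.0, p) for p in processors]     # min-heap on accumulated cost
    heapq.heapify(load)
    placement = {}
    for name, cost in sorted(operators, key=lambda o: -o[1]):
        total, proc = heapq.heappop(load)     # least-loaded processor
        placement[name] = proc
        heapq.heappush(load, (total + cost, proc))
    return placement

ops = [("join1", 10.0), ("join2", 8.0), ("select1", 1.0), ("project1", 0.5)]
print(distribute(ops, ["qp1", "qp2"]))
# {'join1': 'qp1', 'join2': 'qp2', 'select1': 'qp2', 'project1': 'qp2'}
```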
58

Dynamic Energy-Aware Database Storage and Operations

Behzadnia, Peyman 29 March 2018 (has links)
Energy consumption has become a first-class optimization goal in the design and implementation of data-intensive computing systems. This is particularly true in the design of database management systems (DBMS), which is one of the most important servers in the software stack of modern data centers. The data storage system is an essential component of a database and has been the subject of many research efforts aiming at reducing its energy consumption. In previous work, dynamic power management (DPM) techniques that make real-time decisions to transition the disks to low-power modes are normally used to save energy in storage systems. In this research, we tackle the limitations of DPM proposals in previous contributions and design a dynamic energy-aware disk storage system for database servers. We introduce a DPM optimization model integrated with a model predictive control (MPC) strategy to minimize the power consumption of the disk-based storage system while satisfying given performance requirements. It dynamically determines the state of the disks and plans inter-disk data fragment migration to achieve a desirable balance between power consumption and query response time. Furthermore, by analyzing our optimization model to identify structural properties of optimal solutions, a fast heuristic DPM algorithm is proposed that can be integrated into large-scale disk storage systems, where finding the optimal solution may take too long, to achieve a near-optimal power-saving solution within short computational time. The proposed ideas are evaluated through simulations using an extensive set of synthetic workloads. The results show that our solution achieves up to 1.65 times more energy saving while providing up to 1.67 times shorter response time compared to the best existing algorithm in the literature. Stream join is a dynamic and expensive database operation that performs the join operation in a real-time fashion on continuous data streams. Stream joins, also known as window joins, impose high computational time and potentially higher energy consumption compared to other database operations, and thus we also tackle the energy efficiency of stream join processing in this research. Given that there is a strong linear correlation between energy efficiency and performance of in-memory parallel join algorithms in database servers, we study the parallelization of stream join algorithms on multicore processors to achieve energy efficiency and high performance. Equi-join is the most frequent type of join in query workloads, and the symmetric hash join (SHJ) algorithm is the most effective algorithm for evaluating equi-joins on data streams. To the best of our knowledge, we are the first to propose a shared-memory parallel symmetric hash join algorithm on multi-core CPUs. Furthermore, we introduce a novel parallel hash-based stream join algorithm called chunk-based pairing hash join that aims at elevating data throughput and scalability. We also tackle parallel processing of multi-way stream joins, where more than two input data streams are involved in the join operation. To the best of our knowledge, we are also the first to propose an in-memory parallel multi-way hash-based stream join on multicore processors. Experimental evaluation of our proposed parallel algorithms demonstrates high throughput, significant scalability, and low latency while reducing energy consumption.
Our parallel symmetric hash join and chunk-based pairing hash join achieve up to 11 times and 12.5 times more throughput, respectively, compared to the state-of-the-art parallel stream join algorithm. Also, these two algorithms provide up to around 22 times and 24.5 times more throughput, respectively, compared to non-parallel (sequential) stream join computation with a single processing thread.
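For reference, here is the classic single-threaded symmetric hash join that the thesis parallelizes: each arriving tuple is inserted into its own stream's hash table and probed against the other stream's table. A real window join would additionally evict expired tuples; that, and the parallel and multi-way variants, are omitted from this sketch.

```python
from collections import defaultdict

# Classic symmetric hash join over two streams R and S on a join key.
# Insert-then-probe per arriving tuple; results are emitted incrementally.

table_r = defaultdict(list)  # key -> R tuples seen so far
table_s = defaultdict(list)  # key -> S tuples seen so far

def on_r(key, tup):
    table_r[key].append(tup)                  # insert into R's table
    return [(tup, s) for s in table_s[key]]   # probe S's table

def on_s(key, tup):
    table_s[key].append(tup)                  # insert into S's table
    return [(r, tup) for r in table_r[key]]   # probe R's table

print(on_r(1, ("r1",)))   # [] -- no S tuple with key 1 yet
print(on_s(1, ("s1",)))   # [(('r1',), ('s1',))]
print(on_r(1, ("r2",)))   # [(('r2',), ('s1',))]
```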
59

Gestion de flux de données pour l'observation de systèmes / Data stream management for systems monitoring

Petit, Loïc 10 December 2012 (has links)
La popularisation de la technologie a permis d'implanter des dispositifs et des applications de plus en plus développés à la portée d'utilisateurs non experts. Ces systèmes produisent des flux ainsi que des données persistantes dont les schémas et les dynamiques sont hétérogènes. Cette thèse s'intéresse à pouvoir observer les données de ces systèmes pour aider à les comprendre et à les diagnostiquer. Nous proposons tout d'abord un modèle algébrique Astral capable de traiter sans ambiguïtés sémantiques des données provenant de flux ou relations. Le moteur d'exécution Astronef a été développé sur l'architecture à composants orientés services pour permettre une grande adaptabilité. Il est doté d'un constructeur de requête permettant de choisir un plan d'exécution efficace. Son extension Asteroid permet de s'interfacer avec un SGBD pour gérer des données persistantes de manière intégrée. Nos contributions sont confrontées à la pratique par la mise en œuvre d'un système d'observation du réseau domestique ainsi que par l'étude des performances. Enfin, nous nous sommes intéressés à la mise en place de la personnalisation des résultats dans notre système par l'introduction d'un modèle de préférences top-k. / Due to the popularization of technology, non-expert people can now use more and more advanced devices and applications. Such systems produce data streams as well as persistent data with heterogeneous schemas and dynamics. This thesis focuses on monitoring the data coming from those systems to help users understand and diagnose them. We first propose an algebraic model, Astral, able to treat data coming from streams or relations without semantic ambiguity. The engine Astronef has been developed on top of a service-oriented component framework to enable high adaptability. It embeds a query builder which can select a composition of components to provide an efficient query plan. Its extension Asteroid interfaces with a DBMS in order to manage persistent data in an integrated manner. Our contributions have been confronted with practice through the deployment of a monitoring system for the digital home and through a performance study. Finally, we extend our approach with an operator that personalizes the results by introducing a top-k preference model.
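As a tiny building block for the kind of top-k personalization mentioned above (the thesis's actual preference semantics are not reproduced), here is continuous top-k maintenance over a stream using a size-bounded min-heap.

```python
import heapq

# Continuous top-k over a stream: keep the k highest-scored items seen so
# far in a min-heap whose root is the weakest kept item, so each new item
# costs O(log k) at most. Scores and payloads are invented.

def topk_stream(items, k):
    """items: iterable of (score, payload); yields the top-k after each."""
    heap = []  # min-heap of size <= k
    for score, payload in items:
        if len(heap) < k:
            heapq.heappush(heap, (score, payload))
        elif score > heap[0][0]:
            heapq.heapreplace(heap, (score, payload))  # evict the weakest
        yield sorted(heap, reverse=True)

stream = [(0.4, "a"), (0.9, "b"), (0.1, "c"), (0.7, "d")]
for top in topk_stream(stream, 2):
    print(top)
# final line: [(0.9, 'b'), (0.7, 'd')]
```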
60

Real-time Distributed Computation of Formal Concepts and Analytics / Calcul distribué des concepts formels en temps réel et analyse visuelle

De Alburquerque Melo, Cassio 19 July 2013 (has links)
Les progrès de la technologie pour la création, le stockage et la diffusion des données ont considérablement augmenté le besoin d’outils qui donnent effectivement aux utilisateurs les moyens d’identifier et de comprendre l’information pertinente. Malgré les grandes possibilités de calcul qu’offrent les cadres distribués tels que Hadoop, le besoin de moyens pour identifier et comprendre les informations pertinentes n’a fait qu’augmenter. L’Analyse de Concepts Formels (ACF) peut jouer un rôle important dans ce contexte, en utilisant des moyens plus intelligents dans le processus d’analyse. L’ACF fournit une compréhension intuitive des relations de généralisation et de spécialisation entre les objets et leurs attributs dans une structure connue comme un treillis de concepts. Cette thèse aborde le problème de l’exploitation et de la visualisation des concepts sur un flux de données. L’approche proposée est composée de plusieurs composants distribués qui effectuent le calcul des concepts à partir de transactions de base, filtrent et transforment les données, les stockent et fournissent des fonctionnalités analytiques pour l’exploration visuelle des données. La nouveauté de notre travail consiste en : (i) une architecture distribuée de traitement et d’analyse des concepts en temps réel ; (ii) la combinaison de l’ACF avec des techniques d’analyse visuelle et d’exploration, y compris la visualisation des règles d’association ; (iii) de nouveaux algorithmes pour condenser et filtrer les données conceptuelles ; et (iv) un système qui met en œuvre toutes les techniques proposées, Cubix, et ses études de cas en biologie, dans la conception de systèmes complexes et dans les applications spatiales. / The advances in technology for the creation, storage and dissemination of data have dramatically increased the need for tools that effectively provide users with means of identifying and understanding relevant information. Despite the great computing opportunities that distributed frameworks such as Hadoop provide, the need for means of identifying and understanding relevant information has only increased. Formal Concept Analysis (FCA) may play an important role in this context, by employing more intelligent means in the analysis process. FCA provides an intuitive understanding of generalization and specialization relationships among objects and their attributes in a structure known as a concept lattice. The present thesis addresses the problem of mining and visualising concepts over a data stream. The proposed approach is comprised of several distributed components that carry out the computation of concepts from basic transactions, filter and transform the data, store it, and provide analytic features to visually explore it. The novelty of our work consists of: (i) a distributed processing and analysis architecture for mining concepts in real time; (ii) the combination of FCA with visual analytics, visualisation and exploration techniques, including association rule analytics; (iii) new algorithms for condensing and filtering conceptual data; and (iv) a system that implements all the proposed techniques, called Cubix, and its use cases in Biology, Complex System Design and Space Applications.
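The two derivation operators at the heart of FCA can be sketched on a toy context as follows; the distributed, real-time machinery of Cubix is not shown, and the example context is invented.

```python
# Minimal Formal Concept Analysis sketch: the two derivation operators and
# the closure of an attribute set into a formal concept (extent, intent).

context = {
    "duck": {"flies", "swims", "feathers"},
    "swan": {"flies", "swims", "feathers"},
    "dog":  {"swims"},
}

def extent(attrs):
    """Objects that have every attribute in attrs."""
    return {o for o, a in context.items() if attrs <= a}

def intent(objs):
    """Attributes shared by every object in objs."""
    sets = [context[o] for o in objs]
    return set.intersection(*sets) if sets else set()

def concept(attrs):
    """Close an attribute set into a formal concept (extent, intent)."""
    e = extent(set(attrs))
    return e, intent(e)

print(concept({"swims"}))  # extent {duck, swan, dog}, intent {swims}
print(concept({"flies"}))  # extent {duck, swan}, intent {flies, swims, feathers}
```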
