31

Minimizing Overhead for Fault Tolerance in Event Stream Processing Systems

Martin, André 17 December 2015 (has links)
Event Stream Processing (ESP) is a well-established approach for low-latency data processing that enables users to quickly react to relevant situations in soft real time. In order to cope with the sheer amount of data generated each day and with fluctuating workloads originating from data sources such as Twitter and Facebook, such systems must be highly scalable and elastic. Hence, ESP systems are typically long-running applications deployed on several hundreds of nodes in either dedicated data centers or cloud environments such as Amazon EC2. In such environments, nodes are likely to fail due to software aging, process or hardware errors, whereas the unbounded stream of data demands continuous processing. Several fault tolerance approaches have been proposed in the literature to cope with node failures. Active replication and rollback recovery based on checkpointing and in-memory logging (upstream backup) are two commonly used approaches in the context of ESP systems. However, these approaches suffer either from a high resource footprint, low throughput, or unresponsiveness due to long recovery times. Moreover, recovering applications precisely, with exactly-once semantics, requires deterministic execution, which adds another layer of complexity and overhead. The goal of this thesis is to lower the overhead of fault tolerance in ESP systems. We first present StreamMine3G, an ESP system we built entirely from scratch in order to study and evaluate novel approaches for fault tolerance and elasticity. We then present an approach that reduces the overhead of deterministic execution by using a weak, epoch-based ordering scheme rather than strict ordering for commutative and tumbling windowed operators, which allows applications to recover precisely using active or passive replication. Since most applications run in cloud environments nowadays, we furthermore propose an approach that increases system availability by efficiently utilizing spare but paid resources for fault tolerance. Finally, in order to free users from the burden of choosing the fault tolerance scheme that guarantees the desired recovery time while still saving resources, we present a controller-based approach that adapts fault tolerance at runtime. We furthermore showcase the applicability of our StreamMine3G approach using real-world applications and examples.
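
The weak, epoch-based ordering idea lends itself to a short illustration. The sketch below is a minimal rendering of the general technique, not StreamMine3G's actual implementation: for a commutative operator over tumbling windows, events only need to land in the right epoch, so replicas that receive events in different orders still converge to identical per-epoch results — which is what allows precise recovery without strict (deterministic) ordering.

```python
from collections import defaultdict

EPOCH_MS = 1_000  # tumbling-window / epoch size in milliseconds

class EpochWindowOperator:
    """Commutative aggregation (here: a sum) over tumbling epochs.

    Because addition is commutative, replicas that see the same events in
    different orders still produce identical per-epoch results -- no strict
    total ordering is needed for precise recovery.
    """
    def __init__(self):
        self.epochs = defaultdict(int)  # epoch id -> running aggregate

    def process(self, event_ts_ms, value):
        # Weak ordering: an event only has to land in the right epoch.
        self.epochs[event_ts_ms // EPOCH_MS] += value

    def seal(self, up_to_epoch):
        """Emit and discard results for all epochs before `up_to_epoch`."""
        out = [(e, v) for e, v in sorted(self.epochs.items()) if e < up_to_epoch]
        for e, _ in out:
            del self.epochs[e]
        return out

# Two replicas receiving the same events in opposite orders agree exactly.
r1, r2 = EpochWindowOperator(), EpochWindowOperator()
events = [(100, 1), (900, 2), (1500, 5), (300, 4)]
for ts, v in events:
    r1.process(ts, v)
for ts, v in reversed(events):
    r2.process(ts, v)
assert r1.seal(2) == r2.seal(2) == [(0, 7), (1, 5)]
```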
32

A Complex Event Processing Framework Implementation Using Heterogeneous Devices In Smart Environments

Kaya, Muammer Ozge 01 January 2012 (has links) (PDF)
Significant developments in microprocessor and sensor technology make wirelessly connected small computing devices widely available; hence, they are frequently used to collect data from the environment. In this study, we construct a framework for extracting high-level information in an environment containing such pervasive computing devices. In the framework, raw data originating from wireless sensors are collected using an event-driven system and converted to simple events for transmission over a network to a central processing unit. We also utilize a complex event processing approach, incorporating temporal constraints, aggregation, and sequencing of events, in order to define complex events that extract high-level information from the collected simple events. We developed a prototype using easily accessible hardware and set it up in a classroom within our university. The results demonstrate the feasibility of our approach, its ease of deployment, and the successful application of the complex event processing framework.
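
The conversion of simple events into complex events via sequencing and a temporal constraint can be sketched generically (a minimal illustration under assumed event names, not the authors' framework code):

```python
class SequenceDetector:
    """Detects the complex event `first` followed by `second` within `window` seconds."""
    def __init__(self, first, second, window):
        self.first, self.second, self.window = first, second, window
        self.pending = []  # timestamps of so-far-unmatched `first` events

    def on_event(self, name, ts):
        if name == self.first:
            self.pending.append(ts)
        elif name == self.second:
            # Temporal constraint: discard `first` events outside the window.
            self.pending = [t for t in self.pending if ts - t <= self.window]
            if self.pending:
                start = self.pending.pop(0)
                return ("complex_event", self.first, self.second, start, ts)
        return None

# Simple events from sensors: "door_open followed by motion within 10 s"
# is interpreted as the higher-level event of someone entering the room.
det = SequenceDetector("door_open", "motion", window=10)
print(det.on_event("door_open", 100.0))  # None -- sequence not complete yet
print(det.on_event("motion", 105.0))     # ('complex_event', 'door_open', 'motion', 100.0, 105.0)
```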
33

ssIoTa: A system software framework for the internet of things

Lillethun, David 08 June 2015 (has links)
Sensors are widely deployed in our environment, and their number is increasing rapidly. In the near future, billions of devices will all be connected to each other, creating an Internet of Things. Furthermore, computational intelligence is needed to make applications involving these devices truly exciting. In IoT, however, the vast amounts of data will not be statically prepared for batch processing, but rather continually produced and streamed live to data consumers and intelligent algorithms. We refer to applications that perform live analysis on live data streams, bringing intelligence to IoT, as the Analysis of Things (AoT). However, the Analysis of Things also comes with a new set of challenges. The data sources are not collected in a single, centralized location, but rather distributed widely across the environment. AoT applications need to be able to access (consume, produce, and share with each other) this data in a way that is natural considering its live streaming nature. The data transport mechanism must also allow easy access to sensors, actuators, and analysis results. Furthermore, analysis applications require computational resources on which to run. We claim that system support for AoT can reduce the complexity of developing and executing such applications. To address this, we make the following contributions:
- A framework for system support of Live Streaming Analysis in the Internet of Things, which we refer to as the Analysis of Things (AoT), including a set of requirements for system design
- A system implementation that validates the framework by supporting Analysis of Things applications at a local scale, and a design for a federated system that supports AoT on a wide geographical scale
- An empirical system evaluation that validates the system design and implementation, including simulation experiments across a wide-area distributed system
We present five broad requirements for the Analysis of Things and discuss one set of specific system support features that can satisfy these requirements. We have implemented a system, called ssIoTa, that implements these features and supports AoT applications running on local resources. The programming model for the system allows applications to be specified simply as operator graphs, by connecting operator inputs to operator outputs and sensor streams. Operators are code components that run arbitrary continuous analysis algorithms on streaming data. By conforming to a provided interface, operators may be developed that can be composed into operator graphs and executed by the system. The system consists of an Execution Environment, in which a Resource Manager manages the available computational resources and the applications running on them; a Stream Registry, in which available data streams can be registered so that they may be discovered and used by applications; and an Operator Store, which serves as a repository for operator code so that components can be shared and reused. Experimental results for the system implementation validate its performance. Many applications are also widely distributed across a geographic area. To support such applications, ssIoTa must be able to run them on infrastructure resources that are also distributed widely. We have designed a system that does so by federating each of the three system components: Operator Store, Stream Registry, and Resource Manager.
The Operator Store is distributed using a distributed hash table (DHT); however, since temporal locality can be expected and data churn is low, caching may be employed to further improve performance. Since sensors exist at particular locations in physical space, queries on the Stream Registry will be based on location; we also introduce the concept of geographical locality. Therefore, range queries in two dimensions must be supported by the federated Stream Registry, while taking advantage of geographical locality for improved average-case performance. To accomplish these goals, we present a design sketch for SkipCAN, a modification of the SkipNet and Content Addressable Network DHTs. Finally, the fundamental issue in the federated Resource Manager is how to distribute the operators of multiple applications across the geographically distributed sites where computational resources can execute them. To address this, we introduce DistAl, a fully distributed algorithm that assigns operators to sites. DistAl also respects the system resource constraints and application preferences for performance and quality of results (QoR), using application-specific utility functions to allow applications to express their preferences. DistAl is validated by simulation results.
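
The operator-graph programming model described above can be illustrated with a brief sketch (hypothetical interface and names; the abstract does not show ssIoTa's actual API): operators conform to a small interface and are wired together by connecting outputs to inputs.

```python
class Operator:
    """Hypothetical operator interface: a code component running a
    continuous analysis algorithm on streaming data."""
    def __init__(self):
        self.downstream = []

    def connect(self, op):          # wire this operator's output to op's input
        self.downstream.append(op)

    def emit(self, item):
        for op in self.downstream:
            op.on_item(item)

    def on_item(self, item):        # override with the analysis logic
        raise NotImplementedError

class RunningAverage(Operator):
    """A stateful operator: emits the running mean of a numeric stream."""
    def __init__(self):
        super().__init__()
        self.n = 0
        self.total = 0.0

    def on_item(self, item):
        self.n += 1
        self.total += item
        self.emit(self.total / self.n)

class Sink(Operator):
    def on_item(self, item):
        print(f"avg = {item:.2f}")

# Operator graph: sensor stream -> RunningAverage -> Sink
avg, sink = RunningAverage(), Sink()
avg.connect(sink)
for reading in [20.0, 22.0, 21.0]:  # stand-in for a registered sensor stream
    avg.on_item(reading)
```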
34

Location-Aware Business Process Management for Real-time Monitoring of Patient Care Processes

Bougueng Tchemeube, Renaud 24 July 2013 (has links)
Long wait times are a global issue in the healthcare sector, particularly in Canada. Despite numerous research findings on wait time management, the issue persists. This is partly because, for a given hospital, the data required to conduct wait time analysis is currently scattered across various information systems. Moreover, such data is often inaccurate (because of possible human errors), imprecise, and late. The whole situation contributes to the current state of wait times. This thesis proposes a location-aware business process management system for real-time care process monitoring. More precisely, the system improves visibility into process execution by gathering accurate and granular process information, including wait time measurements, as processes execute. The major contributions of this thesis include an architecture for the system, a prototype that takes advantage of a commercial real-time location system combined with a business process management system to accurately measure wait times, and a case study based on a real cardiology process from an Ontario hospital.
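
The core measurement idea — deriving granular wait times from location events as a care process executes — can be sketched generically (hypothetical zones and timestamps; the thesis' prototype builds on a commercial RTLS and a BPM system, neither of which is shown here):

```python
from datetime import datetime

# RTLS events for one (hypothetical) patient: (timestamp, zone entered).
track = [
    (datetime.fromisoformat("2013-07-24T08:02:00"), "registration"),
    (datetime.fromisoformat("2013-07-24T08:07:00"), "waiting_room"),
    (datetime.fromisoformat("2013-07-24T08:49:00"), "ecg_room"),
    (datetime.fromisoformat("2013-07-24T09:05:00"), "waiting_room"),
]

def wait_times(track, waiting_zones=("waiting_room",)):
    """Wait time = contiguous time spent in a waiting zone before the next step."""
    waits = []
    for (t0, zone), (t1, _) in zip(track, track[1:]):
        if zone in waiting_zones:
            waits.append((t0, (t1 - t0).total_seconds() / 60))
    return waits

for start, minutes in wait_times(track):
    print(f"waited {minutes:.0f} min starting at {start:%H:%M}")  # 42 min at 08:07
```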
35

Speculation in Parallel and Distributed Event Processing Systems

Brito, Andrey 09 August 2010 (has links) (PDF)
Event stream processing (ESP) applications enable the real-time processing of continuous flows of data. Algorithmic trading, network monitoring, and processing data from sensor networks are good examples of applications that traditionally rely upon ESP systems. In addition, technological advances are resulting in an increasing number of network-enabled devices, producing information that can be automatically collected and processed. This increasing availability of on-line data motivates the development of new and more sophisticated applications that require low-latency processing of large volumes of data. ESP applications are composed of an acyclic graph of operators that is traversed by the data. Inside each operator, events can be transformed, aggregated, enriched, or filtered out. Some of these operations depend only on the current input events; such operations are called stateless. Other operations, however, depend not only on the current event but also on a state built during the processing of previous events; such operations are therefore named stateful. As the number of ESP applications grows, there are increasingly strong requirements that are often difficult to satisfy. In this dissertation, we address two challenges created by the use of stateful operations in an ESP application: (i) stateful operators can be bottlenecks because they are sensitive to the order of events and cannot be trivially parallelized by replication; and (ii) if failures are to be tolerated, the accumulated state of a stateful operator needs to be saved, which traditionally imposes considerable performance costs. Our approach is to evaluate the use of speculation to address these two issues. For handling ordering and parallelization issues in a stateful operator, we propose a speculative approach that both reduces latency, when the operator must wait for the correct ordering of the events, and improves throughput, when the operation at hand is parallelizable. In addition, our approach does not require users to understand concurrent programming or to consider out-of-order execution when writing operations. For fault-tolerant applications, traditional approaches have imposed prohibitive performance costs due to pessimistic schemes. We extend such approaches, using speculation to mask the cost of fault tolerance.
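
The speculative approach can be illustrated with a toy example — a minimal sketch of optimistic out-of-order processing with rollback and replay, not the dissertation's actual mechanism:

```python
class SpeculativeOperator:
    """Optimistically applies events as they arrive; when an earlier event
    shows up late, it rolls back to a snapshot and replays in timestamp
    order. (A real system would checkpoint periodically to bound replay.)"""
    def __init__(self):
        self.state = 0
        self.applied = []  # (timestamp, value) in the order actually applied

    def on_event(self, ts, value):
        if self.applied and ts < self.applied[-1][0]:
            self._rollback_and_replay(ts, value)   # speculation failed
        else:
            self._apply(ts, value)                 # speculation holds

    def _apply(self, ts, value):
        self.state += value                        # order-sensitive in general
        self.applied.append((ts, value))

    def _rollback_and_replay(self, ts, value):
        replay = sorted(self.applied + [(ts, value)])
        self.state, self.applied = 0, []           # rewind to initial snapshot
        for t, v in replay:
            self._apply(t, v)

op = SpeculativeOperator()
for ts, v in [(1, 10), (3, 5), (2, 7)]:  # the event with ts=2 arrives late
    op.on_event(ts, v)
print(op.state)  # 22 -- identical to fully in-order processing
```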
36

A Semantic-Based Framework for Processing Complex Events in Multimedia Sensor Networks

Angsuchotmetee, Chinnapong 22 December 2017 (has links)
The dramatic advancement of low-cost hardware technology, wireless communications, and digital electronics has fostered the development of multifunctional (wireless) Multimedia Sensor Networks (MSNs). These networks are composed of interconnected devices able to ubiquitously sense multimedia content (video, image, audio, etc.) from the environment. Thanks to their interesting features, MSNs have gained increasing attention in recent years from both academic and industrial sectors and have been adopted in a wide range of application domains (such as smart homes, smart offices, and smart cities, to mention a few). One of the advantages of adopting MSNs is the fact that data gathered from related sensors contains rich semantic information (in comparison with using solely scalar sensors), which allows complex events to be detected and copes better with application domain requirements. However, modeling and detecting events in MSNs remain difficult tasks, because translating all gathered multimedia data into events is not straightforward. In this thesis, a full-fledged framework for processing complex events in MSNs is proposed to avoid hard-coded algorithms. The framework is called the Complex Event Modeling and Detection (CEMiD) framework. Its core components are:
• MSSN-Onto: a newly proposed ontology for modeling MSNs,
• CEMiD-Language: an original language for modeling multimedia sensor networks and the events to be detected, and
• GST-CEMiD: a semantic pipelining-based complex event processing engine.
The CEMiD framework helps users model their own sensor network infrastructure and the events to be detected through the CEMiD language. The detection engine takes the models provided by users and initiates an event detection pipeline that extracts multimedia data features, translates them into semantic information, and interprets them into events automatically. Our framework is validated by means of prototyping and simulations. The results show that it can properly detect complex multimedia events in a high-workload scenario (with an average detection latency of less than one second).
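
The three-stage pipeline described above (feature extraction, semantic translation, event interpretation) can be sketched generically; the ontology mapping and rule format below are invented stand-ins, not CEMiD's actual MSSN-Onto or CEMiD-Language:

```python
# Stage 1: extract low-level features from a raw multimedia frame (stubbed).
def extract_features(frame):
    return {"camera": frame["camera"], "objects": frame["detected"]}

# Stage 2: translate features into semantic observations via an
# ontology-like mapping (a stand-in for MSSN-Onto concepts).
ONTOLOGY = {"person": "Human", "car": "Vehicle"}

def to_semantic(features):
    return [(ONTOLOGY.get(o, "Unknown"), features["camera"])
            for o in features["objects"]]

# Stage 3: interpret semantic observations into complex events.
def interpret(observations, rule):
    concept, min_count, event_name = rule
    count = sum(1 for c, _ in observations if c == concept)
    return event_name if count >= min_count else None

rule = ("Human", 2, "CrowdDetected")  # "two or more humans seen" -> event
frame = {"camera": "cam-3", "detected": ["person", "person", "car"]}
print(interpret(to_semantic(extract_features(frame)), rule))  # CrowdDetected
```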
37

An event-driven approach to developing interdisciplinary services dedicated to aging in place

Carteron, Adrien 22 December 2017 (has links)
The notion of context is fundamental to the field of pervasive computing, particularly when services assist a user in his or her daily activities. Being at the crossroads of various fields, a context-aware home dedicated to aging in place involves a variety of stakeholders, both to design and develop assistive services and to deploy and maintain the underlying infrastructure. This considerable diversity of stakeholders raises correspondingly diverse context dimensions: each service relies on specific contexts (e.g., sensor status for a maintenance service, fridge usage for a meal activity recognition service). Typically, these contexts are considered separately, preventing any synergy. This dissertation presents a methodology for unifying the design and development of various domestic context-aware services that addresses the requirements of all the stakeholders. In a first step, we handle the needs of the stakeholders concerned with the sensor infrastructure: installers, maintainers, and operators. We define an infrastructure model of a home and a set of rules to continuously monitor the sensor infrastructure and raise failures when appropriate. This continuous monitoring simplifies application development by abstracting it from infrastructure concerns. In a second step, we analyze a range of services for aging in place, considering the whole diversity of stakeholders. Based on this analysis, we generalize the approach developed for the infrastructure to all assistive services. Our methodology allows services to be defined in a unified way, in the form of rules processing events and states. To express such rules, we define a domain-specific design language, named Maloya. We developed a compiler from our language to an event processing language, which is executed on a complex event processing (CEP) engine. To validate our approach, we used our language to define a wide range of assistive services that reimplement existing deployed services belonging to all of the stakeholders. These Maloya services were deployed and successfully tested for their effectiveness in performing the specific tasks of the stakeholders. Latency and memory consumption turned out to be fully compatible with 24/7 execution in the long run.
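
A rule engine that distinguishes instantaneous events from persistent states — the combination that rules in this style manipulate — can be sketched as follows (hypothetical sensor names and rule API; Maloya's concrete syntax is not shown in the abstract):

```python
class RuleEngine:
    """Rules over *events* (instantaneous) and *states* (conditions that
    persist between events), in the spirit of the description above."""
    def __init__(self):
        self.states = {}  # state name -> current value
        self.rules = []   # (predicate(event, states), action(event))

    def when(self, predicate, action):
        self.rules.append((predicate, action))

    def set_state(self, name, value):
        self.states[name] = value

    def on_event(self, event):
        for predicate, action in self.rules:
            if predicate(event, self.states):
                action(event)

engine = RuleEngine()
# Infrastructure rule: a door sensor firing while its battery *state* is
# low alerts maintainers; an activity-recognition rule could consume the
# very same event for a different stakeholder.
engine.when(
    lambda e, s: e["sensor"] == "front_door" and s.get("battery_front_door") == "low",
    lambda e: print("maintenance alert: replace front_door battery"),
)
engine.set_state("battery_front_door", "low")
engine.on_event({"sensor": "front_door", "value": "open"})
```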
38

Complex event detection over large event streams

Braik, William 15 May 2017 (has links)
Pattern detection over streams of events is gaining more and more attention, especially in the field of eCommerce. Our industrial partner Cdiscount, one of the largest eCommerce companies in France, aims to use pattern detection for real-time customer behavior analysis. The main challenges to consider are efficiency and scalability, as the detection of customer behaviors must be achieved within a few seconds, while millions of unique customers visit the website every day, producing a large event stream. In this thesis, we present Auros, a system for large-scale and efficient pattern detection for eCommerce. It relies on a domain-specific language to define behavior patterns. Patterns are then compiled into deterministic finite automata, which are run on a Big Data streaming platform. Our evaluation shows that Auros meets the requirements formulated by Cdiscount: it is able to process more than 10,000 events per second with a detection latency below one second.
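
The compilation target can be illustrated with a hand-built automaton (a toy sketch; Auros generates such automata from its domain-specific language, and the skip-on-mismatch behavior below is an assumption):

```python
# Hand-built DFA for the navigation scenario
# view_product -> add_to_cart -> checkout; a compiler would generate this
# transition table from a pattern written in the domain-specific language.
DFA = {
    ("start", "view_product"): "viewed",
    ("viewed", "add_to_cart"): "carted",
    ("carted", "checkout"): "accept",
}

def detect(dfa, events, start="start", accept="accept"):
    state = start
    for e in events:
        # Stay in place on non-matching events (skip-till-next-match).
        state = dfa.get((state, e), state)
        if state == accept:
            return True
    return False

clickstream = ["home", "view_product", "search", "add_to_cart", "checkout"]
print(detect(DFA, clickstream))  # True -- the scenario was identified
```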
39

Acceptance-Test-Driven Development of Event-Based Applications

Weiß, Johannes 16 June 2017 (has links) (PDF)
The amount of available, electronically processable information is growing steadily. Mobile phones with a wide variety of sensors, social networks, and the Internet of Things are examples of producers of potentially interesting and usable data. The field of event processing (EP) systems offers technologies and tools to process incoming data, so-called events, in near real time. For example, patterns can be detected in the events, and by creating derived events, other systems can react to the detected patterns. Among other things, time-based functionality can be realized in this way, such as monitoring stock prices over a defined period. In contrast to a message-oriented communication system, EP applications implement domain-relevant application functionality; validating these applications with domain experts therefore gains increased importance. Acceptance Test Driven Development (ATDD) is an agile software development method that focuses on integrating domain experts into the creation and evaluation of automatable test cases. Besides the potential to automate manual regression tests, the method offers the opportunity to improve knowledge transfer between developers and domain experts. This thesis makes several contributions to the study of ATDD in the area of EP application development. First, requirements for corresponding tool support were derived from the characteristics of EP applications and assigned to the product quality classifications of functional suitability, modularity, and usability. In a systematic literature review, approaches from the literature as well as the tool support of existing product solutions were analyzed, showing that related solutions do not sufficiently fulfill the identified requirements. Motivated by this, a test description language and an executing, distributed test system were designed and formally described. The test description language offers commands for the product-independent specification of test cases; the test system makes it possible to execute these test cases against EP product solutions. The approach was validated through selected case studies and a prototypical implementation, showing that it exceeds the current state of the art in this application area with respect to functional suitability and modularity. Usability was examined in more depth in two user studies, yielding first insights into the practical use of the test description language as well as questions for future work. The first study examined the comprehension of test cases, comparing the automatable test description language with a classic test description template; a significant effect in favor of the automatable language was found with respect to completion time. The second study considered the specification of test cases and likewise revealed advantages in completion time.
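
An acceptance test in the "given input events, expect derived events" style that this work targets can be sketched as follows (toy system under test and assertions; the thesis' actual test description language is product-independent and its commands are not reproduced here):

```python
import unittest

class StockMonitor:
    """Toy system under test: derives a 'PriceSurge' event when a price
    rises by more than 10% within a sliding window of three ticks."""
    def __init__(self):
        self.window, self.derived = [], []

    def send(self, price):
        self.window = (self.window + [price])[-3:]
        if len(self.window) == 3 and self.window[-1] > self.window[0] * 1.10:
            self.derived.append(("PriceSurge", self.window[0], self.window[-1]))

class AcceptanceTest(unittest.TestCase):
    """Readable as 'given these input events, expect this derived event',
    so a domain expert can review the scenario without product knowledge."""
    def test_price_surge_detected(self):
        monitor = StockMonitor()
        for price in [100, 104, 112]:  # given: three ticks, +12% overall
            monitor.send(price)
        self.assertEqual(monitor.derived, [("PriceSurge", 100, 112)])  # expect

if __name__ == "__main__":
    unittest.main()
```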
40

Complex event processing domain-specific language and modelling of usages for sensor networks

Garnier, Alexandre 15 December 2016 (has links)
Usage of the Internet of Things has grown exponentially in recent years. This democratization of the world of sensors is the result, on the one hand, of drastically lower costs in embedded computing and, on the other hand, of ever more mature software support. From protocols and networks (CoAP, IPv6, etc.) to the standardization of development support, notably on ATMEL microprocessors, the available tools allow increasingly homogeneous communication between ever more varied sensors. This diversification brings together users with different needs, expectations, and fields of expertise, each with their own understanding of connected things. The main issue is the growing complexity of sensor networks confronted with the need to address fundamentally different usages. On the basis of a single heterogeneous sensor network, it is critical to meet the needs of each user without requiring them to master the network beyond their own field of expertise. The tool described in this document aims to address this issue via a query engine dedicated to the processing of data collected from the sensors. To this end, it relies on modelling the sensors within several contexts, each reflecting a specific usage. On this basis, a domain-specific language is provided, allowing complex event processing over the data monitored by the sensors. Furthermore, the implementation of this tool allows interaction with optional actuation functionalities of the sensor network.
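
The context-based modelling can be sketched generically: the same sensors are exposed through named contexts, each serving one usage, and a query can be combined with actuation (hypothetical names and API, not the thesis' actual language):

```python
class SensorNetwork:
    """Sensors are registered once, then exposed through named *contexts*,
    each matching one user's field of expertise."""
    def __init__(self, readings, contexts, actuators):
        self.readings = readings    # sensor id -> latest measured value
        self.contexts = contexts    # context name -> sensor ids it exposes
        self.actuators = actuators  # actuator id -> command callback

    def query(self, context, reducer=max):
        values = [self.readings[s] for s in self.contexts[context]
                  if s in self.readings]
        return reducer(values) if values else None

    def actuate(self, actuator, command):
        self.actuators[actuator](command)

net = SensorNetwork(
    readings={"t1": 19.5, "t2": 23.0, "co2": 900},
    contexts={"comfort": ["t1", "t2"], "air_quality": ["co2"]},
    actuators={"vent": lambda cmd: print(f"vent -> {cmd}")},
)

# Two user views coexist over the same network; a rule combines a
# context query with an actuation command.
if net.query("air_quality") > 800:
    net.actuate("vent", "open")  # prints: vent -> open
```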
