11

A Sensor-Based Approach to Monitoring Web Service

Li, Jun 12 November 2008 (has links)
As the use of the Web expands, Web Services are gradually becoming basic system infrastructure. However, as the technology matures and a large number of Web Services become available, the focus will shift from service development to service management. One key component of management systems is monitoring. The growing complexity of Web Service platforms and their dynamically varying workloads make monitoring them manually a demanding task, so monitoring tools are required to support management efforts. Our approach, the Web Service Monitoring System (WSMS), applies Autonomic Computing technology to monitor Web Services on behalf of an automated manager. WSMS correlates lower-level events into a meaningful diagnosed symptom, which provides higher-level information for problem determination. It can also take autonomic actions to solve the original problem through corrective actions. In this thesis, a complete design of WSMS is presented, along with a practical implementation demonstrating its viability as a proof of concept. / Thesis (Master, Computing) -- Queen's University, 2008-11-12 16:20:13.738
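To make the correlation idea above concrete, here is a minimal sketch of turning low-level events into a diagnosed symptom, in the spirit of the WSMS approach; the event kinds, threshold, and window are invented for illustration and are not taken from the thesis.

```python
# Sketch: correlate low-level events within a sliding window into a symptom.
# All names and thresholds below are illustrative assumptions.
from collections import deque
from dataclasses import dataclass
import time

@dataclass
class Event:
    source: str      # e.g. a sensor attached to a Web Service endpoint
    kind: str        # e.g. "timeout", "http_5xx", "queue_full"
    timestamp: float

class SymptomCorrelator:
    """Correlates low-level events within a sliding window into a symptom."""

    def __init__(self, window_seconds=60.0, threshold=3):
        self.window = window_seconds
        self.threshold = threshold
        self.events = deque()

    def observe(self, event: Event):
        self.events.append(event)
        # Drop events that fell out of the correlation window.
        while self.events and event.timestamp - self.events[0].timestamp > self.window:
            self.events.popleft()
        # Diagnose a symptom when enough timeouts cluster on one source.
        timeouts = [e for e in self.events
                    if e.source == event.source and e.kind == "timeout"]
        if len(timeouts) >= self.threshold:
            return {"symptom": "service_degradation", "source": event.source,
                    "evidence": len(timeouts)}
        return None

correlator = SymptomCorrelator()
now = time.time()
for i in range(3):
    symptom = correlator.observe(Event("orders-ws", "timeout", now + i))
print(symptom)  # -> {'symptom': 'service_degradation', ...}
```

A manager consuming such diagnosed symptoms, rather than raw events, is what lets corrective actions target the underlying problem instead of individual alerts.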
12

A situation refinement model for complex event processing

Alakari, Alaa A. 07 January 2021 (has links)
Complex Event Processing (CEP) systems aim at processing large flows of events to discover situations of interest (SOI). Primarily, CEP uses predefined pattern templates to detect occurrences of complex events in an event stream. Extracting complex events is achieved by employing techniques such as filtering and aggregation to detect complex patterns over many simple events. In general, CEP systems rely on domain experts to define complex pattern rules to recognize SOI. However, the task of fine-tuning complex pattern rules in an event streaming environment faces two main challenges: increased pattern complexity, and the event streaming constraint that such rules must be acquired and processed in near real-time. Therefore, to fine-tune a CEP pattern to identify an SOI, the following requirements must be met: first, a minimum number of rules must be used to refine the CEP pattern, to avoid increased pattern complexity; second, domain knowledge must be incorporated into the refinement process to improve awareness of emerging situations. Furthermore, the event data must be processed upon arrival, to cope with the continuous arrival of events in the stream and to respond in near real-time. In this dissertation, we present a Situation Refinement Model (SRM) that addresses these requirements, in particular by developing a Single-Scan Frequent Item Mining algorithm that acquires a minimal number of CEP rules and can adjust the level of refinement to fit the applied scenario. In addition, a cost-gain evaluation measure to determine the best tradeoff in identifying a particular SOI is presented. / Graduate
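The abstract does not spell out its Single-Scan Frequent Item Mining algorithm; a standard single-pass frequent-item technique such as the Misra-Gries summary gives a feel for how candidate rule ingredients could be mined in one scan of a stream, which is the constraint the dissertation names. The sketch below is that standard technique, not the thesis's own algorithm.

```python
# Misra-Gries summary: one pass over the stream, O(k) memory.
# Any item occurring more than len(stream)/k times survives in `counters`.
def misra_gries(stream, k):
    counters = {}
    for item in stream:
        if item in counters:
            counters[item] += 1
        elif len(counters) < k - 1:
            counters[item] = 1
        else:
            # Decrement all counters; drop the ones that reach zero.
            for key in list(counters):
                counters[key] -= 1
                if counters[key] == 0:
                    del counters[key]
    return counters

# Frequent event attributes in a stream could seed candidate CEP rules.
events = ["login", "login", "scan", "login", "scan", "alert", "login"]
print(misra_gries(events, k=3))  # -> {'login': 3, 'scan': 1}
```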
13

Fault Tolerant Distributed Complex Event Processing on Stream Computing Platforms

Carbone, Paris January 2013 (has links)
Recent advances in reliable distributed computing have made it possible to provide high availability and scalability to traditional systems and thus offer them as reliable services. For some systems, their parallel nature together with weak consistency requirements allowed a relatively straightforward transition: distributed storage, online data analysis, batch processing, and distributed stream processing. On the other hand, systems such as Complex Event Processing (CEP) still maintain a monolithic architecture, offering high expressiveness at the expense of low distribution. In this work, we address the main challenges of providing a highly available Distributed CEP service with a focus on reliability, since it is the most crucial and least explored aspect of that transition. The experimental solution presented targets low average detection latency and leverages event delegation mechanisms present in existing stream execution platforms, together with in-memory logging, to provide availability of any complex event processing abstraction on top via redundancy and partial recovery.
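As a toy illustration of the redundancy idea the abstract builds on, the following sketch shows active replication: every event goes to all replicas, so a single replica failure leaves detection uninterrupted. Replica behavior and names are invented for the example and are not the thesis's mechanism.

```python
# Active replication sketch: all replicas process every event; the first
# live answer is taken, so detection survives a single node failure.
class Replica:
    def __init__(self, name):
        self.name, self.count, self.alive = name, 0, True

    def process(self, event):
        if not self.alive:
            return None
        self.count += 1        # stand-in for real complex-event detection state
        return (self.name, self.count)

replicas = [Replica("r1"), Replica("r2")]
for i, event in enumerate(["e1", "e2", "e3"]):
    if i == 1:
        replicas[0].alive = False        # simulate a node failure
    results = [r.process(event) for r in replicas]
    answer = next(res for res in results if res is not None)
    print(answer)   # r2 keeps answering after r1 fails
```

The cost is the doubled resource footprint mentioned in the next two entries, which is exactly the overhead their thesis tries to reduce.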
14

Minimizing Overhead for Fault Tolerance in Event Stream Processing Systems

Martin, André 20 September 2016 (has links) (PDF)
Event Stream Processing (ESP) is a well-established approach for low-latency data processing that enables users to react quickly to relevant situations in soft real-time. In order to cope with the sheer amount of data being generated each day and with fluctuating workloads originating from data sources such as Twitter and Facebook, such systems must be highly scalable and elastic. Hence, ESP systems are typically long-running applications deployed on several hundreds of nodes in either dedicated data centers or cloud environments such as Amazon EC2. In such environments, nodes are likely to fail due to software aging, process or hardware errors, while the unbounded stream of data calls for continuous processing. To cope with node failures, several fault tolerance approaches have been proposed in the literature. Active replication and rollback recovery based on checkpointing and in-memory logging (upstream backup) are two approaches commonly used in the context of ESP systems. However, these approaches suffer either from a high resource footprint, low throughput, or unresponsiveness due to long recovery times. Moreover, recovering applications precisely under exactly-once semantics requires deterministic execution, which adds another layer of complexity and overhead. The goal of this thesis is to lower the overhead for fault tolerance in ESP systems. We first present StreamMine3G, an ESP system we built entirely from scratch in order to study and evaluate novel approaches for fault tolerance and elasticity. We then present an approach that reduces the overhead of deterministic execution by using a weak, epoch-based rather than strict ordering scheme for commutative and tumbling windowed operators, allowing applications to recover precisely using active or passive replication. Since most applications nowadays run in cloud environments, we furthermore propose an approach that increases system availability by efficiently utilizing spare but already paid-for resources for fault tolerance. Finally, to free users from the burden of choosing the fault tolerance scheme that guarantees the desired recovery time while still saving resources, we present a controller-based approach that adapts fault tolerance at runtime. We showcase the applicability of StreamMine3G using real-world applications and examples.
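The second scheme the abstract contrasts, rollback recovery with upstream backup, can be sketched in a few lines: the upstream node keeps sent tuples in memory until the downstream operator checkpoints, and replays them on recovery. This is a minimal sketch of the general technique, not StreamMine3G's implementation; all names are assumptions.

```python
# Sketch: rollback recovery with upstream backup (in-memory logging).
class Upstream:
    def __init__(self):
        self.sent_log = []               # tuples sent but not yet checkpointed

    def send(self, downstream, item):
        self.sent_log.append(item)       # retained until downstream checkpoints
        downstream.process(item)

    def trim(self):
        self.sent_log.clear()            # safe once downstream took a checkpoint

class Downstream:
    def __init__(self):
        self.state = {}                  # e.g. per-key counts
        self.snapshot = {}

    def process(self, key):
        self.state[key] = self.state.get(key, 0) + 1

    def take_checkpoint(self, upstream):
        self.snapshot = dict(self.state)
        upstream.trim()                  # logged tuples are now covered

    def recover(self, upstream):
        # Restore the last checkpoint, then replay uncheckpointed tuples.
        self.state = dict(self.snapshot)
        for key in upstream.sent_log:
            self.process(key)

up, down = Upstream(), Downstream()
up.send(down, "a"); up.send(down, "b")
down.take_checkpoint(up)
up.send(down, "a")
down.recover(up)        # simulate a crash after the last send
print(down.state)       # -> {'a': 2, 'b': 1}
```

Replaying the log only yields exactly-once results if processing is deterministic, which is why the thesis's weaker epoch-based ordering for commutative operators matters.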
15

Minimizing Overhead for Fault Tolerance in Event Stream Processing Systems

Martin, André 17 December 2015 (has links)
Event Stream Processing (ESP) is a well-established approach for low-latency data processing that enables users to react quickly to relevant situations in soft real-time. In order to cope with the sheer amount of data being generated each day and with fluctuating workloads originating from data sources such as Twitter and Facebook, such systems must be highly scalable and elastic. Hence, ESP systems are typically long-running applications deployed on several hundreds of nodes in either dedicated data centers or cloud environments such as Amazon EC2. In such environments, nodes are likely to fail due to software aging, process or hardware errors, while the unbounded stream of data calls for continuous processing. To cope with node failures, several fault tolerance approaches have been proposed in the literature. Active replication and rollback recovery based on checkpointing and in-memory logging (upstream backup) are two approaches commonly used in the context of ESP systems. However, these approaches suffer either from a high resource footprint, low throughput, or unresponsiveness due to long recovery times. Moreover, recovering applications precisely under exactly-once semantics requires deterministic execution, which adds another layer of complexity and overhead. The goal of this thesis is to lower the overhead for fault tolerance in ESP systems. We first present StreamMine3G, an ESP system we built entirely from scratch in order to study and evaluate novel approaches for fault tolerance and elasticity. We then present an approach that reduces the overhead of deterministic execution by using a weak, epoch-based rather than strict ordering scheme for commutative and tumbling windowed operators, allowing applications to recover precisely using active or passive replication. Since most applications nowadays run in cloud environments, we furthermore propose an approach that increases system availability by efficiently utilizing spare but already paid-for resources for fault tolerance. Finally, to free users from the burden of choosing the fault tolerance scheme that guarantees the desired recovery time while still saving resources, we present a controller-based approach that adapts fault tolerance at runtime. We showcase the applicability of StreamMine3G using real-world applications and examples.
16

A Complex Event Processing Framework Implementation Using Heterogeneous Devices In Smart Environments

Kaya, Muammer Ozge 01 January 2012 (has links) (PDF)
Significant developments in microprocessor and sensor technology make wirelessly connected small computing devices widely available; hence they are frequently used to collect data from the environment. In this study, we construct a framework for extracting high-level information in an environment containing such pervasive computing devices. In the framework, raw data originating from wireless sensors is collected using an event-driven system and converted into simple events for transmission over a network to a central processing unit. We also utilize a complex event processing approach, incorporating temporal constraints, aggregation, and sequencing of events, in order to define complex events that extract high-level information from the collected simple events. We developed a prototype using easily accessible hardware and set it up in a classroom within our university. The results demonstrate the feasibility of our approach, its ease of deployment, and the successful application of the complex event processing framework.
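The sequencing-with-temporal-constraint idea the abstract mentions reduces to a small pattern: a complex event fires when one simple event is followed by another within a time window. The sketch below illustrates that pattern; the event names and window are invented for the example, not taken from the paper.

```python
# Sequence detection with a temporal constraint: fire a complex event
# when `first` is followed by `then` within `within` seconds.
def detect_sequence(events, first, then, within):
    """events: list of (name, timestamp) in time order; yields match times."""
    pending = None   # timestamp of the last unmatched `first` event
    for name, ts in events:
        if name == first:
            pending = ts
        elif name == then and pending is not None and ts - pending <= within:
            yield ts
            pending = None

stream = [("door_open", 0.0), ("motion", 2.5), ("door_open", 10.0),
          ("motion", 40.0)]                      # second pair is too late
print(list(detect_sequence(stream, "door_open", "motion", within=5.0)))
# -> [2.5]
```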
17

ssIoTa: A system software framework for the internet of things

Lillethun, David 08 June 2015 (has links)
Sensors are widely deployed in our environment, and their number is increasing rapidly. In the near future, billions of devices will all be connected to each other, creating an Internet of Things (IoT). Furthermore, computational intelligence is needed to make applications involving these devices truly exciting. In IoT, however, the vast amounts of data will not be statically prepared for batch processing, but rather continually produced and streamed live to data consumers and intelligent algorithms. We refer to applications that perform live analysis on live data streams, bringing intelligence to IoT, as the Analysis of Things (AoT). However, the Analysis of Things also comes with a new set of challenges. The data sources are not collected in a single, centralized location, but rather distributed widely across the environment. AoT applications need to be able to access (consume, produce, and share with each other) this data in a way that is natural given its live streaming nature. The data transport mechanism must also allow easy access to sensors, actuators, and analysis results. Furthermore, analysis applications require computational resources on which to run. We claim that system support for AoT can reduce the complexity of developing and executing such applications. To address this, we make the following contributions:
- A framework for system support of live streaming analysis in the Internet of Things, which we refer to as the Analysis of Things (AoT), including a set of requirements for system design
- A system implementation that validates the framework by supporting Analysis of Things applications at a local scale, and a design for a federated system that supports AoT on a wide geographical scale
- An empirical system evaluation that validates the system design and implementation, including simulation experiments across a wide-area distributed system
We present five broad requirements for the Analysis of Things and discuss one set of specific system support features that can satisfy these requirements. We have implemented a system, called ssIoTa, that provides these features and supports AoT applications running on local resources. The programming model allows applications to be specified simply as operator graphs, by connecting operator inputs to operator outputs and sensor streams. Operators are code components that run arbitrary continuous analysis algorithms on streaming data. By conforming to a provided interface, operators may be developed that can be composed into operator graphs and executed by the system. The system consists of an Execution Environment, in which a Resource Manager manages the available computational resources and the applications running on them; a Stream Registry, in which available data streams can be registered so that they may be discovered and used by applications; and an Operator Store, which serves as a repository for operator code so that components can be shared and reused. Experimental results for the system implementation validate its performance. Many applications are also widely distributed across a geographic area. To support such applications, ssIoTa must be able to run them on infrastructure resources that are also widely distributed. We have designed a system that does so by federating each of the three system components: Operator Store, Stream Registry, and Resource Manager. The Operator Store is distributed using a distributed hash table (DHT); however, since temporal locality can be expected and data churn is low, caching may be employed to further improve performance. Since sensors exist at particular locations in physical space, queries on the Stream Registry will be based on location. We also introduce the concept of geographical locality. Therefore, range queries in two dimensions must be supported by the federated Stream Registry, while taking advantage of geographical locality for improved average-case performance. To accomplish these goals, we present a design sketch for SkipCAN, a modification of the SkipNet and Content Addressable Network DHTs. Finally, the fundamental issue in the federated Resource Manager is how to distribute the operators of multiple applications across the geographically distributed sites where computational resources can execute them. To address this, we introduce DistAl, a fully distributed algorithm that assigns operators to sites. DistAl respects system resource constraints and application preferences for performance and quality of results (QoR), using application-specific utility functions to allow applications to express their preferences. DistAl is validated by simulation results.
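The operator-graph programming model the abstract outlines can be sketched compactly: operators conform to a small interface and are wired output-to-input. The real ssIoTa API is not reproduced in this listing; the interface and operator names below are assumptions for illustration.

```python
# Sketch of an operator-graph model: operators implement on_item() and are
# wired together; emit() pushes results downstream.
class Operator:
    def __init__(self):
        self.downstream = []

    def connect(self, op):
        self.downstream.append(op)
        return op

    def emit(self, item):
        for op in self.downstream:
            op.on_item(item)

    def on_item(self, item):          # continuous analysis logic goes here
        raise NotImplementedError

class Average(Operator):
    """Tumbling-window average over a fixed number of readings."""
    def __init__(self, window):
        super().__init__()
        self.window, self.buf = window, []

    def on_item(self, item):
        self.buf.append(item)
        if len(self.buf) == self.window:
            self.emit(sum(self.buf) / self.window)
            self.buf.clear()

class Printer(Operator):
    def on_item(self, item):
        print("result:", item)

# Wire a tiny graph: sensor stream -> windowed average -> sink.
avg = Average(window=3)
avg.connect(Printer())
for reading in [20.0, 21.0, 22.0]:
    avg.on_item(reading)              # prints: result: 21.0
```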
18

Location-Aware Business Process Management for Real-time Monitoring of Patient Care Processes

Bougueng Tchemeube, Renaud 24 July 2013 (has links)
Long wait times are a global issue in the healthcare sector, particularly in Canada. Despite numerous research findings on wait time management, the issue persists. This is partly because, for a given hospital, the data required to conduct wait time analysis is currently scattered across various information systems. Moreover, such data is usually inaccurate (because of possible human errors), imprecise, and late. This whole situation contributes to the current state of wait times. This thesis proposes a location-aware business process management system for real-time care process monitoring. More precisely, the system enables improved visibility of process execution by gathering, as processes execute, accurate and granular process information, including wait time measurements. The major contributions of this thesis include an architecture for the system, a prototype that takes advantage of a commercial real-time location system combined with a business process management system to accurately measure wait times, and a case study based on a real cardiology process from an Ontario hospital.
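Deriving a wait time measurement from location events is, at its core, a matter of pairing zone-entry timestamps per patient. The sketch below shows that idea; the zone names and event shape are assumptions for illustration, not the thesis's data model.

```python
# Sketch: derive per-patient wait times from time-ordered location events.
def wait_times(location_events):
    """location_events: (patient_id, zone, timestamp) tuples in time order."""
    arrived = {}
    waits = {}
    for pid, zone, ts in location_events:
        if zone == "waiting_room":
            arrived.setdefault(pid, ts)      # keep the first arrival time
        elif zone == "exam_room" and pid in arrived:
            waits[pid] = ts - arrived.pop(pid)
    return waits

events = [("p1", "waiting_room", 0), ("p2", "waiting_room", 5),
          ("p1", "exam_room", 30), ("p2", "exam_room", 65)]
print(wait_times(events))   # -> {'p1': 30, 'p2': 60}
```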
19

A Semantic-Based Framework for Processing Complex Events in Multimedia Sensor Networks

Angsuchotmetee, Chinnapong 22 December 2017 (has links)
The dramatic advancement of low-cost hardware technology, wireless communications, and digital electronics has fostered the development of multifunctional (wireless) Multimedia Sensor Networks (MSNs). These are composed of interconnected devices able to ubiquitously sense multimedia content (video, image, audio, etc.) from the environment. Thanks to their interesting features, MSNs have gained increasing attention in recent years from both academic and industrial sectors and have been adopted in a wide range of application domains (such as smart home, smart office, and smart city, to mention a few). One of the advantages of adopting MSNs is the fact that data gathered from related sensors contains rich semantic information (in comparison with using solely scalar sensors), which allows complex events to be detected and copes better with application domain requirements. However, modeling and detecting events in MSNs remain difficult tasks, because translating all the gathered multimedia data into events is not straightforward. In this thesis, a full-fledged framework for processing complex events in MSNs is proposed, to avoid hard-coded algorithms and to allow better adaptation to evolving application domain requirements. The framework is called Complex Event Modeling and Detection (CEMiD). Its core components are:
• MSSN-Onto: a newly proposed ontology for modeling MSNs,
• CEMiD-Language: an original language for modeling multimedia sensor networks and the events to be detected, and
• GST-CEMiD: a semantic pipelining-based complex event processing engine.
The CEMiD framework helps users model their own sensor network infrastructure and the events to be detected through the CEMiD language. The detection engine takes the model provided by users to initiate an event detection pipeline that extracts multimedia data features, translates semantic information, and interprets it into events automatically. Our framework is validated by means of prototyping and simulations. The results show that it can properly detect complex multimedia events in a high-workload scenario (with an average detection latency of less than one second).
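The three-stage pipeline the abstract describes (feature extraction, semantic translation, event interpretation) can be illustrated end to end in miniature. The stages and vocabulary below are assumptions; the real GST-CEMiD engine and MSSN-Onto vocabulary are not shown here.

```python
# Sketch of a feature -> semantics -> event pipeline over toy "frames".
def extract_features(frame):
    # Stand-in for real video analysis: pretend we count moving blobs.
    return {"moving_objects": frame.count("x")}

def to_semantic(features):
    return "crowded" if features["moving_objects"] >= 3 else "calm"

def interpret(observations, pattern=("calm", "crowded")):
    # Fire a complex event when the scene transitions calm -> crowded.
    return [i for i in range(1, len(observations))
            if (observations[i - 1], observations[i]) == pattern]

frames = ["..x..", ".x.x.", "xx.xx"]          # toy stand-ins for video frames
observations = [to_semantic(extract_features(f)) for f in frames]
print(observations)                  # -> ['calm', 'calm', 'crowded']
print(interpret(observations))       # -> [2]: complex event at frame 2
```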
20

An event-driven approach to developing interdisciplinary services dedicated to aging in place

Carteron, Adrien 22 December 2017 (has links)
The notion of context is fundamental to the field of pervasive computing, in particular when services assist a user in his daily activities. Being at the crossroads of various fields, a context-aware home dedicated to aging in place involves a variety of stakeholders to design and develop assistive services, as well as to deploy and maintain the underlying infrastructure. This considerable diversity of stakeholders raises correspondingly diverse context dimensions: each service relies on specific contexts (e.g., sensor status for a maintenance service, fridge usage for a meal activity recognition service). Typically, these contexts are considered separately, preventing any synergy. This dissertation presents a methodology for unifying the design and development of various domestic context-aware services that addresses the requirements of all the stakeholders. In a first step, we handle the needs of the stakeholders concerned with the sensor infrastructure: installers, maintainers, and operators. We define an infrastructure model of a home and a set of rules to continuously monitor the sensor infrastructure and raise failures when appropriate. This continuous monitoring simplifies application development by abstracting it from infrastructure concerns. In a second step, we analyze a range of services for aging in place, considering the whole diversity of stakeholders. Based on this analysis, we generalize the approach developed for the infrastructure to all assistive services. Our methodology allows unified services to be defined in the form of rules processing events and states. To express such rules, we define a domain-specific design language, named Maloya. We developed a compiler from our language to an event processing language executed on a complex event processing (CEP) engine. To validate our approach, we used our language to define a wide range of assistive services, reimplementing existing deployed services belonging to all of the stakeholders. These Maloya services were deployed and successfully tested for their effectiveness in performing the stakeholders' specific tasks. Latency and memory consumption performance turned out to be fully compatible with 24/7 execution in the long run.
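The rules-over-events-and-states style the abstract attributes to Maloya can be rendered in plain Python for intuition: a rule pairs a predicate over the current state and incoming event with an action. The real Maloya syntax is not shown in this listing; the rule content and names below are invented for illustration.

```python
# Sketch: a rule engine where rules combine device state and a new event.
class RuleEngine:
    def __init__(self):
        self.state = {}
        self.rules = []   # (predicate(state, event), action(event)) pairs

    def when(self, predicate, action):
        self.rules.append((predicate, action))

    def on_event(self, event):
        # Update state from the event, then evaluate every rule against it.
        self.state[event["source"]] = event["value"]
        for predicate, action in self.rules:
            if predicate(self.state, event):
                action(event)

engine = RuleEngine()
engine.when(
    lambda s, e: e["source"] == "front_door" and e["value"] == "open"
                 and s.get("bed_sensor") == "occupied",
    lambda e: print("alert: door opened while resident is in bed"),
)
engine.on_event({"source": "bed_sensor", "value": "occupied"})
engine.on_event({"source": "front_door", "value": "open"})   # fires the rule
```

Compiling such rules down to a CEP engine, as the thesis does, keeps per-event evaluation cheap enough for the 24/7 execution the abstract reports.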
