  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

A SENSOR-BASED APPROACH TO MONITORING WEB SERVICE

Li, Jun 12 November 2008 (has links)
As use of the Web expands, Web Services are gradually becoming part of the basic system infrastructure. However, as the technology matures and a large number of Web Services become available, the focus will shift from service development to service management. One key component of management systems is monitoring. The growing complexity of Web Service platforms and their dynamically varying workloads make monitoring them manually a demanding task, so monitoring tools are required to support management efforts. Our approach, the Web Service Monitoring System (WSMS), applies Autonomic Computing technology to monitor Web Services on behalf of an automated manager. WSMS correlates lower-level events into a meaningful diagnosed symptom, which provides higher-level information for problem determination. It can also take autonomic actions and resolve the original problem through corrective actions. In this thesis, a complete design of WSMS is presented along with a practical implementation showing its viability and serving as a proof of concept. / Thesis (Master, Computing) -- Queen's University, 2008-11-12 16:20:13.738
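The correlation step the abstract mentions can be pictured with a small sketch. The rule, event kinds, and window size below are hypothetical, not taken from the thesis; it only illustrates how low-level events might be correlated into a diagnosed symptom within a time window.

```python
# Illustrative sketch (not the thesis's WSMS code): correlating low-level
# events into a higher-level "symptom" for an automated manager.
from collections import deque
from dataclasses import dataclass

@dataclass
class Event:
    source: str      # e.g. "http-sensor", "cpu-sensor" (hypothetical names)
    kind: str        # e.g. "timeout", "high-load"
    timestamp: float

class SymptomCorrelator:
    """Diagnoses a symptom when related low-level events co-occur in a window."""
    def __init__(self, window_seconds=30.0):
        self.window = window_seconds
        self.recent = deque()

    def observe(self, event):
        self.recent.append(event)
        # Drop events that fell out of the correlation window.
        while self.recent and event.timestamp - self.recent[0].timestamp > self.window:
            self.recent.popleft()
        kinds = {e.kind for e in self.recent}
        # Example rule: timeouts together with high load suggest overload.
        if {"timeout", "high-load"} <= kinds:
            return "service-overload"   # symptom handed to corrective actions
        return None

c = SymptomCorrelator()
c.observe(Event("http-sensor", "timeout", 100.0))
print(c.observe(Event("cpu-sensor", "high-load", 110.0)))  # service-overload
```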
12

A situation refinement model for complex event processing

Alakari, Alaa A. 07 January 2021 (has links)
Complex Event Processing (CEP) systems aim at processing large flows of events to discover situations of interest (SOI). Primarily, CEP uses predefined pattern templates to detect occurrences of complex events in an event stream. Extracting complex events is achieved by employing techniques such as filtering and aggregation to detect complex patterns over many simple events. In general, CEP systems rely on domain experts to define complex pattern rules to recognize SOI. However, the task of fine-tuning complex pattern rules in an event-streaming environment faces two main challenges: increased pattern complexity, and the streaming constraint that such rules must be acquired and processed in near real-time. Therefore, to fine-tune a CEP pattern to identify an SOI, the following requirements must be met: first, a minimum number of rules must be used to refine the CEP pattern, to avoid increased pattern complexity; second, domain knowledge must be incorporated in the refinement process to improve awareness of emerging situations. Furthermore, event data must be processed upon arrival to cope with the continuous arrival of events in the stream and to respond in near real-time. In this dissertation, we present a Situation Refinement Model (SRM) that meets these requirements, in particular by developing a Single-Scan Frequent Item Mining algorithm that acquires a minimal number of CEP rules, with the ability to adjust the level of refinement to fit the applied scenario. In addition, a cost-gain evaluation measure for determining the best tradeoff to identify a particular SOI is presented. / Graduate
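The dissertation's own Single-Scan Frequent Item Mining algorithm is not reproduced here, but the single-pass constraint it works under can be illustrated with a classic one-pass frequent-items summary (Misra-Gries), shown purely as a stand-in sketch:

```python
# Hedged sketch: Misra-Gries, a classic one-pass frequent-items summary, used
# only to illustrate single-scan mining over an event stream; it is NOT the
# SRM algorithm from the dissertation.
def misra_gries(stream, k):
    """Track up to k-1 candidate frequent items in a single pass."""
    counters = {}
    for item in stream:
        if item in counters:
            counters[item] += 1
        elif len(counters) < k - 1:
            counters[item] = 1
        else:
            # Decrement all counters; evict those that reach zero.
            for key in list(counters):
                counters[key] -= 1
                if counters[key] == 0:
                    del counters[key]
    return counters  # supersets the items with frequency > n/k

# Example: mine frequent event types from a stream of simple events.
events = ["login", "click", "click", "error", "click", "login", "click"]
print(misra_gries(events, k=3))  # {'click': 3, 'login': 1}
```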
13

Fault Tolerant Distributed Complex Event Processing on Stream Computing Platforms

Carbone, Paris January 2013 (has links)
Recent advances in reliable distributed computing have made it possible to provide high availability and scalability to traditional systems and thus offer them as reliable services. For some systems, their parallel nature together with weak consistency requirements allowed a fairly straightforward transition, as with distributed storage, online data analysis, batch processing, and distributed stream processing. On the other hand, systems such as Complex Event Processing (CEP) still maintain a monolithic architecture, offering high expressiveness at the expense of low distribution. In this work, we address the main challenges of providing a highly available Distributed CEP service with a focus on reliability, since it is the most crucial and least explored aspect of that transition. The experimental solution presented targets low average detection latency and leverages event delegation mechanisms present on existing stream execution platforms, together with in-memory logging, to provide availability of any complex event processing abstraction on top via redundancy and partial recovery.
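A minimal sketch of the in-memory logging idea the abstract leans on, assuming a single upstream/downstream pair and at-least-once delivery; this is an illustration of the general "upstream backup" pattern, not the system's actual mechanism:

```python
# Sketch under stated assumptions: the producer keeps sent events in memory
# until the consumer acknowledges them, and replays the log on failure.
class UpstreamBuffer:
    def __init__(self):
        self.log = []          # (seq, event) pairs not yet acknowledged
        self.next_seq = 0

    def send(self, event, transport):
        # transport is any callable taking (seq, event), e.g. a socket write.
        self.log.append((self.next_seq, event))
        transport(self.next_seq, event)
        self.next_seq += 1

    def ack(self, seq):
        # Downstream confirmed processing up to seq; trim the in-memory log.
        self.log = [(s, e) for s, e in self.log if s > seq]

    def replay(self, transport):
        # On downstream recovery, resend everything still unacknowledged.
        for seq, event in self.log:
            transport(seq, event)
```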
14

Learning Hierarchical Representations For Video Analysis Using Deep Learning

Yang, Yang 01 January 2013 (has links)
With the exponential growth of digital data, video content analysis (e.g., action and event recognition) has been drawing increasing attention from computer vision researchers. Effective modeling of objects, scenes, and motions is critical for visual understanding. Recently there has been growing interest in bio-inspired deep learning models, which have shown impressive results in speech and object recognition. Deep learning models are formed by the composition of multiple non-linear transformations of the data, with the goal of yielding more abstract and ultimately more useful representations. The advantages of deep models are threefold: 1) they learn features directly from the raw signal, in contrast to hand-designed features; 2) the learning can be unsupervised, which suits large datasets where labeling all the data is expensive and impractical; 3) they learn a hierarchy of features one level at a time, and this layerwise stacking of feature extraction often yields better representations. However, not many deep learning models have been proposed to solve problems in video analysis, especially for videos "in the wild". Most of them either deal with simple datasets or are limited to low-level local spatial-temporal feature descriptors for action recognition. Moreover, as the learning algorithms are unsupervised, the learned features preserve generative properties rather than the discriminative ones that are more favorable in classification tasks. In this context, the thesis makes two major contributions. First, we propose several formulations and extensions of deep learning methods that learn hierarchical representations for three challenging video analysis tasks: complex event recognition, object detection in videos, and measuring action similarity. The proposed methods are extensively demonstrated on state-of-the-art challenging datasets. Besides learning low-level local features, higher-level representations are further designed to be learned in the context of the applications: data-driven concept representations and sparse representations of events are learned for complex event recognition; representations for object body parts and structures are learned for object detection in videos; and relational motion features and similarity metrics between video pairs are learned simultaneously for action verification. Second, in order to learn discriminative and compact features, we propose a new feature learning method using a deep neural network based on autoencoders. It differs from existing unsupervised feature learning methods in two ways: first, it optimizes both discriminative and generative properties of the features simultaneously, which gives our features better discriminative ability; second, our learned features are more compact, while unsupervised feature learning methods usually learn a redundant set of over-complete features. Extensive experiments with quantitative and qualitative results on the tasks of human detection and action verification demonstrate the superiority of our proposed models.
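The joint discriminative-generative objective described in the second contribution can be sketched as follows (PyTorch, with hypothetical layer sizes; not the thesis code): a shared encoder feeds both a decoder trained to reconstruct the input and a classifier trained on labels, with a weight trading the two terms off.

```python
# Hedged sketch of an autoencoder trained with a combined objective:
# reconstruction (generative) plus classification (discriminative).
import torch
import torch.nn as nn

class DiscriminativeAutoencoder(nn.Module):
    def __init__(self, in_dim=784, hid_dim=128, n_classes=10):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.ReLU())
        self.decoder = nn.Linear(hid_dim, in_dim)
        self.classifier = nn.Linear(hid_dim, n_classes)

    def forward(self, x):
        h = self.encoder(x)                      # shared hidden representation
        return self.decoder(h), self.classifier(h)

def joint_loss(model, x, y, alpha=0.5):
    recon, logits = model(x)
    generative = nn.functional.mse_loss(recon, x)            # reconstruct input
    discriminative = nn.functional.cross_entropy(logits, y)  # separate classes
    return alpha * generative + (1 - alpha) * discriminative
```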
15

Minimizing Overhead for Fault Tolerance in Event Stream Processing Systems

Martin, André 20 September 2016 (has links) (PDF)
Event Stream Processing (ESP) is a well-established approach for low-latency data processing, enabling users to quickly react to relevant situations in soft real-time. In order to cope with the sheer amount of data being generated each day and with fluctuating workloads originating from data sources such as Twitter and Facebook, such systems must be highly scalable and elastic. Hence, ESP systems are typically long-running applications deployed on several hundreds of nodes in either dedicated data centers or cloud environments such as Amazon EC2. In such environments, nodes are likely to fail due to software aging, process or hardware errors, whereas the unbounded stream of data demands continuous processing. In order to cope with node failures, several fault tolerance approaches have been proposed in the literature. Active replication and rollback recovery based on checkpointing and in-memory logging (upstream backup) are two commonly used approaches for coping with such failures in the context of ESP systems. However, these approaches suffer from a high resource footprint, low throughput, or unresponsiveness due to long recovery times. Moreover, in order to recover applications precisely with exactly-once semantics, deterministic execution is required, which adds another layer of complexity and overhead. The goal of this thesis is to lower the overhead of fault tolerance in ESP systems. We first present StreamMine3G, our ESP system built entirely from scratch in order to study and evaluate novel approaches for fault tolerance and elasticity. We then present an approach that reduces the overhead of deterministic execution by using a weak, epoch-based rather than strict ordering scheme for commutative and tumbling windowed operators, allowing applications to recover precisely using active or passive replication. Since most applications run in cloud environments nowadays, we furthermore propose an approach to increase system availability by efficiently utilizing spare but already-paid-for resources for fault tolerance. Finally, in order to free users from the burden of choosing the fault tolerance scheme that guarantees the desired recovery time while still saving resources, we present a controller-based approach that adapts fault tolerance at runtime. We furthermore showcase the applicability of StreamMine3G using real-world applications and examples.
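The intuition behind the weak, epoch-based ordering scheme for commutative, tumbling windowed operators can be shown in a few lines. This sketch assumes a count-based tumbling window and a commutative aggregate (sum); it is an illustration of the idea, not StreamMine3G's implementation:

```python
# Because addition is order-insensitive, two replicas that receive the same
# epoch's events in different orders still produce identical outputs -- so
# only epoch boundaries, not a strict total order, must be agreed upon.
class EpochSumWindow:
    def __init__(self, epoch_size):
        self.epoch_size = epoch_size
        self.buffer = []

    def on_event(self, value):
        self.buffer.append(value)
        if len(self.buffer) == self.epoch_size:
            result = sum(self.buffer)   # same result for any intra-epoch order
            self.buffer = []
            return result               # emitted once per epoch
        return None

# Two replicas, different arrival orders, identical epoch outputs:
a, b = EpochSumWindow(3), EpochSumWindow(3)
out_a = [a.on_event(v) for v in (1, 2, 3)]
out_b = [b.on_event(v) for v in (3, 1, 2)]
assert out_a[-1] == out_b[-1] == 6
```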
17

A Complex Event Processing Framework Implementation Using Heterogeneous Devices In Smart Environments

Kaya, Muammer Ozge 01 January 2012 (has links) (PDF)
Significant developments in microprocessor and sensor technology have made wirelessly connected small computing devices widely available; hence, they are frequently used to collect data from the environment. In this study, we construct a framework to extract high-level information in an environment containing such pervasive computing devices. In the framework, raw data originating from wireless sensors is collected using an event-driven system and converted into simple events for transmission over a network to a central processing unit. We also utilize a complex event processing approach, incorporating temporal constraints, aggregation, and sequencing of events, to define complex events that extract high-level information from the collected simple events. We developed a prototype using easily accessible hardware and set it up in a classroom within our university. The results demonstrate the feasibility of our approach, its ease of deployment, and the successful application of the complex event processing framework.
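A toy sketch of the sequencing-plus-temporal-constraint style of rule such a framework supports; the event names and the 60-second bound are hypothetical, not from the thesis:

```python
# Illustrative sketch: detect the complex event "first followed by second
# within a time bound" over a stream of (timestamp, kind) simple events.
def detect_sequence(events, first, second, within_seconds):
    """Yields the timestamp of each detected complex event."""
    pending = None  # timestamp of the most recent `first` event
    for ts, kind in events:
        if kind == first:
            pending = ts
        elif kind == second and pending is not None:
            if ts - pending <= within_seconds:
                yield ts          # complex event detected
            pending = None

stream = [(0.0, "motion"), (12.5, "lights-on"), (300.0, "motion")]
print(list(detect_sequence(stream, "motion", "lights-on", 60)))  # [12.5]
```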
18

ssIoTa: A system software framework for the internet of things

Lillethun, David 08 June 2015 (has links)
Sensors are widely deployed in our environment, and their number is increasing rapidly. In the near future, billions of devices will all be connected to each other, creating an Internet of Things (IoT). Furthermore, computational intelligence is needed to make applications involving these devices truly exciting. In IoT, however, the vast amounts of data will not be statically prepared for batch processing, but rather continually produced and streamed live to data consumers and intelligent algorithms. We refer to applications that perform live analysis on live data streams, bringing intelligence to IoT, as the Analysis of Things (AoT). However, the Analysis of Things also comes with a new set of challenges. The data sources are not collected in a single, centralized location, but rather distributed widely across the environment. AoT applications need to be able to access (consume, produce, and share with each other) this data in a way that is natural given its live streaming nature. The data transport mechanism must also allow easy access to sensors, actuators, and analysis results. Furthermore, analysis applications require computational resources on which to run. We claim that system support for AoT can reduce the complexity of developing and executing such applications. To address this, we make the following contributions:
- a framework for system support of live streaming analysis in the Internet of Things, including a set of requirements for system design;
- a system implementation that validates the framework by supporting Analysis of Things applications at a local scale, and a design for a federated system that supports AoT on a wide geographical scale;
- an empirical system evaluation that validates the system design and implementation, including simulation experiments across a wide-area distributed system.
We present five broad requirements for the Analysis of Things and discuss one set of specific system support features that can satisfy these requirements. We have implemented a system, called ssIoTa, that implements these features and supports AoT applications running on local resources. The programming model allows applications to be specified simply as operator graphs, by connecting operator inputs to operator outputs and sensor streams. Operators are code components that run arbitrary continuous analysis algorithms on streaming data. By conforming to a provided interface, operators may be developed so that they can be composed into operator graphs and executed by the system. The system consists of an Execution Environment, in which a Resource Manager manages the available computational resources and the applications running on them; a Stream Registry, in which available data streams can be registered so that they may be discovered and used by applications; and an Operator Store, which serves as a repository for operator code so that components can be shared and reused. Experimental results for the system implementation validate its performance. Many applications are also widely distributed across a geographic area. To support such applications, ssIoTa must be able to run them on infrastructure resources that are also distributed widely. We have designed a system that does so by federating each of the three system components: Operator Store, Stream Registry, and Resource Manager.
The Operator Store is distributed using a distributed hash table (DHT); however, since temporal locality can be expected and data churn is low, caching may be employed to further improve performance. Since sensors exist at particular locations in physical space, queries on the Stream Registry will be based on location. We also introduce the concept of geographical locality. Therefore, range queries in two dimensions must be supported by the federated Stream Registry, while taking advantage of geographical locality for improved average-case performance. To accomplish these goals, we present a design sketch for SkipCAN, a modification of the SkipNet and Content Addressable Network DHTs. Finally, the fundamental issue in the federated Resource Manager is how to distribute the operators of multiple applications across the geographically distributed sites where computational resources can execute them. To address this, we introduce DistAl, a fully distributed algorithm that assigns operators to sites. DistAl also respects system resource constraints and application preferences for performance and quality of results (QoR), using application-specific utility functions to allow applications to express their preferences. DistAl is validated by simulation results.
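The operator-graph programming model described in the abstract can be pictured with a minimal sketch. This is an illustration only, not the ssIoTa API; the class and method names are invented:

```python
# Sketch: operators conform to a small interface and are composed into a
# graph by connecting outputs to downstream inputs.
class Operator:
    def __init__(self):
        self.downstream = []

    def connect(self, other):
        self.downstream.append(other)
        return other                    # enables chained graph construction

    def emit(self, item):
        for op in self.downstream:
            op.process(item)

    def process(self, item):            # override with an analysis algorithm
        raise NotImplementedError

class Threshold(Operator):
    def __init__(self, limit):
        super().__init__()
        self.limit = limit

    def process(self, reading):
        if reading > self.limit:        # forward only anomalous readings
            self.emit(reading)

class Printer(Operator):
    def process(self, reading):
        print("alert:", reading)

source = Operator()                     # stands in for a registered sensor stream
source.connect(Threshold(30.0)).connect(Printer())
for r in (21.0, 35.5, 29.9):
    source.emit(r)                      # prints: alert: 35.5
```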
19

Location-Aware Business Process Management for Real-time Monitoring of Patient Care Processes

Bougueng Tchemeube, Renaud 24 July 2013 (has links)
Long wait times are a global issue in the healthcare sector, particularly in Canada. Despite numerous research findings on wait-time management, the issue persists. This is partly because, for a given hospital, the data required to conduct wait-time analysis is currently scattered across various information systems. Moreover, such data is usually inaccurate (because of possible human errors), imprecise, and late. The whole situation contributes to the current state of wait times. This thesis proposes a location-aware business process management system for real-time care process monitoring. More precisely, the system improves the visibility of process execution by gathering accurate and granular process information, including wait-time measurements, as processes execute. The major contributions of this thesis include an architecture for the system, a prototype that takes advantage of a commercial real-time location system combined with a business process management system to accurately measure wait times, and a case study based on a real cardiology process from an Ontario hospital.
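As an illustration of the kind of measurement such a prototype derives, the sketch below computes per-patient wait times from time-ordered location events; the patient IDs and zone names are hypothetical, not from the thesis:

```python
# Sketch: wait time = time between arrival in the waiting room and entry to
# the exam room, derived from real-time location events.
def wait_times(location_events):
    """location_events: list of (timestamp, patient_id, zone), time-ordered."""
    arrived, waits = {}, {}
    for ts, patient, zone in location_events:
        if zone == "waiting-room" and patient not in arrived:
            arrived[patient] = ts
        elif zone == "exam-room" and patient in arrived:
            waits[patient] = ts - arrived.pop(patient)
    return waits

events = [(0, "p1", "waiting-room"), (25, "p2", "waiting-room"),
          (40, "p1", "exam-room"), (90, "p2", "exam-room")]
print(wait_times(events))   # {'p1': 40, 'p2': 65}
```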
20

A Semantic-Based Framework for Processing Complex Events in Multimedia Sensor Networks

Angsuchotmetee, Chinnapong 22 December 2017 (has links)
The dramatic advancement of low-cost hardware technology, wireless communications, and digital electronics has fostered the development of multifunctional (wireless) Multimedia Sensor Networks (MSNs). These networks are composed of interconnected devices able to ubiquitously sense multimedia content (video, image, audio, etc.) from the environment. Thanks to their interesting features, MSNs have gained increasing attention in recent years from both academic and industrial sectors and have been adopted in a wide range of application domains (such as smart home, smart office, and smart city, to mention a few). One of the advantages of adopting MSNs is the fact that data gathered from related sensors contains rich semantic information (in comparison with using solely scalar sensors), which allows complex events to be detected and copes better with application domain requirements.
However, modeling and detecting events in MSNs remain difficult tasks, because translating all the gathered multimedia data into events is not straightforward. In this thesis, a full-fledged framework for processing complex events in MSNs is proposed to avoid hard-coded algorithms and to allow better adaptation to evolving application domain requirements. The framework is called the Complex Event Modeling and Detection (CEMiD) framework. Its core components are:
• MSSN-Onto: a newly proposed ontology for modeling MSNs,
• CEMiD-Language: an original language for modeling multimedia sensor networks and the events to be detected, and
• GST-CEMiD: a semantic pipelining-based complex event processing engine.
The CEMiD framework helps users model their own sensor network infrastructure and the events to be detected through the CEMiD language. The detection engine takes the models provided by users to initiate an event detection pipeline that extracts multimedia data features, translates semantic information, and interprets it into events automatically. Our framework is validated by means of prototyping and simulations. The results show that it can properly detect complex multimedia events in a high-workload scenario (with an average detection latency of less than one second).
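The pipelined shape of detection (feature extraction, semantic translation, event interpretation) can be sketched as below; every stage body, threshold, and label is a hypothetical stand-in for what the real engine would do with the ontology and CEMiD models:

```python
# Illustrative sketch only: a three-stage semantic detection pipeline.
def extract_features(frame):
    # Stand-in for real feature extraction over a video frame.
    return {"moving_objects": frame.get("blobs", 0)}

def annotate(features):
    # Stand-in for ontology-backed semantic translation.
    return "crowd" if features["moving_objects"] > 5 else "calm"

def interpret(labels, trigger="crowd", run_length=2):
    # Declare a complex event after `run_length` consecutive trigger labels.
    streak = 0
    for label in labels:
        streak = streak + 1 if label == trigger else 0
        if streak == run_length:
            return "crowd-forming"
    return None

frames = [{"blobs": 2}, {"blobs": 7}, {"blobs": 9}]
labels = [annotate(extract_features(f)) for f in frames]
print(interpret(labels))   # crowd-forming
```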
