21

An event-based approach to process environmental data = Um enfoque baseado em eventos para processar dados ambientais

Koga, Ivo Kenji, 1981- 23 August 2018 (has links)
Advisor: Claudia Maria Bauzer Medeiros / Doctoral thesis - Universidade Estadual de Campinas, Instituto de Computação / Made available in DSpace on 2018-08-23T23:06:49Z (GMT). Previous issue date: 2013 / Abstract (Portuguese): The abstract can be viewed in the full text of the digital thesis / Abstract: The complete abstract is available with the full electronic document. / Doctorate / Computer Science / Doctor of Computer Science
22

A situation refinement model for complex event processing

Alakari, Alaa A. 07 January 2021 (has links)
Complex Event Processing (CEP) systems aim at processing large flows of events to discover situations of interest (SOI). Primarily, CEP uses predefined pattern templates to detect occurrences of complex events in an event stream. Extracting complex events is achieved by employing techniques such as filtering and aggregation to detect complex patterns over many simple events. In general, CEP systems rely on domain experts to define complex pattern rules to recognize SOI. However, the task of fine-tuning complex pattern rules in the event streaming environment faces two main challenges: increased pattern complexity, and event streaming constraints under which such rules must be acquired and processed in near real-time. Therefore, to fine-tune the CEP pattern to identify SOI, the following requirements must be met: first, a minimum number of rules must be used to refine the CEP pattern, to avoid increased pattern complexity; and second, domain knowledge must be incorporated in the refinement process, to improve awareness about emerging situations. Furthermore, the event data must be processed upon arrival to cope with the continuous arrival of events in the stream and to respond in near real-time. In this dissertation, we present a Situation Refinement Model (SRM) that addresses these requirements, in particular by developing a Single-Scan Frequent Item Mining algorithm to acquire the minimal number of CEP rules, with the ability to adjust the level of refinement to fit the applied scenario. In addition, a cost-gain evaluation measure to determine the best tradeoff to identify a particular SOI is presented. / Graduate
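To make the filtering-and-aggregation style of CEP pattern described above concrete, here is a minimal Python sketch of a single pattern rule detecting a situation of interest over a count-based window. The sensor name, threshold, and window size are hypothetical and do not come from the dissertation's Situation Refinement Model.

from collections import deque

# Hypothetical simple event: (sensor_id, reading). A "situation of interest"
# is declared when at least 3 of the last 5 filtered readings exceed a threshold.
WINDOW_SIZE = 5
THRESHOLD = 40.0
MIN_MATCHES = 3

def detect_soi(event_stream):
    """Single pass over the stream: filter, keep a count-based window, aggregate."""
    window = deque(maxlen=WINDOW_SIZE)
    for sensor_id, reading in event_stream:
        if sensor_id != "temp-01":        # filtering: keep only relevant simple events
            continue
        window.append(reading)
        matches = sum(1 for r in window if r > THRESHOLD)  # aggregation over the window
        if matches >= MIN_MATCHES:
            yield ("SOI: sustained high temperature", list(window))

if __name__ == "__main__":
    stream = [("temp-01", 38.0), ("hum-02", 80.0), ("temp-01", 41.5),
              ("temp-01", 42.3), ("temp-01", 39.0), ("temp-01", 43.1)]
    for alert in detect_soi(stream):
        print(alert)

In this toy rule, the filter keeps only the relevant simple events, and the aggregation over the last few readings decides whether the complex event (the SOI) has occurred.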
23

Fault Tolerant Distributed Complex Event Processing on Stream Computing Platforms

Carbone, Paris January 2013 (has links)
Recent advances in reliable distributed computing have made it possible to provide high availability and scalability to traditional systems and thus offer them as reliable services. For some systems, their parallel nature together with weak consistency requirements allowed a more straightforward transition, as in distributed storage, online data analysis, batch processing, and distributed stream processing. On the other hand, systems such as Complex Event Processing (CEP) still maintain a monolithic architecture, offering high expressiveness at the expense of low distribution. In this work, we address the main challenges of providing a highly available distributed CEP service with a focus on reliability, since it is the most crucial and least explored aspect of that transition. The experimental solution presented targets low average detection latency and leverages event delegation mechanisms present on existing stream execution platforms, together with in-memory logging, to provide availability of any complex event processing abstraction on top via redundancy and partial recovery.
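As an illustration of the in-memory logging and partial recovery mentioned above (not the thesis's actual design), the following Python sketch logs emitted events until the downstream consumer acknowledges them and replays any unacknowledged events to a replica after a failure. The class and method names are invented for the example.

# A minimal sketch of in-memory logging with replay, one common building block
# for this kind of partial recovery. Names are illustrative, not from the thesis.
class UpstreamLogger:
    def __init__(self):
        self._log = {}          # sequence number -> event, kept until acknowledged
        self._next_seq = 0

    def emit(self, event, send):
        """Log the event in memory, then forward it downstream with its sequence number."""
        seq = self._next_seq
        self._next_seq += 1
        self._log[seq] = event
        send(seq, event)
        return seq

    def ack(self, seq):
        """Downstream confirmed the event is durably processed; trim the log."""
        self._log.pop(seq, None)

    def replay(self, send):
        """After a downstream failure, resend every unacknowledged event to the replica."""
        for seq in sorted(self._log):
            send(seq, self._log[seq])

if __name__ == "__main__":
    delivered = []
    logger = UpstreamLogger()
    s0 = logger.emit({"type": "login"}, lambda s, e: delivered.append((s, e)))
    s1 = logger.emit({"type": "click"}, lambda s, e: delivered.append((s, e)))
    logger.ack(s0)                                          # only the first event was acknowledged
    logger.replay(lambda s, e: print("replaying", s, e))    # replays only the second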
24

A Data-Descriptive Feedback Framework for Data Stream Management Systems

Fernández Moctezuma, Rafael J. 01 January 2012 (has links)
Data Stream Management Systems (DSMSs) provide support for continuous query evaluation over data streams. Data streams pose processing challenges due to their unbounded nature and varying characteristics, such as rate and density fluctuations. DSMSs need to adapt stream processing to these changes within certain constraints, such as available computational resources and minimum latency requirements in producing results. The proposed research develops an inter-operator feedback framework, where opportunities for run-time adaptation of stream processing are expressed in terms of descriptions of substreams and actions applicable to those substreams, called feedback punctuations. Both the discovery of adaptation opportunities and the exploitation of these opportunities are performed in the query operators. DSMSs are also concerned with state management, in particular, state derived from tuple processing. The proposed research also introduces the Contracts Framework, which provides execution guarantees about state purging in continuous query evaluation for systems with and without inter-operator feedback. This research provides both theoretical and design contributions. The research also includes an implementation and evaluation of the feedback techniques in the NiagaraST DSMS, and a reference implementation of the Contracts Framework.
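A toy Python rendering of the feedback-punctuation idea may help: a punctuation pairs a description of a substream with an action, and an upstream operator adapts its processing when the description matches a tuple. The field names and the "drop" action are illustrative assumptions, not NiagaraST's actual interface.

from dataclasses import dataclass
from typing import Any, Dict

@dataclass
class FeedbackPunctuation:
    describes: Dict[str, Any]            # e.g. {"sensor": "S7"} -- which tuples it covers
    action: str                          # e.g. "drop" or "prioritize"

def matches(tup: Dict[str, Any], description: Dict[str, Any]) -> bool:
    return all(tup.get(k) == v for k, v in description.items())

def upstream_operator(tuples, punctuations):
    """Apply any 'drop' feedback before doing further work on the tuple."""
    for tup in tuples:
        if any(p.action == "drop" and matches(tup, p.describes) for p in punctuations):
            continue                     # adaptation: skip tuples the consumer no longer needs
        yield tup

if __name__ == "__main__":
    fb = [FeedbackPunctuation(describes={"sensor": "S7"}, action="drop")]
    data = [{"sensor": "S7", "v": 1}, {"sensor": "S2", "v": 9}]
    print(list(upstream_operator(data, fb)))   # only the S2 tuple survives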
25

The Force of Language: How Children Acquire the Semantic Categories of Force Dynamics

George, Nathan R. January 2014 (has links)
Verbs and prepositions encode relations within events, such as a child running towards the top of a hill or a second child pushing the first away from the top. These relational terms present significant challenges in language acquisition, requiring the mapping of the categorical system of language onto the continuous stream of information in events. This challenge is magnified when considering the complexities of events themselves. Events consist of part-whole relations, or partonomic hierarchies, in which events defined by smaller boundaries, such as the child running up the hill, can be integrated into broader categories, such as the second child preventing the first from reaching the top (Zacks & Tversky, 2001). This dissertation addresses how this partonomic hierarchy in events is paralleled in the structure of relational language. I examine the semantic category of force dynamics, or "how entities interact with respect to force" (Talmy, 1988, p. 49), which introduces broad categories (e.g., help, prevent) that incorporate previously independent relations in events, such as paths, goals, and causality. Two studies ask how children and adults navigate the tension between fine and broad categories in their nonlinguistic representations of force and motion events and whether language - in the form of both labels and syntactic cues - helps children to integrate previously independent relations into these higher order constructs. Participants completed a novel task designed to assess the saliency of force dynamics relations across events. Participants viewed an animated event depicting a force dynamics relation (e.g., prevent, cause) and were asked to identify which of two perceptually varied events (i.e., different characters and setting) depicted the same relation. Study One extends previous research, showing that adults encode force dynamics relations in nonlinguistic contexts. Study Two examined these representations in 4-year-olds, both with and without linguistic cues. Absent linguistic cues, children showed no evidence of encoding force dynamics; however, the presence of language highlighted these relations, improving children's attention to these broader categories in events. The results are the first to explore the problem of hierarchies in relational language and demonstrate a novel role for language in drawing children's attention to the presence of relations between relations. / Psychology
26

Resource Allocation Algorithms for Event-Based Enterprise Systems

Cheung, Alex King Yeung 30 August 2011 (has links)
Distributed event processing systems suffer from poor scalability and inefficient resource usage caused by load distributions typical in real-world applications. The results of these shortcomings are availability issues, poor system performance, and high operating costs. This thesis proposes three remedies to solve these limitations in content-based publish/subscribe, which is a practical realization of an event processing system. First, we present a load balancing algorithm that relocates subscribers to distribute load and avoid overloads. Second, we propose publisher relocation algorithms that reduce both the load imposed onto brokers and the delivery delay experienced by subscribers. Third, we present "green" resource allocation algorithms that allocate as few brokers as possible while maximizing their resource usage efficiency by reconfiguring the publishers, subscribers, and the broker topology. We implemented all of our approaches on an open-source content-based publish/subscribe system called PADRES and evaluated them on SciNet, PlanetLab, a cluster testbed, and in simulations to demonstrate the effectiveness of our solutions. Our evaluation findings are summarized as follows. One, the proposed load balancing algorithm is effective in distributing and balancing load originating from a single server to all available servers in the network. Two, our publisher relocation algorithm reduces the average input load of the system by up to 68%, the average broker message rate by up to 85%, and the average delivery delay by up to 68%. Three, our resource allocation algorithm reduces the average broker message rate even further, by up to 92%, and the number of allocated brokers by up to 91%.
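The subscriber-relocation idea in the first remedy can be sketched in a few lines of Python: when a broker's total subscription load exceeds a threshold, its heaviest subscriptions are moved to the least-loaded broker. This is a deliberately simplified, hypothetical heuristic, not the algorithm implemented in PADRES.

def rebalance(brokers, overload_threshold):
    """brokers: {broker_id: {subscriber_id: load}}. Mutates the mapping in place."""
    def total(b):
        return sum(brokers[b].values())

    for b in list(brokers):
        while total(b) > overload_threshold and len(brokers[b]) > 1:
            target = min(brokers, key=total)               # least-loaded broker
            if target == b:
                break
            # pick the heaviest subscriber on the overloaded broker and relocate it
            sub = max(brokers[b], key=brokers[b].get)
            brokers[target][sub] = brokers[b].pop(sub)

if __name__ == "__main__":
    topology = {"broker-A": {"s1": 50, "s2": 30, "s3": 25}, "broker-B": {"s4": 10}}
    rebalance(topology, overload_threshold=60)
    print(topology)   # some of broker-A's subscribers now live on broker-B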
28

An Efficient, Extensible, Hardware-aware Indexing Kernel

Sadoghi Hamedani, Mohammad 20 June 2014 (has links)
Modern hardware has the potential to play a central role in scalable data management systems. A realization of this potential arises in the context of indexing queries, a recurring theme in real-time data analytics, targeted advertising, algorithmic trading, and data-centric workflows, and of indexing data, a challenge in multi-version analytical query processing. To enhance query and data indexing, in this thesis, we present an efficient, extensible, and hardware-aware indexing kernel. This indexing kernel rests upon novel data structures and (parallel) algorithms that utilize the capabilities offered by modern hardware, especially the abundance of main memory, multi-core architectures, hardware accelerators, and solid state drives. This thesis focuses on presenting our query indexing techniques to cope with processing queries in data-intensive applications that are subject to ever-increasing data volume and velocity. At the core of our query indexing kernel lies the BE-Tree family of memory-resident indexing structures, which scales by overcoming the curse of dimensionality through a novel two-phase space-cutting technique, effective Top-k processing, and adaptive parallel algorithms that operate directly on compressed data (exploiting the multi-core architecture). Furthermore, we achieve line-rate processing by harnessing the unprecedented degrees of parallelism and pipelining only available through low-level logic design using FPGAs. Finally, we present a comprehensive evaluation that establishes the superiority of BE-Tree in comparison with state-of-the-art algorithms. In this thesis, we further expand the scope of our indexing kernel and describe how to accelerate analytical queries on (multi-version) databases by enabling indexes on the most recent data. Our goal is to reduce the overhead of index maintenance, so that indexes can be used effectively for analytical queries without being a heavy burden on transaction throughput. To achieve this end, we re-design the data structures in the storage hierarchy to employ an extra level of indirection over solid state drives. This indirection layer dramatically reduces the number of magnetic disk I/Os needed for updating indexes and localizes the index maintenance. As a result, by rethinking how data is indexed, we eliminate the dilemma between update and query performance and substantially reduce index maintenance and query processing cost.
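For readers unfamiliar with query indexing, the following Python sketch shows the general flavor of the problem BE-Tree addresses: subscriptions expressed as attribute intervals are partitioned by one attribute so that an incoming event probes only the partitions it can possibly satisfy. It is a toy single-level index with invented names, not BE-Tree's two-phase space-cutting structure.

from collections import defaultdict

class SimpleQueryIndex:
    def __init__(self):
        self.partitions = defaultdict(list)    # attribute -> [(sub_id, predicates)]

    def insert(self, sub_id, predicates):
        """predicates: {attr: (low, high)} interval per attribute; partition on one attribute."""
        pivot = min(predicates)                # crude partitioning choice for the sketch
        self.partitions[pivot].append((sub_id, predicates))

    def match(self, event):
        """Return subscriptions whose every interval contains the event's value."""
        hits = []
        for attr in event:                     # only probe partitions keyed by event attributes
            for sub_id, preds in self.partitions.get(attr, []):
                if all(a in event and lo <= event[a] <= hi for a, (lo, hi) in preds.items()):
                    hits.append(sub_id)
        return hits

if __name__ == "__main__":
    idx = SimpleQueryIndex()
    idx.insert("q1", {"price": (10, 20)})
    idx.insert("q2", {"price": (0, 5), "volume": (100, 200)})
    print(idx.match({"price": 15, "volume": 150}))   # ['q1']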
30

Minimizing Overhead for Fault Tolerance in Event Stream Processing Systems

Martin, André 20 September 2016 (has links) (PDF)
Event Stream Processing (ESP) is a well-established approach for low-latency data processing, enabling users to quickly react to relevant situations in soft real-time. In order to cope with the sheer amount of data being generated each day and with fluctuating workloads originating from data sources such as Twitter and Facebook, such systems must be highly scalable and elastic. Hence, ESP systems are typically long-running applications deployed on several hundreds of nodes in either dedicated data centers or cloud environments such as Amazon EC2. In such environments, nodes are likely to fail due to software aging, process errors, or hardware errors, whereas the unbounded stream of data demands continuous processing. In order to cope with node failures, several fault tolerance approaches have been proposed in the literature. Active replication and rollback recovery based on checkpointing and in-memory logging (upstream backup) are two commonly used approaches for coping with such failures in the context of ESP systems. However, these approaches suffer either from a high resource footprint, low throughput, or unresponsiveness due to long recovery times. Moreover, recovering applications precisely with exactly-once semantics requires deterministic execution, which adds another layer of complexity and overhead. The goal of this thesis is to lower the overhead of fault tolerance in ESP systems. We first present StreamMine3G, our ESP system built entirely from scratch in order to study and evaluate novel approaches for fault tolerance and elasticity. We then present an approach to reduce the overhead of deterministic execution by using a weak, epoch-based rather than strict ordering scheme for commutative and tumbling windowed operators, which allows applications to recover precisely using active or passive replication. Since most applications run in cloud environments nowadays, we furthermore propose an approach to increase system availability by efficiently utilizing spare but paid resources for fault tolerance. Finally, in order to free users from the burden of choosing the correct fault tolerance scheme for their applications that guarantees the desired recovery time while still saving resources, we present a controller-based approach that adapts fault tolerance at runtime. We furthermore showcase the applicability of our StreamMine3G approach using real-world applications and examples.
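A minimal Python sketch, assuming a single stateful operator with invented names, illustrates the rollback-recovery style (checkpointing plus an in-memory upstream log) that the abstract contrasts with active replication: periodically snapshot the operator state, keep the events received since the last snapshot, and on recovery restore the snapshot and replay the logged events.

import copy

class CheckpointedOperator:
    def __init__(self, checkpoint_every=3):
        self.state = {"count": 0}
        self.checkpoint_every = checkpoint_every
        self._since_checkpoint = []            # upstream backup: events not yet covered
        self._checkpoint = copy.deepcopy(self.state)

    def process(self, event):
        self.state["count"] += event           # the operator's (toy) processing logic
        self._since_checkpoint.append(event)
        if len(self._since_checkpoint) >= self.checkpoint_every:
            self._checkpoint = copy.deepcopy(self.state)   # persisting the snapshot is elided
            self._since_checkpoint.clear()

    def recover(self):
        """Restore the last checkpoint, then replay the logged events exactly once."""
        self.state = copy.deepcopy(self._checkpoint)
        replay, self._since_checkpoint = self._since_checkpoint, []
        for event in replay:
            self.process(event)

if __name__ == "__main__":
    op = CheckpointedOperator()
    for e in [1, 2, 3, 4]:
        op.process(e)
    op.recover()                               # simulate a crash after the 4th event
    print(op.state)                            # {'count': 10} -- state reproduced after recovery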
