1

Application of a Temporal Database Framework for Processing Event Queries

January 2012 (has links)
This dissertation presents the Temporal Event Query Language (TEQL), a new language for querying event streams. Event stream processing enables online querying of streams of events to extract relevant data in a timely manner. TEQL enables querying of interval-based event streams using temporal database operators. Temporal databases and temporal query languages have been a subject of research for more than 30 years and are a natural fit for expressing queries that involve a temporal dimension; however, operators developed in that context cannot be applied directly to event streams. This research extends a preexisting relational framework for event stream processing to support temporal queries, identifying the language features and formal semantics needed for the extension. The extended framework supports continuous, step-wise evaluation of temporal queries, and the incremental evaluation of TEQL operators is formalized to avoid recomputation of previous results. The research includes a prototype that supports the integrated event and temporal query processing framework, with support for incremental evaluation and materialization of intermediate results. TEQL enables reporting temporal data in the output, direct specification of conditions over timestamps, and specification of temporal relational operators. By integrating temporal database operators with event languages, a new class of temporal queries over event streams becomes possible, including semantic aggregation, extraction of temporal patterns using set operators, and more accurate specification of event co-occurrence. / Dissertation/Thesis / Ph.D. Computer Science 2012
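The abstract does not reproduce TEQL's syntax or semantics, but the incremental-evaluation idea it describes can be sketched. The following Python sketch is illustrative only, with all names invented: a temporal overlap join over interval-based events materializes both inputs so that each arriving event updates the result set step-wise rather than triggering recomputation.

```python
# Hypothetical sketch: incremental evaluation of a temporal overlap join
# over interval-based events, in the spirit of the TEQL abstract.
from dataclasses import dataclass

@dataclass
class Event:
    name: str
    start: int  # interval start timestamp
    end: int    # interval end timestamp

class IncrementalOverlapJoin:
    """Emits pairs (a, b) whose intervals overlap, updating results
    step-wise as events arrive instead of recomputing from scratch."""
    def __init__(self):
        self.left, self.right = [], []

    def _overlaps(self, a, b):
        return a.start < b.end and b.start < a.end

    def on_left(self, e):
        self.left.append(e)  # materialize for joins with later arrivals
        return [(e, r) for r in self.right if self._overlaps(e, r)]

    def on_right(self, e):
        self.right.append(e)
        return [(l, e) for l in self.left if self._overlaps(l, e)]

join = IncrementalOverlapJoin()
join.on_left(Event("fever", 0, 10))
print(join.on_right(Event("rash", 5, 15)))  # overlap detected on arrival
```

Materializing both inputs is what makes step-wise evaluation possible; a real system would also expire events that can no longer overlap any future arrival.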
2

STT Event Stream Feature to Assist Software Testing of Implantable Devices in St. Jude Medical

Park, Yong J 01 March 2009 (has links) (PDF)
During development and testing of pacemaker and defibrillator functionality, engineers in the cardiac rhythm management industry use a patient simulator to ensure that a device functions properly before it is tested on an animal or a human. The patient simulator is also used in formal device product testing. At St. Jude Medical, a patient simulator called the Simulation Test Tool (STT) has been developed and is used by engineers in the company. While the Heart Simulator (HS) feature, based on a physiological heart model, has served as the STT's main cardiac rhythm simulation feature, there has been an increasing need for a new STT feature that lets engineers create heart rhythm scenarios more easily and effectively. This thesis covers the design and implementation of the new STT feature, called Event Stream, which allows users to create heart rhythm scenarios for testing device functionality using a simple text-string-based syntax.
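The thesis's actual Event Stream grammar is not given in this abstract; purely as an illustration of the text-string idea, the hypothetical sketch below parses invented "EVENT:DELAY_MS" tokens into a timed rhythm scenario. The event codes and syntax are assumptions, not St. Jude Medical's.

```python
# Hypothetical illustration of a text-string heart-rhythm scenario;
# the token syntax and event codes are invented for this sketch.
def parse_scenario(text):
    """Parse 'EVENT:DELAY_MS' tokens, e.g. 'A:800 V:200' ->
    [('A', 800), ('V', 200)] (A = atrial, V = ventricular event)."""
    events = []
    for token in text.split():
        name, delay = token.split(":")
        events.append((name, int(delay)))
    return events

def replay(events):
    """Walk the scenario, computing the absolute time of each event."""
    t = 0
    for name, delay in events:
        t += delay
        print(f"t={t:5d} ms  emit {name}")

replay(parse_scenario("A:800 V:200 A:800 V:200"))
```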
3

CBPsp: complex business processes for stream processing

Kamaleswaran, Rishikesan 01 April 2011 (has links)
This thesis presents the framework of a complex-business-process-driven event stream processing system that produces meaningful output with direct implications for the business objectives of an organization. The framework is demonstrated with a case study instantiating the management of a newborn infant with hypoglycaemia. Business processes defined within guidelines are specified at build time, while critical knowledge found in their definitions is used to support their enactment for stream analysis. Four major research contributions are delivered. The first enables the definition and enactment of complex business processes in real time. The second supports the extraction of business processes using knowledge found within the initial expression of the business process. The third allows the explicit use of temporal abstraction and stream analysis knowledge to support enactment in real time. Finally, the last contribution is the real-time integration of heterogeneous streams based on Service-Oriented Architecture principles. / UOIT
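To picture the temporal-abstraction contribution in the hypoglycaemia case study, the sketch below shows one simplified possibility; the threshold and all names are assumptions, not taken from the thesis. Raw glucose readings are abstracted into qualitative state intervals that business-process rules could then consume.

```python
# Simplified sketch of temporal abstraction over a glucose stream:
# raw numeric readings -> qualitative state intervals. The 2.6 mmol/L
# threshold is assumed here purely for illustration.
HYPO_THRESHOLD = 2.6  # mmol/L (assumption for this sketch)

def abstract_states(readings):
    """readings: iterable of (timestamp, glucose). Returns
    (state, start_ts, end_ts) intervals, merging consecutive
    readings that share the same qualitative state."""
    intervals = []
    for ts, glucose in readings:
        state = "hypoglycaemic" if glucose < HYPO_THRESHOLD else "normal"
        if intervals and intervals[-1][0] == state:
            intervals[-1] = (state, intervals[-1][1], ts)  # extend interval
        else:
            intervals.append((state, ts, ts))
    return intervals

stream = [(0, 3.1), (10, 2.4), (20, 2.2), (30, 2.9)]
print(abstract_states(stream))
# [('normal', 0, 0), ('hypoglycaemic', 10, 20), ('normal', 30, 30)]
```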
4

Escalonamento adaptativo para sistemas de processamento contínuo de eventos. / Adaptive scheduling for continuous event processing systems.

SOUSA, Rodrigo Duarte. 13 April 2018 (has links)
Event stream processing systems are increasingly used in applications that require near-real-time processing. That need, combined with the high volume of data processed by these applications, imposes strong performance and fault tolerance requirements on such systems. To meet these requirements, schedulers usually monitor the utilization of machine resources (such as CPU, RAM, disk, and network bandwidth) in an attempt to react to potential overloads that could degrade application performance. However, because application and component profiles differ, the complexity of deciding in a flexible and generic way what should be monitored, and of judging what makes one resource more important than another at a given moment, can lead the scheduler to make poor choices. This dissertation proposes a scheduling algorithm that, through a reactive approach, adapts to different application and load profiles, making decisions based on the monitored latency variation of its operators. Periodically, the scheduler evaluates which operators show degraded performance and then tries to migrate those operators to less loaded nodes. In experiments with a prototype of the algorithm, the results showed improved system performance through reduced processing latency while the volume of processed events was maintained. In runs with abrupt workload variations, the operators' average processing latency was reduced by more than 84%, while the number of processed events decreased by only 1.18%.
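The algorithm itself is only summarized in the abstract; the sketch below illustrates the reactive loop it describes, with the degradation test, thresholds, and all names invented for illustration. Each scheduling period, operators whose recent latency worsened relative to their own history are migrated to the least loaded node.

```python
# Hypothetical sketch of the reactive scheduling loop described above:
# monitor per-operator latency variation, migrate degraded operators.
from statistics import mean

def degraded(history, window=5, factor=1.5):
    """Flag an operator whose recent mean latency grew by `factor`
    relative to its older samples (window and factor are assumptions)."""
    if len(history) < 2 * window:
        return False
    recent, older = history[-window:], history[-2 * window:-window]
    return mean(recent) > factor * mean(older)

def schedule_step(latencies, placement, node_load):
    """latencies: {op: [latency samples]}, placement: {op: node},
    node_load: {node: operator count}. Moves degraded operators
    to the currently least-loaded node."""
    for op, history in latencies.items():
        if degraded(history):
            target = min(node_load, key=node_load.get)
            if target != placement[op]:
                node_load[placement[op]] -= 1
                node_load[target] += 1
                placement[op] = target
    return placement

lat = {"filter": [5, 5, 6, 5, 5, 14, 15, 16, 15, 14], "join": [9] * 10}
print(schedule_step(lat, {"filter": "n1", "join": "n1"}, {"n1": 2, "n2": 0}))
# -> {'filter': 'n2', 'join': 'n1'}: the degraded operator moved off n1
```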
5

A situation refinement model for complex event processing

Alakari, Alaa A. 07 January 2021 (has links)
Complex Event Processing (CEP) systems aim to process large flows of events to discover situations of interest (SOI). Primarily, CEP uses predefined pattern templates to detect occurrences of complex events in an event stream. Complex events are extracted by employing techniques such as filtering and aggregation to detect complex patterns over many simple events. In general, CEP systems rely on domain experts to define the complex pattern rules that recognize SOI. However, fine-tuning complex pattern rules in an event streaming environment faces two main challenges: increased pattern complexity, and event streaming constraints under which such rules must be acquired and processed in near real time. Therefore, to fine-tune a CEP pattern to identify an SOI, the following requirements must be met: first, a minimum number of rules must be used to refine the CEP pattern, to avoid increased pattern complexity; second, domain knowledge must be incorporated into the refinement process to improve awareness of emerging situations. Furthermore, the event data must be processed upon arrival, to cope with the continuous arrival of events in the stream and to respond in near real time. This dissertation presents a Situation Refinement Model (SRM) that meets these requirements, in particular by developing a Single-Scan Frequent Item Mining algorithm that acquires a minimal number of CEP rules, with the ability to adjust the level of refinement to fit the applied scenario. In addition, a cost-gain evaluation measure to determine the best tradeoff in identifying a particular SOI is presented. / Graduate
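The dissertation's Single-Scan Frequent Item Mining algorithm is not detailed in the abstract. As a stand-in with the same one-pass, bounded-memory character, the sketch below uses the well-known Space-Saving algorithm to surface frequent event types, from which candidate CEP rule elements might be drawn; it is not the SRM's actual algorithm.

```python
# One-pass frequent-item mining over an event stream using the
# Space-Saving algorithm (a stand-in for the dissertation's own
# single-scan algorithm, which the abstract does not detail).
def space_saving(stream, capacity=3):
    """Track at most `capacity` counters; frequent event types
    survive, rare ones get evicted (and slightly overestimated)."""
    counters = {}
    for item in stream:
        if item in counters:
            counters[item] += 1
        elif len(counters) < capacity:
            counters[item] = 1
        else:
            # Replace the current minimum, inheriting its count.
            victim = min(counters, key=counters.get)
            counters[item] = counters.pop(victim) + 1
    return counters

events = ["login", "click", "login", "error", "login", "click", "buy"]
print(space_saving(events))
# Frequent types such as 'login' dominate the surviving counters.
```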
6

Minimizing Overhead for Fault Tolerance in Event Stream Processing Systems

Martin, André 20 September 2016 (has links) (PDF)
Event Stream Processing (ESP) is a well-established approach for low-latency data processing that enables users to react quickly to relevant situations in soft real time. To cope with the sheer amount of data generated each day and with fluctuating workloads from data sources such as Twitter and Facebook, such systems must be highly scalable and elastic. Hence, ESP systems are typically long-running applications deployed on several hundred nodes in dedicated data centers or in cloud environments such as Amazon EC2. In such environments, nodes are likely to fail due to software aging or process or hardware errors, while the unbounded stream of data demands continuous processing. Several fault tolerance approaches have been proposed in the literature to cope with node failures. Active replication and rollback recovery based on checkpointing and in-memory logging (upstream backup) are two approaches commonly used in the context of ESP systems. However, these approaches suffer from either a high resource footprint, low throughput, or unresponsiveness due to long recovery times. Moreover, recovering applications precisely, with exactly-once semantics, requires deterministic execution, which adds another layer of complexity and overhead. The goal of this thesis is to lower the overhead of fault tolerance in ESP systems. We first present StreamMine3G, an ESP system built entirely from scratch in order to study and evaluate novel approaches to fault tolerance and elasticity. We then present an approach that reduces the overhead of deterministic execution by using a weak, epoch-based ordering scheme, rather than a strict one, for commutative and tumbling-windowed operators, allowing applications to recover precisely using active or passive replication. Since most applications run in cloud environments nowadays, we furthermore propose an approach that increases system availability by efficiently utilizing spare but already paid-for resources for fault tolerance. Finally, to free users from the burden of choosing the fault tolerance scheme that guarantees the desired recovery time while still saving resources, we present a controller-based approach that adapts fault tolerance at runtime. We showcase the applicability of StreamMine3G using real-world applications and examples.
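The weak, epoch-based ordering idea can be pictured with a small sketch (names, the aggregate, and the epoch length are invented here): a commutative, tumbling-windowed operator only needs each event assigned to the right epoch, not totally ordered within it, so replaying an epoch after a failure reproduces the same per-epoch results regardless of arrival order.

```python
# Sketch of weak, epoch-based ordering for a commutative operator:
# only epoch boundaries matter, not the order of events within one,
# so recovery can replay an epoch's events in any arrival order.
from collections import defaultdict

def process_epochs(events, epoch_len=10):
    """events: iterable of (timestamp, value). Groups events into
    tumbling epochs and applies a commutative aggregate (sum), so
    any intra-epoch arrival order yields identical epoch results."""
    epochs = defaultdict(int)
    for ts, value in events:
        epochs[ts // epoch_len] += value  # order inside an epoch is irrelevant
    return dict(epochs)

original = [(1, 5), (3, 2), (12, 7), (15, 1)]
replayed = [(3, 2), (1, 5), (15, 1), (12, 7)]  # different arrival order
assert process_epochs(original) == process_epochs(replayed)
print(process_epochs(original))  # {0: 7, 1: 8} -> precise per-epoch recovery
```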
7

Investigation of How Real-Time Event Streams Can be Analysed and Utilised in Distributed Systems at BookBeat AB : Comparing the industry standard and custom implementations / Undersökning av hur realtidsströmmar av händelser kan analyseras och användas inom distribuerade system hos BookBeat AB : En jämförelse mellan industristandarden och skräddarsydda implementationer

Elmdahl, Kalle, Nilsson, Hampus January 2023 (has links)
Today's technology companies have large amounts of streamed data flowing through their distributed systems in real time. In order to optimise their systems and understand their effectiveness, they need to measure and analyse the data without disturbing business flows. One way of doing this is to use Event Stream Processing (ESP). As more detailed insights are constantly requested, faster and more reliable real-time processing is needed. The question is whether and how such a solution would affect the pre-existing systems' performance and capabilities, and how it could be implemented. This thesis develops an ESP system and compares different ways of solving the stated problem across architectures, network protocols, and data management methods. The system is then tested, analysed, and compared to today's commercially available software, with the purpose of investigating how real-time event streams can be analysed and utilised in distributed systems.
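The thesis's implementations are not reproduced here, but its central constraint, analysing stream data without disturbing business flows, can be pictured as a non-blocking tap feeding a separate analytics consumer. The sketch below is one hypothetical arrangement (the queue bound, the metric, and all names are assumptions): the business path only enqueues and sheds load under backpressure, so measurement can never stall it.

```python
# Hedged sketch of a non-intrusive metrics tap on an event stream:
# the business path only enqueues (never blocks); a separate consumer
# aggregates throughput so analysis cannot stall business flows.
import queue
import threading

tap = queue.Queue(maxsize=1000)

def observe(event):
    """Called on the business path: O(1), never blocks; if the
    analytics side lags, samples are dropped rather than waited on."""
    try:
        tap.put_nowait(event)
    except queue.Full:
        pass  # shed load: measurement must not disturb the flow

def analytics():
    count = 0
    while True:
        item = tap.get()
        if item is None:  # sentinel: stop
            break
        count += 1
    print(f"observed {count} events")

t = threading.Thread(target=analytics)
t.start()
for i in range(100):
    observe({"event_id": i})
tap.put(None)
t.join()
```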
8

NETAH, un framework pour la composition distribuée de flux d'événements / NETAH, A Framework for Composing Distributed Event Streams

Epal Njamen, Orleant 11 October 2016 (has links)
The shrinking size of devices and the advent of wireless communications have contributed strongly to the advent of sustainable computing. Most computer applications today are built with this dynamic ambient environment in mind. Their development and execution require software infrastructures that allow entities to execute, to interact through various modes (synchronous and asynchronous), and to adapt to their environment(s), particularly in terms of: resource consumption (computation, memory, storage media, databases, network connections, ...); the multiplicity of data sources (the Web, sensors, smart meters, satellites, existing databases, ...); and the multiple formats of static objects or streams (images, sound, video). In many cases, stream objects must be homogenized, enriched, joined, filtered, and aggregated to ultimately form information products that are semantically rich and strategic for applications or users. Event-based systems are particularly well suited to programming this type of application. They can offer anonymous and asynchronous communication (producers/servers and consumers/clients do not know each other), which facilitates interoperation and collaboration between autonomous, heterogeneous services. Event systems must be able to observe, transport, filter, aggregate, correlate, and analyze many event streams produced in a distributed fashion. These observation services must be deployable on distributed architectures such as sensor networks, smart grids, and the cloud, contributing to the observation of complex systems and to their autonomous control through reactive decision-making processes. The aim of the thesis is to propose a model for the distributed composition of event streams and to specify an event service that can efficiently perform aggregation, temporal and causal correlation, and analysis of event streams in distributed service-based platforms. Work performed: (i) a state of the art covering event stream management systems, distributed event services and infrastructures, and event models; (ii) the definition of a scenario for experimenting with and comparing existing approaches; (iii) the definition of a subscription-based model for the distributed composition of event streams; (iv) the specification and implementation of a distributed event stream composition service.
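As a minimal picture of subscription-based composition in the spirit of this abstract (all names are invented), the sketch below derives a new event stream by subscribing a filtering operator to an upstream stream and republishing its output; aggregation and correlation operators would compose the same way.

```python
# Minimal sketch of subscription-based event stream composition,
# in the spirit of the NETAH abstract; all names are invented.
class Stream:
    def __init__(self, name):
        self.name, self.subscribers = name, []

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def publish(self, event):
        for cb in self.subscribers:
            cb(event)

def compose_filter(source, predicate, name):
    """Derive a new stream carrying only events matching `predicate`."""
    out = Stream(name)
    source.subscribe(lambda e: out.publish(e) if predicate(e) else None)
    return out

sensors = Stream("sensors")
alerts = compose_filter(sensors, lambda e: e["temp"] > 30, "alerts")
alerts.subscribe(lambda e: print("ALERT:", e))

sensors.publish({"temp": 22})  # filtered out
sensors.publish({"temp": 35})  # propagated -> ALERT: {'temp': 35}
```

Because composed streams are themselves `Stream` objects, operators chain naturally: an aggregate could subscribe to `alerts` just as `alerts` subscribed to `sensors`, which is the anonymous, asynchronous coupling the abstract describes.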
