41

Location-Aware Business Process Management for Real-time Monitoring of Patient Care Processes

Bougueng Tchemeube, Renaud January 2013 (has links)
Long wait times are a global issue in the healthcare sector, particularly in Canada. Despite numerous research findings on wait time management, the issue persists. This is partly because, for a given hospital, the data required to conduct wait time analysis is currently scattered across various information systems. Moreover, such data is usually inaccurate (because of possible human errors), imprecise, and late. The whole situation contributes to the current state of wait times. This thesis proposes a location-aware business process management system for real-time care process monitoring. More precisely, the system improves the visibility of process execution by gathering accurate and granular process information, including wait time measurements, as processes execute. The major contributions of this thesis include an architecture for the system, a prototype that combines a commercial real-time location system with a business process management system to accurately measure wait times, and a case study based on a real cardiology process from an Ontario hospital.
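As a rough illustration of how location events can yield wait-time measurements, the sketch below accumulates the time each patient spends in a waiting area from a stream of (patient, zone, timestamp) readings. The event format, zone names, and function names are assumptions made for the example; they do not reflect the thesis's actual system or data model.

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical RTLS readings: (patient_id, zone, timestamp). In the thesis's
# setting these would arrive from a commercial real-time location system.
events = [
    ("p1", "waiting_room", datetime(2013, 5, 1, 9, 0)),
    ("p1", "exam_room",    datetime(2013, 5, 1, 9, 40)),
    ("p2", "waiting_room", datetime(2013, 5, 1, 9, 10)),
    ("p2", "exam_room",    datetime(2013, 5, 1, 9, 25)),
]

def wait_times(events, waiting_zone="waiting_room"):
    """Accumulate per-patient time spent in the waiting zone."""
    entered = {}                       # patient -> time they entered the waiting zone
    waited = defaultdict(lambda: 0.0)  # patient -> total seconds waited
    for patient, zone, ts in sorted(events, key=lambda e: e[2]):
        if zone == waiting_zone:
            entered[patient] = ts
        elif patient in entered:       # patient left the waiting zone
            waited[patient] += (ts - entered.pop(patient)).total_seconds()
    return dict(waited)

print(wait_times(events))  # p1 waited 2400.0 s, p2 waited 900.0 s
```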
42

Runtime MPI Correctness Checking with a Scalable Tools Infrastructure

Hilbrich, Tobias 08 June 2015 (has links)
Increasing computational demand of simulations motivates the use of parallel computing systems. At the same time, this parallelism poses challenges to application developers. The Message Passing Interface (MPI) is a de-facto standard for distributed memory programming in high performance computing. However, its use also enables complex parallel programming errors such as races, communication errors, and deadlocks. Automatic tools can assist application developers in the detection and removal of such errors. This thesis considers tools that detect such errors during an application run and advances them towards a combination of precise checks (neither false positives nor false negatives) and scalability. This includes novel hierarchical checks that provide scalability, as well as a formal basis for a distributed deadlock detection approach. At the same time, the development of parallel runtime tools is challenging and time consuming, especially if scalability and portability are key design goals. Current tool development projects often create similar tool components, while component reuse remains low. To provide a perspective towards more efficient tool development, which simplifies scalable implementations, component reuse, and tool integration, this thesis proposes an abstraction for a parallel tools infrastructure along with a prototype implementation. This abstraction overcomes the use of multiple interfaces for different types of tool functionality, which limit flexible component reuse. Thus, this thesis advances runtime error detection tools and uses their redesign and their increased scalability requirements to apply and evaluate a novel tool infrastructure abstraction. The new abstraction ultimately allows developers to focus on their tool functionality rather than on developing or integrating common tool components. The use of such an abstraction in a wide range of parallel runtime tool development projects could greatly increase component reuse, thus decreasing tool development time and cost. An application study with up to 16,384 application processes demonstrates the applicability of both the proposed runtime correctness concepts and the proposed tools infrastructure.
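To make the class of errors concrete, here is a minimal example of the kind of deadlock such runtime checkers flag: both ranks issue a blocking receive before their matching send, so neither call can ever complete. It uses mpi4py purely for illustration and is unrelated to the tool infrastructure developed in the thesis.

```python
# Run with: mpiexec -n 2 python deadlock.py
# Both ranks block in recv() before either send() is reached -> deadlock.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
peer = 1 - rank  # the other rank

data = comm.recv(source=peer)        # blocking receive: waits forever
comm.send({"from": rank}, dest=peer) # never reached
print(f"rank {rank} received {data}")
```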
43

Abnahmetestgetriebene Entwicklung von ereignisbasierten Anwendungen: Werkzeugunterstützung und empirische Studien / Acceptance Test Driven Development of Event-Based Applications: Tool Support and Empirical Studies

Weiß, Johannes 14 June 2017 (has links)
The amount of available, electronically processable information is growing steadily. Mobile phones with a wide variety of sensors, social networks, and the Internet of Things are examples of producers of potentially interesting and usable data. The field of event processing (EP) offers technologies and tools to process incoming data, so-called events, in near real time. For example, patterns can be detected in the events, and by creating derived events, other systems can react to this pattern detection. Among other things, time-based functionality can be realized, such as monitoring stock prices over a defined period. In contrast to a message-oriented communication system, EP applications can implement domain-relevant application functionality, which makes their validation by domain experts increasingly important. Acceptance test driven development (ATDD) is an agile software development method that focuses on integrating domain experts into the creation and evaluation of automatable test cases. Besides the potential to automate manual regression tests, the method offers the opportunity to improve knowledge transfer between developers and domain experts. This thesis makes several contributions to the study of ATDD in the area of EP application development. First, requirements for corresponding tool support were derived from the characteristics of EP applications and assigned to the product quality categories of functional suitability, modularity, and usability. A systematic literature review analyzed approaches from the literature as well as the tool support of existing product solutions, and showed that related solutions do not sufficiently meet the identified requirements. Motivated by this, a test description language and an executing, distributed test system were designed and formally described. The test description language offers commands for the product-independent specification of test cases, and the test system makes it possible to execute these test cases against EP product solutions. The approach was validated by means of selected case studies and a prototypical implementation, showing that it exceeds the current state of the art in this application area with respect to functional suitability and modularity. Usability was examined in more depth in two user studies, which yielded first insights into the practical use of the test description language as well as questions for future work. The first study examined the comprehension of test cases, comparing the automatable test description language with a classical test description template; a significant effect in favor of the automatable language was found with respect to completion time. The second study considered the specification of test cases and likewise revealed advantages in completion time.
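For readers unfamiliar with ATDD in an event-processing setting, the following sketch shows the general shape of such an acceptance test: feed events into a rule, then assert on the derived events. The tiny threshold rule and the given/when/then structure are illustrative assumptions and do not reproduce the test description language developed in the thesis.

```python
def price_alerts(events, threshold=100.0):
    """Toy EP rule: derive an 'Alert' event for each price above the threshold."""
    return [{"type": "Alert", "symbol": e["symbol"], "price": e["price"]}
            for e in events if e["type"] == "Price" and e["price"] > threshold]

def test_alert_is_derived_when_price_exceeds_threshold():
    # Given: a stream of price events
    events = [{"type": "Price", "symbol": "ACME", "price": 95.0},
              {"type": "Price", "symbol": "ACME", "price": 101.5}]
    # When: the rule processes the stream
    alerts = price_alerts(events, threshold=100.0)
    # Then: exactly one alert is derived, for the price that crossed the threshold
    assert len(alerts) == 1
    assert alerts[0]["price"] == 101.5

test_alert_is_derived_when_price_exceeds_threshold()
```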
44

Speculation in Parallel and Distributed Event Processing Systems

Brito, Andrey 10 May 2010 (has links)
Event stream processing (ESP) applications enable the real-time processing of continuous flows of data. Algorithmic trading, network monitoring, and processing data from sensor networks are good examples of applications that traditionally rely upon ESP systems. In addition, technological advances are resulting in an increasing number of devices that are network enabled, producing information that can be automatically collected and processed. This increasing availability of on-line data motivates the development of new and more sophisticated applications that require low-latency processing of large volumes of data. ESP applications are composed of an acyclic graph of operators that is traversed by the data. Inside each operator, the events can be transformed, aggregated, enriched, or filtered out. Some of these operations depend only on the current input events; such operations are called stateless. Other operations, however, depend not only on the current event but also on a state built during the processing of previous events. Such operations are, therefore, named stateful. As the number of ESP applications grows, there are increasingly strong requirements, which are often difficult to satisfy. In this dissertation, we address two challenges created by the use of stateful operations in an ESP application: (i) stateful operators can be bottlenecks because they are sensitive to the order of events and cannot be trivially parallelized by replication; and (ii) if failures are to be tolerated, the accumulated state of a stateful operator needs to be saved, and saving this state traditionally imposes considerable performance costs. Our approach is to evaluate the use of speculation to address these two issues. For handling ordering and parallelization issues in a stateful operator, we propose a speculative approach that both reduces latency, when the operator must wait for the correct ordering of the events, and improves throughput, when the operation at hand is parallelizable. In addition, our approach does not require the user to understand concurrent programming or to consider out-of-order execution when writing the operations. For fault-tolerant applications, traditional approaches have imposed prohibitive performance costs due to pessimistic schemes. We extend such approaches, using speculation to mask the cost of fault tolerance.
Contents: 1 Introduction; 2 Background (event stream processing, software transactional memory, fault tolerance in distributed systems); 3 Extending event stream processing systems with speculation; 4 Local speculation; 5 Distributed speculation; 6 Related work; 7 Conclusions; Appendices (publications, pseudocode for the consensus protocol).
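The following sketch illustrates the speculation idea for a stateful operator in simplified form: events are applied optimistically as they arrive, and when an event turns out to be out of order, the operator rolls back to a state consistent with the correct order and replays. Class and method names are assumptions for the example; the system described in the thesis builds on software transactional memory rather than this explicit replay.

```python
class SpeculativeConcat:
    """Toy order-sensitive stateful operator: concatenates event payloads in
    sequence order, processing arrivals optimistically and rolling back when
    an out-of-order (late) event shows up."""

    def __init__(self):
        self.history = []    # (seq, payload) applied so far
        self.state = ""
        self.rollbacks = 0

    def process(self, seq, payload):
        if self.history and seq < self.history[-1][0]:
            # Misspeculation: roll back and replay with the late event in place.
            self.rollbacks += 1
            self.history.append((seq, payload))
            self.history.sort(key=lambda e: e[0])
            self.state = "".join(p for _, p in self.history)
        else:
            # Speculative fast path: assume arrival order is sequence order.
            self.history.append((seq, payload))
            self.state += payload
        return self.state

op = SpeculativeConcat()
for seq, payload in [(1, "a"), (2, "b"), (4, "d"), (3, "c")]:  # event 3 is late
    op.process(seq, payload)
print(op.state, op.rollbacks)  # abcd 1
```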
45

A Test of the Impaired Attentional Disengagement Hypothesis in Social Anxiety

Giffi, Aryn 21 June 2018 (has links)
No description available.
46

A Formal Framework for Process Interoperability in Dynamic Collaboration Environments / Un cadre formel pour l'interopérabilité des processus dans les environnements collaboratifs dynamiques

Khalfallah, Malik 03 December 2014 (has links)
Concevoir les produits complexes tels que les avions, les hélicoptères, et les lanceurs requière l'utilisation de processus standardisés ayant des fondements robustes. Ces processus doivent être exécutés dans le contexte d'environnements collaboratifs interorganisationnels souvent dynamiques. Dans ce manuscrit, nous présentons un cadre formel qui assure une interopérabilité continue dans le temps pour les processus inter-organisationnels dans les environnements dynamiques. Nous proposons un langage de modélisation déclaratif pour définir des contrats qui capturent les objectifs de chaque partenaire intervenant dans la collaboration. Les modèles de contrats construits avec ce langage sous-spécifient les objectifs de la collaboration en limitant les détails capturés durant la phase de construction du contrat. Cette sous-spécification réduit le couplage entre les partenaires de la collaboration. Néanmoins, moins de couplage implique l'apparition de certaines inadéquations quand les processus des partenaires vont s'échanger des messages lors de la phase d'exécution. Par conséquent, nous développons un algorithme de médiation automatique qui est bien adapté pour les environnements dynamiques. Nous conduisons des évaluations de performance sur cet algorithme qui vont démontrer son efficience par rapport aux approches de médiation existantes. Ensuite, nous étendons notre cadre avec un ensemble d'opérations d'administration qui permettent la réalisation de modifications sur l'environnement collaboratif. Nous développons un algorithme qui évalue l'impact des modifications sur les partenaires. Cet algorithme va ensuite décider si la modification doit être réalisée à l'instant ou bien retardée en attendant que des conditions appropriées sur la configuration de l'environnement dynamique soient satisfaites. Pour savoir comment atteindre ces conditions, nous utilisons l'algorithme de planning à base de graphe. Cet algorithme détermine l'ensemble des opérations qui doivent être exécutées pour atteindre ces conditions / The design of complex products such as aircraft, helicopters, and launchers must rely on well-founded and standardized processes. These processes should be executed in the context of dynamic cross-organizational collaboration environments. In this dissertation, we present a formal framework that ensures sustainable interoperability for cross-organizational processes in dynamic environments. We propose a declarative modeling language to define contracts that capture the objectives of each partner in the collaboration. Contract models built using this language under-specify the objectives of the collaboration by limiting the details captured at design-time. This under-specification decreases the coupling between partners in the collaboration. Nevertheless, less coupling leads to mismatches when partners' processes exchange messages at run-time. Accordingly, we develop an automatic mediation algorithm that is well adapted to dynamic environments. We conduct a thorough evaluation of this algorithm in the context of dynamic environments and compare it with existing mediation approaches, demonstrating its efficiency. We then extend our framework with a set of management operations that help realize modifications of the collaboration environment at run-time. We develop an algorithm that assesses the impact of modifications on the partners in the collaboration environment. This algorithm then decides whether the modification can be realized immediately or should be postponed until appropriate conditions hold. To figure out how to reach these conditions, we use the planning graph algorithm, which determines the raw set of management operations that should be executed in order to realize them. A raw set of management operations cannot be executed by an engine unless its operations are encapsulated in the right workflow patterns; accordingly, we extend this planning algorithm to generate an executable workflow from the raw set of operations. We evaluate our extension against existing approaches with respect to the number and nature of workflow patterns considered when generating the executable workflow. Finally, we believe that monitoring contributes to decreasing the coupling between partners in a collaboration environment.
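As a rough illustration of the planning step, the sketch below uses a breadth-first search over operation preconditions and effects to find a sequence of management operations that reaches a target condition. The thesis relies on the planning-graph algorithm and then wraps the resulting operations in workflow patterns, both of which this toy version omits; the operation names and conditions are invented for the example.

```python
from collections import deque

# Hypothetical management operations: name -> (preconditions, effects).
OPERATIONS = {
    "suspend_partner_process": ({"partner_registered"}, {"partner_suspended"}),
    "update_contract":         ({"partner_suspended"}, {"contract_updated"}),
    "resume_partner_process":  ({"contract_updated"}, {"partner_resumed"}),
}

def plan(initial, goal):
    """Breadth-first search for a sequence of operations turning the initial
    set of conditions into one that satisfies the goal."""
    queue = deque([(frozenset(initial), [])])
    seen = {frozenset(initial)}
    while queue:
        conditions, steps = queue.popleft()
        if goal <= conditions:
            return steps
        for name, (pre, eff) in OPERATIONS.items():
            if pre <= conditions:
                nxt = frozenset(conditions | eff)
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, steps + [name]))
    return None

print(plan({"partner_registered"}, {"partner_resumed"}))
# ['suspend_partner_process', 'update_contract', 'resume_partner_process']
```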
47

Gouvernance et supervision décentralisée des chorégraphies inter-organisationnelles / Decentralized Monitoring of Cross-Organizational Service Choreographies

Baouab, Aymen 27 June 2013 (has links)
Durant la dernière décennie, les architectures orientées services (SOA) d'une part et la gestion des processus business (BPM) d'autre part ont beaucoup évolué et semblent maintenant en train de converger vers un but commun qui est de permettre à des organisations complètement hétérogènes de partager de manière flexible leurs ressources dans le but d'atteindre des objectifs communs, et ce, à travers des schémas de collaboration avancée. Ces derniers permettent de spécifier l'interconnexion des processus métier de différentes organisations. La nature dynamique et la complexité de ces processus posent des défis majeurs quant à leur bonne exécution. Certes, les langages de description de chorégraphie aident à réduire cette complexité en fournissant des moyens pour décrire des systèmes complexes à un niveau abstrait. Toutefois, rien ne garantit que des situations erronées ne se produisent pas suite, par exemple, à des interactions "mal" spécifiées ou encore des comportements malhonnêtes d'un des partenaires. Dans ce manuscrit, nous proposons une approche décentralisée qui permet la supervision de chorégraphies au moment de leur exécution et la détection instantanée de violations de séquences d'interaction. Nous définissons un modèle de propagation hiérarchique pour l'échange de notifications externes entre les partenaires. Notre approche permet une génération optimisée de requêtes de supervision dans un environnement événementiel, et ce, d'une façon automatique et à partir de tout modèle de chorégraphie / Cross-organizational service-based processes are increasingly adopted by different companies when they cannot achieve their goals on their own. The dynamic nature of these processes poses various challenges to their successful execution. In order to guarantee that all involved partners are informed about errors that may happen in the collaboration, it is necessary to monitor the execution by continuously observing and checking message exchanges at runtime. This allows global process tracking and the evaluation of process metrics. Complex event processing can address this concern by analyzing and evaluating message exchange events, with the aim of checking whether the actual behavior of the interacting entities adheres to the modeled business constraints. In this thesis, we present an approach for decentralized monitoring of cross-organizational choreographies. We define a hierarchical propagation model for exchanging external notifications between the collaborating parties. We also propose a runtime event-based approach to the problem of monitoring the conformance of interaction sequences. Our approach allows for an automatic and optimized generation of rules. After parsing the choreography graph into a hierarchy of canonical blocks and tagging each event with its block ascendancy, an optimized set of monitoring queries is generated. We evaluate these concepts on a scenario showing how significantly the number of queries can be reduced.
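A minimal sketch of runtime conformance checking of interaction sequences: the expected choreography is reduced to allowed message successions, and each observed exchange is checked against them. The message names and the successor-map encoding are assumptions for the example; the thesis generates optimized CEP monitoring queries from the choreography graph rather than using a hand-written checker like this one.

```python
# Allowed successions derived (by hand, here) from a toy choreography:
# Order -> Invoice -> Payment -> Shipment
ALLOWED_NEXT = {
    "start":   {"Order"},
    "Order":   {"Invoice"},
    "Invoice": {"Payment"},
    "Payment": {"Shipment"},
}

def check_conformance(observed):
    """Flag the first interaction that violates the expected sequence."""
    previous = "start"
    for msg in observed:
        if msg not in ALLOWED_NEXT.get(previous, set()):
            return f"violation: '{msg}' observed after '{previous}'"
        previous = msg
    return "conformant"

print(check_conformance(["Order", "Invoice", "Payment", "Shipment"]))  # conformant
print(check_conformance(["Order", "Payment"]))  # violation: 'Payment' observed after 'Order'
```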
48

An Energy-efficient And Reactive Remote Surveillance Framework Using Wireless Multimedia Sensor Networks

Oztarak, Hakan 01 May 2012 (has links) (PDF)
With the introduction of Wireless Multimedia Sensor Networks, large-scale remote outdoor surveillance applications where the majority of the cameras will be battery-operated are envisioned. These are applications where the frequency of incidents is too low to employ permanent staffing, such as monitoring of land and marine borders, critical infrastructures, bridges, water supplies, etc. Given the low cost of wireless, resource-constrained camera sensors, the size of these networks will be significantly larger than that of traditional multi-camera systems. While a large number of cameras may increase the coverage of the network, such a large size, along with resource constraints, poses new challenges, e.g., localization, classification, tracking, and reactive behavior. This dissertation proposes a framework that transforms current multi-camera networks into low-cost and reactive systems that can be used in large-scale remote surveillance applications. Specifically, a remote surveillance system framework with three components is proposed: 1) localization and tracking of objects; 2) classification and identification of objects; and 3) reactive behavior at the base-station. For each component, novel lightweight, storage-efficient, and real-time algorithms at both the computation and communication levels are designed, implemented, and tested under a variety of conditions. The results indicate the feasibility of this framework operating with limited energy while achieving high object localization/classification accuracies. The results of this research will facilitate the design and development of very large-scale remote border surveillance systems and improve such systems' effectiveness in dealing with intrusions with reduced human involvement and labor costs.
49

Adaptive root cause analysis and diagnosis

Zhu, Qin 06 December 2010 (has links)
In this dissertation we describe the event processing autonomic computing reference architecture (EPACRA), an innovative reference architecture that solves many important problems related to adaptive root cause analysis and diagnosis (RCAD). Along with the research progress for defining EPACRA, we also identified a set of autonomic computing architecture patterns and proposed a new information seeking model called the net-casting model. EPACRA is important because today, root cause analysis and diagnosis (RCAD) in enterprise systems is still largely performed manually by experienced system administrators. The goal of this research is to characterize, simplify, improve, and automate RCAD processes to ease selected tasks for system administrators and end-users. Research on RCAD processes involves three domains: (1) autonomic computing architecture patterns, (2) information seeking models, and (3) complex event processing (CEP) technologies. These domains as well as existing technologies and standards contribute to the synthesized knowledge of this dissertation. To minimize human involvement in RCAD, we investigated architecture patterns to be utilized in RCAD processes. We identified a set of autonomic computing architecture patterns and analyzed the interactions among the feedback loops in these individual architecture patterns and how the autonomic elements interact with each other. By illustrating the architecture patterns, we recognized ambiguity in the aggregator-escalator-peer pattern. This problem has been solved by adding a new architecture pattern, namely the chain-of-monitors pattern, to the lattice of autonomic computing architecture patterns. To facilitate the autonomic information seeking process, we developed the net-casting information seeking model. After identifying the commonalities among three traditional information seeking models, we defined the net-casting model as a five-stage process and then tailored it to describe our automated RCAD process. One of the main contributions of this dissertation is an innovative autonomic computing reference architecture called event processing autonomic computing reference architecture (EPACRA). This reference architecture is based on (1) complex event processing (CEP) concepts, (2) autonomic computing architecture patterns, (3) real use-case workflows, and (4) our net-casting information seeking model. This reference architecture can be leveraged to relieve the system administrator's burden of routinely performing RCAD tasks in a heterogeneous environment. EPACRA can be viewed as a variant of the IBM ACRA model, extended with CEP to deal with large event clouds in real-time environments. In the middle layer of the reference model, EPACRA introduces an innovative design referred to as the use-case-unit event processing network (EPN) for RCAD, where a use case is the scenario of an RCAD process initiated by a symptom. Each use-case-unit EPN reflects our automation approach, including identifying events from the use cases and classifying those events into event types. Apart from defining individual event processing agents (EPAs) to process the different types of events, dynamically constructing use-case-unit EPNs is also an innovative approach that may lead to fully autonomic RCAD systems in the future. Finally, this dissertation presents a case study for EPACRA. As a case study we use a prototype of a Web application intrusion detection tool to demonstrate the autonomic mechanisms of our RCAD process.
Specifically, this tool recognizes two types of malicious attacks on web application systems and then takes actions to prevent intrusion attempts. This case study validates both our chain-of-monitors autonomic architecture pattern and our net-casting model. It also validates our use-case-unit EPN approach as an innovative approach to realizing RCAD workflows. Hopefully, this research platform will be beneficial for other RCAD projects and researchers with similar interests and goals.
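To illustrate the chain-of-monitors idea in miniature: each monitor handles the event types it knows and escalates everything else to the next monitor in the chain, ending at a fallback such as notifying an administrator. The class names and event types below are assumptions; the dissertation's pattern operates on autonomic elements with full feedback loops rather than on simple handlers like these.

```python
class Monitor:
    """One link in a chain of monitors: handle what it can, escalate the rest."""

    def __init__(self, name, handled_types, next_monitor=None):
        self.name = name
        self.handled_types = handled_types
        self.next_monitor = next_monitor

    def on_event(self, event):
        if event["type"] in self.handled_types:
            return f"{self.name} handled {event['type']}"
        if self.next_monitor is not None:
            return self.next_monitor.on_event(event)  # escalate up the chain
        return f"unhandled {event['type']}: notify administrator"

# Chain: disk monitor -> service monitor -> security monitor
security = Monitor("security-monitor", {"failed_login_burst"})
service = Monitor("service-monitor", {"http_5xx_spike"}, next_monitor=security)
disk = Monitor("disk-monitor", {"disk_full"}, next_monitor=service)

print(disk.on_event({"type": "http_5xx_spike"}))   # service-monitor handled http_5xx_spike
print(disk.on_event({"type": "unknown_symptom"}))  # unhandled unknown_symptom: notify administrator
```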
50

NETAH, un framework pour la composition distribuée de flux d'événements / NETAH, A Framework for Composing Distributed Event Streams

Epal Njamen, Orleant 11 October 2016 (has links)
La réduction de la taille des équipements et l'avènement des communications sans fil ont fortement contribué à l'avènement d'une informatique durable. La plupart des applications informatiques sont aujourd'hui construites en tenant compte de cet environnement ambiant dynamique. Leur développement et exécution nécessite des infrastructures logicielles autorisant des entités à s'exécuter, à interagir à travers divers modes (synchrone et asynchrone), à s'adapter à leur(s) environnement(s) notamment en termes : - de consommation de ressources (calcul, mémoire, support de stockage, bases de données, connexions réseaux, ...), - de multiplicité des sources de données (illustrée par le Web, les capteurs, compteurs intelligents, satellites, les bases de données existantes, ...) - des formats multiples des objets statiques ou en flux (images, son, vidéos). Notons que dans beaucoup de cas, les objets des flux doivent être homogénéisées, enrichies, croisées, filtrées et agrégées pour constituer in fine des produits informationnels riches en sémantique et stratégiques pour les applications ou utilisateurs. Les systèmes à base d'événements particulièrement bien adaptés à la programmation de ce type d'applications. Ils peuvent offrir des communications anonymes et asynchrones (émetteurs/serveurs et récepteurs/clients ne se connaissent pas) qui facilitent l'interopération et la collaboration entre des services autonomes et hétérogènes. Les systèmes d'événements doivent être capables d'observer, transporter, filtrer, agréger, corréler et analyser de nombreux flux d'événements produits de manière distribuée. Ces services d'observation doivent pouvoir être déployés sur des architectures distribuées telles que les réseaux de capteurs, les smart-grid, et le cloud pour contribuer à l'observation des systèmes complexes et à leur contrôle autonome grâce à des processus réactifs de prise de décision. L'objectif de la thèse est de proposer un modèle de composition distribuée de flux d'événements et de spécifier un service d'événements capable de réaliser efficacement l'agrégation, la corrélation temporelle et causale, et l'analyse de flux d'événements dans des plateformes distribuées à base de services. TRAVAIL A REALISER (i) Etat de l'art - Systèmes de gestion de flux événements - Services et infrastructures d'événements distribués - Modèles d'événements (ii) Définition d'un scénario d'expérimentation et de comparaison des approches existantes. (iii) Définition d'un modèle de composition distribuée de flux d'événements à base de suscriptions (iv) Spécification et implantation d'un service distribuée de composition de flux d'événements. / The reduction in the size of equipment and the advent of wireless communications have greatly contributed to the advent of sustainable IT. Most computer applications today are built taking this dynamic ambient environment into account. Their development and execution require software infrastructures that allow entities to execute, to interact through a variety of modes (synchronous and asynchronous), and to adapt to their environment(s), particularly in terms of: resource consumption (computation, memory, storage media, databases, network connections, ...); the multiplicity of data sources (illustrated by the Web, sensors, smart meters, satellites, existing databases, ...); and the multiple formats of static objects and streams (images, sound, video). Note that in many cases, stream objects have to be homogenized, enriched, joined, filtered, and aggregated to ultimately form information products that are semantically rich and strategic for applications or end users. Event-based systems are particularly well suited to programming such applications. They can offer anonymous and asynchronous communication (emitters/servers and receivers/clients do not know each other), which facilitates interoperation and cooperation between autonomous and heterogeneous services. Event systems should be able to observe, transport, filter, aggregate, correlate, and analyze many event streams produced in a distributed way. These observation services must be deployable on distributed architectures such as sensor networks, smart grids, and the cloud, to contribute to the observation of complex systems and to their autonomous control via reactive decision-making processes. The aim of the thesis is to propose a model for the distributed composition of event streams and to specify an event service that can efficiently perform the aggregation, temporal and causal correlation, and analysis of event streams in distributed service-based platforms. Work to be performed: (i) state of the art (event stream management systems, distributed event services, event models); (ii) definition of a scenario for experimenting with and comparing existing approaches; (iii) definition of a subscription-based model for the distributed composition of event streams; (iv) specification and implementation of a distributed event stream composition service.
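A minimal sketch of subscription-based stream composition: subscribers register a filter with a broker, and a composite operator derives a new stream by aggregating what it receives and publishing the result. The broker API below is invented for the example and does not reflect the NETAH framework's actual interfaces.

```python
from collections import defaultdict

class Broker:
    """Tiny in-process pub/sub broker with filtered subscriptions."""

    def __init__(self):
        self.subscribers = defaultdict(list)  # stream name -> [(predicate, callback)]

    def subscribe(self, stream, predicate, callback):
        self.subscribers[stream].append((predicate, callback))

    def publish(self, stream, event):
        for predicate, callback in self.subscribers[stream]:
            if predicate(event):
                callback(event)

broker = Broker()

# Composite operator: average the last N temperature readings and publish
# the aggregate as a derived event on another stream.
window = []
def aggregate(event, size=3):
    window.append(event["value"])
    if len(window) == size:
        broker.publish("temperature.avg", {"avg": sum(window) / size})
        window.clear()

broker.subscribe("sensors", lambda e: e["kind"] == "temperature", aggregate)
broker.subscribe("temperature.avg", lambda e: True,
                 lambda e: print("derived event:", e))

for value in [20.0, 21.0, 22.0]:
    broker.publish("sensors", {"kind": "temperature", "value": value})
# derived event: {'avg': 21.0}
```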
