1.

Adaptation dynamique de données pour la synthèse et le déploiement de protocoles de médiation / Dynamic Data Adaptation for the Synthesis and Deployment of Protocol Mediators

Andriescu, Emil-Mircea, 08 February 2016
In most systems available today, interoperability is provided as a static capability that is the result of manually designed and hand-coded integration. Consequently, a substantial number of functionally compatible systems are not conceived to be interoperable. The focus of this thesis is to enable automated protocol interoperability for systems, services, and applications by means of dynamically synthesized protocol mediators. Protocol mediators are concrete software components that can coordinate interactions between two or more functionally compatible systems, relying on various means of communication (IP networks, personal area networks, inter-process communication, shared memory, etc.). Dynamically synthesized mediators should allow applications to seamlessly adapt to a priori unknown protocols and to support the evolution of such protocols, while circumventing real-world system constraints such as those introduced by device mobility and operating system differences. In this thesis we focus on the research problems related to automating the process of data adaptation in the context of protocol mediation. Data adaptation is a key phase in protocol mediation that cannot be solved independently; this strong dependence becomes visible when systems relying on multilayer protocol stacks must be made interoperable despite cross-layer dependencies inside the exchanged data. There is a need for a framework that synthesizes mediators while taking cross-layer data adaptation into account.
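To make the role of a mediator concrete, here is a minimal Python sketch of hand-written message translation between two functionally compatible endpoints, one speaking JSON and one speaking XML; the thesis's point is that such translators should be synthesized automatically rather than coded by hand, as below. All names (`json_to_xml`, `mediate`, the stub system) are illustrative assumptions, not part of the thesis's actual framework.

```python
# Hypothetical hand-coded mediator between a JSON-speaking client and an
# XML-speaking system; in the thesis this adaptation logic would be
# synthesized dynamically instead of written manually.
import json
import xml.etree.ElementTree as ET


def json_to_xml(message: bytes) -> bytes:
    """Adapt a JSON request into the equivalent XML request."""
    payload = json.loads(message)
    root = ET.Element("request")
    for key, value in payload.items():
        child = ET.SubElement(root, key)
        child.text = str(value)
    return ET.tostring(root)


def xml_to_json(message: bytes) -> bytes:
    """Adapt an XML response back into the JSON the client expects."""
    root = ET.fromstring(message)
    return json.dumps({child.tag: child.text for child in root}).encode()


def mediate(request: bytes, target_system) -> bytes:
    """Coordinate one interaction: translate the request, forward it to the
    target system, and translate the response back."""
    return xml_to_json(target_system(json_to_xml(request)))


if __name__ == "__main__":
    # Stub standing in for a real XML-speaking endpoint.
    def fake_system(req: bytes) -> bytes:
        return b"<response><status>ok</status></response>"

    print(mediate(b'{"action": "ping"}', fake_system))  # b'{"status": "ok"}'
```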
2.

Data Perspectives of Workflow Schema Evolution: Cases of Task Deletion and Insertion

Arunagiri, Aravindhan, January 2013
Dynamic changes in the business environment require business processes to be kept up to date, and the Workflow Management Systems supporting these processes need to adapt to such changes rapidly. Workflow Management Systems, however, lack the ability to dynamically propagate process changes to their process model schemas (workflow templates). The literature on workflow schema evolution emphasizes the impact of changes in control flow, with very little attention to other aspects of a workflow schema. This thesis studies the data aspect (data flow and data model) of a workflow schema during its evolution.

Workflow schema changes can lead to inconsistencies between the underlying database model and the workflow. A straightforward approach would be to abandon the existing database model and start afresh; however, this introduces data persistence issues, and migrating data from the old database model to the current one can involve significant system downtime. In this research we develop an approach that addresses this problem.

Business changes demand various types of control flow changes to the business process model (workflow schema), including task insertion, deletion, swapping, movement, replacement, extraction, in-lining, and parallelizing. Many of these control flow changes can be made by combining simple task insertions and deletions, while some, such as embedding a task in a loop or conditional branch and parallelizing tasks, also require adding or removing control dependencies between tasks. Since many control flow change patterns involve task insertion and deletion at their core, this thesis studies their impact on the underlying data model and proposes algorithms to dynamically handle the resulting changes in the underlying relational database schema.

First, the basic change patterns that can be implemented using atomic task insertions and deletions are identified. These basic patterns are then characterized in terms of the data flow anomalies (missing, redundant, and conflicting data) they can generate. Data Schema Compliance (DSC) criteria are developed to identify the data changes (i) that make the underlying database schema inconsistent with the modified workflow and (ii) that generate the aforementioned data anomalies. The DSC criteria characterize each change pattern by its ability to work with the current relational data model and state the properties a modified workflow must satisfy to remain consistent with the underlying database model; the data of any workflow instance conforming to the DSC criteria can be accommodated directly in the database model.

The data anomalies of task insertion and deletion identified using DSC are handled dynamically by corresponding data adaptation algorithms, which use the functional dependency constraints of the relational database model to resolve them. Data changes handled this way conform to DSC and can be accommodated directly in the underlying database schema. Hence, with this approach, the workflow can be modified (using task insertion and deletion) and the corresponding data changes can be implemented on the fly. The existing data model is evolved rather than abandoned after the workflow schema is modified, which preserves existing data in the database schema. Detailed implementation procedures to deploy the data adaptation algorithms are presented with illustrative examples.
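As a hedged illustration of the anomaly classification described above, the following Python sketch models each task by the data items it reads and writes and reports the missing and redundant data caused by deleting a task from a sequential workflow. The `Task` model and all names are simplified stand-ins assumed for illustration; the thesis's DSC criteria and adaptation algorithms operate on a richer workflow and relational model, and conflicting data (which arises chiefly from insertion) is omitted here for brevity.

```python
# Simplified detection of two of the data-flow anomalies named in the
# abstract: missing data and redundant data after a task deletion.
from dataclasses import dataclass, field


@dataclass
class Task:
    name: str
    reads: set = field(default_factory=set)
    writes: set = field(default_factory=set)


def anomalies_after_deletion(workflow, deleted):
    """Report data anomalies caused by removing `deleted` from the
    sequential workflow `workflow` (a list of Task objects)."""
    idx = workflow.index(deleted)
    remaining = workflow[:idx] + workflow[idx + 1:]
    downstream = workflow[idx + 1:]

    still_written = set().union(*(t.writes for t in remaining)) if remaining else set()
    still_read = set().union(*(t.reads for t in remaining)) if remaining else set()

    # Missing data: items the deleted task produced that downstream tasks
    # still read but that no remaining task writes.
    missing = {d for t in downstream for d in t.reads
               if d in deleted.writes and d not in still_written}
    # Redundant data: items still produced by some task but read by no one.
    redundant = still_written - still_read
    return {"missing": missing, "redundant": redundant}


if __name__ == "__main__":
    a = Task("A", writes={"order_id"})
    b = Task("B", reads={"order_id"}, writes={"invoice"})
    c = Task("C", reads={"invoice"})
    print(anomalies_after_deletion([a, b, c], b))
    # {'missing': {'invoice'}, 'redundant': {'order_id'}}
```

In a full treatment along the thesis's lines, instances flagged this way would then be checked against the DSC criteria and repaired by data adaptation algorithms using the relational model's functional dependencies, rather than simply reported.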
