81

Estudo comparativo de técnicas de escalonamento de tarefas dependentes para grades computacionais / Comparative Study of Task Dependent Scheduling Algorithms to Grid Computing

Aliaga, Alvaro Henry Mamani 22 August 2011 (has links)
As science advances, many applications in different areas need a large amount of computational power. Grid computing is an important alternative for obtaining high processing power, but this computational power must be well used. By using specialized scheduling techniques, resources can be used properly. Several scheduling algorithms have been proposed for grid computing, so it is necessary to follow a sound methodology to choose the algorithm that offers the best performance in a given setting. In this work we compare the task-dependent scheduling algorithms (a) Heterogeneous Earliest Finish Time (HEFT), (b) Critical Path on a Processor (CPOP) and (c) Path Clustering Heuristic (PCH); each algorithm is evaluated with different applications and on different architectures using simulation techniques, following four criteria: (i) performance, (ii) scalability, (iii) adaptability and (iv) workload distribution. We distinguish two kinds of grid applications, (i) regular and (ii) irregular, since for irregular applications it is not easy to compare the scalability criterion. Under this set of criteria, the HEFT algorithm achieves the best performance and scalability, while the three algorithms show the same level of adaptability. Regarding workload distribution, HEFT makes better use of the resources than the others. On the other hand, the CPOP and PCH algorithms schedule the tasks of the critical path onto the processor that minimizes their earliest finish time, an approach that is not always the most appropriate.
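To make the comparison concrete, the sketch below illustrates the list-scheduling idea behind HEFT as described in the literature: tasks are prioritised by their upward rank, and each task is then placed on the processor that minimises its earliest finish time. It is a simplified, non-insertion-based sketch with made-up data structures, not the implementation evaluated in the thesis.

```python
from collections import defaultdict

def heft_schedule(tasks, succ, comp_cost, comm_cost, procs):
    """List-scheduling sketch in the spirit of HEFT.

    tasks:     list of task ids forming a DAG
    succ:      dict task -> list of successor tasks
    comp_cost: dict (task, proc) -> execution time of task on proc
    comm_cost: dict (task, child) -> transfer time if placed on different procs
    procs:     list of processor ids
    """
    # 1. Upward rank: average compute cost plus the costliest path to an exit task.
    rank = {}
    def upward_rank(t):
        if t in rank:
            return rank[t]
        avg = sum(comp_cost[t, p] for p in procs) / len(procs)
        rank[t] = avg + max(
            (comm_cost.get((t, c), 0) + upward_rank(c) for c in succ.get(t, [])),
            default=0.0,
        )
        return rank[t]
    for t in tasks:
        upward_rank(t)

    pred = defaultdict(list)
    for t, children in succ.items():
        for c in children:
            pred[c].append(t)

    # 2. Schedule by decreasing rank on the processor giving the earliest finish time.
    finish, placed_on, proc_free = {}, {}, {p: 0.0 for p in procs}
    for t in sorted(tasks, key=lambda t: -rank[t]):
        best = None
        for p in procs:
            ready = max(
                (finish[u] + (comm_cost.get((u, t), 0) if placed_on[u] != p else 0)
                 for u in pred[t]),
                default=0.0,
            )
            eft = max(ready, proc_free[p]) + comp_cost[t, p]
            if best is None or eft < best[0]:
                best = (eft, p)
        finish[t], placed_on[t] = best
        proc_free[best[1]] = best[0]
    return placed_on, finish
```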
82

Workflows conceptuels / Conceptual workflows

Cerezo, Nadia 20 December 2013 (has links)
Workflows are increasingly adopted to describe large-scale data- and compute-intensive scientific simulations which leverage the wealth of distributed data sources and computing infrastructures. Nonetheless, most scientific workflow formalisms remain difficult to exploit for scientists who are neither experts nor enthusiasts of distributed computing, because they mix the scientific processes they model with their implementations, blurring the lines between what is done and how it is done, as well as between what is and what is not infrastructure-dependent. Our objective is to improve scientific workflow accessibility and ease scientific workflow design and reuse, by elevating the abstraction level, emphasizing the scientific experiment over technicalities, ensuring proper separation between functional and non-functional concerns and leveraging domain knowledge and know-how. The main contributions of this work are: (i) a multi-level, structurally flexible, semantic scientific workflow model, called the Conceptual Workflow Model, which lets users design simulations at a computation-independent level and focus on domain goals and methods; and (ii) a computer-assisted Transformation Process relying on knowledge engineering technologies to help users transform their high-level simulation models into executable workflow artifacts which can be delegated to third-party frameworks for enactment.
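As a rough illustration of the separation the Conceptual Workflow Model aims for, the sketch below keeps the scientific intent (goals and constraints) apart from concrete activities and maps one to the other through a domain know-how table. All names and the mapping itself are hypothetical; the actual Transformation Process relies on knowledge engineering technologies rather than a lookup table.

```python
from dataclasses import dataclass, field

@dataclass
class ConceptualStep:
    goal: str                                         # what the scientist wants done
    constraints: dict = field(default_factory=dict)   # non-functional hints (quality, cost)

@dataclass
class ConcreteActivity:
    tool: str                                         # executable service or command
    params: dict

# Hypothetical know-how table linking scientific goals to candidate implementations.
KNOW_HOW = {
    "denoise-image":  [ConcreteActivity("nlmeans-service", {"h": 0.8})],
    "segment-tumour": [ConcreteActivity("fsl-fast", {"classes": 3})],
}

def transform(conceptual_workflow):
    """Naively map each conceptual step to the first known implementation."""
    concrete = []
    for step in conceptual_workflow:
        candidates = KNOW_HOW.get(step.goal)
        if not candidates:
            raise ValueError(f"no known implementation for goal {step.goal!r}")
        concrete.append(candidates[0])
    return concrete

pipeline = [ConceptualStep("denoise-image"), ConceptualStep("segment-tumour")]
print(transform(pipeline))
```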
83

Distributed knowledge sharing and production through collaborative e-Science platforms

Gaignard, Alban 15 March 2013 (has links) (PDF)
This thesis addresses the issues of coherent distributed knowledge production and sharing in the life-science area. In spite of the continuously increasing computing and storage capabilities of computing infrastructures, the management of massive scientific data through centralized approaches has become inappropriate, for several reasons: (i) they do not guarantee the autonomy of data providers, who are constrained, for ethical or legal reasons, to keep control over the data they host, and (ii) they do not scale and adapt to the massive scientific data produced through e-Science platforms. In the context of the NeuroLOG and VIP life-science collaborative platforms, we address, on the one hand, the distribution and heterogeneity issues underlying the sharing of possibly sensitive resources and, on the other hand, automated knowledge production through the use of these e-Science platforms, to ease the exploitation of the massively produced scientific data. We rely on an ontological approach for knowledge modeling and propose, based on Semantic Web technologies, to (i) extend these platforms with efficient, static and dynamic, transparent federated semantic querying strategies, and (ii) extend their data processing environment, using both provenance information captured at run-time and domain-specific inference rules, to automate the semantic annotation of in silico experiment results. The results of this thesis have been evaluated on the Grid'5000 distributed and controlled infrastructure. They contribute to addressing three of the main challenges faced by computational science platforms through (i) a model for secured collaborations and a distributed access-control strategy allowing for the setup of multi-centric studies while still accounting for competitive activities, (ii) semantic experiment summaries, meaningful from the end-user perspective, aimed at easing navigation through the massive scientific data resulting from large-scale experimental campaigns, and (iii) efficient distributed querying and reasoning strategies, relying on Semantic Web standards, aimed at sharing capitalized knowledge and providing connectivity towards the Web of Linked Data.
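The idea of deriving semantic annotations from run-time provenance plus domain rules can be pictured with a toy forward-chaining step over triples, as sketched below. The vocabulary and the single rule are invented for illustration; the platforms described here use Semantic Web technologies (ontologies, inference engines) rather than hand-written Python.

```python
# Toy forward-chaining over provenance triples; vocabulary and rule are hypothetical,
# meant only to show how a domain rule can derive a semantic annotation
# from raw provenance records.
provenance = {
    ("result42.nii", "wasGeneratedBy", "run-17"),
    ("run-17", "usedTool", "brain-segmentation"),
    ("run-17", "usedInput", "patient-scan-7.nii"),
}

def apply_rules(triples):
    """Rule (hypothetical): anything generated by a run of 'brain-segmentation'
    is annotated as a 'TissueMap'. Iterate until no new fact is inferred."""
    inferred = set(triples)
    changed = True
    while changed:
        changed = False
        for s, p, o in list(inferred):
            if p == "wasGeneratedBy" and (o, "usedTool", "brain-segmentation") in inferred:
                fact = (s, "rdf:type", "TissueMap")
                if fact not in inferred:
                    inferred.add(fact)
                    changed = True
    return inferred

for triple in sorted(apply_rules(provenance) - provenance):
    print("inferred:", triple)
```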
84

Semantics and planning based workflow composition and execution for video processing

Nadarajan, Gayathri January 2011 (has links)
Traditional workflow systems have several drawbacks, e.g. their inability to rapidly react to changes, to construct workflows automatically (or with user involvement) and to improve performance autonomously (or with user involvement) in an incremental manner according to specified goals. Overcoming these limitations would be highly beneficial for complex domains where such adversities are exhibited. Video processing is one such domain that increasingly requires attention, as larger amounts of images and videos are becoming available to people who are not technically adept at modelling the processes involved in constructing complex video processing workflows. Conventional video and image processing systems, on the other hand, are developed by programmers possessing image processing expertise. These systems are tailored to produce highly specialised hand-crafted solutions for very specific tasks, making them rigid and non-modular. The knowledge-based vision community has attempted to produce more modular solutions by incorporating ontologies. However, ontologies have not been fully exploited to encompass aspects such as application context descriptions (e.g. lighting and clearness effects) and qualitative measures. This thesis aims to tackle some of the research gaps yet to be addressed by the workflow and knowledge-based image processing communities by proposing a novel workflow composition and execution approach within an integrated framework. This framework distinguishes three levels of abstraction via the design, workflow and processing layers. The core technologies that drive the workflow composition mechanism are ontologies and planning. Video processing problems provide a fitting domain for investigating the effectiveness of this integrated method, as such problems have not been fully explored by the workflow, planning and ontological communities despite the combined strengths these fields bring to this known hard problem. In addition, the pervasiveness of video data has amplified the need for more automated assistance for image-processing-naive users, but no adequate support has been provided as yet. A video and image processing ontology comprising three sub-ontologies was constructed to capture the goals, video descriptions and capabilities (video and image processing tools). The sub-ontologies are used for representation and inference. In particular, they are used in conjunction with an enhanced, domain-independent Hierarchical Task Network (HTN) planner to help with performance-based selection of solution steps based on preconditions, effects and postconditions. The planner, in turn, makes use of process models contained in a process library when deliberating on the steps and then consults the capability ontology to retrieve a suitable tool at each step. Two key features of the planner are its ability to support workflow execution (interleaving planning with execution) and its ability to operate in automatic or semi-automatic (interactive) mode. The first feature is highly desirable for video processing problems because the execution of image processing steps yields visual results that are intuitive and verifiable by the human user, as automatic validation is non-trivial. In the semi-automatic mode, the planner is interactive and prompts the user to make a tool selection when there is more than one tool available to perform a task. The user makes the tool selection based on the recommended descriptions provided by the workflow system.
Once planning is complete, the result of applying the tool of their choice is presented to the user textually and visually for verification. This plays a pivotal role in providing the user with control and the ability to make informed decisions. Hence, the planner extends the capabilities of typical planners by guiding the user to construct more optimal solutions. Video processing problems can also be solved in more modular, reusable and adaptable ways compared to conventional image processing systems. The integrated approach was evaluated on a test set of videos of varying quality originating from an open-sea environment. Experiments evaluating the efficiency, the adaptability to users' changing needs and the learnability of this approach were conducted with users who did not possess image processing expertise. The findings indicate that using this integrated workflow composition and execution method: 1) provides a speed-up of over 90% in execution time for video classification tasks using fully automatic processing compared to manual methods, without loss of accuracy; 2) is more flexible and adaptable in response to changes in user requests (be it in the task, the constraints on the task or the descriptions of the video) than modifying existing image processing programs when domain descriptions are altered; 3) assists the user in selecting optimal solutions by providing recommended descriptions.
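The decomposition-plus-tool-selection behaviour described above can be pictured with a tiny HTN-flavoured sketch: abstract tasks are expanded through methods, and each primitive step is bound to a tool drawn from a capability table. Task names, methods and scores below are hypothetical stand-ins for the thesis's ontology-backed process library and capability ontology.

```python
# Minimal HTN-flavoured decomposition with tool selection (sketch only).
METHODS = {
    # abstract task -> ordered list of sub-tasks
    "classify-video": ["enhance-frames", "detect-objects", "label-clips"],
}

CAPABILITIES = {
    # primitive task -> candidate tools with a crude performance score
    "enhance-frames": [("clahe-filter", 0.7), ("gaussian-denoise", 0.6)],
    "detect-objects": [("bg-subtraction", 0.8)],
    "label-clips":    [("svm-classifier", 0.75)],
}

def plan(task, choose=lambda cands: max(cands, key=lambda c: c[1])):
    """Recursively decompose an abstract task and pick a tool for each primitive step.

    `choose` defaults to the best-scoring tool; an interactive mode could instead
    prompt the user whenever several candidates exist.
    """
    if task in METHODS:
        steps = []
        for sub in METHODS[task]:
            steps.extend(plan(sub, choose))
        return steps
    tool, _ = choose(CAPABILITIES[task])
    return [(task, tool)]

print(plan("classify-video"))
# [('enhance-frames', 'clahe-filter'), ('detect-objects', 'bg-subtraction'),
#  ('label-clips', 'svm-classifier')]
```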
85

Adaptation à la volée de situations d'apprentissage modélisées conformément à un langage de modélisation pédagogique / On-the-fly adaptation of learning situations modelled in accordance with an educational modelling language

Ouari, Salim 25 November 2011 (has links) (PDF)
The work presented in this thesis falls within the field of Technology-Enhanced Learning environments (EIAH, Environnements Informatiques pour l'Apprentissage Humain), and more precisely within EIAH engineering based on a "Learning Design" approach. This approach proposes building an EIAH from the formal description of a learning activity. It assumes the existence of a modelling language, commonly called an EML (Educational Modelling Language), and of an engine able to interpret this language. LDL is the language we worked with, in relation with the LDI infrastructure, which integrates an LDL interpretation engine. The EML is used to produce a scenario, a formal model of a learning activity. The EIAH supporting the activity described in the scenario is then built semi-automatically within the infrastructure associated with the language, following this process: the scenario is created during a design phase; it is instantiated and deployed on a service platform during an operationalization phase (choice of the participants in the activity, role assignment, choice of resources and services); the instantiated and deployed scenario is then handled by the engine, which interprets it to drive its execution. In this setting, the activity unfolds as specified in the scenario. However, it is impossible to foresee everything that may happen in an activity, since activities are by nature unpredictable. Unforeseen situations can arise and lead to disturbances in the activity, or even to deadlocks. It then becomes essential to provide the means to unblock the situation. The teacher may also want to exploit a situation or an opportunity by modifying the activity while it is running. This is the problem addressed in this thesis: providing the means to adapt an activity "on the fly", that is, during its execution, so as to handle an unforeseen situation and let the activity continue. Our proposal relies on distinguishing the data involved in each of the three phases of the EIAH construction process: design, operationalization and execution. We exhibit a model for each of these phases, which organizes these data and positions them with respect to one another. Adapting an activity "on the fly" then amounts to modifying these models according to the situation to be handled. Some situations require modifying only one of the models; others lead to propagating modifications from one model to another. We consider on-the-fly adaptation as a fully fledged activity carried out, in parallel with the learning activity, by a human supervisor who has an adequate environment to observe the activity, detect potential problems and remedy them by intervening in the learning activity, modifying the models that specify it. To develop the tools supporting modification and integrate them into the LDI infrastructure, we used Model-Driven Engineering techniques. The models manipulated by these tools are thus first-class data, which gives the resulting tools greater flexibility and abstraction. The models are then exploited as levers to reach and modify the data targeted by the adaptation.
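As a very rough picture of the three-model view and of propagating an on-the-fly modification, the sketch below keeps hypothetical design, operationalization and execution models as plain dictionaries and shows one adaptation (adding a participant) being pushed from the operationalization model into the running execution model. The actual work relies on Model-Driven Engineering tooling integrated into the LDI infrastructure, not on scripts like this.

```python
# Three hypothetical models of the same learning activity, kept deliberately simple.
design = {"roles": ["tutor", "learner"], "steps": ["debate", "synthesis"]}
operationalization = {"participants": {"learner": ["alice", "bob"]},
                      "services": {"debate": "forum"}}
execution = {"active_step": "debate", "enrolled": {"alice", "bob"}}

def add_participant(name, role="learner"):
    """Adapt the activity on the fly: change the operationalization model,
    then propagate the change to the running execution model."""
    operationalization["participants"].setdefault(role, []).append(name)
    execution["enrolled"].add(name)   # propagation to the running activity

add_participant("carol")
print(execution["enrolled"])          # {'alice', 'bob', 'carol'}
```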
86

Scheduling workflows to optimize for execution time

Peters, Mathias January 2018 (has links)
Many functions in today's society are immensely dependent on data. Data drives everything from business decisions to self-driving cars to intelligent home assistants like Amazon Echo and Google Home. To make good decisions based on data, of which exabytes are generated every day, that data somehow has to be processed. Data processing can be complex and time-consuming. One way of reducing the complexity is to create workflows that consist of several steps that together produce the right result. Klarna is an example of a company that relies on workflows for transforming and analyzing data. As a company whose core business involves analyzing customer data, being able to do those analyses faster leads to direct business value in the form of more well-informed decisions. The workflows Klarna uses are currently all written in a sequential form. However, workflows where independent tasks are executed in parallel perform better than workflows where only one task is executed at any point in time. Due to limitations in human attention span, parallelized workflows are harder for humans to write than sequential ones. In this work, a computer application was created that automates the parallelization of a workflow, letting humans write sequential workflows while still getting the performance of parallelized workflows. The application does this by taking a simple sequential workflow, identifying dependencies in the workflow and then scheduling it in a way that is as parallel as possible given the identified dependencies. Such a solution has not been created before. Experimental evaluation shows that parallelization of a sequential workflow used in daily production at Klarna can reduce execution time by up to 80%, showing that the application can bring value to Klarna and other organizations that use workflows to analyze big data.
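The core mechanism, running every step as soon as its dependencies are satisfied, can be sketched with the Python standard library, assuming the dependency graph has already been extracted (which is the hard part in practice). The workflow below is invented; the thesis's application targets Klarna's production workflows, not this toy.

```python
from graphlib import TopologicalSorter          # Python 3.9+
from concurrent.futures import ThreadPoolExecutor

# Hypothetical sequential workflow: step name -> the steps whose outputs it reads.
deps = {
    "extract_orders": set(),
    "extract_users": set(),
    "join": {"extract_orders", "extract_users"},
    "aggregate": {"join"},
    "report": {"aggregate"},
}

def run_step(name):
    print(f"running {name}")
    return name

# Execute each "level" of independent steps in parallel, respecting dependencies.
ts = TopologicalSorter(deps)
ts.prepare()
with ThreadPoolExecutor() as pool:
    while ts.is_active():
        ready = ts.get_ready()                  # steps whose dependencies are done
        list(pool.map(run_step, ready))         # run them concurrently
        ts.done(*ready)
```

This level-by-level batching is slightly coarser than a fully greedy scheduler (a step waits for its whole batch), but it already exposes all the parallelism the dependency graph allows between batches.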
88

Composer-science: um framework para a composição de workflows científicos / Composer-Science: a framework for the composition of scientific workflows

Silva, Laryssa Aparecida Machado da 05 July 2010 (has links)
An important concept in e-Science research is that of scientific workflows, which are usually long and consist of several applications that, together, represent a scientific experiment. One possibility to assist in defining these scientific workflows is the use of tools that add semantics to the composition process. Semantic Web services offer technologies that are highly favorable to their composition into more complex processes. Examples of these technologies are the use of Web standards, platform independence, programming-language independence, the possibility of distributed processing and, especially, the use of semantic resources that enable automatic discovery, composition and invocation. With the aim of assisting in the discovery of Web services for scientific workflow composition, we propose the development of a framework, named Composer-Science, to conduct the search for semantic Web services and compose them, thus defining a scientific workflow. The overall objective of Composer-Science is to allow the researcher to describe a scientific workflow semantically and, based on this description, to automate, through the use of semantic Web services and ontologies, the semantic search for services in repositories and the generation of scientific workflows from their composition. The overall objective of the framework can be broken down into specific objectives: the registration and storage of domain ontologies (OWL) and semantic annotations of Web services (OWL-S) in the distributed repositories (databases) of the framework; the implementation of semantic search, based on requirements provided by the researcher, over the distributed repositories, in order to discover semantic Web services that match the given semantic requirements; syntactic analysis, based on structural requirements (inputs and outputs), together with semantic analysis of the services found by the semantic search, in order to obtain their possible compositions; and the generation of WS-BPEL workflow models from the possible compositions. Finally, the models generated by the framework can be used in Workflow Management Systems (WfMS) and composed with other workflow models.
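The composition step, chaining services whose declared outputs satisfy the inputs of later services, can be sketched as a small search, as below. The service descriptions are hypothetical stand-ins for OWL-S annotations, and real matching would compare ontology concepts rather than plain strings before generating WS-BPEL.

```python
# Sketch of chaining services by matching declared outputs to required inputs.
services = [
    {"name": "FetchSequence", "inputs": {"GeneID"},          "outputs": {"DNASequence"}},
    {"name": "Translate",     "inputs": {"DNASequence"},     "outputs": {"ProteinSequence"}},
    {"name": "PredictFold",   "inputs": {"ProteinSequence"}, "outputs": {"Structure3D"}},
]

def compose(available, goal, chain=None):
    """Depth-first search for a service chain producing `goal` from `available` data."""
    chain = chain or []
    if goal in available:
        return chain
    for svc in services:
        if svc["name"] in [s["name"] for s in chain]:
            continue                                  # do not reuse a service
        if svc["inputs"] <= available:                # all inputs already available
            result = compose(available | svc["outputs"], goal, chain + [svc])
            if result is not None:
                return result
    return None

plan = compose({"GeneID"}, "Structure3D")
print([s["name"] for s in plan])   # ['FetchSequence', 'Translate', 'PredictFold']
```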
89

Verifying Modal Specifications of Workflow Nets : using Constraint Solving and Reduction Methods / Vérification de spécifications modales de réseaux worklows à l'aide de solveurs de contraintes et de methodes de résolution

Bride, Hadrien 24 October 2016 (has links)
Nowadays workflows are extensively used by companies and organisations in order to improve organizational efficiency, responsiveness and profitability by managing the tasks and steps of business processes. The verification of specifications has become mandatory to ensure that such processes are properly designed and reach the expected level of trust and quality. In this context, this thesis addresses the verification of modal specifications, that is, necessary or admissible behaviour involving several activities and their causalities, of workflow nets, a class of Petri nets suited to the description of workflows. In particular, it defines an innovative constraint-system-based framework to model executions of ordinary as well as coloured workflow nets and to verify modal specifications. Further, it presents powerful reduction methods preserving properties of interest such as generalised soundness and the correctness of a given modal specification. These reduction methods are then portrayed as pre-processing steps that reduce workflow net size, so that the verification of the preserved properties can be carried out on smaller instances. Finally, as a practical contribution, this thesis introduces the tools that have been implemented as well as the experiments that have been carried out on industrial workflow nets in order to validate the proposed approaches. The convincing experimental results highlight the effectiveness, efficiency and scalability of the modal specification verification method and the reduction methods introduced in this thesis.
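For intuition about what a modal specification of a workflow net says, the sketch below enumerates the complete runs of a tiny (safe, acyclic) workflow net and checks a "necessary" behaviour, namely that a given transition occurs in every run from the initial to the final marking. The net and property are invented, and the thesis encodes such questions as constraint systems handed to solvers rather than exploring the state space.

```python
# A tiny workflow net given as transitions with input and output places.
transitions = {
    "register":     ({"i"},        {"p1", "p2"}),
    "check_credit": ({"p1"},       {"p3"}),
    "check_stock":  ({"p2"},       {"p4"}),
    "approve":      ({"p3", "p4"}, {"o"}),
}
initial, final = frozenset({"i"}), frozenset({"o"})

def complete_runs(marking=initial, fired=()):
    """Enumerate firing sequences from the initial to the final marking (safe net)."""
    if marking == final:
        yield fired
        return
    for name, (pre, post) in transitions.items():
        if pre <= marking:                                  # transition is enabled
            yield from complete_runs((marking - pre) | post, fired + (name,))

runs = list(complete_runs())
# "Necessary" modal specification: check_credit occurs in every complete run.
print(all("check_credit" in run for run in runs))           # True
```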
90

Self-managed Workflows for Cyber-physical Systems

Seiger, Ronny 03 December 2018 (has links)
Workflows are a well-established concept for describing business logic and processes in web-based applications and enterprise application integration scenarios on an abstract, implementation-agnostic level. Applying Business Process Management (BPM) technologies to increase autonomy and automate sequences of activities in Cyber-physical Systems (CPS) promises various advantages, including higher flexibility and simplified programming, more efficient resource usage, and easier integration and orchestration of CPS devices. However, traditional BPM notations and engines have not been designed to be used in the context of CPS, which raises new research questions arising from the close coupling of the virtual and physical worlds. Among these challenges are the interaction with complex compounds of heterogeneous sensors, actuators, things and humans; the detection and handling of errors in the physical world; and the synchronization of the cyber-physical process execution models. Novel factors related to the interaction with the physical world, including real-world obstacles, inconsistencies and inaccuracies, may jeopardize the successful execution of workflows in CPS and may lead to unanticipated situations. This thesis investigates properties and requirements of CPS relevant for the introduction of BPM technologies into cyber-physical domains. We discuss existing BPM systems and related work regarding the integration of sensors and actuators into workflows, the development of a Workflow Management System (WfMS) for CPS, and the synchronization of the virtual and physical process execution as part of self-* capabilities for WfMSes. Based on the identified research gap, we present concepts and prototypes regarding the development of a CPS WfMS with respect to all phases of the BPM lifecycle. First, we introduce a CPS workflow notation that supports the modelling of the interaction of complex sensors, actuators, humans, dynamic services and WfMSes on the business process level. In addition, the effects of the workflow execution can be specified in the form of goals defining success and error criteria for the execution of individual process steps. Along with that, we introduce the notion of Cyber-physical Consistency. Next, we present a system architecture for a corresponding WfMS (PROtEUS) to execute the modelled processes, also in distributed execution settings and with a focus on interactive process management. Subsequently, the integration of a cyber-physical feedback loop to increase resilience of the process execution at runtime is discussed. Within this MAPE-K loop, sensor and context data are related to the effects of the process execution, deviations from expected behaviour are detected, and compensations are planned and executed. The execution of this feedback loop can be scaled depending on the required level of precision and consistency. Our implementation of the MAPE-K loop proves to be a general framework for adding self-* capabilities to WfMSes. The evaluation of our concepts within a smart home case study shows expected behaviour, reasonable execution times, reduced error rates and high coverage of the identified requirements, which makes our CPS WfMS a suitable system for introducing workflows on top of the systems, devices, things and applications of CPS.
Contents: 1. Introduction; 2. Workflows and Cyber-physical Systems; 3. Related Work; 4. Modelling of Cyber-physical Workflows with Consistency Style Sheets; 5. Architecture of a WfMS for Distributed CPS Workflows; 6. Scalable Execution of Self-managed CPS Workflows; 7. Evaluation; 8. Summary and Future Work.
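The MAPE-K feedback loop mentioned above can be pictured with a minimal monitor-analyze-plan-execute cycle over a shared knowledge base, as sketched below. The sensor reading, goal and compensation are hypothetical smart-home stand-ins; in the thesis the loop is realised as a feedback service around the PROtEUS workflow engine rather than a standalone script.

```python
import time

# Shared knowledge: the goal state of the workflow step and some bookkeeping.
knowledge = {"goal": {"lamp": "on"}, "retries": 0}

def monitor():
    # In a real CPS this would query sensors or middleware; here we fake a reading.
    return {"lamp": "off"}

def analyze(observed):
    # Deviations: goal entries not matched by the observed physical state.
    return {k: v for k, v in knowledge["goal"].items() if observed.get(k) != v}

def plan(deviations):
    # Plan one compensating action per deviation.
    return [("switch", device, desired) for device, desired in deviations.items()]

def execute(actions):
    for verb, device, desired in actions:
        print(f"compensate: {verb} {device} -> {desired}")
        knowledge["retries"] += 1

for _ in range(3):                       # a few iterations instead of an endless loop
    deviations = analyze(monitor())
    if deviations:
        execute(plan(deviations))
    time.sleep(0.1)
```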
