11.
A model for verifying business processes through a Pi-Calculus virtual machine (Modelo de verificação de processos de negócios através de uma máquina virtual Pi-Calculus)
Nader, Marcos Vanine Portilho de, 1954- (12 January 2006)
Advisor: Mauricio Ferreira Magalhães / Master's dissertation, Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação
Issue date: 2006 / Abstract: Two important areas have been under development: Business Process Management and Web Services Orchestration. Both aim to integrate applications, or other processes that expose web service interfaces, using the business process paradigm. Among the languages proposed for such applications, consensus has formed around BPEL (Business Process Execution Language). This work presents a framework for the analysis and verification of business processes written in BPEL through the use of Pi-Calculus. Pi-Calculus is a process algebra with formal mechanisms for creating and activating processes that communicate by exchanging messages over channels, following the synchronous rendezvous model. In this framework, the BPEL process is translated into a Pi-Calculus program. A Pi-Calculus Virtual Machine (MVP) receives the Pi-Calculus program and produces all possible reactions, that is, it generates every execution path the program can take. From this result, properties such as conformance to higher-level specifications, event ordering, and the presence or absence of deadlocks are verified. In practical terms, a tool of this kind can be incorporated into Business Process Management Systems (BPMS) to broaden test coverage during the analysis and implementation phases of a process life cycle. In such systems, repairing an error during the execution phase is far more costly than in traditional systems. / Master's program / Computer Engineering / Master in Electrical Engineering
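The following is a minimal, hypothetical sketch of the principle behind the MVP, not the dissertation's actual machine: parallel components synchronize by rendezvous on named channels, every reachable state is enumerated, and a state whose pending actions can no longer react is reported as a deadlock. The process encoding and all names are illustrative assumptions.

```python
from collections import deque

# Toy model of synchronous rendezvous: each parallel component is a tuple of
# pending actions, where an action is ("send", channel) or ("recv", channel).
# A reaction consumes a matching send/recv pair on the same channel.

def reactions(state):
    """Yield every state reachable from `state` by a single rendezvous."""
    for i, p in enumerate(state):
        for j, q in enumerate(state):
            if i == j or not p or not q:
                continue
            (act_p, chan_p), (act_q, chan_q) = p[0], q[0]
            if act_p == "send" and act_q == "recv" and chan_p == chan_q:
                nxt = list(state)
                nxt[i], nxt[j] = p[1:], q[1:]
                yield tuple(nxt)

def explore(initial):
    """Breadth-first enumeration of all execution paths; returns deadlocked states."""
    seen, queue, deadlocks = {initial}, deque([initial]), []
    while queue:
        state = queue.popleft()
        successors = list(reactions(state))
        if not successors and any(state):   # pending actions left, but no reaction possible
            deadlocks.append(state)
        for nxt in successors:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return deadlocks

# Two components that each wait for the other's message before sending their own:
# a classic deadlock, reported by the exhaustive exploration.
process = ((("recv", "a"), ("send", "b")),
           (("recv", "b"), ("send", "a")))
print(explore(process))
```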
12.
Implementation of a Laboratory Information Management System To Manage Genomic Samples
Witty, Derick (05 September 2013)
Indiana University-Purdue University Indianapolis (IUPUI) / A Laboratory Information Management System (LIMS) is designed to manage laboratory processes and data. Its core functionality can be extended through configuration tools and add-on modules to support the implementation of complex laboratory workflows. The purpose of this project is to demonstrate how laboratory data and processes from a complex workflow can be implemented using a LIMS.
Genomic samples have become an important part of the drug development process due to advances in molecular testing technology. This technology evaluates genomic material for disease markers and provides efficient, cost-effective, and accurate results for a growing number of clinical indications. Preparing genomic samples for evaluation requires a complex laboratory process called the precision aliquotting workflow, which divides each genomic sample into precisely created aliquots for analysis. The workflow is driven by an aliquotting scheme and scheme-specific rules logic: the scheme defines the attributes of each aliquot based on the achieved sample recovery of the genomic sample, and the rules logic executes the creation of the aliquots according to those definitions.
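As an illustration of how an aliquotting scheme and its recovery-driven rules logic might be represented, the following sketch is purely hypothetical: the attribute names, volumes, and skip rule are invented for this example and do not reflect the project's actual LabWare configuration.

```python
from dataclasses import dataclass

# Hypothetical aliquotting scheme: each entry describes one aliquot to create.
@dataclass
class AliquotSpec:
    label: str
    volume_ul: float   # volume drawn into this aliquot, in microliters
    purpose: str       # e.g. "primary analysis", "backup"

def precision_aliquots(recovered_volume_ul: float, scheme: list[AliquotSpec]) -> list[AliquotSpec]:
    """Apply the scheme rules: create each aliquot, in order, whenever enough
    recovered sample volume remains; aliquots that no longer fit are skipped."""
    remaining = recovered_volume_ul
    created = []
    for spec in scheme:
        if remaining >= spec.volume_ul:
            created.append(spec)
            remaining -= spec.volume_ul
    return created

# Example scheme with made-up volumes.
scheme = [
    AliquotSpec("A1", 50.0, "primary analysis"),
    AliquotSpec("A2", 50.0, "repeat testing"),
    AliquotSpec("A3", 25.0, "long-term storage"),
]
print([a.label for a in precision_aliquots(110.0, scheme)])   # -> ['A1', 'A2']
```

A real scheme would likely carry further attributes (containers, positions, minimum fill volumes), but the recovery-driven rule is the point of this sketch.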
LabWare LIMS is a Windows®-based, open-architecture system that manages laboratory data and workflow processes. A LabWare LIMS model was developed to implement the precision aliquotting workflow using a combination of core functionality and configured code.
13.
Scientific Workflows for Hadoop
Bux, Marc Nicolas (07 August 2018)
Scientific workflows provide a means to model, execute, and exchange the increasingly complex analysis pipelines necessary for today's data-driven science. Over the last decades, scientific workflow management systems have emerged to facilitate the design, execution, and monitoring of such workflows. At the same time, the amounts of data generated in various areas of science have outpaced advancements in computing power and storage.
Parallelization and distributed execution are generally proposed to deal with increasing amounts of data. However, the resources provided by distributed infrastructures are subject to heterogeneity, dynamic performance changes at runtime, and occasional failures. To leverage the scalability provided by these infrastructures despite the observed aspects of performance variability, workflow management systems have to progress: Parallelization potentials in scientific workflows have to be detected and exploited. Simulation frameworks, which are commonly employed for the evaluation of scheduling mechanisms, have to consider the instability encountered on the infrastructures they emulate. Adaptive scheduling mechanisms have to be employed to optimize resource utilization in the face of instability. State-of-the-art systems for scalable distributed resource management and storage, such as Apache Hadoop, have to be supported.
This dissertation presents novel solutions for these requirements. First, we introduce DynamicCloudSim, a cloud computing simulation framework that adequately models the various aspects of variability encountered in computational clouds. Second, we outline ERA, an adaptive scheduling policy that optimizes workflow makespan by exploiting heterogeneity, replicating bottlenecks in workflow execution, and adapting to changes in the underlying infrastructure. Finally, we present Hi-WAY, an execution engine that integrates ERA and enables the highly scalable execution on Hadoop of scientific workflows written in a number of languages.
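To give a flavor of the mechanism ERA builds on, the sketch below shows a toy list scheduler for heterogeneous workers that speculatively replicates the predicted straggler task and keeps the faster copy. This is an assumption-laden illustration of the general idea, not ERA's actual algorithm; all task runtimes and worker speeds are made up.

```python
import heapq

def schedule(task_runtimes, worker_speeds, replicate_slowest=True):
    """Greedy list schedule on heterogeneous workers; optionally replicate the
    predicted straggler on the next free worker and keep the faster copy.
    Returns the resulting makespan."""
    free_at = [(0.0, w) for w in worker_speeds]   # (time the worker becomes free, worker)
    heapq.heapify(free_at)
    finish = {}
    # place longer tasks first, each on the worker that becomes free earliest
    for task, runtime in sorted(task_runtimes.items(), key=lambda kv: -kv[1]):
        start, worker = heapq.heappop(free_at)
        end = start + runtime / worker_speeds[worker]
        finish[task] = end
        heapq.heappush(free_at, (end, worker))
    if replicate_slowest and finish:
        straggler = max(finish, key=finish.get)   # task predicted to finish last
        start, worker = heapq.heappop(free_at)
        replica_end = start + task_runtimes[straggler] / worker_speeds[worker]
        finish[straggler] = min(finish[straggler], replica_end)
        heapq.heappush(free_at, (replica_end, worker))
    return max(finish.values())

workers = {"fast": 2.0, "slow": 0.5}        # heterogeneous speed factors (made up)
tasks = {"t1": 2.0, "t2": 10.0, "t3": 2.0}  # baseline runtimes (made up)
print(schedule(tasks, workers, replicate_slowest=False))  # 8.0: t3 straggles on the slow worker
print(schedule(tasks, workers))                           # 6.0: the replica on the fast worker wins
```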