11 |
[en] TEAM: AN ARCHITECTURE FOR E-WORKFLOW MANAGEMENT / [pt] TEAM: UMA ARQUITETURA PARA GERÊNCIA DE E-WORKFLOWS / Pereira, Luiz Antonio de Moraes / 30 August 2004 (has links)
[en] In distributed collaborative applications, the use of
centralized repositories for storing shared data and programs compromises some important characteristics of this type of application, such as fault tolerance, scalability, and local autonomy. Applications like Kazaa, Gnutella, and Edutella exemplify the use of peer-to-peer (P2P) computing, which has proven an interesting alternative for solving the problems mentioned above without imposing the restrictions typical of centralized systems, or even of distributed systems such as mediators and HDBMSs. In this work we present TEAM (Teamwork-support Environment Architectural Model), an architecture for managing workflows on the Web. Besides describing the components and connectors of the architecture, which is based on P2P computing, we address the modelling of processes and the management of data, metadata, and execution control information. We also discuss the strategy adopted for disseminating queries and messages to peers in environments based on the architecture. We illustrate the application of TEAM in a case study in e-learning.
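The abstract names Gnutella as a model but leaves the dissemination strategy unspecified; the sketch below shows one plausible reading, Gnutella-style TTL-limited query flooding. All names (Peer, flood) and the TTL policy are our assumptions, not the TEAM design.

```python
from dataclasses import dataclass, field

@dataclass
class Peer:
    pid: str
    data: set = field(default_factory=set)         # items stored locally
    neighbors: list = field(default_factory=list)  # directly connected peers
    seen: set = field(default_factory=set)         # query ids already handled

def flood(peer: Peer, qid: str, keyword: str, ttl: int, hits: list) -> None:
    """Answer a query locally, then forward it until the hop budget runs out."""
    if ttl == 0 or qid in peer.seen:
        return                       # stop on duplicates or exhausted TTL
    peer.seen.add(qid)
    if keyword in peer.data:         # local match: report this peer as a hit
        hits.append(peer.pid)
    for n in peer.neighbors:
        flood(n, qid, keyword, ttl - 1, hits)

# Usage: p1 -> p2 -> p3 in a line; a query from p1 with TTL 3 reaches p3.
p1 = Peer("p1")
p2 = Peer("p2", data={"course"})
p3 = Peer("p3", data={"course"})
p1.neighbors, p2.neighbors = [p2], [p3]
hits: list = []
flood(p1, "q1", "course", ttl=3, hits=hits)
print(hits)  # ['p2', 'p3']
```

Duplicate suppression via the `seen` set is what keeps flooding from looping on cyclic overlay topologies.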
12 |
Context driven workflow adaptation applied to healthcare planning = Adaptação de workflows dirigida por contexto aplicada ao planejamento de saúde / Vilar, Bruno Siqueira Campos Mendonça, 1982- / 25 August 2018 (has links)
Advisors: Claudia Maria Bauzer Medeiros, André Santanchè / Doctoral thesis - Universidade Estadual de Campinas, Instituto de Computação / Previous issue date: 2014 / Abstract: Workflow Management Systems (WfMS) are used to manage the execution of processes, improving the efficiency and efficacy of the procedures in use. The driving forces behind the adoption and development of WfMSs are business and scientific applications. Associated research efforts have resulted in consolidated mechanisms and in consensual protocols and standards. In particular, a scientific WfMS helps scientists specify and run distributed experiments. It provides several features that support activities within an experimental environment, such as the flexibility to change workflow design and the keeping of provenance (and thus reproducibility) of experiments. On the other hand, barring a few research initiatives, WfMSs do not provide appropriate support for dynamic, context-based customization at run-time; on-the-fly adaptations usually require user intervention.
This thesis is concerned with mending this gap, providing WfMSs with a context-aware mechanism to dynamically customize workflow execution. As a result, we designed and developed DynFlow, a software architecture that allows such customization, applied to a specific domain: healthcare planning. This application domain was chosen because it is a very good example of context-sensitive customization. Indeed, healthcare procedures constantly undergo unexpected changes that may occur during a treatment, such as a patient's reaction to a medicine. To meet dynamic customization demands, healthcare planning research has developed semi-automated techniques to support fast changes of the careflow steps according to a patient's state and evolution. One such technique is Computer-Interpretable Guidelines (CIG), whose most prominent member is the Task-Network Model (TNM), a rule-based approach able to build a plan on the fly according to the context. Our research led us to conclude that CIGs do not support features required by health professionals, such as distributed execution, provenance and extensibility, which are available from WfMSs. In other words, CIGs and WfMSs have complementary characteristics, and both are directed towards the execution of activities. Given the above facts, the main contributions of the thesis are the following: (a) the design and development of DynFlow, whose underlying model blends TNM characteristics with WfMS; (b) the characterization of the main advantages and disadvantages of CIG models and workflow models; and (c) the implementation of a prototype, based on ontologies, applied to nursing care. Ontologies are used as a solution to enable interoperability across distinct SWfMS internal representations, as well as to support distinct healthcare vocabularies and procedures / Doctorate / Computer Science / Doctor in Computer Science
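As a rough illustration of the TNM idea the abstract describes, a rule-based plan rebuilt on the fly from context, here is a minimal sketch. The Rule structure, the fever example, and all names are invented for illustration; DynFlow's actual model is considerably richer.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    condition: Callable[[dict], bool]   # predicate over the patient context
    insert_after: str                   # anchor task in the running plan
    task: str                           # task to insert when condition holds

def adapt_plan(plan: list, context: dict, rules: list) -> list:
    """Rebuild the plan, firing every rule whose condition matches the context."""
    adapted = []
    for task in plan:
        adapted.append(task)
        for rule in rules:
            if rule.insert_after == task and rule.condition(context):
                adapted.append(rule.task)  # context-driven insertion
    return adapted

# Usage: a fever reading triggers an extra task after medication.
rules = [Rule(lambda ctx: ctx["temperature"] > 38.0,
              insert_after="administer_medication",
              task="notify_physician")]
plan = ["assess_patient", "administer_medication", "record_outcome"]
print(adapt_plan(plan, {"temperature": 38.7}, rules))
# ['assess_patient', 'administer_medication', 'notify_physician', 'record_outcome']
```

Re-running the same rules as the context evolves is what removes the need for manual intervention at run-time.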
13 |
Data Perspectives of Workflow Schema Evolution: Cases of Task Deletion and Insertion / Arunagiri, Aravindhan / January 2013 (has links) (PDF)
Dynamic changes in the business environment require business processes to be kept up to date, and the workflow management systems supporting these processes need to adapt to such changes rapidly. Workflow management systems, however, lack the ability to dynamically propagate process changes to their process model schemas (workflow templates). The literature on workflow schema evolution emphasizes the impact of changes in control flow, with very little attention to other aspects of a workflow schema. This thesis studies the data aspect (data flow and data model) of a workflow schema during its evolution.
Workflow schema changes can lead to inconsistencies between the underlying database model and the workflow. A rather straightforward approach to the problem would be to abandon the existing database model and start afresh, but this introduces data persistence issues, and migrating data from the old database model to the new one can involve significant system downtime. In this research we develop an approach to address this problem. Business changes demand various types of control flow changes to the business process model (workflow schema), including task insertion, deletion, swapping, movement, replacement, extraction, in-lining, parallelization, and so on. Many of these control flow changes can be realized by combining simple task insertions and deletions, while some, such as embedding a task in a loop or conditional branch and parallelizing tasks, also require the addition or removal of control dependencies between tasks. Since many control flow change patterns involve task insertion and deletion at their core, this thesis studies their impact on the underlying data model and proposes algorithms to dynamically handle the resulting changes in the underlying relational database schema.
First, we identify the basic change patterns that can be implemented using atomic task insertions and deletions. We then characterize these basic patterns in terms of the data flow anomalies (missing, redundant, or conflicting data) they can generate. Data Schema Compliance (DSC) criteria are developed to identify the data changes (i) that make the underlying database schema inconsistent with the modified workflow and (ii) that generate the aforementioned data anomalies. The DSC criteria characterize each change pattern in terms of its ability to work with the current relational data model, and specify the properties a modified workflow must satisfy to remain consistent with the underlying database model. The data of any workflow instance conforming to the DSC criteria can be directly accommodated in the database model.
The data anomalies of task insertion and deletion identified using the DSC criteria are handled dynamically by corresponding data adaptation algorithms, which use the functional dependency constraints in the relational database model to resolve them. Data changes handled this way conform to the DSC criteria and can be directly accommodated in the underlying database schema. With this approach, the workflow can be modified (using task insertion and deletion) and its data changes implemented on the fly by the data adaptation algorithms. The existing data model is thus evolved rather than abandoned after the workflow schema is modified, preserving data persistence in the existing database schema. Detailed procedures for deploying the data adaptation algorithms are presented with illustrative examples.
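As a toy illustration of the kind of dataflow analysis the thesis describes for task deletion, the sketch below flags missing and redundant data after removing a task. The task names and read/write sets are invented; the thesis's DSC criteria and adaptation algorithms go well beyond this.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    reads: set
    writes: set

def deletion_anomalies(workflow: list, victim: str) -> dict:
    """Classify dataflow anomalies caused by removing `victim` from the workflow."""
    removed = next(t for t in workflow if t.name == victim)
    rest = [t for t in workflow if t.name != victim]
    still_read = set().union(*(t.reads for t in rest)) if rest else set()
    still_written = set().union(*(t.writes for t in rest)) if rest else set()
    return {
        # data the deleted task produced that a surviving task still expects
        "missing": (removed.writes & still_read) - still_written,
        # data that is produced but no longer consumed by any remaining task
        "redundant": still_written - still_read,
    }

# Usage: deleting compute_dose starves log_dose of the `dose` item.
wf = [Task("fetch_patient", set(), {"patient"}),
      Task("compute_dose", {"patient"}, {"dose"}),
      Task("log_dose", {"dose"}, {"audit"})]
print(deletion_anomalies(wf, "compute_dose"))
# {'missing': {'dose'}, 'redundant': {'patient', 'audit'}}
# (the toy also flags terminal outputs like `audit` as unconsumed)
```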
14 |
On the construction of decentralised service-oriented orchestration systems / Jaradat, Ward / January 2016 (has links)
Modern science relies on workflow technology to capture, process, and analyse data obtained from scientific instruments. Scientific workflows are precise descriptions of experiments in which multiple computational tasks are coordinated based on the dataflows between them. Orchestrating scientific workflows presents a significant research challenge: workflows are typically executed in a manner such that all data pass through a centralised computer server known as the engine, which causes unnecessary network traffic and leads to a performance bottleneck. These workflows are commonly composed of services that perform computation over geographically distributed resources, and involve the management of dataflows between them. Centralised orchestration is clearly not a scalable approach for coordinating services dispersed across distant geographical locations. This thesis presents a scalable decentralised service-oriented orchestration system that relies on a high-level data coordination language for the specification and execution of workflows. This system's architecture consists of distributed engines, each of which is responsible for executing part of the overall workflow. It exploits parallelism in the workflow by decomposing it into smaller sub-workflows, and determines the most appropriate engines to execute them using computation placement analysis. This permits the workflow logic to be distributed closer to the services providing the data for execution, which reduces the overall data transfer in the workflow and improves its execution time. This thesis provides an evaluation of the presented system, which concludes that decentralised orchestration provides scalability benefits over centralised orchestration and improves the overall performance of executing a service-oriented workflow.
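A minimal sketch of the placement intuition, assuming a simple bytes-transferred cost model: each sub-workflow is assigned to the engine site that minimizes remote data movement. Sites, sizes, and function names are invented; the thesis's computation placement analysis is more elaborate.

```python
def place(subworkflows: dict, engines: list) -> dict:
    """subworkflows: name -> list of (service_site, input_bytes) pairs.
    Returns name -> engine site chosen to minimize remote data transfer."""
    placement = {}
    for name, inputs in subworkflows.items():
        def remote_bytes(site):
            # data held at the engine's own site never crosses the network
            return sum(size for svc_site, size in inputs if svc_site != site)
        placement[name] = min(engines, key=remote_bytes)
    return placement

# Usage: sub-workflow A mostly consumes data from Edinburgh-hosted services,
# so its logic is pushed to the Edinburgh engine instead of a central one.
subs = {"A": [("edinburgh", 900), ("london", 100)],
        "B": [("london", 700), ("edinburgh", 50)]}
print(place(subs, ["edinburgh", "london"]))
# {'A': 'edinburgh', 'B': 'london'}
```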
15 |
Business process verification through a Pi-Calculus virtual machine = Modelo de verificação de processos de negócios através de uma máquina virtual Pi-Calculus / Nader, Marcos Vanine Portilho de, 1954- / 12 January 2006 (has links)
Advisor: Mauricio Ferreira Magalhães / Master's dissertation - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação / Previous issue date: 2006 / Abstract: Two important areas have been in development lately: Business Process Management and Web Services Orchestration. In both of them, the objective is to integrate applications with web service interfaces through the business process paradigm. A number of languages have been proposed, with consensus forming around BPEL (Business Process Execution Language). This dissertation presents a framework for the analysis and verification of BPEL processes through Pi-Calculus, a process algebra with formal mechanisms for process creation and activation, in which processes communicate by exchanging messages over channels using the synchronous rendezvous model. In this framework, the BPEL process is translated into a Pi-Calculus program. A Pi-Calculus Virtual Machine (MVP) receives the Pi-Calculus program and produces all possible reactions, that is, it generates all execution paths the program can take. From this result, properties such as compliance with higher-level specifications, event ordering and deadlock freedom are verified. In practical terms, a tool of this kind can be incorporated into Business Process Management Systems (BPMS) to broaden test coverage during the analysis and implementation phases of a process life cycle. In such systems, repairing an error during the execution phase is much more costly than in traditional systems / Master's / Computer Engineering / Master in Electrical Engineering
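To make the exhaustive-reaction idea concrete, here is a toy state-space explorer for a synchronous-rendezvous calculus: it enumerates every interleaving and labels traces that stop before all processes finish as deadlocks. The encoding of processes as action lists is our simplification, not the MVP's actual input language.

```python
def explore(procs: tuple, trace: tuple = ()) -> list:
    """Enumerate all maximal interleavings of synchronous rendezvous steps.
    A trace that stops before every process is finished is a deadlock."""
    moves = []
    for i, p in enumerate(procs):
        for j, q in enumerate(procs):
            if i == j or not p or not q:
                continue
            # rendezvous: a pending send and recv on the same channel react
            if p[0][0] == "send" and q[0][0] == "recv" and p[0][1] == q[0][1]:
                nxt = list(procs)
                nxt[i], nxt[j] = p[1:], q[1:]
                moves.append((tuple(nxt), trace + (p[0][1],)))
    if not moves:  # no reaction possible: either all done or stuck
        return [("ok" if all(not p for p in procs) else "DEADLOCK", trace)]
    results = []
    for nxt, t in moves:
        results.extend(explore(nxt, t))
    return results

# Usage: a sender, a direct receiver, and a forwarder race on channel "a".
P = (("send", "a"),)                    # emits one message
Q = (("recv", "a"),)                    # consumes one message
R = (("recv", "a"), ("send", "a"))      # forwards the message it receives
print(explore((P, Q, R)))
# [('DEADLOCK', ('a',)), ('ok', ('a', 'a'))]
```

One interleaving starves the forwarder and deadlocks while the other completes, exactly the kind of path-dependent error such exploration is meant to surface before deployment.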
16 |
Implementation of a Laboratory Information Management System to Manage Genomic Samples / Witty, Derick / 05 September 2013 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / A Laboratory Information Management System (LIMS) is designed to manage laboratory processes and data. Its core functionality can be extended through configuration tools and add-on modules to support the implementation of complex laboratory workflows. The purpose of this project is to demonstrate how laboratory data and processes from a complex workflow can be implemented using a LIMS.
Genomic samples have become an important part of the drug development process due to advances in molecular testing technology. This technology evaluates genomic material for disease markers and provides efficient, cost-effective, and accurate results for a growing number of clinical indications. Preparing genomic samples for evaluation requires a complex laboratory process called the precision aliquotting workflow, which processes genomic samples into precisely created aliquots for analysis. The workflow is defined by a set of aliquotting scheme attributes that are executed according to scheme-specific rules logic: the aliquotting scheme defines the attributes of each aliquot based on the sample recovery achieved for the genomic sample, and the rules logic drives the creation of the aliquots from those definitions.
LabWare LIMS is a Windows®-based, open-architecture system that manages laboratory data and workflow processes. A LabWare LIMS model was developed to implement the precision aliquotting workflow using a combination of core functionality and configured code.
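As a hypothetical sketch of recovery-driven aliquotting, the code below picks the first scheme rule a sample's recovered volume satisfies and cuts the volume into aliquots accordingly. Thresholds, volumes, and field names are invented; in LabWare LIMS such rules live in configuration, not hard-coded Python.

```python
from dataclasses import dataclass

@dataclass
class SchemeRule:
    min_recovery_ul: float   # minimum recovered volume to apply this rule
    aliquot_ul: float        # target volume per aliquot
    max_aliquots: int        # cap on aliquots created under this rule

def plan_aliquots(recovered_ul: float, scheme: list) -> list:
    """Return the aliquot volumes the scheme prescribes for this recovery."""
    # try the most demanding rule first, falling back to lower thresholds
    for rule in sorted(scheme, key=lambda r: -r.min_recovery_ul):
        if recovered_ul >= rule.min_recovery_ul:
            count = min(int(recovered_ul // rule.aliquot_ul), rule.max_aliquots)
            return [rule.aliquot_ul] * count
    return []  # recovery too low: no aliquots, sample flagged for review

# Usage: 480 uL recovered -> four 100 uL aliquots under the high-recovery rule.
scheme = [SchemeRule(400.0, 100.0, 4), SchemeRule(150.0, 50.0, 2)]
print(plan_aliquots(480.0, scheme))  # [100.0, 100.0, 100.0, 100.0]
print(plan_aliquots(200.0, scheme))  # [50.0, 50.0]
```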
17 |
Scientific Workflows for Hadoop / Bux, Marc Nicolas / 07 August 2018 (has links)
Scientific workflows provide a means to model, execute, and exchange the increasingly complex analysis pipelines necessary for today's data-driven science. Over the last decades, scientific workflow management systems have emerged to facilitate the design, execution, and monitoring of such workflows. At the same time, the amounts of data generated in various areas of science have outpaced hardware advancements.
Parallelization and distributed execution are generally proposed to deal with increasing amounts of data. However, the resources provided by distributed infrastructures are subject to heterogeneity, dynamic performance changes at runtime, and occasional failures. To leverage the scalability of these infrastructures despite their performance variability, workflow management systems have to progress: parallelization potential in scientific workflows has to be detected and exploited; simulation frameworks, which are commonly employed for the evaluation of scheduling mechanisms, have to consider the instability encountered on the infrastructures they emulate; adaptive scheduling mechanisms have to be employed to optimize resource utilization in the face of instability; and state-of-the-art systems for scalable distributed resource management and storage, such as Apache Hadoop, have to be supported.
This dissertation presents novel solutions for these aspirations. First, we introduce DynamicCloudSim, a cloud computing simulation framework that is able to adequately model the various aspects of variability encountered in computational clouds. Secondly, we outline ERA, an adaptive scheduling policy that optimizes workflow makespan by exploiting heterogeneity, replicating bottlenecks in workflow execution, and adapting to changes in the underlying infrastructure. Finally, we present Hi-WAY, an execution engine that integrates ERA and enables the highly scalable execution of scientific workflows written in a number of languages on Hadoop.
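The replication idea behind ERA can be illustrated with a toy straggler detector: tasks whose projected runtime far exceeds the median become candidates for speculative replication. The progress model and the 1.5x threshold are our assumptions, not ERA's actual policy.

```python
import statistics

def pick_replicas(progress: dict, threshold: float = 1.5) -> list:
    """progress: task -> (fraction_done, seconds_elapsed).
    Returns tasks whose projected runtime exceeds threshold x median."""
    estimates = {t: elapsed / max(done, 1e-9)       # projected total runtime
                 for t, (done, elapsed) in progress.items()}
    median = statistics.median(estimates.values())
    return [t for t, est in estimates.items() if est > threshold * median]

# Usage: align_3 runs on a degraded node and gets flagged for replication.
running = {"align_1": (0.9, 90.0),   # ~100 s projected
           "align_2": (0.8, 85.0),   # ~106 s projected
           "align_3": (0.2, 80.0)}   # ~400 s projected -> straggler
print(pick_replicas(running))        # ['align_3']
```

Launching a replica and keeping whichever copy finishes first trades extra resource consumption for a shorter workflow makespan on unstable infrastructures.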