1 |
Improving Throughput and Predictability of High-volume Business Processes Through Embedded Modeling
DeKeyrel, Joseph S. 01 January 2011 (has links)
Being faster is good. Being predictable is better. A faithful model of a system, loaded to reflect the system's current state, can be used to look into the future and predict performance. Building faithful models of processes with high degrees of uncertainty can be very challenging, especially where this uncertainty exists in terms of processing times, queuing behavior, and re-work rates. Within the context of an electronic, multi-tiered workflow management system (WFMS), the author builds such a model to endogenously quote due dates. A WFMS that manages business objects can be recast as a flexible flow shop in which the stations that a job (representing the business object) passes through are known and the jobs in each station's queue at any point are known. All of the other parameters associated with the flow shop, including job processing times per station and station queuing behavior, are uncertain, though there is a significant body of past performance data that might be brought to bear. The objective, in this environment, is to meet the delivery date promised when the job is accepted. To attack the problem the author develops a novel heuristic algorithm for decomposing the WFMS's event logs to expose non-standard queuing behavior, develops a new simulation component to implement that behavior, and assembles a prototypical system to automate the required historical analysis and allow for on-demand due date quoting through the use of embedded discrete event simulation modeling. The developed software components are flexible enough to allow for both the analysis of past performance in conjunction with the WFMS's event logs and on-demand analysis of new jobs entering the system. Using the proportion of jobs completed within the predicted interval as the measure of effectiveness, the author validates the performance of the system over six months of historical data and during live operations, with both samples achieving the targeted 90% service level.
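A minimal sketch of the general idea behind due-date quoting with an embedded simulation, not the author's actual model: resample per-station sojourn times from historical logs, run Monte Carlo replications of the job's remaining route, and quote the completion-time quantile that matches the target service level. The station names and data below are invented for illustration.

```python
# Hypothetical sketch: quote a due date from simulated completion times.
import random

def quote_due_date(route, history, replications=1000, service_level=0.90):
    """route: list of station names the job still has to visit.
    history: dict mapping station -> list of observed sojourn times (hours).
    Returns the completion-time quantile that should be met with the target
    service level (e.g. 90% of jobs finish within the quoted interval)."""
    totals = []
    for _ in range(replications):
        total = 0.0
        for station in route:
            # Resample an observed sojourn time; a real model would separate
            # queueing behaviour from processing and re-work rates.
            total += random.choice(history[station])
        totals.append(total)
    totals.sort()
    index = min(int(service_level * replications), replications - 1)
    return totals[index]

# Example with made-up historical data:
history = {"intake": [1, 2, 2, 3], "review": [4, 6, 8], "approval": [1, 1, 2]}
print(quote_due_date(["intake", "review", "approval"], history))
```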
|
2 |
A novel workflow management system for handling dynamic process adaptation and compliance
Haji-Omar, Mohamad S. January 2014 (has links)
Modern enterprise organisations rely on dynamic processes. Generally, these processes cannot be modelled once and executed repeatedly without change: enterprise processes may evolve unpredictably according to situations that cannot always be prescribed. However, no mechanism exists to ensure that an updated process does not violate any compliance requirements. Typical workflow processes may follow a process definition and execute several thousand instances using a workflow engine without any changes, which is suitable for routine business processes. When business processes need flexibility, however, adaptive features are required, and because updating processes may violate compliance requirements, automatic compliance checking becomes necessary. The research presented in this thesis investigates the shortcomings of current workflow technology in defining, managing and ensuring the specification and execution of business processes that are dynamic in nature, combined with policy standards throughout the process lifecycle. The findings from the literature review and the system requirements are used to design the proposed system architecture. Since a two-tier reference process model is not sufficient as a basis for an adaptive and compliance-aware workflow management system, a three-tier process model is proposed. The major components of the architecture are process models, business rules and plugin modules. This architecture supports user adaptation with structural checks and dynamic adaptation with data-driven checks. A research prototype, the Adaptive and Compliance Workflow Management System (ACWfMS), was developed based on the proposed architecture to implement the core services of the system for testing and evaluation purposes. The ACWfMS enables the development of a workflow management tool to create or update process models; it automatically validates compliance requirements and, in the case of violations, presents visual feedback to the user. In addition, the architecture facilitates process migration to manage specific instances with modified definitions. A case study based on the postgraduate research process domain is discussed.
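As a rough illustration of the kind of automatic compliance checking described above (the ACWfMS's actual rule language and checks are not detailed here), the sketch below validates an adapted process definition against simple ordering rules; the task and rule names are hypothetical.

```python
# Illustrative sketch only: check a dynamically adapted process definition
# against compliance rules of the form "task A must occur before task B".

def violations(process, rules):
    """process: ordered list of task names after a user adaptation.
    rules: list of (before, after) pairs that must hold in every valid model."""
    position = {task: i for i, task in enumerate(process)}
    problems = []
    for before, after in rules:
        if before not in position or after not in position:
            problems.append(f"missing task: {before if before not in position else after}")
        elif position[before] > position[after]:
            problems.append(f"'{before}' must precede '{after}'")
    return problems

rules = [("submit_proposal", "ethics_approval"), ("ethics_approval", "data_collection")]
adapted = ["submit_proposal", "data_collection", "ethics_approval"]
print(violations(adapted, rules))  # -> ["'ethics_approval' must precede 'data_collection'"]
```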
|
3 |
Workflow management systems, their security and access control mechanisms
Chehrazi, Golriz January 2007 (has links)
This paper gives an overview of workflow management systems (WfMSs) and their security requirements, with a focus on access mechanisms. It is a descriptive paper in which we examine the state of the art of workflow systems, describe which security risks affect WfMSs in particular, and how these can be diminished. WfMSs manage, illustrate and support business processes. They contribute to the performance, automation and optimization of processes, which is important in today's global economy. The security of process flows is important, since sensitive business data need to be protected to inhibit illegal activities, such as blackmailing, imitation and fraud, and to provide good customer service. This paper focuses on access mechanisms because they are the basic security mechanisms used by WfMSs to assure that only authorized users are granted access to data and resources. Also, because of the insecurity of the Internet, which is commonly used as the infrastructure of workflow systems, additional security mechanisms such as PKIs, digital signatures and SSL have to be used to provide secure workflows. Depending on the particular requirements of workflow systems, different extensional access control (AC) mechanisms have been developed to maintain security. But when it comes to commercially used WfMSs, the availability of the system is of utmost importance: it is the prerequisite for the system to be employed by companies. The problem is that there is always a trade-off between availability of the system and security. Because this trade-off is generally resolved in favor of availability, a major part of the developed AC mechanisms are not used in commercial WfMSs. After the first, rather theoretical part of this paper, we examine a commercial WfMS, namely IBM's MQ Workflow, and its security mechanisms. We show vulnerabilities of the system that could be abused by attackers. Afterwards, we show which security mechanisms, in particular AC mechanisms, are provided to secure against threats. We conclude with a summary that highlights the difference between security concepts developed in the research area and those actually implemented in commercially used WfMSs.
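To make the role of access control concrete, here is a minimal role-based access check for workflow activities; it is a generic sketch under assumed role and permission names, not MQ Workflow's actual AC model.

```python
# Minimal role-based access control sketch for workflow activities. Real WfMSs
# layer this with organisational data, delegation and separation-of-duty rules.

ROLE_PERMISSIONS = {
    "clerk": {"create_claim", "edit_claim"},
    "manager": {"create_claim", "edit_claim", "approve_claim"},
    "auditor": {"view_claim"},
}

def can_perform(user_roles, activity):
    """Return True if any of the user's roles grants the workflow activity."""
    return any(activity in ROLE_PERMISSIONS.get(role, set()) for role in user_roles)

print(can_perform(["clerk"], "approve_claim"))   # False
print(can_perform(["manager"], "approve_claim")) # True
```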
|
5 |
Analýza využití workflow produktů / Analysis of the Use of Workflow Products
Šich, Jan January 2012 (has links)
The thesis focuses on workflow processes and on the systems used to design and manage them. It aims to familiarize readers with the nature of these systems and the features they should support, and it pursues three goals. The first goal is to determine criteria by which the quality of a workflow management system can be judged; these criteria must be ranked by their importance in specific situations, for which criteria weights and a technique for calculating them are designed. The second goal is the selection of suitable tools for testing and a subsequent practical test, focused on the defined criteria and the metrics chosen for their assessment. The third goal is a practical example: redesigning a real process so that a workflow system can be applied to increase its effectiveness. The selected process is the transfer of information between the school and the students' legal guardians by e-mail, prepared for the secondary school in Dubno. To achieve these objectives, the thesis is split into two parts. The first two chapters give a theoretical introduction to the benefits that workflow systems can bring and the steps required to use them effectively and meaningfully; this information is drawn from literature focused primarily on workflow and process management. The third chapter covers the practical part: selected studies on evaluation criteria for workflow systems are reviewed, a suitable combined set of criteria is assembled from the current sets, and it is applied to selected workflow management systems. The last chapter processes information obtained from employees of SOS and SOU Dubno, describes the current state of the selected process, and proposes how to make it more effective using a workflow system. The contribution of the thesis is, first, an assessment of the current state of workflow systems on the market and, second, a demonstration of how they can be used to increase the efficiency of an administrative process at SOS and SOU Dubno.
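The criteria-weighting idea can be illustrated with a simple weighted-sum score per candidate system; the criteria, weights and scores below are assumptions, not the thesis's actual values.

```python
# Hedged sketch of weighted-criteria scoring: each workflow system gets a score
# per criterion (e.g. 1-5) and each criterion a weight reflecting its importance
# for the situation at hand.

def weighted_score(scores, weights):
    """scores, weights: dicts keyed by criterion name; weights need not sum to 1."""
    total_weight = sum(weights.values())
    return sum(scores[c] * weights[c] for c in weights) / total_weight

weights = {"process_modelling": 0.4, "monitoring": 0.3, "integration": 0.3}
candidate_a = {"process_modelling": 4, "monitoring": 3, "integration": 5}
candidate_b = {"process_modelling": 5, "monitoring": 2, "integration": 3}
print(weighted_score(candidate_a, weights), weighted_score(candidate_b, weights))
```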
|
6 |
Web-based Thesis Workflow Management System
Garip, Omer 15 May 2020 (has links)
No description available.
|
7 |
A Logic-Based Methodology for Business Process Analysis and Design: Linking Business Policies to Workflow Models
Wang, Jiannan January 2006 (has links)
Today, organizations often need to modify their business processes to cope with changes in the environment, such as mergers/acquisitions, new government regulations, and new customer demand. Most organizations also have a set of business policies defining the way they conduct their business. Although there has been extensive research on process analysis and design, how to systematically extract workflow models from business policies has not been studied, resulting in a missing link between the specification of business policies and the modeling of business processes. Given that process changes are often determined by executives and managers at the policy level, the aforementioned missing link often leads to inefficient and inaccurate implementation of process changes by business analysts and process designers. We refer to this problem as the policy mismatch problem in business process management. For organizations with large-scale business processes and a large number of business policies, solving the policy mismatch problem is very difficult and challenging. In this dissertation, we attempt to provide a formal link between business policies and workflow models by proposing a logic-based methodology for process analysis and design. In particular, we first propose a Policy-driven Process Design (PPD) methodology to formalize the procedure of extracting workflow models from business policies. In PPD, narrative process policies are parsed into precise information on various workflow components, and a set of process design rules and algorithms are applied to generate workflow models from that information. We also develop a logic-based process modeling language named Unified Predicate Language (UPL). UPL is able to represent all workflow components in a single logic format and provides analytical capability via logic inference and query. We demonstrate UPL's expressive power and analytical ability by applying it to process design and process change analysis. In particular, we use UPL to define and classify process change anomalies and develop algorithms to verify and enforce process consistency. The Policy-driven Process Design, Unified Predicate Language, and process change analysis approach found in this dissertation contribute to business process management research by providing a formal methodology for resolving the policy mismatch problem.
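As a toy illustration of the general logic-based approach (this is not UPL itself, whose syntax and inference rules are defined in the dissertation), workflow facts can be stored as predicates and design rules checked by simple queries; all predicate, task and role names below are invented.

```python
# Toy predicate store for workflow facts plus one design-rule query.

facts = {
    ("task", "check_credit"), ("task", "approve_loan"),
    ("precedes", "check_credit", "approve_loan"),
    ("performs", "credit_officer", "check_credit"),
}

def holds(*predicate):
    """True if the given ground predicate is asserted."""
    return predicate in facts

def unassigned_tasks():
    """Design-rule query: every task must have some role assigned to perform it."""
    tasks = {f[1] for f in facts if f[0] == "task"}
    assigned = {f[2] for f in facts if f[0] == "performs"}
    return tasks - assigned

print(holds("precedes", "check_credit", "approve_loan"))  # True
print(unassigned_tasks())  # {'approve_loan'} -> flags a policy/model mismatch
```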
|
8 |
An investigation into the relevance of flexibility- and interoperability requirements for implementation processes for workflow-management-applications
Kühl, Lukas W. H. January 2009 (has links)
Flexibility and interoperability have become important characteristics for organisations and their business processes. The need to control flexible business processes within an organisation's boundaries and between organisations imposes major requirements on a company's process control capabilities. Workflow Management Systems (WFMS) try to fulfil these requirements by offering respective product features. Evidence suggests that the achievement of flexible business processes and inter-organisational process control is also influenced by the implementation processes for Workflow Management Applications (WFMA). [A WFMA comprises the WFMS and "all WFMS specific data with regard to one or more business processes" [VER01]]. The impact of a WFMA implementation methodology on the fulfilment of these requirements is the research scope of the project. The thesis provides knowledge in the following areas:
1. Review of the relationship between workflow management and the claim for process flexibility and interoperability.
2. Definition of a research and evaluation framework for workflow projects, composed of all relevant research variables that have been identified for the thesis.
3. Empirical survey of relevant workflow-project objectives and their priority in the context of process flexibility and interoperability.
4. Empirical survey of the objectives' achievement.
5. Empirical survey of methodologies and activities that have been applied within workflow projects.
6. Derivation of the project methodologies' effectiveness in terms of the impact that applied activities had on project objectives.
7. Evaluation of existing workflow life-cycle models in accordance with the research framework.
8. Identification of basic improvements for workflow implementation processes with respect to the achievement of flexible and interoperable business processes.
The first part of the thesis argues the relevance of the subject. Afterwards, the research variables that constitute the evaluation framework for WFMA implementation processes are identified and defined step by step. An empirical study then proves the variables' effectiveness for the achievement of process flexibility and interoperability within the WFMA implementation process. After this, the framework is applied to evaluate chosen WFMA implementation methodologies. Identified weaknesses and effective methodological aspects are utilised to develop generic methodological improvements. These improvements are later validated by means of a case study and interviews with workflow experts.
|
9 |
[en] WORK-FLOW EXECUTION IN DISCONNECTED ENVIRONMENTS / [pt] EXECUÇÃO DE WORKFLOW EM AMBIENTES COM DESCONEXÃO
FABIO MEIRA DE OLIVEIRA DIAS 15 September 2003 (has links)
[en] Workflow management systems are frequently used for modeling, monitoring and controlling the coordinated execution of activities performed by workgroups in a variety of contexts. With the widespread use of portable computers and their growing computational power, conventional systems have often proved to be overly restrictive, effectively limiting the level of autonomy of the users involved. The primary goal of this work is to identify and analyze different flexibilization techniques and mechanisms that can be employed in a workflow management system aimed at supporting disconnected operation. The main challenge is to provide a satisfactory degree of independence among individuals in cooperating teams who share a common goal and work in disconnected environments. In order to test the viability of the ideas discussed in this dissertation, a system was built whose design met the requirements presented in the text and which allows the exploration of specific features of different kinds of workflow so as to enhance execution flexibility, without compromising the predefined structure.
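One plausible flexibilization mechanism for disconnected operation, offered here as an assumption rather than the dissertation's actual design, is a client-side worklist that buffers steps completed offline and replays them to the coordinator on reconnection:

```python
# Sketch: buffer workflow steps completed while disconnected, then synchronize.

class DisconnectedWorklist:
    def __init__(self):
        self.pending = []          # steps completed offline, awaiting sync

    def complete_step(self, case_id, activity, data):
        self.pending.append({"case": case_id, "activity": activity, "data": data})

    def synchronize(self, coordinator):
        """Replay buffered steps; keep the ones the coordinator rejects for rework."""
        rejected = []
        for step in self.pending:
            if not coordinator(step):
                rejected.append(step)
        self.pending = rejected
        return rejected

worklist = DisconnectedWorklist()
worklist.complete_step("case-17", "draft_report", {"pages": 12})
print(worklist.synchronize(lambda step: step["activity"] != "sign_off"))  # -> []
```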
|
10 |
Management of generic and multi-platform workflows for exploiting heterogeneous environments on e-Science
Carrión Collado, Abel Antonio 01 September 2017 (has links)
Scientific Workflows (SWFs) are widely used to model applications in e-Science. In this programming model, scientific applications are described as a set of tasks that have dependencies among them. During the last decades, the execution of scientific workflows has been successfully performed on the available computing infrastructures (supercomputers, clusters and grids) using software programs called Workflow Management Systems (WMSs), which orchestrate the workload on top of these computing infrastructures. However, because each computing infrastructure has its own architecture and each scientific application efficiently exploits one of these infrastructures, it is necessary to organize the way in which they are executed.
WMSs need to get the most out of all the available computing and storage resources. Traditionally, scientific workflow applications have been extensively deployed in high-performance computing infrastructures (such as supercomputers and clusters) and grids. But in recent years, the advent of cloud computing infrastructures has opened the door to using on-demand infrastructures to complement or even replace local ones. However, new issues have arisen, such as the integration of hybrid resources and the compromise between infrastructure reuse and elasticity, all on the basis of cost-efficiency.
The main contribution of this thesis is an ad-hoc solution for managing workflows that exploits the capabilities of cloud computing orchestrators to deploy resources on demand according to the workload and to combine heterogeneous cloud providers (such as on-premise clouds and public clouds) and traditional infrastructures (supercomputers and clusters) to minimize costs and response time. The thesis does not propose yet another WMS, but demonstrates the benefits of integrating cloud orchestration when running complex workflows. The thesis presents experiments with several configurations and multiple heterogeneous backends, using a realistic comparative genomics workflow called Orthosearch, to migrate memory-intensive workload to public infrastructures while keeping other blocks of the experiment running locally. The running time and cost of the experiments are computed and best practices are suggested. / Carrión Collado, AA. (2017). Management of generic and multi-platform workflows for exploiting heterogeneous environments on e-Science [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/86179
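A hedged sketch of the placement trade-off the thesis addresses: deciding whether a memory-intensive workflow block should run on the local cluster or a public cloud, based on memory fit, estimated cost and local queue delay. The function, thresholds and numbers are invented for illustration and do not reflect the thesis's actual scheduler.

```python
# Hypothetical placement heuristic for hybrid (local cluster + public cloud) runs.

def place_block(block, local_free_gb, cloud_price_per_hour, local_wait_hours):
    """block: dict with expected 'memory_gb' and 'runtime_hours'."""
    if block["memory_gb"] > local_free_gb:
        return "public-cloud"  # local cluster cannot hold it
    cloud_cost = block["runtime_hours"] * cloud_price_per_hour
    # Prefer local execution unless waiting for local resources costs more time
    # than a modest cloud spend would save; the trade-off is site-specific.
    if local_wait_hours > block["runtime_hours"] and cloud_cost < 5.0:
        return "public-cloud"
    return "local-cluster"

orthosearch_block = {"memory_gb": 96, "runtime_hours": 3}
print(place_block(orthosearch_block, local_free_gb=64,
                  cloud_price_per_hour=0.9, local_wait_hours=6))  # -> public-cloud
```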
|