221 |
Scheduling and deployment of large-scale applications on Cloud platforms. Muresan, Adrian. 10 December 2012 (has links) (PDF)
Infrastructure-as-a-Service (IaaS) Cloud platforms are increasingly used in the IT industry. IaaS platforms are providers of virtual resources from a catalogue of predefined types. Improvements in virtualization technology make it possible to create and destroy virtual machines on the fly with low overhead. As a result, the great benefit of IaaS platforms is the ability to scale a virtual platform on the fly while paying only for the resources used. From a research point of view, IaaS platforms raise new questions about how to make efficient virtual platform scaling decisions and how to efficiently schedule applications on dynamic platforms. This thesis is a step towards exploring and answering these questions. The first contribution of this work focuses on resource management: automatically scaling cloud client applications to meet changing platform usage. Various studies have shown self-similarities in web platform traffic, which imply the existence of usage patterns that may or may not be periodic. We have developed an automatic platform scaling strategy that predicts platform usage by identifying non-periodic usage patterns and extrapolating future platform usage from them. Next, we focused on extending an existing grid platform with on-demand resources from an IaaS platform. We have developed an extension to the DIET (Distributed Interactive Engineering Toolkit) middleware that uses a virtual-market-based approach to perform resource allocation. Each user is given a sum of virtual currency to use for running his or her tasks. This mechanism helps ensure fair platform sharing between users. The third and final contribution targets application management for IaaS platforms. We have studied and developed an allocation strategy for budget-constrained workflow applications that target IaaS Cloud platforms. 
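The scaling strategy described above can be illustrated with a minimal sketch. This is not the thesis's actual prediction algorithm (which identifies non-periodic usage patterns); it only shows the general idea of extrapolating recent load and sizing the virtual platform to match. All function names, capacities and thresholds here are invented for illustration.

```python
import math

def predict_next_load(history, window=5):
    """Extrapolate the next load value from the linear trend of the
    last `window` samples (a crude stand-in for pattern extrapolation)."""
    recent = history[-window:]
    if len(recent) < 2:
        return float(recent[-1])
    # average step between consecutive samples = trend estimate
    trend = (recent[-1] - recent[0]) / (len(recent) - 1)
    return max(0.0, recent[-1] + trend)

def vms_needed(predicted_load, capacity_per_vm=100.0):
    """Round up so the predicted load fits on the rented VMs."""
    return max(1, math.ceil(predicted_load / capacity_per_vm))

history = [120, 150, 180, 210, 240]   # requests/s over recent intervals
print(vms_needed(predict_next_load(history)))  # trend predicts 270 -> 3 VMs
```

A real strategy would also damp oscillations (to avoid paying for VMs that are destroyed moments later), which is exactly why prediction rather than purely reactive scaling matters on pay-per-use platforms.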
The workflow abstraction is very common among scientific applications; examples are easy to find in fields from bioinformatics to geology. In this work we consider a general model of workflow applications that comprise parallel tasks and permit non-deterministic transitions. We have elaborated two budget-constrained allocation strategies for this type of workflow. The problem is a bi-criteria optimization problem, as we optimize both budget and workflow makespan. This work has been validated in practice by implementing it on top of the Nimbus open-source cloud platform and the DIET MADAG workflow engine, and testing it with a cosmological simulation workflow application called RAMSES. RAMSES is a parallel MPI application that, as part of this work, has been ported for execution on dynamic virtual platforms. Both theoretical simulations and practical experiments have shown encouraging results and improvements.
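To make the bi-criteria trade-off concrete, here is a hedged sketch of a greedy budget-constrained allocator: each task gets the fastest (most expensive) instance type its remaining budget allows, trading spare budget for makespan. The instance catalogue, costs and runtimes are made up and this is not one of the thesis's two actual strategies.

```python
# (name, cost_per_task, runtime) -- illustrative catalogue entries
INSTANCE_TYPES = [
    ("small", 1.0, 60.0),
    ("medium", 2.5, 30.0),
    ("large", 5.0, 15.0),
]

def allocate(tasks, budget):
    """Assign each task the fastest instance type it can still afford,
    falling back to the cheapest type once the budget is exhausted."""
    allocation, remaining = {}, budget
    for task in tasks:
        affordable = [t for t in INSTANCE_TYPES if t[1] <= remaining]
        choice = min(affordable, key=lambda t: t[2]) if affordable else INSTANCE_TYPES[0]
        allocation[task] = choice[0]
        remaining -= choice[1]
    return allocation

print(allocate(["t1", "t2", "t3"], budget=9.0))
# {'t1': 'large', 't2': 'medium', 't3': 'small'}
```

Spending the budget greedily on the earliest tasks is rarely optimal for makespan; that tension between the two criteria is what makes the allocation problem non-trivial.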
|
222 |
A Lightweight Coordination Approach for Resource-Centric Collaborations. Ghandehari, Morteza. Unknown Date
No description available.
|
223 |
Analysing supply chain operation dynamics through logic-based modelling and simulation. Manataki, Areti. January 2012 (has links)
Supply Chain Management (SCM) is becoming increasingly important in the modern business world. In order to effectively manage and integrate a supply chain (SC), a deep understanding of overall SC operation dynamics is needed. This involves understanding how the decisions, actions and interactions between SC members affect each other, and how these relate to SC performance and SC disruptions. Achieving such an understanding is not an easy task, given the complex and dynamic nature of supply chains. Existing simulation approaches do not provide an explanation of simulation results, while related work on SC disruption analysis studies SC disruptions separately from SC operation and performance. This thesis presents a logic-based approach for modelling, simulating and explaining SC operation that fills these gaps. SC members are modelled as logic-based intelligent agents consisting of a reasoning layer, represented through business rules; a process layer, represented through business processes; and a communication layer, represented through communicative actions. The SC operation model is declaratively formalised, and a rule-based specification is provided for the execution semantics of the formal model, thus driving the simulation of SC operation. The choice of a logic-based approach enables the automated generation of explanations about simulated behaviours. SC disruptions are included in the SC operation model, and a causal model is defined, capturing relationships between different types of SC disruptions and low SC performance. In this way, explanations can be generated about causal relationships between occurred SC disruptions and low SC performance. This approach was analytically and empirically evaluated with the participation of SCM and business experts. 
The results indicate the following: Firstly, the approach is useful, as it allows for higher efficiency, correctness and certainty about explanations of SC operation compared to the case of no automated explanation support. Secondly, it improves the understanding of the domain for non-SCM experts with respect to their correctness and efficiency; the correctness improvement is significantly higher compared to the case of no prior explanation system use, without loss of efficiency. Thirdly, the logic-based approach allows for maintainability and reusability with respect to the specification of SC operation input models, the developed simulation system and the developed explanation system.
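The agent design described above, a reasoning layer of business rules driven by rule-based execution semantics, can be sketched as condition-action rules fired against the simulation state, with a trace recorded so that behaviour can later be explained. The rule content and state fields below are invented; the thesis's actual formalisation is declarative and richer than this toy loop.

```python
def make_rule(name, condition, action):
    return {"name": name, "condition": condition, "action": action}

# one invented business rule for an SC agent's reasoning layer:
# reorder stock when it falls below the reorder point
RULES = [
    make_rule("reorder",
              lambda s: s["stock"] < s["reorder_point"],
              lambda s: s.update(stock=s["stock"] + s["order_qty"])),
]

def step(state, rules, trace):
    """Fire every rule whose condition holds against the current state;
    the recorded trace is what makes automated explanation possible."""
    for rule in rules:
        if rule["condition"](state):
            rule["action"](state)
            trace.append(rule["name"])
    return state

state, trace = {"stock": 3, "reorder_point": 5, "order_qty": 10}, []
step(state, RULES, trace)
print(state["stock"], trace)  # 13 ['reorder']
```

The point of the trace is the one made in the abstract: because behaviour arises from explicit rules rather than opaque simulation code, the system can answer "why did the agent do X?" by citing the rules that fired.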
|
224 |
Integration of MRI into the radiotherapy workflow. Jonsson, Joakim. January 2013 (has links)
Modern radiotherapy treatments are almost exclusively based on computed tomography (CT) images. CT images are acquired using x-rays and therefore reflect the radiation-interaction properties of the imaged material. This information is used by the treatment planning system to perform accurate dose calculations, and the data are also well suited for creating digitally reconstructed radiographs for comparing patient set-up at the treatment machine, where x-ray images are routinely acquired for this purpose. The magnetic resonance (MR) scanner has many attractive features for radiotherapy purposes. Its soft-tissue contrast is far superior to that of CT, and it is possible to vary the sequences in order to visualize different anatomical and physiological properties of an organ. Both of these properties may contribute to increased accuracy of radiotherapy treatment. Using MR images by themselves for treatment planning is, however, problematic. MR data reflect the magnetic properties of protons and thus have no connection to the radiation-interaction properties of the material. MRI also has inherent difficulty imaging bone, which appears in the images as areas of no signal, similar to air. This makes both dose calculation and patient positioning at the treatment machine troublesome. Several clinics use MR images together with CT images to perform treatment planning. The images are registered to a common coordinate system, a process often described as image fusion. In these cases, the MR images are primarily used for target definition and the CT images are used for dose calculations. 
This method is not ideal, however, since the image fusion may introduce systematic uncertainties into the treatment: the tumor is often able to move relatively freely with respect to the patient's bony anatomy and outer contour, especially when the image registration algorithms take the entire patient anatomy in the volume of interest into account. The work presented in the thesis "Integration of MRI into the radiotherapy workflow" aims to investigate the possibilities of workflows based entirely on MRI without using image registration, as well as workflows using image registration methods that are better suited for targets that can move with respect to the surrounding bony anatomy, such as the prostate.
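One commonly discussed way to enable dose calculation on MR data alone, given that MR intensities carry no radiation-interaction information, is bulk density override: segment the MR image into tissue classes and assign each class a representative CT number. This sketch illustrates that idea only; it is not necessarily the method developed in the thesis, and the tissue classes and Hounsfield-unit values are placeholders.

```python
# placeholder bulk CT numbers (Hounsfield units) per MR-segmented class
BULK_HU = {"air": -1000, "soft_tissue": 20, "bone": 700}

def pseudo_ct(segmentation):
    """Map a per-voxel tissue-class map to a bulk-density HU map,
    producing a CT-like volume usable for dose calculation."""
    return [BULK_HU[tissue] for tissue in segmentation]

voxels = ["air", "soft_tissue", "bone", "soft_tissue"]
print(pseudo_ct(voxels))  # [-1000, 20, 700, 20]
```

The hard part in practice is the segmentation itself, precisely because bone and air both appear as signal voids on conventional MR sequences.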
|
225 |
Specification And Scheduling Of Workflows Under Resource Allocation Constraints. Senkul Karagoz, Pinar. 01 January 2003 (has links) (PDF)
A workflow is a collection of tasks organized to accomplish some business process. It also defines the order of task invocation or the conditions under which tasks must be invoked, task synchronization, and information flow. Before the workflow is executed, a correct execution schema, in other words the schedule of the workflow, must be determined. Workflow scheduling is the problem of finding an execution sequence of tasks that obeys the business logic of the workflow. Research on the specification and scheduling of workflows has concentrated on temporal and causality constraints, which specify existence and order dependencies among tasks. However, another set of constraints, those that specify resource allocation, is equally important. The resources in a workflow environment are agents, such as persons, machines, or software, that execute the tasks. Executing a task has a cost, which may vary depending on the resources allocated to execute that task. Resource allocation constraints define restrictions on how to allocate resources, and scheduling under resource allocation constraints provides proper resource allocation to tasks. In this thesis, we present two approaches to specifying and scheduling workflows under resource allocation constraints as well as temporal and causality constraints. In the first approach, we present an architecture whose core and novel parts are a specification language with the ability to express resources and resource allocation constraints, and a scheduler module that contains a constraint solver in order to find correct resource assignments. In the second approach, we developed a new logical formalism, called Concurrent Constraint Transaction Logic (CCTR), which integrates constraint logic programming (CLP) and Concurrent Transaction Logic, and a logic-based workflow scheduler based on this new formalism. CCTR has constructs to specify resource allocation constraints as well as workflows, and it provides semantics for these specifications so that the validity of a schedule can be checked.
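The scheduler module's constraint-solving step can be sketched as a small backtracking search that assigns a resource to each task while respecting allocation constraints. This is a generic CSP sketch, not the thesis's specification language or its CCTR formalism; the tasks, resources and constraints are invented examples.

```python
def solve(tasks, resources, constraints, assignment=None):
    """Backtracking search for one {task: resource} assignment
    satisfying every constraint predicate."""
    assignment = assignment or {}
    if len(assignment) == len(tasks):
        return assignment
    task = tasks[len(assignment)]
    for res in resources:
        trial = {**assignment, task: res}
        if all(c(trial) for c in constraints):
            result = solve(tasks, resources, constraints, trial)
            if result:
                return result
    return None  # no valid assignment exists

# example allocation constraints:
# t1 and t2 must use different resources; t3 must run on "machine"
constraints = [
    lambda a: not ("t1" in a and "t2" in a) or a["t1"] != a["t2"],
    lambda a: a.get("t3", "machine") == "machine",
]
print(solve(["t1", "t2", "t3"], ["person", "machine"], constraints))
```

A dedicated constraint solver would add propagation and cost reasoning (since task costs vary with the allocated resource), but the search structure is the same.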
|
226 |
The impact of information technology during the restructuring of a premier public hospital. Wee, May Li Stella. Unknown Date (has links)
This research focuses on the impact of Information Technology (IT) and the corresponding outcomes during an intense period of change and restructuring of the Singapore General Hospital (SGH), the premier public hospital in Singapore. / The primary aim of the Singapore Government in restructuring hospitals was to control costs because of limited resources. Singapore is a small city-state devoid of natural resources such as gas, oil or minerals; its only resource is a population of four million people. The Singapore Government allocates 3% of GDP to health-care, and increasing costs coupled with an ageing population and growing public demand for better health-care services have made it difficult to increase health-care funds indefinitely. Hence, to control rising costs, the question was whether Information Technology could help, apart from other areas of management, viz. pruning staff, reducing costs, etc. At the time of restructuring SGH, management looked at how IT could play an integral role in cutting costs, streamlining processes and improving overall efficiency and effectiveness. Ultimately, the aim was to achieve the Holy Grail of a fully integrated, campus-wide electronic medical record (EMR). However, prior to developing an EMR, there was a multitude of other areas in which IT could be harnessed for greater efficiency, and the challenge was to use IT to tie together the business side and the medical side of running a large public hospital. IT staff had to develop user-friendly software adapted to the needs of end-users, as the aim was to provide the best patient care outcomes. The business side consisted of tasks of varying complexities, viz. 
updated billings for hospital charges; regular updates of the large inventory of medical consumables and hospital supplies, with data available at any point in time to avoid keeping excessive stocks beyond expiry and to order sufficient stocks to meet seasonal demands; and monthly updates of bed occupancy, OT utilisation, ward and clinic resources, etc. / On the medical side, IT was used to integrate various laboratories, wards and satellite pharmacies to cut down on time-consuming, voluminous paperwork, as well as to assist staff to code, tag and deliver correct food trolleys to the various inpatient wards, conduct regular medical audits and quality control of hospital operations to improve efficiency, and identify outliers in respect of ineffective OT utilisation. IT was also used to establish resources for one-stop medical centres, so that patients would not have to go from pillar to post to receive treatment, as well as to develop a fully integrated EMR with the patient's full medical record, i.e. previous drugs taken, dosage prescribed, history of drug allergies, etc. The data could be collated, mined and stored to find new treatment modalities for better patient outcomes. / This thesis is confined to information technology (IT) and excludes medical technology, as IT was instrumental in bringing about many of the progressive developments at SGH from the time of restructuring to date. IT transcended clinical and non-clinical departments throughout the hospital to facilitate improvements in hospital operations, whereas medical technology was restricted to medical specialists who had a specific interest in acquiring and using the medical techniques, knowledge or equipment associated with the technology. / Public hospitals have an imperative need to utilise the enormous capacity of IT to enhance and improve operational performance by sorting and sifting information effectively and efficiently, viz. medical records, financial statistics, etc. 
The significance of the research findings would be instructive to comparably complex public hospitals that are challenged to tap the important potential of IT to maintain, sustain or boost their organisational capabilities and hence improve their competitiveness to remain viable in the health-care industry. A total of 100 questionnaires were sent out to various categories of staff, and 10 face-to-face interviews were conducted to obtain first-hand accounts of the complex and routine issues that direct users of IT had to grapple with during the period of hospital restructuring. The important findings are set out in detail in the data analysis as well as in the conclusion to explicitly illustrate the contributions of this research thesis. / The findings illuminate the ways in which IT impacted upon the daily work of a large and complex public hospital. Detailed analysis of the process of change, in which IT is a constantly evolving form of innovation, throws light on the slow way in which hospital staff gradually became attuned to the potential of the tools of technology to enhance their work performance and productivity. This research material is especially useful to hospital administrators who have the responsibility to craft workable policies and practical guidelines for incorporating IT into organisational operations. The research, therefore, contributes to policy making in regard to the administrative operations of a large public-sector hospital. This thesis is submitted as a new and original contribution in the context of a large public-sector hospital in a developed country. / Thesis (PhDBusinessandManagement)--University of South Australia, 2005.
|
227 |
Responsive Workflows: Design, Execution and Analysis of Interruption Policy Models. Belinda Melanie Carter. Unknown Date (has links)
Business processes form the backbone of all business operations, and workflow technology has enabled companies to gain significant productivity benefits through the automatic enactment of routine, repetitive processes. Process automation can be achieved by encoding the business rules and procedures into the applications, but capturing the process logic in a graphical workflow model allows the process to be specified, validated and ultimately maintained by business analysts with limited technical knowledge. The process models can also be automatically verified at design-time to detect structural issues such as deadlock and ensure correct data flow during process execution. These benefits have resulted in the success of workflow technology in a variety of industries, although workflows are often criticised for being too rigid, particularly in light of their recent deployment in collaborative applications such as e-business. Generally, many events can impact on the execution of a workflow process. Initially, the workflow is triggered by an external event (for example, receipt of an order). Participants then interact with the workflow system through the worklist as they perform constituent tasks of the workflow, driving the progression of each process instance through the model until its completion. For traditional workflow processes, this functionality was sufficient. However, new generation 'responsive' workflow technology must facilitate interaction with the external environment during workflow execution. For example, during the execution of an 'order to cash' process, the customer may attempt to cancel the order or update the shipping address. We call these events 'interruptions'. The potential occurrence of interruptions can be anticipated but, unlike the other workflow events, they are never required to occur in order to successfully execute any process instance. 
Interruptions can also occur at any stage during process execution, and may therefore be considered as 'expected, asynchronous exceptions' during the execution of workflow processes. Every interruption must be handled, and the desired reaction often depends on the situation. For example, an address update may not be permitted after a certain point, where this point depends on the customer type, and a shipping charge or refund may be applicable, depending on the original and new delivery region. Therefore, a set of rules is associated with each interruption, such that if a condition is satisfied when the event occurs, a particular action is to be performed. This set of rules forms a policy to handle each interruption. Several workflow systems do facilitate the automatic enforcement of 'exception handling' rules and support the reuse of code fragments to enable the limited specification and maintenance of rules by non-technical users. However, this functionality is not represented in a formal, intuitive model. Moreover, we argue that inadequate consideration is given to the verification of the rules, with insufficient support provided for the detection of issues at design-time that could hinder effective maintenance of the process logic or interfere with the interruption handling functionality at run-time. This thesis presents a framework to capture, analyse and enforce interruption process logic for highly responsive processes without compromising the benefits of workflow technology. We address these issues in two stages. In the first stage, we consider that the reaction to an interruption event is dependent on three factors: the progress of the process instance with respect to the workflow model, the values of the associated case data variables at the time at which the event occurs, and the data embedded in the event. 
In the second stage, we consider that the reaction to each interruption event may also depend on the other events that have also been detected, that is, we allow interruptions to be defined through event patterns or complex events. We thus consider the issues of definition, analysis and enactment for both 'basic' and 'extended' interruption policy models. First, we introduce a method to model interruption policies in an intuitive but executable manner such that they may be maintained without technical support. We then address the issue of execution, detailing the required system functionality and proposing a reference architecture for the automatic enforcement of the policies. Finally, we introduce a set of formal, generic correctness criteria and a verification procedure for the models. For extended policy models, we introduce and compare two alternative execution models for the evaluation of logical expressions that represent interruption patterns. Finally, we present a thorough analysis of related verification issues, considering both the system and user perspectives, in order to ensure correct process execution and also provide support for the user in semantic validation of the interruption policies.
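The interruption policies described above, a set of condition-action rules consulted when an asynchronous event arrives, can be sketched as follows. This is only an illustration of the policy idea, not the thesis's executable model or reference architecture; the rules, fields and actions mirror the abstract's address-update example but are otherwise invented.

```python
def handle_interruption(event, instance, policy):
    """Return the action of the first rule whose condition matches the
    event and the current state of the process instance."""
    for condition, action in policy:
        if condition(event, instance):
            return action
    return "ignore"

# policy for an 'address update' interruption in an order-to-cash process
address_update_policy = [
    # past the shipping task, the address may no longer change
    (lambda e, i: i["progress"] >= i["shipping_task"], "reject"),
    # premium customers are never charged for the change
    (lambda e, i: i["customer_type"] == "premium", "update"),
    # everyone else pays a handling fee
    (lambda e, i: True, "update_with_fee"),
]

instance = {"progress": 2, "shipping_task": 5, "customer_type": "standard"}
print(handle_interruption({"type": "address_update"}, instance,
                          address_update_policy))  # update_with_fee
```

Note how the chosen action depends on exactly the three factors the abstract identifies: the instance's progress, its case data, and the event itself. Verifying such a policy, e.g. that some rule always matches, is the kind of design-time check the thesis formalises.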
|
228 |
Responsive Workflows: Design, Execution and Analysis of Interruption Policy ModelsBelinda Melanie Carter Unknown Date (has links)
Business processes form the backbone of all business operations, and workflow technology has enabled companies to gain significant productivity benefits through the automatic enactment of routine, repetitive processes. Process automation can be achieved by encoding the business rules and procedures into the applications, but capturing the process logic in a graphical workflow model allows the process to be specified, validated and ultimately maintained by business analysts with limited technical knowledge. The process models can also be automatically verified at design-time to detect structural issues such as deadlock and ensure correct data flow during process execution. These benefits have resulted in the success of workflow technology in a variety of industries, although workflows are often criticised for being too rigid, particularly in light of their recent deployment in collaborative applications such as e-business. Generally, many events can impact on the execution of a workflow process. Initially, the workflow is triggered by an external event (for example, receipt of an order). Participants then interact with the workflow system through the worklist as they perform constituent tasks of the workflow, driving the progression of each process instance through the model until its completion. For traditional workflow processes, this functionality was sufficient. However, new generation 'responsive' workflow technology must facilitate interaction with the external environment during workflow execution. For example, during the execution of an 'order to cash' process, the customer may attempt to cancel the order or update the shipping address. We call these events 'interruptions'. The potential occurrence of interruptions can be anticipated but, unlike the other workflow events, they are never required to occur in order to successfully execute any process instance. 
Interruptions can also occur at any stage during process execution, and may therefore be considered as 'expected, asynchronous exceptions' during the execution of workflow processes. Every interruption must be handled, and the desired reaction often depends on the situation. For example, an address update may not be permitted after a certain point, where this point depends on the customer type, and a shipping charge or refund may be applicable, depending on the original and new delivery region. Therefore, a set of rules is associated with each interruption, such that if a condition is satisfied when the event occurs, a particular action is to be performed. This set of rules forms a policy to handle each interruption. Several workflow systems do facilitate the automatic enforcement of 'exception handling' rules and support the reuse of code fragments to enable the limited specification and maintenance of rules by non-technical users. However, this functionality is not represented in a formal, intuitive model. Moreover, we argue that inadequate consideration is given to the verification of the rules, with insufficient support provided for the detection of issues at design-time that could hinder effective maintenance of the process logic or interfere with the interruption handling functionality at run-time. This thesis presents a framework to capture, analyse and enforce interruption process logic for highly responsive processes without compromising the benefits of workflow technology. We address these issues in two stages. In the first stage, we consider that the reaction to an interruption event is dependent on three factors: the progress of the process instance with respect to the workflow model, the values of the associated case data variables at the time at which the event occurs, and the data embedded in the event. 
In the second stage, we consider that the reaction to each interruption event may also depend on the other events that have also been detected, that is, we allow interruptions to be defined through event patterns or complex events. We thus consider the issues of definition, analysis and enactment for both 'basic' and 'extended' interruption policy models. First, we introduce a method to model interruption policies in an intuitive but executable manner such that they may be maintained without technical support. We then address the issue of execution, detailing the required system functionality and proposing a reference architecture for the automatic enforcement of the policies. Finally, we introduce a set of formal, generic correctness criteria and a verification procedure for the models. For extended policy models, we introduce and compare two alternative execution models for the evaluation of logical expressions that represent interruption patterns. Finally, we present a thorough analysis of related verification issues, considering both the system and user perspectives, in order to ensure correct process execution and also provide support for the user in semantic validation of the interruption policies.
|
229 |
Responsive Workflows: Design, Execution and Analysis of Interruption Policy ModelsBelinda Melanie Carter Unknown Date (has links)
|
230 |
Responsive Workflows: Design, Execution and Analysis of Interruption Policy ModelsBelinda Melanie Carter Unknown Date (has links)
Business processes form the backbone of all business operations, and workflow technology has enabled companies to gain significant productivity benefits through the automatic enactment of routine, repetitive processes. Process automation can be achieved by encoding the business rules and procedures directly into applications, but capturing the process logic in a graphical workflow model allows the process to be specified, validated and ultimately maintained by business analysts with limited technical knowledge. The process models can also be automatically verified at design-time to detect structural issues such as deadlock and to ensure correct data flow during process execution. These benefits have led to the success of workflow technology in a variety of industries, although workflows are often criticised for being too rigid, particularly in light of their recent deployment in collaborative applications such as e-business. In general, many events can affect the execution of a workflow process. Initially, the workflow is triggered by an external event (for example, receipt of an order). Participants then interact with the workflow system through the worklist as they perform the constituent tasks of the workflow, driving each process instance through the model until its completion. For traditional workflow processes, this functionality was sufficient. However, new-generation 'responsive' workflow technology must facilitate interaction with the external environment during workflow execution. For example, during the execution of an 'order to cash' process, the customer may attempt to cancel the order or update the shipping address. We call these events 'interruptions'. The potential occurrence of interruptions can be anticipated but, unlike other workflow events, they are never required to occur for a process instance to execute successfully.
Interruptions can also occur at any stage during process execution, and may therefore be considered 'expected, asynchronous exceptions' to workflow processes. Every interruption must be handled, and the desired reaction often depends on the situation. For example, an address update may not be permitted after a certain point, where this point depends on the customer type, and a shipping charge or refund may be applicable, depending on the original and new delivery regions. Therefore, a set of rules is associated with each interruption, such that if a condition is satisfied when the event occurs, a particular action is performed. This set of rules forms a policy for handling each interruption. Several workflow systems do facilitate the automatic enforcement of 'exception handling' rules and support the reuse of code fragments to enable limited specification and maintenance of rules by non-technical users. However, this functionality is not represented in a formal, intuitive model. Moreover, we argue that inadequate consideration is given to the verification of the rules, with insufficient support for detecting issues at design-time that could hinder effective maintenance of the process logic or interfere with the interruption-handling functionality at run-time. This thesis presents a framework to capture, analyse and enforce interruption process logic for highly responsive processes without compromising the benefits of workflow technology. We address these issues in two stages. In the first stage, we consider that the reaction to an interruption event depends on three factors: the progress of the process instance with respect to the workflow model, the values of the associated case data variables when the event occurs, and the data embedded in the event.
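The policy mechanism described above can be sketched as an ordered list of condition-action rules evaluated against the instance state and the event data. This is an illustrative assumption, not the thesis's actual notation; all names (`Rule`, `InterruptionPolicy`, the example conditions and actions) are hypothetical:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Rule:
    # Condition over (case_data, event_data); action is a symbolic reaction.
    condition: Callable[[dict, dict], bool]
    action: str

@dataclass
class InterruptionPolicy:
    event_type: str
    rules: list = field(default_factory=list)

    def react(self, case_data: dict, event_data: dict) -> str:
        # The first rule whose condition holds determines the reaction.
        for rule in self.rules:
            if rule.condition(case_data, event_data):
                return rule.action
        return "ignore"  # default when no rule applies

# Example: an 'address update' interruption on an 'order to cash' process.
policy = InterruptionPolicy(
    event_type="address_update",
    rules=[
        Rule(lambda c, e: c["stage"] == "shipped", "reject_update"),
        Rule(lambda c, e: c["customer_type"] == "premium", "apply_update"),
        Rule(lambda c, e: e["new_region"] != c["region"], "apply_update_with_charge"),
    ],
)
```

The rule order encodes precedence: once the goods have shipped the update is rejected regardless of customer type, mirroring the abstract's observation that the cut-off point itself depends on case data.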
In the second stage, we consider that the reaction to each interruption event may also depend on the other events that have been detected; that is, we allow interruptions to be defined through event patterns or complex events. We thus consider the issues of definition, analysis and enactment for both 'basic' and 'extended' interruption policy models. First, we introduce a method to model interruption policies in an intuitive but executable manner, such that they may be maintained without technical support. Second, we address the issue of execution, detailing the required system functionality and proposing a reference architecture for the automatic enforcement of the policies. Third, we introduce a set of formal, generic correctness criteria and a verification procedure for the models. For extended policy models, we introduce and compare two alternative execution models for evaluating the logical expressions that represent interruption patterns. Finally, we present a thorough analysis of the related verification issues, considering both the system and user perspectives, in order to ensure correct process execution and to support the user in the semantic validation of the interruption policies.
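The 'extended' models define interruptions through logical expressions over detected events. A minimal sketch of evaluating such a pattern, assuming a nested-tuple representation of the expression (the representation and the example pattern are illustrative assumptions, not the thesis's formalism):

```python
# Evaluate a boolean event pattern against the set of event types detected
# so far. An expression is either an event name (string), ("not", sub),
# or ("and"/"or", left, right).

def pattern_holds(expr, detected: set) -> bool:
    if isinstance(expr, str):
        return expr in detected        # atomic event: has it been seen?
    op = expr[0]
    if op == "not":
        return not pattern_holds(expr[1], detected)
    left = pattern_holds(expr[1], detected)
    right = pattern_holds(expr[2], detected)
    return (left and right) if op == "and" else (left or right)

# Hypothetical pattern: the interruption fires if a cancellation and a
# complaint have both been detected, but the goods have not been dispatched.
pattern = ("and", ("and", "order_cancel", "complaint"), ("not", "dispatch"))
```

Re-evaluating the expression on every event arrival is the simpler of the two possible execution styles; an incremental evaluation that updates only the affected sub-expressions is the natural alternative, which is the kind of trade-off the comparison of execution models addresses.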
|