351 |
Responsive Workflows: Design, Execution and Analysis of Interruption Policy Models. Belinda Melanie Carter, Unknown Date (has links)
Business processes form the backbone of all business operations, and workflow technology has enabled companies to gain significant productivity benefits through the automatic enactment of routine, repetitive processes. Process automation can be achieved by encoding the business rules and procedures into the applications, but capturing the process logic in a graphical workflow model allows the process to be specified, validated and ultimately maintained by business analysts with limited technical knowledge. The process models can also be automatically verified at design-time to detect structural issues such as deadlock and ensure correct data flow during process execution. These benefits have resulted in the success of workflow technology in a variety of industries, although workflows are often criticised for being too rigid, particularly in light of their recent deployment in collaborative applications such as e-business. Generally, many events can impact on the execution of a workflow process. Initially, the workflow is triggered by an external event (for example, receipt of an order). Participants then interact with the workflow system through the worklist as they perform constituent tasks of the workflow, driving the progression of each process instance through the model until its completion. For traditional workflow processes, this functionality was sufficient. However, new generation 'responsive' workflow technology must facilitate interaction with the external environment during workflow execution. For example, during the execution of an 'order to cash' process, the customer may attempt to cancel the order or update the shipping address. We call these events 'interruptions'. The potential occurrence of interruptions can be anticipated but, unlike the other workflow events, they are never required to occur in order to successfully execute any process instance. Interruptions can also occur at any stage during process execution, and may therefore be considered as 'expected, asynchronous exceptions' during the execution of workflow processes. Every interruption must be handled, and the desired reaction often depends on the situation. For example, an address update may not be permitted after a certain point, where this point depends on the customer type, and a shipping charge or refund may be applicable, depending on the original and new delivery region. Therefore, a set of rules is associated with each interruption, such that if a condition is satisfied when the event occurs, a particular action is to be performed. This set of rules forms a policy to handle each interruption. Several workflow systems do facilitate the automatic enforcement of 'exception handling' rules and support the reuse of code fragments to enable the limited specification and maintenance of rules by non-technical users. However, this functionality is not represented in a formal, intuitive model. Moreover, we argue that inadequate consideration is given to the verification of the rules, with insufficient support provided for the detection of issues at design-time that could hinder effective maintenance of the process logic or interfere with the interruption handling functionality at run-time. This thesis presents a framework to capture, analyse and enforce interruption process logic for highly responsive processes without compromising the benefits of workflow technology. We address these issues in two stages. 
In the first stage, we consider that the reaction to an interruption event depends on three factors: the progress of the process instance with respect to the workflow model, the values of the associated case data variables at the time the event occurs, and the data embedded in the event. In the second stage, we consider that the reaction to each interruption event may also depend on other events that have been detected; that is, we allow interruptions to be defined through event patterns or complex events. We thus consider the issues of definition, analysis and enactment for both 'basic' and 'extended' interruption policy models. First, we introduce a method to model interruption policies in an intuitive but executable manner such that they may be maintained without technical support. We then address the issue of execution, detailing the required system functionality and proposing a reference architecture for the automatic enforcement of the policies. Finally, we introduce a set of formal, generic correctness criteria and a verification procedure for the models. For extended policy models, we introduce and compare two alternative execution models for the evaluation of logical expressions that represent interruption patterns, and we conclude with a thorough analysis of related verification issues, considering both the system and user perspectives, in order to ensure correct process execution and to support the user in semantic validation of the interruption policies.
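To make the notion of an interruption policy concrete, the sketch below (Python, with entirely hypothetical names and not the notation used in the thesis) represents a policy as an ordered list of condition-action rules evaluated against the instance's progress, its case data and the event payload, mirroring the address-update example above.

```python
# Minimal sketch (hypothetical names): an interruption policy as ordered
# condition -> action rules, evaluated against (progress, case data, event data).
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Rule:
    condition: Callable[[str, dict, dict], bool]
    action: str

def handle_interruption(policy: List[Rule], progress: str,
                        case_data: dict, event_data: dict) -> str:
    """Return the action of the first rule whose condition holds."""
    for rule in policy:
        if rule.condition(progress, case_data, event_data):
            return rule.action
    return "ignore"  # a policy should still define a default reaction

# Example policy for an 'update shipping address' interruption.
address_update_policy = [
    Rule(lambda p, c, e: p in {"shipped", "delivered"}, "reject_update"),
    Rule(lambda p, c, e: e["new_region"] != c["region"], "update_address_with_charge"),
    Rule(lambda p, c, e: True, "update_address"),
]

print(handle_interruption(address_update_policy,
                          progress="picking",
                          case_data={"region": "EU", "customer_type": "standard"},
                          event_data={"new_region": "US"}))  # update_address_with_charge
```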
|
354 |
Scientific progress and its metaphysical foundations. McLaughlin, Amy LeeAnn, Kronz, Frederick M., January 2004 (has links) (PDF)
Thesis (Ph. D.)--University of Texas at Austin, 2004. / Supervisor: Frederick M. Kronz. Vita. Includes bibliographical references.
|
355 |
Handling domain knowledge in system design models. An ontology based approach. Hacid, Kahina, 06 March 2018 (has links) (PDF)
Complex system models are designed in heterogeneous domains, and this heterogeneity is rarely considered explicitly when describing and validating processes. Moreover, these systems usually involve several domain experts and several design models corresponding to different analyses (views) of the same system. However, no explicit information is given regarding the characteristics of either the domain or the performed system analyses. In this thesis, we propose a general framework offering, first, the formalization of domain knowledge using ontologies and, second, the capability to strengthen design models by making explicit references to the domain knowledge formalized in these ontologies. The framework also provides resources for making the features of an analysis explicit by formalizing them within models qualified as 'points of view'. We have set up two deployments of our approach: one based on Model Driven Engineering (MDE) and one based on formal methods, using proof and refinement. This general framework has been validated on several non-trivial case studies drawn from systems engineering.
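As a rough illustration of strengthening design models with explicit references to formalized domain knowledge (a minimal sketch with made-up names, not the framework proposed in the thesis), one could annotate model elements with ontology concepts and check that every reference resolves:

```python
# Hypothetical sketch: design-model elements made explicit by referencing
# concepts from a domain ontology, with a trivial well-formedness check.
domain_ontology = {
    "Sensor": {"properties": ["range", "accuracy"]},
    "Actuator": {"properties": ["latency"]},
}

design_model = [
    {"element": "WheelSpeedSensor", "references": ["Sensor"]},
    {"element": "BrakeUnit", "references": ["Actuator"]},
    {"element": "Logger", "references": []},  # no domain annotation yet
]

def unresolved_references(model, ontology):
    """Return (element, concept) pairs that point to unknown ontology concepts."""
    return [(el["element"], ref)
            for el in model
            for ref in el["references"]
            if ref not in ontology]

print(unresolved_references(design_model, domain_ontology))  # [] -> all references resolve
```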
|
356 |
Model Based Safety Analysis of Cyber Physical Systems. January 2010 (has links)
abstract: Cyber Physical Systems (CPSs) are systems comprising computational components that interact with the physical world to perform sensing, communication, computation and actuation. Common examples of these systems include Body Area Networks (BANs), Autonomous Vehicles (AVs), Power Distribution Systems, etc. The close coupling between the cyber and physical worlds in a CPS manifests in two types of interactions between computing systems and the physical world: intentional and unintentional. Unintentional interactions result from the physical characteristics of the computing systems and often cause harm to the physical world; if the computing nodes are close to each other, these interactions may overlap, thereby increasing the chances of causing a safety hazard. Similarly, due to the mobile nature of computing nodes in a CPS, planned and unplanned interactions with the physical world occur. These interactions represent the behavior of a computing node while it is following a planned path and during faulty operation. Both types of interaction change over time due to the dynamics (motion) of the computing node and may overlap, thereby causing harm to the physical world. The lack of proper modeling and analysis frameworks for these systems forces system designers to use ad-hoc techniques, further increasing design and development time. This thesis addresses these problems by taking a holistic approach to modeling the Computational, Physical and Cyber Physical Interaction (CPI) aspects of a CPS and proposes modeling constructs for them. These constructs are analyzed using a safety analysis algorithm developed as part of the thesis. The algorithm computes the intersection of CPIs for both mobile and static computing nodes and determines the safety of the physical system. A framework is developed by extending AADL to support these modeling constructs; the safety analysis algorithm is implemented as an OSATE plug-in. The applicability of the proposed approach is demonstrated by considering the safety of human tissue during the operation of a BAN, and the safety of passengers traveling in an Autonomous Vehicle. / Dissertation/Thesis / M.S. Computer Science 2010
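A minimal sketch of the overlap check underlying this kind of safety analysis (assuming, purely for illustration, that each node's unintentional interaction is abstracted as a circular region around its sampled position; this is not the thesis's algorithm or its AADL/OSATE implementation):

```python
# Hypothetical sketch: two nodes' interaction regions are modeled as circles
# around their (possibly moving) positions; overlapping regions flag a hazard.
import math

def regions_overlap(pos_a, radius_a, pos_b, radius_b):
    """True if two circular interaction regions intersect."""
    return math.dist(pos_a, pos_b) < (radius_a + radius_b)

def hazard_times(trajectory_a, radius_a, trajectory_b, radius_b):
    """Check overlap at each sampled time step of two node trajectories."""
    return [t for t, (pa, pb) in enumerate(zip(trajectory_a, trajectory_b))
            if regions_overlap(pa, radius_a, pb, radius_b)]

# Two body-area-network nodes moving towards each other along a line.
node_a = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
node_b = [(4.0, 0.0), (3.0, 0.0), (2.5, 0.0)]
print(hazard_times(node_a, 0.5, node_b, 0.5))  # [2]: regions intersect at the last step
```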
|
357 |
Design of an Automated Validation Environment for a Radiation Hardened MIPS Microprocessor. January 2011 (has links)
abstract: Ever-shrinking time to market, along with short product lifetimes, has created a need to shorten microprocessor design time. Verification of the design and its analysis are two major components of this design cycle. Design validation techniques can be broadly classified into two major categories: simulation-based approaches and formal techniques. Simulation-based microprocessor validation involves running millions of cycles using random or pseudo-random tests and allows verification of the register transfer level (RTL) model against an architectural model, i.e., that the processor executes instructions as required. The validation effort involves model checking against a high-level description or simulation of the design against the RTL implementation. Formal techniques exhaustively analyze parts of the design but do not verify the RTL against the architecture specification. The focus of this work is to implement a fully automated validation environment for a MIPS-based radiation hardened microprocessor using simulation-based approaches. The basic framework uses the classical validation approach in which the design to be validated is described in a Hardware Description Language (HDL) such as VHDL or Verilog. To implement a simulation-based approach, a number of random or pseudo-random tests are generated. The output of the HDL-based design is compared against that obtained from a "perfect" model implementing similar functionality; a mismatch in the results thus indicates a bug in the HDL-based design. Effort is made to design the environment in such a manner that it can support validation during different stages of the design cycle. The validation environment includes appropriate changes so as to support architecture changes introduced by radiation hardening. The manner in which the validation environment is built is highly dependent on the specifications of the perfect model used for comparisons. This work implements the validation environment with two MIPS simulators as the reference models. Two bugs have been discovered in the RTL model using simulation-based approaches through the validation environment. / Dissertation/Thesis / M.S. Electrical Engineering 2011
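The comparison loop at the heart of such a simulation-based validation environment can be sketched as follows (the reference model and RTL simulation below are stand-in functions; a real environment would invoke an instruction-set simulator and an HDL simulator):

```python
# Hypothetical sketch of simulation-based validation: random tests are run on
# both a reference ("perfect") model and the RTL simulation, and any mismatch
# in the resulting state is reported as a potential bug.
import random

def reference_model(test):
    # Stand-in for an instruction-set simulator producing architectural state.
    return sum(test) & 0xFFFFFFFF

def rtl_simulation(test):
    # Stand-in for running the HDL simulator on the same test program.
    return sum(test) & 0xFFFFFFFF

def validate(num_tests=1000, test_length=32, seed=0):
    random.seed(seed)
    mismatches = []
    for i in range(num_tests):
        test = [random.randrange(2**32) for _ in range(test_length)]
        if reference_model(test) != rtl_simulation(test):
            mismatches.append(i)  # candidate bug in the HDL-based design
    return mismatches

print(validate())  # [] when the RTL agrees with the reference model on all tests
```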
|
358 |
Uncovering bugs in P4 programs with assertion based verification / Revelando bugs em programação P4 com verificação baseada em asserções. Freire, Lucas Menezes, January 2018 (has links)
Recent trends in software-defined networking have extended network programmability to the data plane through programming languages such as P4. Unfortunately, the chance of introducing bugs in the network also increases significantly in this new context. To prevent bugs from violating network properties, the techniques of enforcement or verification can be applied. While enforcement seeks to actively monitor the data plane to block property violations, verification aims to find bugs by assuring that the program meets its requirements. Existing data plane verification approaches that are able to model P4 programs impose severe restrictions on the set of properties that can be verified. In this work, we propose ASSERT-P4, a data plane program verification approach based on assertions and symbolic execution. Network programmers annotate P4 programs with assertions expressing general correctness properties. The annotated programs are transformed into C models and all their possible paths are symbolically executed. Since symbolic execution is known to have scalability challenges, we also propose a set of techniques that can be applied in this domain to make verification feasible. Namely, we investigate the effect of the following techniques on verification performance: parallelization, compiler optimizations, packet and control flow constraints, bug reporting strategy, and program slicing. We implemented a prototype to study the efficacy and efficiency of the proposed approach.
We show that it can uncover a broad range of bugs and software flaws, and can do so in less than a minute for various P4 applications proposed in the literature. We also show how a selection of the optimization techniques, applied to more complex programs, can reduce the verification time by approximately 85 percent.
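The following toy sketch conveys the idea of checking user assertions over every path of a packet-processing program (here paths are enumerated concretely over a tiny input space; ASSERT-P4 itself translates annotated P4 to C and explores paths symbolically, and the pipeline and assertion below are invented for illustration):

```python
# Hypothetical sketch: explore all behaviors of a toy forwarding function and
# check a user-supplied correctness assertion on each explored path.
from itertools import product

def toy_pipeline(ttl, is_ipv4):
    """Toy forwarding logic standing in for a P4 control block."""
    if not is_ipv4:
        return {"dropped": True, "ttl": ttl}
    if ttl <= 1:
        return {"dropped": True, "ttl": ttl}
    return {"dropped": False, "ttl": ttl - 1}  # forwarded packet

def assertion(result):
    # "A forwarded packet never leaves with TTL == 0."
    return result["dropped"] or result["ttl"] > 0

violations = [(ttl, v4)
              for ttl, v4 in product(range(0, 4), [False, True])
              if not assertion(toy_pipeline(ttl, v4))]
print(violations)  # [] means the assertion holds on every explored input
```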
|
359 |
Memory consistency directed cache coherence protocols for scalable multiprocessors. Elver, Marco Iskender, January 2016 (has links)
The memory consistency model, which formally specifies the behavior of the memory system, is used by programmers to reason about parallel programs. From a hardware design perspective, weaker consistency models permit various optimizations in a multiprocessor system: this thesis focuses on designing and optimizing the cache coherence protocol for a given target memory consistency model. Traditional directory coherence protocols are designed to be compatible with the strictest memory consistency model, sequential consistency (SC). When they are used for chip multiprocessors (CMPs) that provide more relaxed memory consistency models, such protocols turn out to be unnecessarily strict. Usually, this comes at the cost of scalability in terms of per-core storage due to sharer tracking, which poses a problem with the increasing number of cores in today's CMPs, most of which are no longer sequentially consistent. The recent convergence towards programming-language-based relaxed memory consistency models has sparked renewed interest in lazy cache coherence protocols. These protocols exploit synchronization information by enforcing coherence only at synchronization boundaries via self-invalidation. As a result, such protocols do not require sharer tracking, which benefits scalability. On the downside, such protocols are only readily applicable to a restricted set of consistency models, such as Release Consistency (RC), which expose synchronization information explicitly. In particular, existing architectures with stricter consistency models (such as x86) cannot readily make use of lazy coherence protocols without either adapting the protocol to satisfy the stricter consistency model, or changing the architecture's consistency model to (a variant of) RC, typically at the expense of backward compatibility. The first part of this thesis explores both these options, with a focus on a practical approach that satisfies backward compatibility.
Because of the wide adoption of Total Store Order (TSO) and its variants in x86 and SPARC processors, and the existing parallel programs written for these architectures, we first propose TSO-CC, a lazy cache coherence protocol for the TSO memory consistency model. TSO-CC does not track sharers and instead relies on self-invalidation and detection of potential acquires (in the absence of explicit synchronization) using per-cache-line timestamps to efficiently and lazily satisfy the TSO memory consistency model. Our results show that TSO-CC achieves, on average, performance comparable to a MESI directory protocol, while TSO-CC's storage overhead per cache line scales logarithmically with increasing core count.
Next, we propose an approach for the x86-64 architecture, which is a compromise between retaining the original consistency model and using a more storage-efficient lazy coherence protocol. First, we propose a mechanism to convey synchronization information via a simple ISA extension, while retaining backward compatibility with legacy codes and older microarchitectures. Second, we propose RC3 (based on TSO-CC), a scalable cache coherence protocol for RCtso, the resulting memory consistency model. RC3 does not track sharers and relies on self-invalidation on acquires. To satisfy RCtso efficiently, the protocol reduces self-invalidations transitively using per-L1 timestamps only. RC3 outperforms a conventional lazy RC protocol by 12%, achieving performance comparable to a MESI directory protocol for RC-optimized programs. RC3's storage overhead per cache line scales logarithmically with increasing core count and reduces on-chip coherence storage overheads by 45% compared to TSO-CC.
Finally, it is imperative that hardware adheres to the promised memory consistency model. Indeed, consistency-directed coherence protocols cannot be verified against conventional coherence definitions (e.g. SWMR), and few existing verification methodologies apply. Furthermore, as the full consistency model is used as a specification, the protocols' interaction with other components (e.g. the pipeline) of a system must not be neglected in the verification process. Therefore, verifying a system with such protocols in the context of interacting components is even more important than before. One common way to do this is via executing tests, where specific threads of instruction sequences are generated and their executions are checked for adherence to the consistency model. It would be extremely beneficial to execute such tests under simulation, i.e. when the functional design implementation of the hardware is being prototyped. Most prior verification methodologies, however, target post-silicon environments, which, when used for simulation-based memory consistency verification, would be too slow. We propose McVerSi, a test generation framework for fast memory consistency verification of a full-system design implementation under simulation. Our primary contribution is a Genetic Programming (GP) based approach to memory consistency test generation, which relies on a novel crossover function that prioritizes memory operations contributing to non-determinism, thereby increasing the probability of uncovering memory consistency bugs. To guide tests towards exercising as much logic as possible, the simulator's reported coverage is used as the fitness function. Furthermore, we increase test throughput by making the test workload simulation-aware. We evaluate our proposed framework using the Gem5 cycle-accurate simulator in full-system mode with Ruby (with configurations that use Gem5's MESI protocol, and our proposed TSO-CC together with an out-of-order pipeline). We discover 2 new bugs in the MESI protocol due to the faulty interaction of the pipeline and the cache coherence protocol, highlighting that even conventional protocols should be verified rigorously in the context of a full system. Crucially, these bugs would not have been discovered through individual verification of the pipeline or the coherence protocol. We study 11 bugs in total. Our GP-based test generation approach finds all bugs consistently, therefore providing much higher guarantees compared to alternative approaches (pseudo-random test generation and litmus tests).
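A highly simplified sketch of the self-invalidation idea behind such lazy coherence protocols (an illustration only, not TSO-CC or RC3: the real protocols bound timestamp storage, detect potential acquires in hardware, and handle writes through the coherence fabric rather than a shared dictionary):

```python
# Hypothetical sketch: cached shared lines carry the timestamp they were filled
# with, and at a potential acquire the core self-invalidates lines whose source
# has since been written, instead of relying on sharer tracking.
class SharedMemory:
    def __init__(self):
        self.data = {}        # address -> value
        self.write_ts = {}    # address -> timestamp of last write
        self.clock = 0

    def write(self, addr, value):
        self.clock += 1
        self.data[addr] = value
        self.write_ts[addr] = self.clock

class Core:
    def __init__(self, mem):
        self.mem = mem
        self.cache = {}       # addr -> (value, fill_timestamp)

    def read(self, addr):
        if addr not in self.cache:   # miss: fill from shared memory
            self.cache[addr] = (self.mem.data[addr], self.mem.write_ts.get(addr, 0))
        return self.cache[addr][0]

    def acquire(self):
        """Self-invalidate stale lines at a synchronization boundary."""
        self.cache = {a: (v, ts) for a, (v, ts) in self.cache.items()
                      if ts >= self.mem.write_ts.get(a, 0)}

mem = SharedMemory()
mem.write("data", 1)
reader = Core(mem)
print(reader.read("data"))   # 1 (now cached locally)
mem.write("data", 42)        # producer updates the value
print(reader.read("data"))   # still 1: stale but permitted before synchronization
reader.acquire()             # acquire: stale line is dropped
print(reader.read("data"))   # 42 after re-fetch
```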
|
360 |
VEasy: a tool suite towards the functional verification challenges / VEasy: um conjunto de ferramentas direcionado aos desafios da verificação funcional. Pagliarini, Samuel Nascimento, January 2011 (has links)
This thesis describes a tool suite, VEasy, which was developed specifically to aid the process of Functional Verification. VEasy contains four main modules that perform linting, simulation, coverage collection/analysis and testcase generation, which are considered key challenges of the process. Each of these modules is discussed in detail throughout the chapters. All the modules are integrated and built on top of a Graphical User Interface. This framework enables a testcase automation methodology based on layers, where one can create complex test scenarios using drag-and-drop operations. Whenever possible, the usage of the modules is exemplified using simple Verilog designs. The capabilities of the tool and its performance were compared with those of some commercial and academic functional verification tools. Finally, some conclusions are drawn, showing that the overall simulation time is considerably smaller with respect to commercial and academic simulators. The results also show that the methodology enables a great deal of testcase automation through the layering scheme.
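A small sketch of the layered testcase idea (all layer names and data formats are invented for illustration; VEasy composes such layers through its GUI via drag-and-drop rather than in code):

```python
# Hypothetical sketch: a testcase assembled from reusable layers, composed
# bottom-up from abstract stimuli down to pin-level driving actions.
import random

def stimulus_layer(n, seed=1):
    random.seed(seed)
    return [random.randrange(256) for _ in range(n)]             # abstract stimuli

def transaction_layer(stimuli):
    return [{"op": "WRITE", "addr": i, "data": s}                # bus transactions
            for i, s in enumerate(stimuli)]

def signal_layer(transactions):
    return [f"drive addr={t['addr']:02x} data={t['data']:02x}"   # pin-level actions
            for t in transactions]

def build_testcase(*layers, n=4):
    result = n
    for layer in layers:                                         # compose the layers in order
        result = layer(result)
    return result

for line in build_testcase(stimulus_layer, transaction_layer, signal_layer):
    print(line)
```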
|