341 |
Semantic Integration of Time Ontologies (Ong, Darren, 15 December 2011)
Here we consider the verification and semantic integration of the set of first-order time ontologies by Allen-Hayes, Ladkin, and van Benthem that axiomatize time as points, intervals, or a combination of both, within an ontology repository environment. Semantic integration of the set of time ontologies is explored via the notion of theory interpretations, using an automated reasoner as part of the methodology. We use the notion of representation theorems for verification by characterizing the models of each ontology up to isomorphism and proving that they are equivalent to the intended structures for the ontology. We provide a complete account of the meta-theoretic relationships between the ontologies, along with corrections to their axioms, translation definitions, proofs of the representation theorems, and a discussion of issues such as class-quantified interpretations, the impact of namespacing support for Common Logic, and ontology repository support for semantic integration as it relates to the time ontologies examined.
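As a loose illustration of the kind of translation definitions involved when relating an interval ontology to a point ontology, the sketch below maps each interval to its begin and end points and defines interval relations via the point ordering. The class names and relations are hypothetical stand-ins, not the axiomatizations studied in the thesis.

    # Illustrative sketch only: a toy translation between an interval-based and a
    # point-based view of time, in the spirit of theory interpretations.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Interval:
        begin: float  # interpreted as the begin point of the interval
        end: float    # interpreted as the end point of the interval

        def is_proper(self) -> bool:
            # Interval ontologies typically require begin < end.
            return self.begin < self.end

    def meets(i: Interval, j: Interval) -> bool:
        # Allen/Hayes-style "meets": i ends exactly where j begins.
        return i.end == j.begin

    def before(i: Interval, j: Interval) -> bool:
        # "before" defined via the point ordering of the endpoints.
        return i.end < j.begin

    if __name__ == "__main__":
        morning = Interval(8.0, 12.0)
        afternoon = Interval(12.0, 18.0)
        assert morning.is_proper() and meets(morning, afternoon)
        assert not before(morning, afternoon)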
|
342 |
Robust Consistency Checking for Modern Filesystems (Sun, Kuei, 19 March 2013)
A runtime file system checker protects file-system metadata integrity. It checks the consistency of file system update operations before they are committed to disk, thus preventing corrupted updates from reaching the disk. In this thesis, we describe our experiences with building Brunch, a runtime checker for an emerging Linux file system called Btrfs. Btrfs supports many modern file-system features that pose challenges in designing a robust checker. We find that the runtime consistency checks need to be expressed clearly so that they can be reasoned about and implemented reliably, and thus we propose writing the checks declaratively. This approach reduces the complexity of the checks, ensures their independence, and helps identify the correct abstractions in the checker. It also shows how the checker can be designed to handle arbitrary file system corruption. Our results show that runtime consistency checking is still viable for complex, modern file systems.
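To give a flavour of what "writing the checks declaratively" can mean, the following sketch expresses each consistency check as a side-effect-free predicate over a proposed metadata update and only allows a commit when every check passes. The record fields and rules are hypothetical, not Brunch's rule language or Btrfs's on-disk format.

    # A minimal sketch of declarative runtime consistency checks: each check is a
    # pure predicate over a proposed metadata update; an update commits only if all pass.
    from typing import Callable, Dict, List

    Update = Dict[str, int]   # e.g. {"extent_start": ..., "extent_len": ..., "ref_count": ...}
    Check = Callable[[Update], bool]

    DEVICE_SIZE = 1 << 30  # assumed 1 GiB device, for illustration

    def extent_within_device(u: Update) -> bool:
        # Every referenced extent must lie inside the device.
        return 0 <= u["extent_start"] and u["extent_start"] + u["extent_len"] <= DEVICE_SIZE

    def refcount_nonnegative(u: Update) -> bool:
        # Reference counts can never go below zero.
        return u["ref_count"] >= 0

    CHECKS: List[Check] = [extent_within_device, refcount_nonnegative]

    def commit_allowed(update: Update) -> bool:
        # The runtime checker vetoes the transaction if any declarative check fails.
        return all(check(update) for check in CHECKS)

    if __name__ == "__main__":
        good = {"extent_start": 4096, "extent_len": 8192, "ref_count": 1}
        bad = {"extent_start": DEVICE_SIZE - 100, "extent_len": 8192, "ref_count": 1}
        assert commit_allowed(good)
        assert not commit_allowed(bad)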
|
343 |
Automatic Datapath Abstraction of Pipelined Circuits (Vlad, Ciubotariu, 18 February 2011)
Pipelined circuits operate as an assembly line that starts processing new instructions while older ones continue execution. Control properties specify the correct behaviour of the pipeline with respect to how it handles the concurrency between instructions. Control properties stand out as one of the most challenging aspects of pipelined circuit verification. Their verification depends on the datapath and memories, which in practice account for the largest part of the state space of the circuit. To alleviate the state explosion problem, abstraction of memories and datapath becomes mandatory. This thesis provides a methodology for an efficient abstraction of the datapath under all possible control-visible behaviours. For verification of control properties, the abstracted datapath is then substituted in place of the original one and the control circuitry is left unchanged. With respect to control properties, the abstraction is shown conservative by both language containment and simulation.
For verification of control properties, the pipeline datapath is represented by a network of registers, unrestricted combinational datapath blocks and muxes. The values flowing through the datapath are called parcels. The control is the state machine that steers the parcels through the network. As parcels travel through the pipeline, they undergo transformations through the datapath blocks. The control-visible results of these transformations fan out into control variables, which in turn influence the next stage the parcels are transferred to by the control. The semantics of the datapath is formalized as a labelled transition system called a parcel automaton. Parcel automata capture the set of all control-visible paths through the pipeline and are derived without the need for reachability analysis of the original pipeline. Datapath abstraction is defined using familiar concepts such as language containment or simulation. We have proved results that show that datapath abstraction leads to pipeline abstraction. Our approach has been incorporated into a practical algorithm that yields the abstract parcel automaton directly, bypassing the construction of the concrete parcel automaton. The algorithm uses a SAT solver to generate incrementally all possible control-visible behaviours of the pipeline datapath. Our largest case study is a 32-bit two-wide superscalar OpenRISC microprocessor written in VHDL, where the abstraction reduced the size of the implementation from 35k gates to 2k gates in less than 10 minutes while using less than 52MB of memory.
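The enumeration idea can be sketched as follows: repeatedly ask whether the datapath block can still produce a control-visible result not yet seen, and "block" each result once it is found. A real implementation would hand each query to a SAT solver with blocking clauses; here a brute-force search over a tiny input space keeps the sketch self-contained, and the datapath function is hypothetical.

    # Toy sketch of incrementally enumerating control-visible behaviours of a datapath block.
    from itertools import product
    from typing import Set, Tuple, Optional

    def datapath_block(a: int, b: int) -> Tuple[bool, bool]:
        # Hypothetical block: the control only sees "is the 4-bit result zero?"
        # and "did the addition overflow 4 bits?" -- the control-visible fan-out.
        s = a + b
        return (s % 16 == 0, s >= 16)

    def find_unblocked(seen: Set[Tuple[bool, bool]]) -> Optional[Tuple[bool, bool]]:
        # Stand-in for one incremental SAT query with blocking clauses for `seen`.
        for a, b in product(range(16), repeat=2):
            label = datapath_block(a, b)
            if label not in seen:
                return label
        return None

    def control_visible_behaviours() -> Set[Tuple[bool, bool]]:
        seen: Set[Tuple[bool, bool]] = set()
        while (label := find_unblocked(seen)) is not None:
            seen.add(label)  # corresponds to adding a blocking clause
        return seen

    if __name__ == "__main__":
        # These labels would become the transition labels of the abstract parcel automaton.
        print(sorted(control_visible_behaviours()))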
|
344 |
Tags: Augmenting Microkernel Messages with Lightweight Metadata (Saif Ur Rehman, Ahmad, January 2012)
In this work, we propose Tags, an efficient mechanism that augments microkernel interprocess messages with lightweight metadata to enable the development of new, system-wide functionality without requiring the modification of application source code. Therefore, the technology is well suited for systems with a large legacy code base and for third-party applications such as phone and tablet applications.
As examples, we detail use cases in the areas of mandatory security and runtime verification of process interactions. In the area of mandatory security, we use tagging to assess the feasibility of implementing a mandatory integrity propagation model in the microkernel. The process-interaction verification use case shows the utility of tagging to track and verify interaction history among system components.
To demonstrate that tagging is technically feasible and practical, we implemented it in a commercial microkernel and executed multiple sets of standard benchmarks on two different computing architectures. The results clearly demonstrate that tagging has only negligible overhead and strong potential for many applications.
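A rough sketch of the mandatory-integrity use case is given below, under the assumption that each IPC message carries a small integer tag encoding an integrity level and that a receiver's level is lowered to the minimum of its own level and the sender's (a low-water-mark style propagation). The names and the policy are illustrative, not the kernel interface implemented in the thesis.

    # Minimal sketch: integrity propagation via lightweight message tags.
    from dataclasses import dataclass, field

    HIGH, MEDIUM, LOW = 2, 1, 0  # hypothetical integrity levels

    @dataclass
    class Message:
        payload: bytes
        tag: int = HIGH  # lightweight metadata attached by the kernel

    @dataclass
    class Process:
        name: str
        integrity: int = HIGH
        history: list = field(default_factory=list)  # interaction history, for verification

    def send(sender: Process, receiver: Process, payload: bytes) -> None:
        msg = Message(payload, tag=sender.integrity)            # kernel stamps the tag
        receiver.integrity = min(receiver.integrity, msg.tag)   # propagate integrity
        receiver.history.append((sender.name, msg.tag))         # track the interaction

    if __name__ == "__main__":
        browser = Process("browser", integrity=LOW)
        updater = Process("updater", integrity=HIGH)
        send(browser, updater, b"new firmware")
        assert updater.integrity == LOW          # tainted by a low-integrity sender
        assert updater.history == [("browser", LOW)]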
|
345 |
Methods for Reducing Monitoring Overhead in Runtime Verification (Wu, Chun Wah Wallace, January 2013)
Runtime verification is a lightweight technique that serves to complement existing approaches, such as formal methods and testing, to ensure system correctness. In runtime verification, monitors are synthesized to check a system at run time against a set of properties the system is expected to satisfy. Runtime verification may be used to detect software faults before and after system deployment. The monitor(s) can be synthesized to notify, steer and/or perform system recovery from detected software faults at run time.
The research and proposed methods presented in this thesis aim to reduce the monitoring overhead of runtime verification in terms of memory and execution time by leveraging time-triggered techniques for monitoring system events. Traditionally, runtime verification frameworks employ event-triggered monitors, where the invocation of the monitor occurs after every system event. Because system events can be sporadic or bursty in nature, event-triggered monitoring behaviour is difficult to predict. Time-triggered monitors, on the other hand, periodically preempt and process system events, making monitoring behaviour predictable. However, software system state reconstruction is not guaranteed (i.e., state changes/events may be missed between samples).
The first part of this thesis analyzes three heuristics that efficiently solve the NP-complete problem of minimizing the amount of memory required to store system state changes to guarantee accurate state reconstruction. The experimental results demonstrate that adopting near-optimal algorithms does not greatly change the memory consumption and execution time of monitored programs; hence, NP-completeness is likely not an obstacle for time-triggered runtime verification. The second part of this thesis introduces a novel runtime verification technique called hybrid runtime verification. Hybrid runtime verification enables the monitor to toggle between event- and time-triggered modes of operation. The aim of this approach is to reduce the overall runtime monitoring overhead with respect to execution time. Minimizing the execution time overhead by employing hybrid runtime verification is not in NP. An integer linear programming heuristic is formulated to determine near-optimal hybrid monitoring schemes. Experimental results show that the heuristic typically selects monitoring schemes that are equal to or better than naively selecting exclusively one operation mode for monitoring.
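The time-triggered sampling idea can be illustrated with the small sketch below: the instrumented program appends each change of a monitored variable to a history buffer, and a monitor invoked once per sampling period drains the buffer and replays the changes against a property. The buffer bound is the quantity the memory-minimization heuristics aim to keep small; the property, names, and bound here are assumptions made for illustration.

    # Sketch of a time-triggered monitor draining a bounded history buffer.
    from collections import deque
    from typing import Deque, Tuple

    HISTORY_BOUND = 8  # assumed sufficient to hold one sampling period's changes
    history: Deque[Tuple[str, int]] = deque(maxlen=HISTORY_BOUND)

    def write_monitored(var: str, value: int) -> None:
        # Instrumentation hook: record the state change instead of invoking the monitor.
        history.append((var, value))

    def monitor_tick(state: dict) -> bool:
        # Time-triggered monitor: replay changes seen since the last tick and check
        # a simple safety property ("x never exceeds 100") on each reconstructed state.
        ok = True
        while history:
            var, value = history.popleft()
            state[var] = value
            ok = ok and not (var == "x" and value > 100)
        return ok

    if __name__ == "__main__":
        state: dict = {}
        for v in (3, 42, 99):
            write_monitored("x", v)
        assert monitor_tick(state)          # all recorded changes satisfy the property
        write_monitored("x", 250)
        assert not monitor_tick(state)      # violation detected at the next tick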
|
347 |
Exploiting structure for scalable software verification (Domagoj, Babić, 11 1900)
Software bugs are expensive. Recent estimates by the US National Institute of Standards and Technology claim that the cost of software bugs to the US economy alone is approximately 60 billion USD annually. As society becomes increasingly software-dependent, bugs also reduce our productivity and threaten our safety and security. Decreasing these direct and indirect costs represents a significant research challenge as well as an opportunity for businesses.
Automatic software bug-finding and verification tools have the potential to revolutionize the software engineering industry by improving reliability and decreasing development costs. Since software analysis is in general undecidable, automatic tools have to use various abstractions to make the analysis computationally tractable. Abstraction is a double-edged sword: coarse abstractions, in general, yield easier verification, but also less precise results.
This thesis focuses on exploiting the structure of software for abstracting away irrelevant behavior. Programmers tend to organize code into objects and functions, which effectively represent natural abstraction boundaries. Humans use such structural abstractions to simplify their mental models of software and for constructing informal explanations of why a piece of code should work. A natural question to ask is: How can automatic bug-finding tools exploit the same natural abstractions? This thesis offers possible answers.
More specifically, I present three novel ways to exploit structure at three different steps of the software analysis process. First, I show how symbolic execution can preserve the data-flow dependencies of the original code while constructing compact symbolic representations of programs. Second, I propose structural abstraction, which exploits the structure preserved by the symbolic execution. Structural abstraction solves a long-standing open problem --- scalable interprocedural path- and context-sensitive program analysis. Finally, I present an automatic tuning approach that exploits the fine-grained structural properties of software (namely, data- and control-dependency) for faster property checking. This novel approach resulted in a 500-fold speedup over the best previous techniques. Automatic tuning not only redefined the limits of automatic software analysis tools, but also has already found its way into other domains (like model checking), demonstrating the generality and applicability of this idea.
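The structural-abstraction idea can be caricatured as follows: each function gets a symbolic summary, callers initially treat callee results as unconstrained placeholders, and a placeholder is expanded to the real summary only when the property cannot be decided without it. The mini example and the check below are hypothetical stand-ins for the thesis's machinery, not its actual algorithm.

    # Toy sketch of lazy summary expansion in the spirit of structural abstraction.
    def callee_summary(x: int) -> int:
        # Summary of a callee: its input/output relation, here given concretely.
        return abs(x) + 1

    def property_holds(result: int) -> bool:
        # Property the caller wants verified about the callee's result.
        return result > 0

    def check_caller(inputs: range) -> str:
        # Step 1: structural abstraction -- treat the callee's result as unconstrained.
        abstract_ok = all(property_holds(r) for r in range(-3, 4))
        if abstract_ok:
            return "proved with callee abstracted away"
        # Step 2: the abstraction was too coarse; expand the callee's summary.
        concrete_ok = all(property_holds(callee_summary(x)) for x in inputs)
        return "proved after expanding summary" if concrete_ok else "counterexample found"

    if __name__ == "__main__":
        # The abstract check fails (an unconstrained result may be <= 0),
        # so the summary is expanded and the property is then proved.
        print(check_caller(range(-10, 11)))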
|
348 |
Responsive Workflows: Design, Execution and Analysis of Interruption Policy Models (Belinda Melanie Carter, Unknown Date)
Business processes form the backbone of all business operations, and workflow technology has enabled companies to gain significant productivity benefits through the automatic enactment of routine, repetitive processes. Process automation can be achieved by encoding the business rules and procedures into the applications, but capturing the process logic in a graphical workflow model allows the process to be specified, validated and ultimately maintained by business analysts with limited technical knowledge. The process models can also be automatically verified at design-time to detect structural issues such as deadlock and ensure correct data flow during process execution. These benefits have resulted in the success of workflow technology in a variety of industries, although workflows are often criticised for being too rigid, particularly in light of their recent deployment in collaborative applications such as e-business.
Generally, many events can impact on the execution of a workflow process. Initially, the workflow is triggered by an external event (for example, receipt of an order). Participants then interact with the workflow system through the worklist as they perform constituent tasks of the workflow, driving the progression of each process instance through the model until its completion. For traditional workflow processes, this functionality was sufficient. However, new generation 'responsive' workflow technology must facilitate interaction with the external environment during workflow execution. For example, during the execution of an 'order to cash' process, the customer may attempt to cancel the order or update the shipping address. We call these events 'interruptions'. The potential occurrence of interruptions can be anticipated but, unlike the other workflow events, they are never required to occur in order to successfully execute any process instance. Interruptions can also occur at any stage during process execution, and may therefore be considered as 'expected, asynchronous exceptions' during the execution of workflow processes.
Every interruption must be handled, and the desired reaction often depends on the situation. For example, an address update may not be permitted after a certain point, where this point depends on the customer type, and a shipping charge or refund may be applicable, depending on the original and new delivery region. Therefore, a set of rules is associated with each interruption, such that if a condition is satisfied when the event occurs, a particular action is to be performed. This set of rules forms a policy to handle each interruption.
Several workflow systems do facilitate the automatic enforcement of 'exception handling' rules and support the reuse of code fragments to enable the limited specification and maintenance of rules by non-technical users. However, this functionality is not represented in a formal, intuitive model. Moreover, we argue that inadequate consideration is given to the verification of the rules, with insufficient support provided for the detection of issues at design-time that could hinder effective maintenance of the process logic or interfere with the interruption handling functionality at run-time.
This thesis presents a framework to capture, analyse and enforce interruption process logic for highly responsive processes without compromising the benefits of workflow technology. We address these issues in two stages.
In the first stage, we consider that the reaction to an interruption event is dependent on three factors: the progress of the process instance with respect to the workflow model, the values of the associated case data variables at the time at which the event occurs, and the data embedded in the event. In the second stage, we consider that the reaction to each interruption event may also depend on the other events that have also been detected, that is, we allow interruptions to be defined through event patterns or complex events. We thus consider the issues of definition, analysis and enactment for both 'basic' and 'extended' interruption policy models. First, we introduce a method to model interruption policies in an intuitive but executable manner such that they may be maintained without technical support. We then address the issue of execution, detailing the required system functionality and proposing a reference architecture for the automatic enforcement of the policies. Finally, we introduce a set of formal, generic correctness criteria and a verification procedure for the models. For extended policy models, we introduce and compare two alternative execution models for the evaluation of logical expressions that represent interruption patterns. Finally, we present a thorough analysis of related verification issues, considering both the system and user perspectives, in order to ensure correct process execution and also provide support for the user in semantic validation of the interruption policies.
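As a rough illustration of an interruption policy in the 'basic' setting, the sketch below encodes the address-update example as ordered condition/action rules evaluated against the three factors named above: the instance's progress, its case data, and the data carried by the interruption event. The rule wording is illustrative, not the thesis's policy language.

    # Minimal sketch: an interruption policy as ordered condition/action rules.
    from dataclasses import dataclass
    from typing import Callable, List, Tuple

    @dataclass
    class Instance:
        completed_tasks: set       # progress with respect to the workflow model
        case_data: dict            # e.g. {"customer_type": "gold", "region": "EU"}

    Event = dict                   # data embedded in the interruption event
    Rule = Tuple[Callable[[Instance, Event], bool], str]

    ADDRESS_UPDATE_POLICY: List[Rule] = [
        (lambda i, e: "ship" in i.completed_tasks, "reject: order already shipped"),
        (lambda i, e: e["new_region"] != i.case_data["region"], "accept and recalculate shipping charge"),
        (lambda i, e: True, "accept"),
    ]

    def handle_interruption(instance: Instance, event: Event, policy: List[Rule]) -> str:
        # The first rule whose condition holds determines the reaction.
        for condition, action in policy:
            if condition(instance, event):
                return action
        return "no applicable rule"

    if __name__ == "__main__":
        inst = Instance(completed_tasks={"pack"}, case_data={"customer_type": "gold", "region": "EU"})
        print(handle_interruption(inst, {"new_region": "US"}, ADDRESS_UPDATE_POLICY))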
|
350 |
Software-centric and interaction-oriented system-on-chip verification (Xu, Xiao Xi, January 2009)
As the complexity of very-large-scale integrated circuits (VLSI) soars, the complexity of verifying them increases even faster. Design verification becomes the biggest bottleneck in VLSI design, consuming around 70% of the effort and time in a typical design cycle. The problem is even more severe as the system-on-chip (SoC) design paradigm is gaining popularity. Unfortunately, the development in verification techniques has not kept up with the growth of the design capability, and is being left further behind in the SoC era.
In recent years, a new generation of hardware modelling languages, alongside the best practices to use them, has emerged and evolved in an attempt to productively build an intelligent stimulation-observation environment referred to as the test-bench. Ironically, as test-benches are becoming more powerful and sophisticated under these best practices, known as verification methodologies, the overall verification approaches today are still officially described as ad hoc and experimental and are in great need of a methodological breakthrough.
Our research was carried out to seek the desirable methodological breakthrough, and this thesis presents the research outcome: a novel and holistic methodology that brings an opportunity to address the SoC verification problems. Furthermore, our methodology is a solution completely independent of the underlying simulation technologies; therefore, it could extend its applicability into future VLSI designs.
Our methodology presents two ideas. (a) We propose that system-level verification should resort to the SoC-native languages rather than the test-bench construction languages; the software native to the SoC should take more critical responsibilities than the test-benches. (b) We challenge the fundamental assumption that "objects-under-test" and "tests" are distinct entities; instead, they should be understood as one type of entity, the interactions; interactions, together with the interference between interactions, i.e., the parallelism and resource-competitions, should be treated as the focus in system-level verification.
The above two ideas, namely software-centric verification and interaction-oriented verification, have yielded practical techniques. This thesis elaborates on these techniques, including the transfer-resource-graph based test-generation method targeting the parallelism, the coverage measures of the concurrency completeness using Petri-nets, the automation of the test-programs which can execute smartly in an event-driven manner, and a software observation mechanism that gives insights into the system-level behaviours.
Thesis (Ph.D.) -- University of Adelaide, School of Electrical and Electronic Engineering, 2009
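A loose sketch of the transfer-resource-graph idea follows: model each possible transfer as an (initiator, resource) pair and let the test generator prefer groups of transfers that compete for the same resource, so the generated test-program exercises parallelism and resource competition. The component names are hypothetical, not the SoC studied in the thesis.

    # Toy sketch: generating interaction-oriented tests from a transfer-resource view.
    from itertools import combinations
    from typing import List, Tuple

    Transfer = Tuple[str, str]  # (initiator, resource)

    TRANSFERS: List[Transfer] = [
        ("cpu", "sram"), ("dma", "sram"), ("cpu", "uart"), ("dma", "ddr"), ("usb", "ddr"),
    ]

    def competing_pairs(transfers: List[Transfer]) -> List[Tuple[Transfer, Transfer]]:
        # Pairs of transfers from different initiators that target the same resource:
        # exactly the interference an interaction-oriented test wants to provoke.
        return [
            (a, b)
            for a, b in combinations(transfers, 2)
            if a[1] == b[1] and a[0] != b[0]
        ]

    def generate_test(transfers: List[Transfer]) -> List[Tuple[Transfer, Transfer]]:
        # A toy "test": schedule every competing pair concurrently.
        return competing_pairs(transfers)

    if __name__ == "__main__":
        for a, b in generate_test(TRANSFERS):
            print(f"run {a[0]}->{a[1]} concurrently with {b[0]}->{b[1]}")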
|