11.
Program verification in functional programming systems. Silver, James L. January 1983.
Functional programming systems provide a number of features which facilitate program verification. Such verification may be observed to rest directly upon the theoretical foundations of computing and simultaneously to exhibit a close relation to the programs being verified.
In order to demonstrate these aspects of functional systems, two functions, MIN and SORT, are defined on a parameterized type consisting of sequences of elements from some ordered type. Theorems showing that MIN and SORT terminate and return the correct values are stated and proved. Similar results are derived for a function to perform a binary search on an ordered sequence.
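The style of these results can be suggested with a small, modern sketch (an editorial illustration, not Silver's notation or proof system): MIN and SORT defined by recursion on shorter sequences, with the stated theorems checked as runtime assertions on sample data.

```python
# Illustrative sketch only: recursive MIN and SORT on sequences,
# with the correctness theorems checked as runtime assertions.

def min_seq(s):
    """MIN: the least element of a non-empty sequence."""
    if len(s) == 1:
        return s[0]
    rest = min_seq(s[1:])            # terminates: recursion on a shorter sequence
    return s[0] if s[0] <= rest else rest

def sort_seq(s):
    """SORT: selection sort via MIN; recursion on a shorter sequence."""
    if not s:
        return []
    m = min_seq(s)
    rest = list(s)
    rest.remove(m)                   # remove one occurrence of the minimum
    return [m] + sort_seq(rest)

def binary_search(s, x):
    """Return an index of x in the sorted sequence s, or None."""
    lo, hi = 0, len(s)
    while lo < hi:                   # terminates: hi - lo strictly decreases
        mid = (lo + hi) // 2
        if s[mid] == x:
            return mid
        if s[mid] < x:
            lo = mid + 1
        else:
            hi = mid
    return None

seq = [3, 1, 4, 1, 5, 9, 2, 6]
assert min_seq(seq) == min(seq)                   # MIN returns the least element
assert sort_seq(seq) == sorted(seq)               # SORT returns an ordered permutation
assert binary_search(sorted(seq), 5) is not None  # search finds present elements
assert binary_search(sorted(seq), 7) is None      # and rejects absent ones
```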
Finally, conditions similar to Dijkstra's weakest preconditions are given which allow the simultaneous synthesis and verification of certain programs from program specifications. A function to find the greatest common divisor of two integers is derived and verified.
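A minimal sketch of the derived program, with the weakest-precondition-style verification conditions recorded as assertions (the annotations are editorial, not Silver's):

```python
# Sketch: Euclid's algorithm for the greatest common divisor, with the
# verification conditions written as assertions (illustrative annotations).
from math import gcd as reference_gcd

def derived_gcd(a, b):
    assert a > 0 and b > 0                   # precondition
    x, y = a, b
    while y != 0:
        # invariant: gcd(x, y) == gcd(a, b); variant: y strictly decreases
        assert reference_gcd(x, y) == reference_gcd(a, b)
        x, y = y, x % y
    return x                                 # postcondition: x == gcd(a, b)

assert derived_gcd(48, 36) == 12
assert derived_gcd(17, 5) == 1
```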
12.
A symbolic execution framework for algorithm-level modelling and verification of computer microarchitecture. Hanna, Ziyad. January 2011.
This dissertation addresses the challenge of modelling and functional verification for complex computer micro-architecture designs. It is evident that the emerging span and complexity of new computer architectures outstrip existing design modelling and verification methods. Several attempts in industry and academia, including High Level Modelling, still do not scale to address this problem. Typically they lack precise and clear language semantics for formal analysis, do not have native support for concurrency, or the design language and methodology do not fit. Therefore, the gap between what current solutions provide and what industry needs is increasing.

In this research we aim to leap ahead of the common incremental research in this area and develop a new framework for algorithm-level modelling and verification. We introduce a high-level and executable Architectural Specification Language (ASL) for modelling the functional behaviour of architectural algorithms. The semantics of our models is based on the theory of Abstract State Machines with synchronous parallel execution and finite choice, which we find naturally suitable for hardware modelling. Our framework is also powered by native symbolic execution algorithms, enabling high-level verification, design exploration, and refinement checks of the high-level models down to the design implementation.

We developed a new framework that implements these ideas through ASL and supports symbolic execution. We demonstrate the utility of our language and symbolic execution on examples and case studies in various modelling domains, and show a promising framework and methodology. We believe our approach will make it easier to explore micro-architectural algorithm behavior and easier to validate this using dynamic or formal techniques, thus yielding a promising attack on the modelling and verification problem, and enabling more productive convergence to high quality implementations.
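A hypothetical illustration of the two ingredients the framework combines, a synchronous-parallel abstract-state-machine step executed over symbolic rather than concrete values (this is not ASL itself; the sketch assumes the sympy library for the symbolic algebra):

```python
# Hypothetical illustration (not ASL itself): one synchronous step of an
# abstract-state-machine model, executed over symbolic inputs with sympy.
import sympy as sp

a, b = sp.symbols('a b', integer=True)

state = {'r0': a, 'r1': b}

def step(s):
    """All updates are computed from the old state and applied at once,
    mirroring the synchronous-parallel semantics of abstract state machines."""
    return {
        'r0': s['r0'] + s['r1'],   # r0 := r0 + r1
        'r1': s['r0'] - s['r1'],   # r1 := r0 - r1 (reads the OLD r0)
    }

s1 = step(state)
s2 = step(s1)
# Symbolic execution yields closed-form expressions over the inputs, which
# can then be simplified or compared against a reference model.
assert sp.simplify(s1['r0'] - (a + b)) == 0
assert sp.simplify(s2['r1'] - 2 * b) == 0   # (a+b) - (a-b) == 2*b
```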
13.
Efficient and effective symbolic model checking. Iyer, Subramanian Krishnan. 28 August 2008.
Abstract not available.
14.
Productivity with performance: property/behavior-based automated composition of parallel programs from self-describing components. Mahmood, Nasim. 28 August 2008.
Development of efficient and correct parallel programs is a complex task. These parallel codes have strong requirements for performance and correctness and must operate robustly and efficiently across a wide spectrum of application parameters and execution environments. Scientific and engineering programs increasingly use adaptive algorithms whose behavior can change dramatically at runtime. Performance properties are often not known until programs are tested, and performance may degrade during execution. Many errors in parallel programs arise from incorrect programming of interactions and synchronizations; testing has proven to be inadequate, and formal proofs of correctness are needed.

This research is based on systematic application of software engineering methods to the effective development of efficiently executing families of high performance parallel programs. We have developed a framework (P-COM²) for the development of parallel program families which addresses many of the problems cited above. The conceptual innovations underlying P-COM² are a software architecture specification language based on self-describing components, a timing and sequencing algorithm which enables execution of programs with both concrete and abstract components, and a formal semantics for the architecture specification language. The description of each component incorporates compiler-useable specifications of the properties and behaviors of the component, the functionality it implements, preconditions and postconditions on its inputs and outputs, and state-machine-based sequencing control for invocations of the component.

The P-COM² compiler and runtime system implement these concepts to enable: (a) evolutionary development, where a program instance is evolved from a performance model to a complete application with performance known at each step of the evolution; (b) automated composition of program instances targeting specific application instances and/or execution environments from self-describing components, including generation of all parallel structuring; (c) runtime adaptation of programs on a component-by-component basis; (d) runtime validation of pre- and postconditions and of the sequencing of interactions; and (e) formal proofs of correctness for interactions among components, based on model checking of the interaction and synchronization properties of the program. The concepts and their integration are defined, the implementation is described, and the capabilities of the system are illustrated through several examples.
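A much-simplified, hypothetical sketch of what a self-describing component might carry (invented names, not the actual P-COM² syntax): machine-checkable pre/postconditions plus a state machine that gates invocations.

```python
# Hypothetical sketch in the spirit of P-COM²: a component that carries
# machine-usable pre/postconditions and a state machine gating invocations.
class Component:
    def __init__(self, name, precondition, postcondition, transitions, start):
        self.name = name
        self.pre = precondition           # predicate over the input
        self.post = postcondition         # predicate over (input, output)
        self.transitions = transitions    # (state, event) -> next state
        self.state = start

    def invoke(self, event, func, x):
        key = (self.state, event)
        if key not in self.transitions:   # sequencing control on invocations
            raise RuntimeError(f'{self.name}: {event!r} not allowed in {self.state!r}')
        assert self.pre(x), 'precondition violated'
        y = func(x)
        assert self.post(x, y), 'postcondition violated'
        self.state = self.transitions[key]
        return y

# A component that must be initialised before it can compute.
comp = Component(
    name='sorter',
    precondition=lambda xs: isinstance(xs, list),
    postcondition=lambda xs, ys: sorted(xs) == ys,
    transitions={('idle', 'init'): 'ready', ('ready', 'run'): 'ready'},
    start='idle',
)
comp.invoke('init', lambda xs: xs, [])          # legal: idle --init--> ready
print(comp.invoke('run', sorted, [3, 1, 2]))    # legal in 'ready'; checks post
```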
15.
Generalization, lemma generation, and induction in ACL2. Erickson, John D. 29 August 2008.
Formal verification is becoming a critical tool for designing software and hardware today. Rising complexity, along with software's pervasiveness in the global economy, has meant that errors are becoming more difficult to find and more costly to fix. Among the formal verification tools available today, theorem provers offer the ability to do the most complete verification of the most complex systems. However, theorem proving requires expert guidance and is typically too costly to be economical for all but the most mission-critical systems. Three major challenges to using a theorem prover are finding generalizations, choosing the right induction scheme, and generating lemmas. In this dissertation we study all three of these in the context of the ACL2 theorem prover.
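A classic instance of the generalization challenge, rendered here as an illustrative property check rather than as ACL2 events: proving a tail-recursive reverse equal to the naive one requires first generalizing the goal over the accumulator.

```python
# Illustration of why generalization matters (not ACL2 syntax): to prove
# rev_acc(xs, []) == naive_rev(xs) by induction, the induction hypothesis
# is too weak; one must first generalize over the accumulator.

def rev_acc(xs, acc):
    """Tail-recursive reverse with an accumulator."""
    return acc if not xs else rev_acc(xs[1:], [xs[0]] + acc)

def naive_rev(xs):
    return [] if not xs else naive_rev(xs[1:]) + [xs[0]]

# Generalized lemma (provable by induction on xs):
#   rev_acc(xs, acc) == naive_rev(xs) + acc   for ALL acc
# The original goal is the acc = [] instance of this lemma.
for xs in ([], [1], [1, 2, 3], list(range(10))):
    for acc in ([], [0], [7, 8]):
        assert rev_acc(xs, acc) == naive_rev(xs) + acc
    assert rev_acc(xs, []) == naive_rev(xs)
```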
16.
Inspection methods in programming. Rich, Charles. January 1980.
Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1980. Bibliography: leaves 294-297. Includes index.
17.
Using theorem proving and algorithmic decision procedures for large-scale system verification. Ray, Sandip. 28 August 2008.
Abstract not available.
18.
Improving Software Quality through Syntax and Semantics Verification of Requirements Models. Gaither, Danielle. 12 1900.
Software defects can frequently be traced to poorly specified requirements. Many software teams manage their requirements using tools such as checklists and databases, which lack a formal semantic mapping to system behavior. Such a mapping can be especially helpful for safety-critical systems. Another limitation of many requirements analysis methods is that much of the analysis must still be done manually. We propose techniques that automate portions of the requirements analysis process and clarify the syntax and semantics of requirements models, using a variety of methods that include machine learning tools and our own tool, VeriCCM. The machine learning tools help us identify potential model elements and verify their correctness. VeriCCM, a formalized extension of the causal component model (CCM), uses formal methods to ensure that requirements are well formed, and provides the beginnings of a full formal semantics. We also explore the use of statecharts to identify potential abnormal behaviors arising from a given set of requirements. At each stage, we perform empirical studies to evaluate the effectiveness of our proposed approaches.
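The statechart idea can be suggested with a minimal, invented example: enumerate the states reachable under the requirements-level transition relation and flag any reachable state the requirements do not account for.

```python
# Minimal sketch (invented example, not VeriCCM): explore the states
# reachable under a requirements-level transition relation and flag
# reachable states that the requirements never describe.
from collections import deque

transitions = {
    ('off', 'power_on'): 'standby',
    ('standby', 'start'): 'running',
    ('running', 'fault'): 'error',     # a fault is reachable...
    ('running', 'stop'): 'standby',
}
specified_ok = {'off', 'standby', 'running'}   # states the requirements cover

def reachable(start):
    seen, queue = {start}, deque([start])
    while queue:
        state = queue.popleft()
        for (src, _event), dst in transitions.items():
            if src == state and dst not in seen:
                seen.add(dst)
                queue.append(dst)
    return seen

abnormal = reachable('off') - specified_ok
print('potentially abnormal states:', abnormal)   # {'error'}: needs a requirement
```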
19.
Modelling and verification of web services protocols. Ramsokul, Pemadeep Kumar. Computer Science & Engineering, Faculty of Engineering, UNSW, January 2008.
Among the plethora of solutions to the business-to-business interoperability problem, no other solution has obtained as much attention as Web Services Technology (WST), which allows entities to exchange data regardless of their underlying platforms. WST also allows services to be composed in order to provide high-quality customer service over the web. In order to perform transactions across different service providers, standard protocols need to be supported by the participating providers. Many useful protocols are coming onto the market, but they are often ambiguously specified by protocol designers and not fully verified. Furthermore, even if the specifications are reasonably clear, programmers often make subtle assumptions, possibly leading to errors that are hard to detect and locate, especially when the number of participating entities is dynamic. Consequently, these can lead to interoperability problems among implementations of the same protocol and to high software maintenance costs.

To address these issues, a hierarchical automata-based framework is proposed to model the functional aspects of Web Services (WS) protocols and to assist in verifying their correctness. The modelling formalism has a sound mathematical foundation and aims to reconcile desirable features while still maintaining syntactic and semantic simplicity. The properties to be verified are specified using a pattern system and/or 'observer' states, adapted here for WS protocols. In particular, being always in a positive observer state implies proper termination and partial functional correctness, while reachability of a negative observer state signifies deadlock and/or violation of a safety property. Verification itself is handled by automatic translation of the model and its properties into a model checker's input code and by interpretation of the output the model checker produces.

A test-bed is also proposed to check the conformance of a protocol implementation to its specification. It helps in locating errors in implementations of WS protocols, especially where the number of participating entities is dynamic. Conformance checking is achieved by capturing the sequences of messages exchanged by the actual implementations and checking them against the formal specification. Experience using the framework is described and illustrated with two non-trivial WS protocols, namely WS-BusinessActivity and WS-AtomicTransaction.
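The conformance-checking idea can be sketched as follows, using an invented mini-protocol rather than the actual WS-AtomicTransaction automaton: replay a captured message trace through the specification, where an impossible transition or a non-positive final state signals a violation.

```python
# Hedged sketch of trace conformance checking (invented mini-protocol):
# run a captured message sequence through the specification automaton.
# An impossible transition flags a violation; ending in a positive
# observer state indicates proper termination.
SPEC = {
    ('init', 'register'): 'active',
    ('active', 'prepare'): 'preparing',
    ('preparing', 'prepared'): 'prepared',
    ('prepared', 'commit'): 'committed',   # positive observer state
    ('preparing', 'abort'): 'aborted',     # also a legal outcome
}
POSITIVE = {'committed', 'aborted'}

def conforms(trace):
    state = 'init'
    for msg in trace:
        if (state, msg) not in SPEC:
            return False, f'unexpected {msg!r} in state {state!r}'
        state = SPEC[(state, msg)]
    return state in POSITIVE, state

print(conforms(['register', 'prepare', 'prepared', 'commit']))  # (True, 'committed')
print(conforms(['register', 'commit']))   # (False, ...): premature commit
```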
20.
Environment Analysis of Higher-Order Languages. Might, Matthew Brendon. 29 June 2007.
Any analysis of higher-order languages must grapple with the tri-facetted nature of lambda. In one construct, the fundamental control, environment, and data structures of a language meet and intertwine. With the control facet tamed nearly two decades ago, this work brings the environment facet to heel, defining the environment problem and developing its solution: environment analysis. Environment analysis allows a compiler to reason about the equivalence of environments, i.e., name-to-value mappings, that arise during a program's execution. In this dissertation, two different techniques, abstract counting and abstract frame strings, make this possible. A third technique, abstract garbage collection, makes both of these techniques more precise and, counter to intuition, often faster as well. An array of optimizations and even deeper analyses which depend upon environment analysis provide motivation for this work.
In an abstract interpretation, a single abstract entity represents a set of concrete entities. When the entities under scrutiny are bindings (single name-to-value mappings, the atoms of environment), then determining when the equality of two abstract bindings implies the equality of their concrete counterparts is the crux of environment analysis. Abstract counting does this by tracking the size of the represented sets, looking for singletons, in order to apply the following principle:

If {x} = {y}, then x = y.
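In code, the singleton principle might be sketched like this (a toy abstract domain constructed for illustration, not the dissertation's formalism):

```python
# Toy sketch of abstract counting: each abstract binding tracks how many
# concrete bindings it currently represents (0, 1, or 'many').

def join_count(n, m):
    """Abstract addition, saturating at 'many'."""
    return 'many' if n == 'many' or m == 'many' or n + m > 1 else n + m

class AbstractBinding:
    def __init__(self, name):
        self.name = name
        self.count = 0                  # represents the empty set so far

    def allocate(self):
        """A fresh concrete binding flows into this abstract binding."""
        self.count = join_count(self.count, 1)

def must_alias(b1, b2):
    """Equality of abstract bindings implies equality of the concrete ones
    only when each abstract binding represents a singleton set."""
    return b1 is b2 and b1.count == 1

x = AbstractBinding('x')
x.allocate()
assert must_alias(x, x)        # singleton: {x} = {y} implies x = y
x.allocate()                   # a second concrete binding: count saturates
assert not must_alias(x, x)    # 'many': concrete equality is no longer known
```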
Abstract frame strings enable environmental reasoning by statically tracking the possible stack change between the births of two environments; when this change is effectively empty, the environments are equivalent. Abstract garbage collection improves precision by intermittently removing unreachable environment structure during abstract interpretation.
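Abstract garbage collection can be sketched in the same toy setting (again an editorial construction, assuming a simple store model): bindings unreachable from the root set are discarded, so their counts reset and counting regains precision on reallocation.

```python
# Toy sketch of abstract garbage collection over an abstract store:
# bindings unreachable from the roots are removed, so their counts reset
# and abstract counting regains precision on reallocation.

def abstract_gc(store, roots, points_to):
    """Keep only the bindings reachable from the roots."""
    reachable, frontier = set(roots), list(roots)
    while frontier:
        b = frontier.pop()
        for succ in points_to.get(b, ()):
            if succ not in reachable:
                reachable.add(succ)
                frontier.append(succ)
    return {b: n for b, n in store.items() if b in reachable}

store = {'x': 1, 'y': 'many', 'z': 1}      # abstract binding -> count
points_to = {'x': ['y']}                   # x's value closes over y
collected = abstract_gc(store, roots={'x'}, points_to=points_to)
assert collected == {'x': 1, 'y': 'many'}  # z was unreachable and is dropped
```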