11 |
The semantics and implementation of occam. Barrett, Geoff. January 1988
No description available.
12 |
The Six-Nation Initiative: origins, organisation and policies. Frangonikolopoulos, Christos. January 1990
No description available.
13 |
Verification of Software under Relaxed Memory. Leonardsson, Carl. January 2016
The work covered in this thesis concerns the automatic analysis of correctness of parallel programs running under relaxed memory models. When a parallel program is compiled and executed on a modern architecture, various optimizations may cause it to behave in unexpected ways. In particular, accesses to shared memory may appear in the execution in the opposite order to how they appear in the control flow of the original program source code. The memory model determines which memory accesses can be reordered in a program on a given system. Any memory model that allows some observable memory access reordering is called a relaxed memory model. The reorderings may cause bugs and make the production of parallel programs more difficult. In this work, we consider three main approaches to the analysis of correctness of programs running under relaxed memory models: (i) an exact analysis for finite-state programs running under the TSO memory model (Paper I), based on the well-quasi-ordering framework; (ii) an over-approximate analysis for integer programs running under TSO (Paper II), based on predicate abstraction combined with a buffer abstraction; and (iii) two under-approximate analysis techniques for programs running under the TSO, PSO or POWER memory models (Papers III and IV), based on stateless model checking and dynamic partial order reduction. In addition to determining whether a program is correct under a given memory model, the problem of automatic fence synthesis is also considered. A memory fence is an instruction that can be inserted into a program in order to locally disable some memory access reorderings. The fence synthesis problem is that of automatically inferring a minimal set of memory fences which restores sufficient order in a given program to ensure its correctness. / UPMARC
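To make the reordering concrete, the classic store-buffering litmus test below illustrates, as a hedged sketch rather than material from the thesis, how a TSO-like relaxed model can produce an outcome that is impossible under sequential consistency, and how an inserted fence forbids it. The thread functions, variable names and the use of C11 atomics are illustrative assumptions.

```c
/* Store-buffering (SB) litmus test, sketched with C11 atomics and pthreads.
 * Under sequential consistency at least one thread must observe the other's
 * store, so (r0 == 0 && r1 == 0) is impossible. Under TSO each store may sit
 * in a store buffer while the following load reads from memory, so both r0
 * and r1 can end up 0. Uncommenting the fences forbids that outcome. */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static atomic_int x, y;        /* shared variables, initially 0 */
static int r0, r1;             /* per-thread results */

static void *thread0(void *arg) {
    (void)arg;
    atomic_store_explicit(&x, 1, memory_order_relaxed);   /* store x */
    /* atomic_thread_fence(memory_order_seq_cst); */      /* fence would forbid r0==r1==0 */
    r0 = atomic_load_explicit(&y, memory_order_relaxed);  /* load y */
    return NULL;
}

static void *thread1(void *arg) {
    (void)arg;
    atomic_store_explicit(&y, 1, memory_order_relaxed);   /* store y */
    /* atomic_thread_fence(memory_order_seq_cst); */
    r1 = atomic_load_explicit(&x, memory_order_relaxed);  /* load x */
    return NULL;
}

int main(void) {
    pthread_t t0, t1;
    pthread_create(&t0, NULL, thread0, NULL);
    pthread_create(&t1, NULL, thread1, NULL);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    printf("r0=%d r1=%d\n", r0, r1);   /* r0=0 r1=0 is a relaxed-memory outcome */
    return 0;
}
```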
14 |
Automated quantitative software verification. Kattenbelt, Mark Alex. January 2010
Many software systems exhibit probabilistic behaviour, either added explicitly, to improve performance or to break symmetry, or implicitly, through interaction with unreliable networks or faulty hardware. When employed in safety-critical applications, it is important to rigorously analyse the behaviour of these systems. This can be done with a formal verification technique called model checking, which establishes properties of systems by algorithmically considering all execution scenarios. In the presence of probabilistic behaviour, we consider quantitative properties such as "the worst-case probability that the airbag fails to deploy within 10ms", instead of qualitative properties such as "the airbag eventually deploys". Although many model checking techniques exist to verify qualitative properties of software, quantitative model checking techniques typically focus on manually derived models of systems and cannot directly verify software. In this thesis, we present two quantitative model checking techniques for probabilistic software. The first is a quantitative adaptation of a successful model checking technique called counter-example guided abstraction refinement which uses stochastic two-player games as abstractions of probabilistic software. We show how to achieve abstraction and refinement in a probabilistic setting and investigate theoretical extensions of stochastic two-player game abstractions. Our second technique instruments probabilistic software in such a way that existing, non-probabilistic software verification methods can be used to compute bounds on quantitative properties of the original, uninstrumented software. Our techniques are the first to target real, compilable software in a probabilistic setting. We present an experimental evaluation of both approaches on a large range of case studies and evaluate several extensions and heuristics. We demonstrate that, with our methods, we can successfully compute quantitative properties of real network clients comprising approximately 1,000 lines of complex ANSI-C code — the verification of such software is far beyond the capabilities of existing quantitative model checking techniques.
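As a rough illustration of the instrumentation idea, and not the thesis's actual encoding, the sketch below makes a probabilistic choice explicit as a nondeterministic branch that carries a ghost probability weight, so that a non-probabilistic verifier exploring all branches can bound the probability of reaching a failure state. The names coin, path_prob and nondet_bit are hypothetical.

```c
/* Hedged sketch: instrumenting a probabilistic program so a non-probabilistic
 * verifier can bound quantitative properties. Probabilistic choices become
 * nondeterministic branches, and a ghost variable accumulates the probability
 * weight of the current path. For standalone runs nondet_bit falls back to a
 * pseudo-random choice; a verifier would treat it as unconstrained. */
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>

static double path_prob = 1.0;          /* ghost: weight of the current path */

static int nondet_bit(void) {           /* unconstrained in a verifier; random here */
    return rand() & 1;
}

/* coin(p): models a Bernoulli(p) choice and records its weight. */
static int coin(double p) {
    if (nondet_bit()) { path_prob *= p;       return 1; }
    else              { path_prob *= 1.0 - p; return 0; }
}

int main(void) {
    int deployed = 0;
    for (int attempt = 0; attempt < 3 && !deployed; attempt++)
        deployed = coin(0.9);           /* each actuation succeeds with probability 0.9 */

    if (!deployed) {
        /* Every path reaching this failure state has weight 0.1^3 = 0.001, so a
         * verifier can conclude P(airbag never deploys) <= 0.001. */
        assert(path_prob <= 0.001 + 1e-9);
        printf("failure path, weight %g\n", path_prob);
    } else {
        printf("deployed, path weight %g\n", path_prob);
    }
    return 0;
}
```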
15 |
Evaluation of program specification and verification systems. Ubhayakar, Sonali S. 06 1900
Computer systems that earn a high degree of trust must be backed by rigorous verification methods. A verification system is an interactive environment for writing formal specifications and checking formal proofs. Verification systems allow large, complicated proofs to be managed and checked interactively. We desire evaluation criteria that provide a means of determining which verification system is suitable for a specific research environment and which needs of a particular project the tool satisfies. Therefore, the purpose of this thesis is to develop a methodology and a set of evaluation criteria for evaluating verification systems for their suitability to improve the assurance that systems meet security objectives. A specific verification system is evaluated with respect to the defined methodology. The main goals are to evaluate whether the verification system has the capability to express the properties of software systems and whether the verification system can provide inter-level mapping, a feature required for understanding how a system meets security objectives. / Naval Postgraduate School author (civilian).
16 |
Computational verification of security requirements. Bibu, Gideon Dadik. January 2014
One of the reasons for the persistence of information security challenges in organisations is that security is usually seen as a technical problem, hence the emphasis on technical solutions in practice. However, security challenges can also arise from people and processes. We therefore approach the problem of security in organisations from a socio-technical perspective and reason that the design of security requirements for organisations has to include procedures that allow for design-time analysis of system behaviour with respect to security requirements. In this thesis we present a computational approach to the verification and validation of elicited security requirements. This complements existing approaches to security requirements elicitation by providing a computational means for reasoning about security requirements at design time. Our methodology is centred on a deontic-logic-inspired institutional framework which provides a mechanism to monitor the permissions, empowerments, and obligations of actors and generates violations when a security breach occurs. We demonstrate the functionality of our approach by modelling a practical scenario from the health care domain to explore how the institutional framework can be used to develop a model of a system of interacting actors using the action language InstAL. Through the application of the semantics of answer set programming (ASP), we demonstrate a way of carrying out verification of security requirements such that it is possible to predict the effect of certain actions and the causes of certain system states. To show that our approach works for a number of security requirements, we also use other scenarios to demonstrate the analysis of confidentiality and integrity requirements. From a human-factors point of view, compliance determines the effectiveness of security requirements. We demonstrate that our approach can be used for the management of security requirements compliance. By verifying compliance and predicting non-compliance and its consequences at design time, requirements can be redesigned in such a way that better compliance can be achieved.
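The following sketch, written in C rather than the InstAL/ASP encoding used in the thesis and with entirely hypothetical actor, action and permission names, illustrates the monitoring idea: actions are checked against the permissions currently granted to an actor, and a violation is generated when an unpermitted action occurs.

```c
/* Hedged sketch of an institutional monitor: actors perform actions, the
 * monitor checks them against granted permissions and reports violations.
 * All names and the permission table are illustrative, not from the thesis. */
#include <stdio.h>
#include <string.h>

#define MAX_PERMS 8

struct actor {
    const char *name;
    const char *perms[MAX_PERMS];   /* actions this actor is permitted to perform */
    int n_perms;
};

static int permitted(const struct actor *a, const char *action) {
    for (int i = 0; i < a->n_perms; i++)
        if (strcmp(a->perms[i], action) == 0)
            return 1;
    return 0;
}

/* Returns 1 and reports a violation if the action breaches the institution. */
static int observe(const struct actor *a, const char *action) {
    if (!permitted(a, action)) {
        printf("violation: %s performed %s without permission\n", a->name, action);
        return 1;
    }
    printf("ok: %s performed %s\n", a->name, action);
    return 0;
}

int main(void) {
    struct actor nurse  = { "nurse",  { "read_record" }, 1 };
    struct actor doctor = { "doctor", { "read_record", "update_record" }, 2 };

    observe(&doctor, "update_record");  /* permitted */
    observe(&nurse,  "update_record");  /* generates a violation */
    return 0;
}
```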
17 |
µLeech: A Side-Channel Evaluation Platform for Next Generation Trusted Embedded Systems. Moukarzel, Michael A. 10 September 2015
"We propose a new embedded trusted platform module for next generation power scavenging devices. Such power scavenging devices are already in the current market. For instance, the Square point-of-sale reader uses the microphone/speaker interface of a smartphone for both communications and to charge up the power supply. While such devices are already widely deployed in the market and used as trusted devices in security critical applications they have not been properly evaluated yet. Our trusted module is a dedicated microprocessor that can preform cryptographic operations and store cryptographic keys internally. This power scavenging trusted module will provide a secure cryptographic platform for any smartphone. The second iteration of our device will be a side-channel evaluation platform for power scavenging devices. This evaluation platform will focus on evaluating leakage characteristics, it will include all the features of our trusted module, i.e. complicated power handling including scavenging from the smartphone and communications through the microphone/speaker interface. Our design will also included the on-board ports to facilitate easy acquisition of high quality power signals for further side-channel analysis. Our evaluation platform will provide the ability for security researchers to analyze leakage in next generation mobile attached embedded devices and to develop and enroll countermeasures."
18 |
Formal verification-driven parallelisation synthesis. Botinčan, Matko. January 2018
Concurrency is often an optimisation rather than intrinsic to the functional behaviour of a program; i.e., a concurrent program is often intended to achieve the same effect as a simpler sequential counterpart, just faster. Error-free concurrent programming remains a tricky problem, beyond the capabilities of most programmers. Consequently, an attractive alternative to manually developing a concurrent program is to automatically synthesise one. This dissertation presents two novel formal verification-based methods for safely transforming a sequential program into a concurrent one. The first method, an instance of proof-directed synthesis, takes as input a sequential program and its safety proof, as well as annotations on where to parallelise, and produces a correctly-synchronised parallelised program, along with a proof of that program. The method uses the sequential proof to guide the insertion of synchronisation barriers to ensure that the parallelised program has the same behaviour as the original sequential version. The sequential proof, written in separation logic, need only represent shape properties, meaning we can parallelise complex heap-manipulating programs without verifying every aspect of their behaviour. The second method proposes specification-directed synthesis: given a sequential program, we extract a rich, stateful specification compactly summarising program behaviour, and use that specification for parallelisation. At the heart of the method is a learning algorithm which combines dynamic and static analysis. In particular, dynamic symbolic execution and the computational learning technique of grammar induction are used to conjecture input-output specifications, and counterexample-guided abstraction refinement is used to confirm or refute the equivalence between the conjectured specification and the original program. Once equivalence checking succeeds, we synthesise, from the inferred specifications, code that executes speculatively in parallel, enabling automated parallelisation of irregular loops that are not necessarily polyhedral, disjoint or equipped with a static pipeline structure.
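To give a flavour of the first method's output, the hand-written illustration below, which is not generated by the tool described in the thesis, parallelises a two-phase sequential loop across two threads and inserts a synchronisation barrier so that the cross-thread reads in the second phase observe exactly the values the sequential program would have produced.

```c
/* Hedged sketch of barrier-placed parallelisation: the sequential program first
 * fills a[] and then computes b[i] = a[i] + a[N-1-i]. Splitting the iterations
 * across two threads is only safe if every write to a[] happens before any
 * cross-thread read; the inserted barrier enforces this, so the parallel
 * program agrees with the sequential one. */
#include <pthread.h>
#include <stdio.h>

#define N 8

static int a[N], b[N];
static pthread_barrier_t phase;

struct range { int lo, hi; };

static void *worker(void *arg) {
    struct range *r = arg;
    for (int i = r->lo; i < r->hi; i++)     /* phase 1: produce a[] */
        a[i] = i * i;
    pthread_barrier_wait(&phase);           /* inserted synchronisation barrier */
    for (int i = r->lo; i < r->hi; i++)     /* phase 2: consume a[] across halves */
        b[i] = a[i] + a[N - 1 - i];
    return NULL;
}

int main(void) {
    pthread_t t0, t1;
    struct range left = { 0, N / 2 }, right = { N / 2, N };

    pthread_barrier_init(&phase, NULL, 2);
    pthread_create(&t0, NULL, worker, &left);
    pthread_create(&t1, NULL, worker, &right);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    pthread_barrier_destroy(&phase);

    for (int i = 0; i < N; i++)
        printf("b[%d] = %d\n", i, b[i]);    /* matches the sequential result */
    return 0;
}
```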
19 |
Efficient verification of universal and intermediate quantum computing. Kapourniotis, Theodoros. January 2016
The promise of scalable quantum technology appears more realistic after recent advances in both theory and experiment. Assuming a quantum computer is developed, the task of verifying the correctness of its outcome becomes crucial. Unfortunately, for a system that involves many particles, predicting its evolution via classical simulation becomes intractable. Moreover, verification of the outcome by computational methods, i.e. involving a classical witness, is believed to be inefficient for the hardest problems solvable by a quantum computer. A feasible alternative for verifying quantum computation is via cryptographic methods, where an untrusted prover has to convince a weak verifier of the correctness of its outcome. This is the approach we take in this thesis. In the most standard configuration, the prover is capable of computing all polynomial-time quantum circuits and the verifier is restricted to classical computation with very modest quantum power. The goal of existing verification protocols is to reduce the quantum requirements for the verifier, ideally making it purely classical, and to reduce the communication complexity. In Part II we propose a composition of two existing verification protocols [Fitzsimons and Kashefi, 2012], [Aharonov et al., 2010] that achieves a quadratic improvement in communication complexity while keeping the quantum requirements for the verifier modest. Alongside this result, several new techniques are proposed, including the generalization of [Fitzsimons and Kashefi, 2012] to prime dimensions. In Part III we discuss the idea of model-specific quantum verification, where the prover is restricted to intermediate quantum power, i.e. between full-fledged quantum and purely classical, and is thus more feasible experimentally. As a proof of principle we propose a verification protocol for the One-Pure-Qubit computer [Knill and Laflamme, 1998], which tolerates noise and is capable of computing hard problems such as large matrix trace estimation. The verification protocol is an adaptation of [Fitzsimons and Kashefi, 2012] running on Measurement-Based Quantum Computing, with newly proved properties of the underlying resources. Connections of quantum verification to other security primitives are considered in Part IV. Authenticated quantum communication has already been proved to relate to quantum verification. We expand on this by proposing a quantum authentication protocol derived from [Fitzsimons and Kashefi, 2012] and discuss the implications for verification with a purely classical verifier. Connections between quantum security primitives, namely blindness (the prover does not learn the computation), and classical security are considered in Part V. We introduce a protocol in which a client with restricted classical resources blindly computes a universal classical gate with the help of an untrusted server, by adding modest quantum capabilities to both client and server. We prove that this example of quantum-enhanced classical security is a task that is classically impossible.
20 |
Toward verified compilation of Sea of Nodes: semantic properties and reasoning. Fernández de Retana, Yon. 05 July 2018
Optimizing compilers for programming languages have become complex pieces of software and are hence subject to bugs, which can be dangerous in the context of critical systems such as avionics or health care. This thesis is part of research on verified optimizing compilation, whose objective is to ensure the absence of such bugs. More precisely, we semantically study a particular SSA (Static Single Assignment) intermediate representation, Sea of Nodes, notably used in the HotSpot optimizing compiler for Java. The SSA property has already been studied from a semantic point of view on simple intermediate representations in control-flow-graph form, but dependencies between instructions have only been touched on from a formal perspective. This thesis provides a semantic study of transformations of programs in Sea of Nodes form, accounting for the flexibility in data dependencies between instructions. In particular, redundant zero-check elimination, constant propagation, transformation back to sequential basic blocks, and SSA destruction are studied. Some of the topics addressed, including the formalization of a semantics for Sea of Nodes, are accompanied by a verification using the Coq proof assistant.
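As a small hedged illustration of two of the transformations studied, constructed for this listing rather than taken from the thesis, the C fragment below contains a division whose implicit zero check is made redundant by an earlier guard, together with a constant that can be propagated; the comments sketch what a Sea of Nodes style optimizer is entitled to do.

```c
/* Hedged illustration of redundant zero-check elimination and constant
 * propagation at the source level; in Sea of Nodes the same facts are
 * expressed as dependencies between check, constant and arithmetic nodes. */
#include <stdio.h>

int scaled_quotient(int a, int b) {
    if (b == 0)                 /* explicit guard */
        return 0;
    int scale = 4;              /* constant: every use of scale can be replaced by 4 */
    int q = a / b;              /* the implicit zero check on b is redundant here,
                                   because this division is only reached when b != 0 */
    return q * scale;           /* after constant propagation: q * 4, i.e. q << 2 */
}

int main(void) {
    printf("%d\n", scaled_quotient(20, 3));   /* prints 24 */
    return 0;
}
```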