151

Certified algorithms for program slicing / Algorithmes certifiés pour la simplification syntaxique de programmes

Léchenet, Jean-Christophe 19 July 2018 (has links)
Program slicing is a technique that extracts, given a program and a criterion consisting of one or several statements of this program, a simpler program, called a slice, that has the same behavior as the initial program with respect to the criterion. Program analysis techniques focus on establishing the properties of a program. These techniques are often costly, and their complexity increases rapidly with the size of the program. It would therefore be interesting to apply these techniques to slices rather than to the initial program, but this requires theoretical foundations for interpreting the results obtained on the slices. This thesis provides this justification for runtime error detection. In this context, two questions arise. If an error is detected in the slice, does this mean that it can also be triggered in the initial program? Conversely, if the slice is proved to be error-free, does this mean that the initial program is error-free too? We model this problem using a small representative imperative language allowing errors and non-termination, and establish the link between the semantics of the initial program and of its slice, which allows us to give a precise answer to the two questions raised above. To apply these results in a more general context, we focus on the first building block of a language-independent slicer: a generic algorithm computing control dependence. We formalize an elegant theory of control dependence on arbitrary finite directed graphs taken from the literature and improve the proposed algorithm. To ensure a high confidence in the results, all of this work is proved in the Coq proof assistant or in the Why3 proof platform.
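As a hedged illustration of the idea (a toy example, not one taken from the thesis), take the value of s at the return statement as the slicing criterion; the slice keeps only the statements that can influence that value:

```python
# Toy illustration of program slicing (hypothetical example, not from the thesis).
def original(n):
    s = 0              # relevant: flows into the criterion
    p = 1              # irrelevant: never affects s
    i = 1
    while i <= n:
        s = s + i
        p = p * i
        i = i + 1
    return s           # slicing criterion: the value of s here

# Slice with respect to that criterion: same behavior on s, simpler program.
def sliced(n):
    s = 0
    i = 1
    while i <= n:
        s = s + i
        i = i + 1
    return s

assert original(10) == sliced(10) == 55
```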
152

Formal Methods for Probabilistic Energy Models

Daum, Marcus 11 April 2019 (has links)
The energy consumption that arises from the use of information processing systems contributes significantly to environmental pollution and accounts for a large share of operating costs, so we need to find ways to reduce the energy consumption of such systems. When trying to save energy, it is important to ensure that the utility (e.g., user experience) of a system is not unnecessarily degraded, which requires a careful trade-off analysis between the consumed energy and the resulting utility. Research on energy efficiency has therefore become a very active and important topic that concerns many different scientific areas and is also of interest to industry. The concept of quantiles is well-known in mathematical statistics, but its benefits for the formal quantitative analysis of probabilistic systems have been noticed only recently. For instance, with the help of quantiles it is possible to reason about the minimal energy required to obtain a desired system behaviour in a satisfactory manner, e.g., such that a required user experience is achieved with sufficient probability. Quantiles also allow the determination of the maximal utility that can be achieved with a reasonable probability while staying within a given energy budget. Since these examples illustrate measures that are of central interest when analysing energy-aware systems, it is clearly beneficial to extend formal analysis methods with the ability to compute quantiles. In this monograph, we show how quantiles can serve as an instrument for analysing the trade-off between energy and utility in the field of probabilistic model checking. To this end, we present algorithms for their computation over Markovian models and investigate different techniques to improve the computational performance of their implementations. The main improvement exploits the specific characteristics of the linear programs that need to be solved for the computation of quantiles. The improved algorithms have been implemented and integrated into the well-known probabilistic model checker PRISM, and the performance of this implementation is demonstrated by means of different protocols, with an emphasis on the trade-off between the consumed energy and the resulting utility.
Since the introduced methods are not restricted to energy-utility analysis only, the proposed framework can be used for analysing the interplay of cost and resulting benefit in general.

Table of contents:
1 Introduction; 1.1 Related work; 1.2 Contribution and outline
2 Preliminaries
3 Reward-bounded reachability properties and quantiles; 3.1 Essentials; 3.2 Dualities; 3.3 Upper-reward bounded quantiles; 3.3.1 Precomputation; 3.3.2 Computation scheme; 3.3.3 Qualitative quantiles; 3.4 Lower-reward bounded quantiles; 3.4.1 Precomputation; 3.4.2 Computation scheme; 3.5 Energy-utility quantiles; 3.6 Quantiles under side conditions; 3.6.1 Upper reward bounds; 3.6.2 Lower reward bounds; 3.6.2.1 Maximal reachability probabilities; 3.6.2.2 Minimal reachability probabilities; 3.7 Reachability quantiles and continuous time; 3.7.1 Dualities
4 Expectation Quantiles; 4.1 Computation scheme; 4.2 Arbitrary models; 4.2.1 Existential expectation quantiles; 4.2.2 Universal expectation quantiles
5 Implementation; 5.1 Computation optimisations; 5.1.1 Back propagation; 5.1.2 Reward window; 5.1.3 Topological sorting of zero-reward sub-MDPs; 5.1.4 Parallel computations; 5.1.5 Multi-thresholds; 5.1.6 Multi-state solution methods; 5.1.7 Storage for integer sets; 5.1.8 Elimination of zero-reward self-loops; 5.2 Integration in Prism; 5.2.1 Computation of reward-bounded reachability probabilities; 5.2.2 Computation of quantiles in CTMCs
6 Analysed Protocols; 6.1 Prism Benchmark Suite; 6.1.1 Self-Stabilising Protocol; 6.1.2 Leader-Election Protocol; 6.1.3 Randomised Consensus Shared Coin Protocol; 6.2 Energy-Aware Protocols; 6.2.1 Energy-Aware Job-Scheduling Protocol; 6.2.1.1 Energy-Aware Job-Scheduling Protocol with side conditions; 6.2.1.2 Energy-Aware Job-Scheduling Protocol and expectation quantiles; 6.2.1.3 Multiple shared resources; 6.2.2 Energy-Aware Bonding Network Device (eBond); 6.2.3 HAECubie Demonstrator; 6.2.3.1 Operational behaviour of the protocol; 6.2.3.2 Formal analysis
7 Conclusion; 7.1 Classification; 7.2 Future prospects
Bibliography; List of Figures; List of Tables
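As a hedged illustration (notation assumed here, not quoted from the monograph), an energy-utility quantile of the kind described above can be written as the minimal energy budget under which a utility goal is reachable with at least a required probability:

```latex
% Illustrative sketch only: an (existential) energy-utility quantile over a
% Markov decision process M, where the goal requires accumulated utility at
% least u, e bounds the accumulated energy, and p is a probability threshold.
\[
  \mathrm{Qu}_{M}(p) \;=\; \min\Bigl\{\, e \in \mathbb{N} \;\Bigm|\;
    \Pr^{\max}_{M}\bigl(\Diamond\,(\mathit{utility} \ge u \;\wedge\; \mathit{energy} \le e)\bigr) \;\ge\; p \,\Bigr\}
\]
```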
153

Towards provably safe and robust learning-enabled systems

Fan, Jiameng 26 August 2022 (has links)
Machine learning (ML) has demonstrated great success in numerous complicated tasks. Fueled by these advances, many real-world systems like autonomous vehicles and aircraft are adopting ML techniques by adding learning-enabled components. Unfortunately, ML models widely used today, like neural networks, lack the necessary mathematical framework to provide formal guarantees on safety, causing growing concerns over these learning-enabled systems in safety-critical settings. In this dissertation, we tackle this problem by combining formal methods and machine learning to bring provable safety and robustness to learning-enabled systems. We first study the robustness verification problem of neural networks on classification tasks. We focus on providing provable safety guarantees on the absence of failures under arbitrarily strong adversaries. We propose an efficient neural network verifier, LayR, to compute a guaranteed and overapproximated range for the output of a neural network given an input set that contains all possible adversarially perturbed inputs. LayR relaxes nonlinear units in neural networks using linear bounds and refines such relaxations with mixed integer linear programming (MILP) to iteratively improve the approximation precision, which achieves tighter output range estimations compared to prior neural network verifiers. However, such a verifier focuses on analyzing an already trained neural network rather than on learning provably safe neural networks. To tackle this problem, we study verifiable training, which incorporates verification into training procedures to train provably safe neural networks and scale to larger models and datasets. We propose a novel framework, AdvIBP, for combining adversarial training and provable robustness verification. We show that the proposed framework can learn provably robust neural networks at a sublinear convergence rate. In the second part of the dissertation, we study the verification of system-level properties in neural-network controlled systems (NNCS). We focus on proving bounded-time safety properties by computing reachable sets. We first introduce two efficient NNCS verifiers, ReachNN* and POLAR, that construct polynomial-based overapproximations of neural-network controllers. We transform NNCSs into tractable closed-loop systems with approximated polynomial controllers for computing reachable sets using existing reachability analysis tools for dynamical systems. The combination of polynomial overapproximations and reachability analysis tools opens promising directions for NNCS verification. We also include a survey and experimental study of existing NNCS verification methods, including combining state-of-the-art neural network verifiers with reachability analysis tools, to discuss what overapproximation is suitable for NNCS reachability analysis. While these verifiers enable proving safety properties of NNCS, the nonlinearity of neural-network controllers is the main bottleneck that limits their efficiency and scalability. We propose a novel framework of knowledge distillation to control “the degree of nonlinearity” of NN controllers to ease NNCS verification, which improves the provable safety of NNCSs, especially when they are safe but cannot be verified due to their complexity. For the verification community, this opens up the possibility of reducing verification complexity by influencing how a system is trained.
Though NNCS verification can prove safety when system models are known, modern deep learning, e.g., deep reinforcement learning (DRL), often targets tasks with unknown system models, also known as the model-free setting. To tackle this issue, we first focus on safe exploration of DRL and propose a novel Lyapunov-inspired method. Our method uses Gaussian Process models to provide probabilistic guarantees on the policies, and guide the exploration of the unknown environment in a safe fashion. Then, we study learning robust visual control policies in DRL to enhance the robustness against visual changes that were not seen during training. We propose a method DRIBO, which can learn robust state representations for RL via a novel contrastive version of the Multi-View Information Bottleneck (MIB). This approach enables us to train high-performance visual policies that are robust to visual distractions, and can generalize well to unseen environments.
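As a hedged sketch of the kind of overapproximation such verifiers compute (this is plain interval bound propagation, a coarser method than LayR's linear relaxation with MILP refinement; the network shapes and values below are made up), the following propagates an L-infinity input box through a small ReLU network to bound its outputs:

```python
import numpy as np

def interval_bound_propagation(weights, biases, x, eps):
    """Propagate the box [x - eps, x + eps] through a fully connected ReLU network.

    Returns elementwise lower/upper bounds on the network output. This is a
    coarse, sound overapproximation; verifiers such as LayR tighten it further
    (illustrative sketch only).
    """
    lo, hi = x - eps, x + eps
    for i, (W, b) in enumerate(zip(weights, biases)):
        W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
        new_lo = W_pos @ lo + W_neg @ hi + b      # smallest achievable pre-activation
        new_hi = W_pos @ hi + W_neg @ lo + b      # largest achievable pre-activation
        if i < len(weights) - 1:                  # ReLU on hidden layers only
            new_lo, new_hi = np.maximum(new_lo, 0.0), np.maximum(new_hi, 0.0)
        lo, hi = new_lo, new_hi
    return lo, hi

# Toy 2-3-2 network: if the lower bound of the correct class's logit exceeds the
# upper bound of every other logit, the classification is provably robust.
rng = np.random.default_rng(0)
weights = [rng.standard_normal((3, 2)), rng.standard_normal((2, 3))]
biases = [np.zeros(3), np.zeros(2)]
lo, hi = interval_bound_propagation(weights, biases, np.array([0.5, -0.2]), eps=0.05)
print(lo, hi)
```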
154

Formal Verification of Hardware Peripheral with Security Property / Formell verifikation av extern hårdvara med säkerhetskrav

Yao Håkansson, Jonathan, Rosencrantz, Niklas January 2017 (has links)
One problem with computers is that the operating system automatically trusts any externally connected peripheral. This can lead to abuse, since a trusted peripheral may technically be able to violate the security model, which makes security an important issue to examine. The aim of our project is to determine in which cases hardware peripherals can be trusted. We built a model of the universal asynchronous receiver/transmitter (UART), a model of the main memory (RAM) and a model of a DMA controller, and analysed the interaction between hardware peripherals, user processes and the main memory. One of our results is that connections with hardware peripherals are secure if the hardware is properly configured. A threat scenario could be an eavesdropper or man-in-the-middle trying to steal data or change a cryptographic key. We consider the use cases of DMA and of protecting a cryptographic key, and prove that the algorithm is well-behaved. Some error traces resulted from incorrect modelling and were resolved by adjusting the models. Benchmarks were done for different memory sizes. The result is that a peripheral can be trusted provided it is properly configured. Our models consist of finite state machines and their corresponding SMV modules, representing computer hardware with DMA. We verified the SMV models using the model checkers NuSMV and nuXmv. / The goal of our project is to verify different specifications of external devices connected to the computer. We perform formal verification of such computer hardware and of virtual memory, using temporal logic (LTL). Specifically, we verify 4 different use cases and 9 formulas for serial data communication, DMA and virtual memory. The conclusion is that connecting external hardware is secure if it is properly configured. We compare different memory sizes and measured the time required to verify different systems, observing that verification time grows worse than linearly, so that even relatively small systems take a relatively long time to verify.
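As a hedged illustration (hypothetical atomic propositions, not the exact formulas verified in this work), LTL properties of the kind checked over such SMV models might read:

```latex
% Illustrative only: two LTL properties in the spirit of the DMA and
% key-protection use cases; dma_active, key_region_access, dma_request and
% transfer_done are hypothetical atomic propositions of an SMV-style model.
\[
  \mathbf{G}\,\bigl(\mathit{dma\_active} \rightarrow \neg\,\mathit{key\_region\_access}\bigr)
  \qquad
  \mathbf{G}\,\bigl(\mathit{dma\_request} \rightarrow \mathbf{F}\,\mathit{transfer\_done}\bigr)
\]
```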
155

Facing infinity in model checking expressive specification languages

Magnago, Enrico 18 November 2022 (has links)
Society relies on increasingly complex software and hardware systems, hence techniques capable of proving that they behave as expected are of great and growing interest. Formal verification procedures employ mathematically sound reasoning to address this need. This thesis proposes novel techniques for the verification and falsification of expressive specifications on timed and infinite-state systems. An expressive specification language allows the description of the intended behaviour of a system via compact formal statements written at an abstraction level that eases the review process. Falsifying a specification corresponds to identifying an execution of the system that violates the property (i.e. a witness). The capability of identifying witnesses is a key feature in the iterative refinement of the design of a system, since it provides a description of how a certain error can occur. The designer can analyse the witness and take correcting actions by refining either the description of the system or its specification. The contribution of this thesis is twofold. First, we propose a semantics for Metric Temporal Logic that considers four different models of time (discrete, dense, super-discrete and super-dense). We reduce its verification problem to finding an infinite fair execution (witness) for an infinite-state system with discrete time. Second, we define a novel SMT-based algorithm to identify such witnesses. The algorithm employs a general representation of such executions that is both informative to the designer and provides sufficient structure to automate the search of a witness. We apply the proposed techniques to benchmarks taken from software, infinite-state, timed and hybrid systems. The experimental results highlight that the proposed approaches compete and often outperform specific (application tailored) techniques currently used in the state of the art.
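As a hedged illustration of the expressiveness involved (not a formula from the thesis), a Metric Temporal Logic property can bound how quickly a response must follow a request; its precise meaning depends on which of the four time models (discrete, dense, super-discrete, super-dense) is adopted:

```latex
% Illustrative only: a bounded-response MTL property stating that every request
% is answered by a grant within 5 time units.
\[
  \mathbf{G}\,\bigl(\mathit{request} \rightarrow \mathbf{F}_{[0,5]}\,\mathit{grant}\bigr)
\]
```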
156

FORMAL: A SEQUENTIAL ATPG-BASED BOUNDED MODEL CHECKING SYSTEM FOR VLSI CIRCUITS

Qiang, Qiang 10 April 2006 (has links)
No description available.
157

Behavioral Signature-based Framework for Identifying Unsatisfiable Variable Mappings between Digital Designs

Tendulkar, Vaibhav Uday 18 April 2012 (has links)
No description available.
158

Low-Level Static Analysis for Memory Usage and Control Flow Recovery

Bockenek, Joshua Alexander 07 March 2023 (has links)
Formal characterization of the memory used by a program is an important basis for security analyses, compositional verification, and identification of noninterference. However, soundly proving memory usage requires operating on the assembly level due to the semantic gap between high-level languages and the code that processors actually execute. Automated methods, such as model checking, would not be able to handle many interesting functions due to the undecidability of memory usage. Fully-interactive methods do not scale well either. Sound control flow recovery (CFR) is also important for binary decompilation, verification, patching, and security analysis. It lifts raw unstructured data into a form that allows reasoning over behavior and semantics. However, doing so requires interpreting the behavior of the program when indirect or dynamic control flow exists, creating a recursive dependency. This dissertation tackles the first property with two contributions that perform proof generation combined with interactive theorem proving in a semi-automated manner: an untrusted tool extracts as much information as it can from the functions under test and then generates all the necessary proofs to be completed in a theorem prover. The first, Floyd-style approach still requires significant manual effort but provides good flexibility and ensures no paths are analyzed more than once. In contrast, the second, Hoare-style approach sacrifices some flexibility and avoidance of repeated path evaluation in order to achieve much greater automation. However, neither approach can handle the dynamic control flow caused by indirect branching. The second property is handled by the second set of contributions of this dissertation. These two contributions provide fully-automated methods of recovering control flow from binaries even in the presence of indirect branching. When such dynamic control flow cannot be overapproximatively resolved, it is clearly noted in the resultant output. In the first approach to control flow recovery, a structured memory representation allows for general analysis of control flow in the presence of indirection, gaining scalability by utilizing context-free function analysis. It supports various aliasing conditions via the usage of nondeterminism, with multiple output states potentially being produced from a given input state. The second approach adds function context and abstract interpretation-inspired modeling of the C++ exception handling (EH) application binary interface (ABI), allowing for the discovery of previously-unknown paths while maintaining or increasing automation. / Doctor of Philosophy / Modern computer programs are so complicated that individual humans cannot manually check all but the smallest programs to make sure they are correct and secure. This is even worse if you want to reduce the trusted computing base (TCB), the stuff that you have to assume is working right in order to say a program will execute correctly. The TCB includes your computer itself, but also whatever tools were used to take the programs written by programmers and transform them into a form suitable for running on a computer. Such tools are often called compilers. One method of reducing the TCB is to examine the lowest-level representation of that program, the assembly or even machine code that is actually run by your computer. This poses unique challenges, because operating on such a low level means you do not have a lot of the structure that a more abstract, higher-level representation provides. 
Also, sometimes you want to formally state things about a program's behavior; that is, say things about what it does with a high degree of confidence based on mathematical principles. You may also want to verify that one or more of those statements are true. If you want to be detailed about that behavior, you may need to know all of the chunks, or regions, in random-access memory (RAM) that are used by that program. RAM, henceforth referred to as just ``memory'', is your computer's first place of storage for the information used by running programs. This is distinct from long-term storage devices like hard disk drives (HDDs) or solid-state drives (SSDs), which programs do not normally have direct access to. Unfortunately, there is no one single approach that can automatically determine with absolute certainty for all cases the exact regions of memory that are read or written. This is called undecidability, and means that you need to approximate those memory regions a lot of the time if you want to have a significant degree of automation. An underapproximation, an approach that only gives you some of the regions, is not useful for formal statements as it might miss out on some behavior; it is unsound. This means that you need an overapproximation, an approach that is guaranteed to give you at least the regions read or written. Therefore, the first contribution of this dissertation is a preliminary approach to such an overapproximation. This approach is based on the work of Robert L. Floyd, focusing on the direct control flow (where the steps of a program go) in an individual function (structured program component). It still requires a lot of user effort, including having to manually specify the regions in memory that were possibly used and do a lot of work to prove that those regions are (overapproximatively) correct, so our tests were limited in scope. The second contribution automated a lot of the manual work done for the first approach. It is based on the work of Charles Antony Richard Hoare, who developed a verification approach focusing on the syntax (the textual form) of programs. This contribution produces what we call formal memory usage certificates (FMUCs), which are formal statements that the regions of memory they describe are the only ones possibly affected by the functions under test. These statements also come with proofs, which for our work are like scripts used to verify that the things the FMUCs assert about the corresponding functions can be shown to be true given the assumptions our FMUCs have. Sometimes those proofs are incomplete, though, such as when there is a loop (repeated bit of code) in a function under test or one function calls (executes) another. In those cases, a user has to finish the proof, in the first case by weakening (removing information from) the FMUC's statements about the loop and in the second by composing, or combining, the FMUCs of the two functions. Additionally, this second approach cannot handle dynamic control flow. Such control flow occurs when the low-level instructions a program uses to move to another place in that program do not have a pre-stored location to go to. Instead, that location is supplied as the program is running. This is opposed to direct control flow, where the place to go to is hard-coded into the program when it is compiled. 
The tool also cannot deal with aliasing, which is when different state parts (value-holding components) of a program contain the same value and that value is used as the numeric address or identifier of a location in memory. Specifically, it cannot deal with potential aliasing, when there is not enough information available to determine if the state parts alias or not. Because of that, we had to add extra assumptions to the FMUCs that limited them to those cases where ambiguous memory-referencing state parts referred to separate memory locations. Finally, it specifically requires assembly as input; you cannot directly supply a binary to it. This is also true of the first contribution. Because of this, we were able to test on more functions than before, but not a lot more. Not being able to deal with dynamic control flow is a big problem, as almost all programs use it. For example, when a function reaches its end, it has to figure out where to return to based on the current state of the program (in the previous contribution, this was done manually). This means that control flow recovery (CFR) is very important for many applications, including decompilation (converting a program back into a higher-level form), patching (updating a program in place without modifying the original code and recompiling it), and low-level analysis or verification in general. However, as you may have noticed from earlier in this paragraph, in order to deal with such dynamic control flow you need to figure out what the possible destinations are for the individual control flow transfers. That can require knowing where you came from in the program, which means that analysis of dynamic control flow requires context (here, information previously obtained in the program). Even worse, it is another undecidable problem that requires overapproximation. To soundly recover control flow, we developed Hoare graphs (HGs), the third contribution of this dissertation. HGs use memory models that take the form of forests, or collections of tree data structures. A single tree represents a region in memory that may have multiple symbolic references, or abstract representations of a value. The children of the tree represent regions used in the program that are enclosed within their parent tree elements. Now, instead of assuming that all ambiguous memory regions are separate, we can use them under various aliasing conditions. We have also implemented support for some forms of dynamic control flow. Those that are not supported are clearly marked in the resultant HG. No user interaction is required even when loops are present thanks to a methodology that automatically reduces the amount of information present at a re-executed instruction until the information stabilizes. Function composition is also automatic now thanks to a method that treats each function as its own context in a safe and automated way, reducing memory consumption of our tool and allowing larger programs to be examined. In the process we did lose the ability to deal with recursion (functions that call themselves or call other functions that call back to the original), though. Lastly, we provided the ability to directly load binaries into the tool, no external disassembly (converting machine code into human-readable instructions) needed. This all allowed much greater testing than before, with applications to multiple programs and program libraries.
The fourth and final contribution of this dissertation iterates on the HG work by narrowing focus to the concept of exceptional control flow. Specifically, it models the kind of exception handling used by C++ programs. This is important as, if you want to explore a program's behavior, you need to know all the places it goes to. If you use a tool that does not model exception handling, you may end up missing paths of execution caused by unwinding. This is when an exception is thrown and propagates up through the program's current stack of function calls, potentially reaching programmer-supplied handling for that exception. Despite this, commonplace tools for static, low-level program analysis do not model such unwinding. The control flow graphs (CFGs) produced by our exception-aware tool are called exceptional interprocedural control flow graphs (EICFGs). These provide information about the exceptions being thrown and what paths they take in the program when they are thrown. Additional improvements are a better methodology for handling dynamic control flow as well as adding back support for recursion. All told, this allowed us to explore even more programs than ever before.
159

XFM: An Incremental Methodology for Developing Formal Models

Suhaib, Syed Mohammed 13 May 2004 (has links)
We present XFM (eXtreme Formal Modeling), an agile formal method recently developed by us, based on Extreme Programming concepts, for constructing abstract models from a natural language specification of a complex system. In particular, we focus on Prescriptive Formal Models (PFMs) that capture the specification of the system under design in a mathematically precise manner. Such models can be used as golden reference models for formal verification, test generation, etc. This methodology for incrementally building PFMs works by adding user stories (expressed as LTL formulae), gleaned from the natural language specifications, one by one into the model. XFM builds the models, retaining correctness with respect to the incrementally added properties, by regressively model checking all the LTL properties captured so far in the model. We illustrate XFM with a graded set of examples including a traffic light controller, a DLX pipeline and a Smart Building control system; a sample user-story formula is sketched below. To make the regressive model checking steps feasible with current model checking tools, we need to keep the model size increments under control. We therefore analyze the effects of ordering LTL properties in XFM. We compare three different property-ordering methodologies: 'arbitrary ordering', 'property based ordering' and 'predicate based ordering'. We experiment on models of the ISA bus monitor and the arbitration phase of the Pentium Pro bus. We show experimentally, and argue mathematically, that predicate based ordering is the best of these orderings. Finally, we present a GUI-based toolbox for users to build PFMs using XFM. / Master of Science
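As a hedged illustration, user stories for the traffic light controller example might be captured as LTL formulae of the following shape (hypothetical atomic propositions, not the exact properties used in the thesis):

```latex
% Illustrative only: two LTL "user stories" for a two-way traffic light
% controller; green_ns, green_ew and red_ns are hypothetical propositions.
\[
  \mathbf{G}\,\neg(\mathit{green\_ns} \wedge \mathit{green\_ew})
  \qquad
  \mathbf{G}\,\bigl(\mathit{red\_ns} \rightarrow \mathbf{F}\,\mathit{green\_ns}\bigr)
\]
```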
160

A self-verifying theorem prover

Davis, Jared Curran 24 August 2010 (has links)
Programs have precise semantics, so we can use mathematical proof to establish their properties. These proofs are often too large to validate with the usual "social process" of mathematics, so instead we create and check them with theorem-proving software. This software must be advanced enough to make the proof process tractable, but this very sophistication casts doubt upon the whole enterprise: who verifies the verifier? We begin with a simple proof checker, Level 1, that only accepts proofs composed of the most primitive steps, like Instantiation and Cut. This program is so straightforward the ordinary, social process can establish its soundness and the consistency of the logical theory it implements (so we know theorems are "always true"). Next, we develop a series of increasingly capable proof checkers, Level 2, Level 3, etc. Each new proof checker accepts new kinds of proof steps which were not accepted in the previous levels. By taking advantage of these new proof steps, higher-level proofs can be written more concisely than lower-level proofs, and can take less time to construct and check. Our highest-level proof checker, Level 11, can be thought of as a simplified version of the ACL2 or NQTHM theorem provers. One contribution of this work is to show how such systems can be verified. To establish that the Level 11 proof checker can be trusted, we first use it, without trusting it, to prove the fidelity of every Level n to Level 1: whenever Level n accepts a proof of some phi, there exists a Level 1 proof of phi. We then mechanically translate the Level 11 proof for each Level n into a Level n - 1 proof---that is, we create a Level 1 proof of Level 2's fidelity, a Level 2 proof of Level 3's fidelity, and so on. This layering shows that each level can be trusted, and allows us to manage the sizes of these proofs. In this way, our system proves its own fidelity, and trusting Level 11 only requires us to trust Level 1. / text
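To make the idea of a primitive, socially verifiable proof checker concrete, here is a hedged toy sketch (an analogy only, not the dissertation's actual logic or code): a checker that accepts only proofs built from stated premises and a single primitive rule, in the spirit of the Level 1 checker described above.

```python
# Toy sketch of a "Level 1"-style proof checker (illustrative analogy; the
# dissertation's Level 1 checker implements a first-order logic with primitive
# steps such as Instantiation and Cut).
#
# Formulas are nested tuples: ('->', p, q) means "p implies q"; strings are atoms.
# A proof is a list of steps: ('axiom', formula) cites a given premise, and
# ('mp', i, j) derives q by modus ponens from earlier steps i: p and j: p -> q.

def check_proof(premises, proof):
    derived = []
    for step in proof:
        if step[0] == 'axiom':
            if step[1] not in premises:
                return False                      # not a stated premise
            derived.append(step[1])
        elif step[0] == 'mp':
            _, i, j = step
            if i >= len(derived) or j >= len(derived):
                return False                      # forward reference not allowed
            p, imp = derived[i], derived[j]
            if imp[0] != '->' or imp[1] != p:     # step j must be "p -> q"
                return False
            derived.append(imp[2])                # conclude q
        else:
            return False                          # unknown primitive step
    return True

# Example: from premises A and A -> B, derive B by modus ponens.
premises = ['A', ('->', 'A', 'B')]
proof = [('axiom', 'A'), ('axiom', ('->', 'A', 'B')), ('mp', 0, 1)]
assert check_proof(premises, proof)
```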
