21 |
Automatic Datapath Abstraction Of Pipelined Circuits. Ciubotariu, Vlad. 18 February 2011 (has links)
Pipelined circuits operate as an assembly line that starts processing new instructions while older ones
continue execution. Control properties specify the correct behaviour of the pipeline with respect to
how it handles the concurrency between instructions. Control properties stand out as one of the most
challenging aspects of pipelined circuit verification. Their verification depends on the datapath and
memories, which in practice account for the largest part of the state space of the circuit. To alleviate
the state explosion problem, abstraction of memories and datapath becomes mandatory. This thesis
provides a methodology for an efficient abstraction of the datapath under all possible control-visible
behaviours. For verification of control properties, the abstracted datapath is then substituted in place
of the original one and the control circuitry is left unchanged. With respect to control properties, the
abstraction is shown to be conservative by both language containment and simulation.
For verification of control properties, the pipeline datapath is represented by a network of registers,
unrestricted combinational datapath blocks and muxes. The values flowing through the datapath are
called parcels. The control is the state machine that steers the parcels through the network. As parcels
travel through the pipeline, they undergo transformations through the datapath blocks. The control-visible results of these transformations fan out into control variables, which in turn influence the next stage the parcels are transferred to by the control. The semantics of the datapath is formalized as a labelled transition system called a parcel automaton. Parcel automata capture the set of all control-visible paths through the pipeline and are derived without the need for reachability analysis of the original pipeline. Datapath abstraction is defined using familiar concepts such as language containment or simulation. We prove that datapath abstraction leads to pipeline abstraction. Our approach has been incorporated into a practical algorithm that directly yields the abstract parcel automaton, bypassing the construction of the concrete parcel automaton. The algorithm uses a SAT solver to incrementally generate all possible control-visible behaviours of the pipeline datapath. Our largest case study is a 32-bit two-wide superscalar OpenRISC microprocessor written in VHDL, where our method reduced the size of the implementation from 35k gates to 2k gates in less than 10 minutes while using less than 52 MB of memory.
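The incremental enumeration of control-visible behaviours can be pictured as an all-SAT projection loop. The sketch below is a hypothetical illustration rather than the thesis's algorithm; it assumes the python-sat (pysat) package, and the toy CNF and variable numbering are invented for the example.

```python
# Minimal sketch of enumerating all valuations of "control-visible"
# variables of a datapath formula with an incremental SAT solver.
# Assumes the python-sat (pysat) package; the CNF below is illustrative.
from pysat.solvers import Glucose3

# Toy datapath constraint in CNF over variables 1..4; variables 3 and 4
# play the role of the control-visible outputs.
cnf = [[1, 2], [-1, 3], [-2, 4], [-3, -4, 1]]
control_visible = [3, 4]

def enumerate_control_behaviours(clauses, visible):
    """Yield every distinct valuation of `visible` admitted by `clauses`."""
    with Glucose3(bootstrap_with=clauses) as solver:
        while solver.solve():
            model = solver.get_model()
            valuation = [lit for lit in model if abs(lit) in visible]
            yield {abs(lit): lit > 0 for lit in valuation}
            # Block this control-visible valuation and search for another.
            solver.add_clause([-lit for lit in valuation])

for behaviour in enumerate_control_behaviours(cnf, control_visible):
    print(behaviour)
```

Each blocking clause rules out one already-seen control-visible valuation, so the loop terminates after at most 2^|visible| iterations regardless of how many datapath variables the formula contains.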
|
22 |
Génération de séquences de test pour l'accélération d'assertions / Generation of test sequences for accelerating assertions. Damri, Laila. 17 December 2012 (has links)
With the growing complexity of systems-on-chip, verification has become an increasingly crucial task at every level of the design cycle and absorbs a large share of development time. In this context, assertion-based verification (ABV) has gained considerable popularity in recent years. The idea is to specify the expected behaviour of the system through logico-temporal properties and to check these properties using semi-formal or formal methods. Specification languages such as PSL or SVA (IEEE standards) are commonly used to express the properties, and both static verification techniques (model checking) and dynamic ones (checking during simulation) can be applied. This work is set in the context of dynamic verification. From assertions expressed in PSL or SVA, synthesizable VHDL or Verilog descriptions of hardware monitors can be produced (with the Horus tool). These components can be used during design, in simulation and/or emulation for circuit debug and validation, or as embedded components that monitor the behaviour of critical systems. For analysis in the design phase, whether in simulation or in emulation, the problem of generating test sequences arises: randomly generated sequences may give low coverage of the monitors' activation conditions and therefore reveal little about whether the assertions are satisfied. Constrained test-sequence generation methods do not offer a real solution either, because the constraints cannot express temporal conditions. New methods must therefore be specified and implemented, and that is what this thesis sets out to study.
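To illustrate why random stimulus tends to exercise such monitors poorly, the hedged Python sketch below encodes a simple "req must be followed by ack within three cycles" monitor as a small state machine and counts how often purely random input activates it; the signal names, cycle bound, and probabilities are illustrative assumptions, not taken from Horus.

```python
# Hedged sketch: a monitor for "whenever req is high, ack must follow
# within 1 to 3 cycles", plus a count of how often the assertion's
# activation condition fires under purely random stimulus.
import random

def run_monitor(trace, max_wait=3):
    """Return (activations, failures) of the assertion over a trace."""
    activations = failures = 0
    pending = []  # cycles remaining for each outstanding obligation
    for cycle in trace:
        pending = [w - 1 for w in pending]
        if cycle["ack"]:
            pending = []                      # obligations discharged
        elif any(w == 0 for w in pending):
            failures += 1                     # a deadline was missed
            pending = [w for w in pending if w > 0]
        if cycle["req"]:
            activations += 1
            pending.append(max_wait)
    return activations, failures

# Purely random stimulus: req is rarely raised, so the monitor is rarely
# activated and the assertion is almost never genuinely exercised.
random.seed(0)
trace = [{"req": random.random() < 0.05, "ack": random.random() < 0.5}
         for _ in range(1000)]
print(run_monitor(trace))
```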
|
23 |
Verification and composition of security protocols with applications to electronic voting / Vérification et composition des protocoles de sécurité avec des applications aux protocoles de vote électronique. Ciobâcǎ, Ştefan. 09 December 2011 (has links)
This thesis is about the formal verification and composition of security protocols, motivated by applications to electronic voting protocols. Chapters 3 to 5 concern the verification of security protocols, while Chapter 6 concerns composition. In Chapter 3 we show how to reduce certain problems from a quotient term algebra to the free term algebra via the use of strongly complete sets of variants. We show that, when the quotient algebra is given by a convergent, optimally reducing rewrite system, finite strongly complete sets of variants exist and are effectively computable. In Chapter 4, we show that static equivalence for (classes of) equational theories, including subterm convergent equational theories, trapdoor commitment and blind signatures, is decidable in polynomial time. We also provide an efficient implementation. In Chapter 5 we extend the previous decision procedure to handle trace equivalence. We use the finite strongly complete sets of variants introduced in Chapter 3 to get rid of the equational theory, and we model each protocol trace as a Horn theory which we solve using a refinement of resolution. Although we have not been able to prove that this procedure always terminates, we have implemented it and used it to provide the first automated proof of vote privacy for the FOO electronic voting protocol. In Chapter 6, we study the composition of protocols. We show that two protocols that use arbitrary disjoint cryptographic primitives compose securely if they do not reveal or reuse any shared secret. We also show that a form of tagging is sufficient to ensure disjointness in the case of a fixed set of cryptographic primitives.
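As a toy illustration of static equivalence under a subterm convergent theory (not the decision procedure of Chapter 4), the hedged sketch below normalizes terms with the rule dec(enc(x, k), k) -> x and shows a pair of recipes that distinguishes two frames; the term representation and the frame contents are invented for the example.

```python
# Toy illustration of static equivalence for the subterm convergent
# theory dec(enc(x, k), k) -> x.  Terms are nested tuples; frames map
# handles ("w1", "w2", ...) to the messages an attacker has observed.
# Strings not bound in a frame are treated as public constants.
def normalize(t):
    """Rewrite with dec(enc(x, k), k) -> x until a normal form is reached."""
    if not isinstance(t, tuple):
        return t
    head, *args = t
    args = [normalize(a) for a in args]
    if head == "dec" and isinstance(args[0], tuple) and args[0][0] == "enc" \
            and args[0][2] == args[1]:
        return args[0][1]
    return (head, *args)

def apply_recipe(recipe, frame):
    """Instantiate a recipe (a term over handles) in a frame and normalize."""
    if isinstance(recipe, str):
        return frame.get(recipe, recipe)
    head, *args = recipe
    return normalize((head, *[apply_recipe(a, frame) for a in args]))

# Both frames reveal a ciphertext and its key; the plaintext differs.
# The attacker test dec(w1, w2) = "yes" holds in frame1 but not frame2,
# so the two frames are NOT statically equivalent (the vote is observable).
frame1 = {"w1": ("enc", "yes", "k"), "w2": "k"}
frame2 = {"w1": ("enc", "no", "k"), "w2": "k"}
lhs, rhs = ("dec", "w1", "w2"), "yes"
print(apply_recipe(lhs, frame1) == apply_recipe(rhs, frame1))   # True
print(apply_recipe(lhs, frame2) == apply_recipe(rhs, frame2))   # False
```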
|
24 |
Alloy-Guided Verification of Cooperative Autonomous Driving Behavior. VanValkenburg, MaryAnn E. 18 May 2020 (has links)
Alloy is a lightweight formal modeling tool that generates instances of a software specification to check properties of the design. This work demonstrates the use of Alloy for the rapid development of autonomous vehicle driving protocols. We contribute two driving protocols: a Normal protocol that represents the unpredictable yet safe driving behavior of typical human drivers, and a Connected protocol that employs connected technology for cooperative autonomous driving. Using five properties that define safe and productive driving actions, we analyze the performance of our protocols in mixed traffic. Lightweight formal modeling is a valuable way to reason about driving protocols early in the development process because it can automate the checking of safety and productivity properties and prevent costly design flaws.
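In the same small-scope spirit as the Alloy analysis, though not the thesis's actual models, the hedged sketch below exhaustively enumerates every collision-free placement of a few cars on a short circular lane, runs a toy movement rule for a bounded number of steps, and checks a no-collision property; the lane length, car count, and movement rule are illustrative assumptions.

```python
# Hedged small-scope check, loosely in the spirit of Alloy's bounded
# analysis: enumerate all initial states of a toy single-lane protocol
# and verify that the "move only into a free cell" rule preserves safety.
from itertools import combinations

LANE_LENGTH = 6
NUM_CARS = 3

def step(positions):
    """One synchronous step: each car advances iff its next cell is free."""
    occupied = set(positions)
    new_positions = []
    for p in positions:
        nxt = (p + 1) % LANE_LENGTH
        new_positions.append(nxt if nxt not in occupied else p)
    return tuple(new_positions)

def safe(positions):
    """Safety property: no two cars occupy the same cell."""
    return len(set(positions)) == len(positions)

# Exhaustively check every collision-free initial placement (small scope).
violations = 0
for start in combinations(range(LANE_LENGTH), NUM_CARS):
    state = start
    for _ in range(2 * LANE_LENGTH):          # bounded number of steps
        state = step(state)
        if not safe(state):
            violations += 1
            break
print("violations found:", violations)        # expect 0 for this rule
```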
|
25 |
Formální verifikace RISC-V procesoru s využitím Questa PropCheck / Formal verification of RISC-V processor with Questa PropCheck. Javor, Adrián. January 2020 (has links)
The topic of this master's thesis is the formal verification of a RISC-V processor with Questa PropCheck using SystemVerilog assertions. The theoretical part describes the RISC-V architecture and the selected components of the Codix Berkelium 5 processor used for formal verification; the AHB-lite communication protocol and formal verification methods and tools are also covered. The experimental part consists of verification planning for the selected components, the formal verification itself, analysis of the results, and an evaluation of the benefits of formal techniques.
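As a flavour of the kind of rule such assertions capture, the hedged sketch below replays an AHB-lite style stability check (the address should hold while a transfer is stalled, roughly what an SVA `$stable` property would express) over a hand-written trace; the signal names and trace are illustrative and not taken from the Codix Berkelium 5 design.

```python
# Hedged sketch of a trace-level check in the spirit of an SVA property:
# while a transfer is stalled (HREADY low), HADDR must remain stable.
# The trace below is hand-written and purely illustrative.
def check_addr_stable(trace):
    """Return cycle indices where a stalled address changed (violations)."""
    violations = []
    for i in range(1, len(trace)):
        prev, curr = trace[i - 1], trace[i]
        stalled = prev["HTRANS"] != "IDLE" and not prev["HREADY"]
        if stalled and curr["HADDR"] != prev["HADDR"]:
            violations.append(i)
    return violations

trace = [
    {"HTRANS": "NONSEQ", "HREADY": True,  "HADDR": 0x100},
    {"HTRANS": "NONSEQ", "HREADY": False, "HADDR": 0x104},  # wait state begins
    {"HTRANS": "NONSEQ", "HREADY": False, "HADDR": 0x104},  # address held: ok
    {"HTRANS": "NONSEQ", "HREADY": True,  "HADDR": 0x104},
    {"HTRANS": "IDLE",   "HREADY": True,  "HADDR": 0x000},
]
print("violations at cycles:", check_addr_stable(trace))     # expect []
```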
|
26 |
Formal Verification Methodologies for NULL Convention Logic Circuits. Le, Son Ngoc. January 2020 (has links)
NULL Convention Logic (NCL) is a Quasi-Delay Insensitive (QDI) asynchronous design paradigm that aims to tackle some of the major problems synchronous designs are facing as the industry trend of increased clock rates and decreased feature size continues. The clock in synchronous designs is becoming increasingly difficult to manage and causing more power consumption than ever before. NCL circuits address some of these issues by requiring less power, producing less noise and electro-magnetic interference, and being more robust to Process, Voltage, and Temperature (PVT) variations. With the increase in popularity of asynchronous designs, a formal verification methodology is crucial for ensuring these circuits operate correctly. Four automated formal verification methodologies have been developed, three to ensure delay-insensitivity of an NCL circuit (i.e., prove Input-Completeness, Observability, and Completion-Completeness properties), and one to aid in proving functional equivalence between an NCL circuit and its synchronous counterpart. Note that an NCL circuit can be functionally correct and still not be input-complete, observable, or completion-complete, which could cause the circuit to operate correctly under normal conditions, but malfunction when circuit timing drastically changes (e.g., significantly reduced supply voltage, extreme temperatures). Since NCL circuits are implemented using dual-rail logic (i.e., 2 wires, rail0 and rail1, represent one bit of data), part of the functional equivalence verification involves ensuring that the NCL rail0 logic is the inverse of its rail1 logic. Equivalence verification optimizations and alternative invariant checking methods were investigated and proved to decrease verification times of identical circuits substantially. This work will be a major step toward NCL circuits being utilized more frequently in industry, since it provides an automated verification method to prove correctness of an NCL implementation and equivalence to its synchronous specification, which is the industry standard.
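To make the rail0/rail1 complementarity check concrete, the hedged sketch below exhaustively verifies that a textbook dual-rail AND function produces complementary output rails for every complete DATA input; it is a toy example, not the developed verification methodology.

```python
# Hedged sketch: check that the rail1 logic of a dual-rail (NCL-style)
# AND function is the inverse of its rail0 logic for all DATA inputs.
# DATA encoding: logic 0 -> (rail0, rail1) = (1, 0); logic 1 -> (0, 1).
from itertools import product

def dual_rail_and(a0, a1, b0, b1):
    """Illustrative dual-rail AND: rail1 = a1 & b1, rail0 = a0 | b0."""
    return (a0 | b0, a1 & b1)          # (out_rail0, out_rail1)

def encode(bit):
    return (1, 0) if bit == 0 else (0, 1)

for a, b in product([0, 1], repeat=2):
    a0, a1 = encode(a)
    b0, b1 = encode(b)
    out0, out1 = dual_rail_and(a0, a1, b0, b1)
    assert out1 == (1 - out0), "rails are not complementary"
    assert out1 == (a & b), "rail1 does not implement AND"
print("rail0/rail1 complementarity holds for all DATA inputs")
```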
|
27 |
Formally Verifying the Robustness of Machine Learning Models : A Comparative Study / Formell verifiering av robusthet hos maskininlärningsmodeller : En jämförelsestudie. Lundström, Linnea. January 2020 (has links)
Machine learning models have become increasingly popular in recent years, and not without reason. They enable software to become more powerful while requiring less human involvement. As a consequence, however, the actions of the software are hard for a human to understand and anticipate. This prohibits the use of machine learning in systems where safety has to be assured, typically using formal proofs of relevant properties. This thesis focuses on robustness, one of many properties that can impact the safety of a system. There are several tools available that enable formal robustness verification of machine learning models, and a goal of this thesis is to evaluate their performance. A variety of machine learning models are also assessed according to how robust they can be proved to be. A digit recognition problem was used to evaluate how sensitive different model types are to perturbations of pixels in an image, and also to assess the performance of the applicable verification tools. On this particular problem, we discovered that a Support Vector Machine demonstrates the highest degree of robustness, which could be verified in a sufficiently short time using the tool SAVer. In addition, machine learning models were trained on a data set consisting of Android applications labelled either as malware or benign. In this verification problem, we check whether adding permission requests to an application that is malware can cause it to be labelled as benign. For this problem, a Gradient Boosting Machine proved to be the most robust, with a very short verification time using the tool VoTE. Although not the most robust, Neural Networks were proved to be relatively robust on both problems using the tool ERAN, whereas Random Forests performed the worst in terms of robustness.
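The tools named above implement far more sophisticated analyses, but the hedged sketch below conveys the basic idea of certifying local L-infinity robustness of a tiny ReLU network by interval bound propagation; the weights, input, and epsilon values are invented for illustration and the check is sound but incomplete.

```python
# Hedged sketch: certify local robustness of a tiny ReLU network by
# interval bound propagation (IBP).  Weights and epsilon are illustrative;
# real tools (ERAN, VoTE, SAVer) use tighter, model-specific analyses.
import numpy as np

W1 = np.array([[1.0, -1.0], [0.5, 0.5]])
b1 = np.array([0.0, 0.1])
W2 = np.array([[1.0, -2.0], [-1.0, 2.0]])
b2 = np.array([0.2, -0.2])

def interval_affine(lo, hi, W, b):
    """Bounds of W @ x + b when each x_i ranges over [lo_i, hi_i]."""
    center, radius = (lo + hi) / 2.0, (hi - lo) / 2.0
    c = W @ center + b
    r = np.abs(W) @ radius
    return c - r, c + r

def certify(x, eps):
    """True if the predicted class provably cannot change within L-inf eps."""
    lo, hi = x - eps, x + eps
    lo, hi = interval_affine(lo, hi, W1, b1)
    lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)   # ReLU is monotone
    lo, hi = interval_affine(lo, hi, W2, b2)
    pred = int(np.argmax(W2 @ np.maximum(W1 @ x + b1, 0.0) + b2))
    other = 1 - pred
    return bool(lo[pred] > hi[other])   # worst case still favours `pred`

x = np.array([0.3, 0.8])
print(certify(x, eps=0.01), certify(x, eps=0.5))   # small eps certifies, large does not
```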
|
28 |
Symboleo: Specification and Verification of Legal Contracts. Parvizimosaed, Alireza. 21 October 2022 (has links)
Contracts are legally binding and enforceable agreements among two or more parties that govern social interactions. They have been used for millennia, including in commercial transactions, employment relationships and intellectual property generation. Each contract determines obligations and powers of contracting parties. The execution of a contract needs to be continuously monitored to ensure compliance with its terms and conditions. Smart contracts are software systems that monitor and control the execution of contracts to ensure compliance. But for such software systems to become possible, contracts need to be specified precisely to eliminate ambiguities, contradictions, and missing clauses. This thesis proposes a formal specification language for contracts named Symboleo. The ontology of Symboleo is founded on the legal concepts of obligation (a kind of duty) and power (a kind of right), complemented with the concepts of event and situation, which are suitable for conceptualizing monitoring tasks. The formal semantics of legal concepts is defined in terms of state machines that describe the lifetimes of contracts, obligations, and powers, as well as axioms that precisely describe state transitions. The language supports execution-time operations that enable subcontracting, assignment of rights, and substitution of performance to a third party during the execution of a contract. Symboleo has been applied to the formalization of contracts from three different domains as a preliminary evaluation of its expressiveness. Formal specifications can be algorithmically analyzed to ensure that they satisfy desired properties. Towards this end, the thesis presents two implemented analysis tools. One is a conformance checking tool that ensures that a specification is consistent with the expectations of contracting parties. Expectations are defined for this tool in terms of scenarios (sequences of events) and the expected final outcome (i.e., successful/unsuccessful execution). The other tool (SymboleoPC), which builds on top of an existing model checker (nuXmv), can prove or disprove desired properties of a contract, expressed in temporal logic. These tools have been used for assessing different business contracts. SymboleoPC is also assessed in terms of performance and scalability, with positive results. Symboleo, together with its associated tools, is envisioned as an enabler for the formal verification of contracts to address requirements-level issues at design time.
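To give a flavour of the state-machine semantics of obligations (a deliberately reduced, hypothetical sketch rather than Symboleo's full axiomatization), the Python fragment below drives a single obligation through a simplified lifecycle as contract events arrive during execution.

```python
# Hedged sketch of a reduced obligation lifecycle, loosely inspired by the
# state-machine semantics described above; the states and events are
# simplified and do not reproduce Symboleo's full axiomatization.
class Obligation:
    """A single obligation with a deliberately reduced lifecycle."""

    def __init__(self, debtor, creditor, deadline):
        self.debtor, self.creditor, self.deadline = debtor, creditor, deadline
        self.state = "inEffect"

    def on_event(self, event, time):
        """Advance the lifecycle on a contract event observed at `time`."""
        if self.state != "inEffect":
            return self.state                  # lifecycle already closed
        if event == "performed":
            self.state = "fulfilled"           # debtor discharged the duty
        elif time > self.deadline:
            self.state = "violated"            # deadline passed unmet
        return self.state

# A delivery obligation monitored as events arrive during execution.
delivery = Obligation(debtor="seller", creditor="buyer", deadline=10)
print(delivery.on_event("tick", time=5))        # inEffect
print(delivery.on_event("performed", time=8))   # fulfilled
```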
|
29 |
Arbitration Techniques for SoC Bus Interconnect with Optimized Verification Methodology. Sarpangala, Kishan. January 2013 (has links)
No description available.
|
30 |
Design Verification for Sequential Systems at Various Abstraction Levels. Zhang, Liang. 31 January 2005 (has links)
With the ever increasing complexity of digital systems, functional verification has become a daunting task for circuit designers. Functional verification alone often surpasses 70% of the total development cost, and the situation has been projected to continue to worsen. The most critical limitations of existing techniques are the capacity issue and the run-time issue. This dissertation addresses the functional verification problem using a unified approach, which utilizes different core algorithms at various abstraction levels. At the logic level, we focus on incorporating a set of novel ideas into existing formal verification approaches. First, we present a number of powerful optimizations to improve the performance and capacity of a typical SAT-based bounded model checking framework. Secondly, we present a novel method for performing dynamic abstraction within a framework for abstraction-refinement based model checking. Experiments on a wide range of industrial designs have shown that the proposed optimizations consistently provide one to two orders of magnitude of speedup and can be extremely useful in enhancing the efficacy of existing formal verification algorithms. At the register transfer level, where formal verification is less likely to succeed, we developed an efficient ATPG-based validation framework, which leverages high-level circuit information and an improved observability-enhanced coverage to generate high-quality validation sequences. Experiments show that our approach is able to generate high-quality validation vectors, which achieve both high tag coverage and high bug coverage with extremely low computational cost. / Ph. D.
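As a minimal illustration of SAT-based bounded model checking (not the dissertation's framework), the hedged sketch below unrolls the transition relation of a toy two-bit counter for k steps and asks a solver for a property violation; it assumes the z3-solver package, and the design and property are invented for the example.

```python
# Hedged sketch of bounded model checking: unroll a toy 2-bit counter's
# transition relation K steps and ask an SMT solver whether a "bad"
# state (counter == 3 while enable was never asserted) is reachable.
# Assumes the z3-solver package; the design is purely illustrative.
from z3 import And, BitVec, BitVecVal, Bool, If, Not, Solver, sat

K = 8
counter = [BitVec(f"c_{i}", 2) for i in range(K + 1)]   # 2-bit state per step
enable = [Bool(f"en_{i}") for i in range(K)]            # free input per step

s = Solver()
s.add(counter[0] == BitVecVal(0, 2))                    # initial state
for i in range(K):                                      # unrolled transitions
    s.add(counter[i + 1] == If(enable[i], counter[i] + 1, counter[i]))

# Try to refute the property "the counter cannot reach 3 if enable is
# never asserted" by asking for a violating execution of length K.
s.add(counter[K] == BitVecVal(3, 2))
s.add(And([Not(e) for e in enable]))

result = "counterexample found" if s.check() == sat else "property holds up to bound"
print(result, K)
```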
|