361 |
Bezpečnost protokolů bezkontaktních čipových karet / Security of Contactless Smart Card Protocols Henzl, Martin January 2016 (has links)
This thesis analyses threats to protocols that use contactless smart cards and presents a method for semi-automated discovery of vulnerabilities in such protocols by means of model checking. Designing and implementing secure applications is difficult even when secure hardware is used. A specification written at a high level of abstraction can lead to different implementations. It is important to use the smart card correctly; an inappropriate protocol implementation may introduce vulnerabilities even if the protocol itself is secure. The aim of this thesis is to provide a method that protocol developers can use to create a model of an arbitrary smart card, with a focus on contactless smart cards, to create a model of the protocol, and to use model checking to find attacks in this model. The attack can then be executed and, if it is not successful, the model is refined for another model-checking run. The AVANTSSAR platform was used for the formal verification; the models are written in the ASLan++ language. Examples are provided to demonstrate the usability of the proposed method. The method was used to find a weakness in the Mifare DESFire contactless smart card. The thesis also discusses threats that the proposed method cannot cover, such as relay attacks.
|
362 |
Using Timed Model Checking for Verifying Workflows Gruhn, Volker, Laue, Ralf 31 January 2019 (has links)
The correctness of a workflow specification is critical for the automation of business processes. For this reason, errors in the specification should be detected and corrected as early as possible - at specification time. In this paper, we present a validation method for workflow specifications using model-checking techniques. A formalized workflow specification, its properties and the correctness requirements are translated into a timed state machine that can be analyzed with the Uppaal model checker. The main contribution of this paper is the use of timed model checking for verifying time-related properties of workflow specifications. Using a single tool (the model checker) for all of these properties is an advantage over combining different specialized algorithms for the different kinds of properties.
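The translation idea - workflow steps with durations checked against time-bounded properties - can be sketched in a toy discrete-time form (hypothetical workflow and durations; Uppaal itself works over dense-time timed automata):

```python
from collections import deque

# Toy workflow as a timed transition system: each transition takes an
# integer duration; we check that 'done' is reachable within a deadline.
# This is a discrete-time sketch, not Uppaal's dense-time semantics.
TRANSITIONS = {
    "start":   [("review", 2)],
    "review":  [("approve", 3), ("rework", 1)],
    "rework":  [("review", 2)],
    "approve": [("done", 1)],
    "done":    [],
}

def reachable_within(init, goal, deadline):
    """BFS over (state, elapsed-time) pairs up to the deadline."""
    seen = set()
    queue = deque([(init, 0)])
    while queue:
        state, t = queue.popleft()
        if state == goal:
            return True
        for nxt, dur in TRANSITIONS[state]:
            if t + dur <= deadline and (nxt, t + dur) not in seen:
                seen.add((nxt, t + dur))
                queue.append((nxt, t + dur))
    return False

print(reachable_within("start", "done", 6))   # start->review->approve->done costs 6
print(reachable_within("start", "done", 5))
```

In Uppaal the corresponding query would be a time-bounded reachability property over clocks; the sketch only conveys why adding time turns plain reachability into a search over state-clock pairs.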
|
363 |
ZipperOTF: Automatic, Precise, and Simple Data Race Detection for Task Parallel Programs with Mutual Exclusion Powell, S. Jacob 31 July 2020 (links) (PDF)
Data races in parallel programs can be difficult to detect precisely, and doing so manually often proves unsuccessful. Task parallel programming models can help reduce defects introduced by the programmer by restricting concurrency to fork-join operations. Typical data race detection algorithms compute the happens-before relation either by tracking the order in which shared accesses happen via a vector clock counter, or by grouping events into sets that help classify which heap locations are accessed sequentially or in parallel. Access sets are simple and efficient to compute, and have been shown to have the potential to outperform vector clock approaches in certain use cases. However, they do not support arbitrary thread synchronization, are limited to fork-join or similar structures, and do not support mutual exclusion. Vector clock approaches do not scale well to many threads with many shared interactions, rendering them inefficient in many cases. This work combines the simplicity of access sets with the generality of vector clocks by grouping heap accesses into access sets and attaching the vector clock counter to those groupings. By combining the two approaches, access sets can be used more generally, supporting programs that contain mutual exclusion. Additionally, entire blocks can be ordered with each other rather than single accesses, producing a much more efficient data race detection algorithm. This novel algorithm, ZipperOTF, is compared to the Computation Graph algorithm (an access set algorithm) and to FastTrack (a vector clock algorithm), both empirically and in time and space complexity.
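For intuition, the vector-clock side of the comparison can be sketched as follows (a simplified happens-before check with full vector clocks; FastTrack's epoch optimisation, clock increments on events, and ZipperOTF's access-set grouping are all omitted, and the names are illustrative):

```python
# Minimal happens-before race check with full vector clocks.
# A write races with a previous write if the previous write's clock is
# not ordered before the current thread's clock.
def vc_join(a, b):
    return tuple(max(x, y) for x, y in zip(a, b))

def vc_leq(a, b):
    return all(x <= y for x, y in zip(a, b))

class RaceDetector:
    def __init__(self, nthreads):
        # each thread starts with its own component set
        self.clock = [tuple(1 if i == t else 0 for i in range(nthreads))
                      for t in range(nthreads)]
        self.last_write = {}   # var -> (thread, vector clock at write)
        self.races = []

    def write(self, t, var):
        prev = self.last_write.get(var)
        if prev and not vc_leq(prev[1], self.clock[t]):
            self.races.append((var, prev[0], t))
        self.last_write[var] = (t, self.clock[t])

    def sync(self, src, dst):
        """dst synchronises with src (e.g. lock hand-off): dst absorbs src's clock."""
        self.clock[dst] = vc_join(self.clock[dst], self.clock[src])

d = RaceDetector(2)
d.write(0, "x")   # thread 0 writes x
d.write(1, "x")   # unsynchronised write by thread 1 -> race
d.sync(1, 0)      # thread 0 synchronises with thread 1
d.write(0, "x")   # now ordered after thread 1's write -> no race
print(d.races)
```

The per-access vector comparison is exactly the cost that access sets avoid, which motivates attaching clocks to groups of accesses instead of individual ones.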
|
364 |
Formal Methods for Probabilistic Energy Models Daum, Marcus 11 April 2019 (links)
The energy consumption that arises from the utilisation of information processing systems contributes significantly to environmental pollution and accounts for a large share of operating costs. This entails that we need to find ways to reduce the energy consumption of such systems. When trying to save energy, it is important to ensure that the utility (e.g., user experience) of a system is not unnecessarily degraded, requiring a careful trade-off analysis between the consumed energy and the resulting utility. Research on energy efficiency has therefore become a very active and important topic that concerns many different scientific areas and is also of interest to industry.
The concept of quantiles is well known in mathematical statistics, but its benefits for the formal quantitative analysis of probabilistic systems have been noticed only recently. For instance, with the help of quantiles it is possible to reason about the minimal energy required to obtain a desired system behaviour in a satisfactory manner, e.g., to achieve a required user experience with sufficient probability. Quantiles also allow the determination of the maximal utility that can be achieved with a reasonable probability while staying within a given energy budget. Since these examples illustrate measures of interest in the analysis of energy-aware systems, it is clearly beneficial to extend formal analysis methods with the ability to compute quantiles.
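As an illustration, an energy quantile of this kind - the least budget b such that a goal is reached with probability at least p - can be computed on a toy Markov chain by iterating a cost-bounded reachability recursion (a hypothetical example, not one of the monograph's models or algorithms):

```python
# P_b(s): probability of reaching 'goal' from state s with accumulated
# cost at most b. The quantile is the least budget b with P_b(init) >= p.
# Tiny hypothetical Markov chain; transitions are (probability, cost, successor).
CHAIN = {
    "init":  [(0.5, 1, "goal"), (0.5, 2, "retry")],
    "retry": [(1.0, 1, "init")],
    "goal":  [],
}

def prob_within(state, budget, chain, goal="goal", memo=None):
    if memo is None:
        memo = {}
    if state == goal:
        return 1.0
    key = (state, budget)
    if key not in memo:
        # all costs are positive, so the budget strictly decreases and
        # the recursion terminates
        memo[key] = sum(prob * prob_within(nxt, budget - cost, chain, goal, memo)
                        for prob, cost, nxt in chain[state] if cost <= budget)
    return memo[key]

def quantile(chain, init, p, max_budget=100):
    """Least budget b with P_b(init) >= p (an energy quantile)."""
    for b in range(max_budget + 1):
        if prob_within(init, b, chain) >= p:
            return b
    return None

print(quantile(CHAIN, "init", 0.5))    # the lucky direct step: cost 1, prob 0.5
print(quantile(CHAIN, "init", 0.75))   # needs the retry loop: cost 4
```

The linear search over budgets mirrors the iterative computation schemes for reward-bounded reachability; the monograph's contribution lies in doing this efficiently over Markov decision processes.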
In this monograph, we will see how such quantiles can serve as an instrument for analysing the trade-off between energy and utility in the field of probabilistic model checking. To this end, we present algorithms for their computation over Markovian models. We further investigate different techniques to improve the computational performance of implementations of those algorithms. The main improvement exploits the specific characteristics of the linear programs that need to be solved for the computation of quantiles. The improved algorithms have been implemented and integrated into the well-known probabilistic model checker PRISM. The performance of this implementation is then demonstrated by means of different protocols, with an emphasis on the trade-off between the consumed energy and the resulting utility. Since the introduced methods are not restricted to energy-utility analysis, the proposed framework can be used to analyse the interplay of cost and resulting benefit in general.
1 Introduction
1.1 Related work
1.2 Contribution and outline
2 Preliminaries
3 Reward-bounded reachability properties and quantiles
3.1 Essentials
3.2 Dualities
3.3 Upper-reward bounded quantiles
3.3.1 Precomputation
3.3.2 Computation scheme
3.3.3 Qualitative quantiles
3.4 Lower-reward bounded quantiles
3.4.1 Precomputation
3.4.2 Computation scheme
3.5 Energy-utility quantiles
3.6 Quantiles under side conditions
3.6.1 Upper reward bounds
3.6.2 Lower reward bounds
3.6.2.1 Maximal reachability probabilities
3.6.2.2 Minimal reachability probabilities
3.7 Reachability quantiles and continuous time
3.7.1 Dualities
4 Expectation Quantiles
4.1 Computation scheme
4.2 Arbitrary models
4.2.1 Existential expectation quantiles
4.2.2 Universal expectation quantiles
5 Implementation
5.1 Computation optimisations
5.1.1 Back propagation
5.1.2 Reward window
5.1.3 Topological sorting of zero-reward sub-MDPs
5.1.4 Parallel computations
5.1.5 Multi-thresholds
5.1.6 Multi-state solution methods
5.1.7 Storage for integer sets
5.1.8 Elimination of zero-reward self-loops
5.2 Integration in Prism
5.2.1 Computation of reward-bounded reachability probabilities
5.2.2 Computation of quantiles in CTMCs
6 Analysed Protocols
6.1 Prism Benchmark Suite
6.1.1 Self-Stabilising Protocol
6.1.2 Leader-Election Protocol
6.1.3 Randomised Consensus Shared Coin Protocol
6.2 Energy-Aware Protocols
6.2.1 Energy-Aware Job-Scheduling Protocol
6.2.1.1 Energy-Aware Job-Scheduling Protocol with side conditions
6.2.1.2 Energy-Aware Job-Scheduling Protocol and expectation quantiles
6.2.1.3 Multiple shared resources
6.2.2 Energy-Aware Bonding Network Device (eBond)
6.2.3 HAECubie Demonstrator
6.2.3.1 Operational behaviour of the protocol
6.2.3.2 Formal analysis
7 Conclusion
7.1 Classification
7.2 Future prospects
Bibliography
List of Figures
List of Tables
|
365 |
January: Search Based On Social Insect Behavior Lamborn, Peter C. 15 April 2005 (links) (PDF)
January is a group of interacting stateless model checkers. Each agent runs on a processor located on a supercomputer or a network of workstations (NOW). The agent's search pattern is a semi-random walk based on the behavior of the grey field slug (Agriolimax reticulatus), the house fly (Musca domestica), and the black ant (Lassius niger). The agents communicate to lessen the amount of duplicate work being done. Every algorithm has a memory threshold above which it searches efficiently; this threshold varies not only by model but also by algorithm. January's threshold is lower than the thresholds of the other algorithms we compared it to.
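The overall flavour - communicating agents doing semi-random walks over a state space - can be sketched as follows (toy state space and step rule; the actual insect-derived walk patterns and the distributed implementation are specific to January):

```python
import random

# Toy state space: states are integers mod 97, a transition perturbs the
# state. Agents do semi-random walks and share visited states to avoid
# duplicate work, loosely mimicking January's communicating agents.
def successors(s):
    return [(s * 2) % 97, (s + 13) % 97]

def random_walk_search(goal, n_agents=4, steps=200, seed=0):
    rng = random.Random(seed)
    shared_visited = set()           # communicated between agents
    agents = [1] * n_agents
    for _ in range(steps):
        for i in range(n_agents):
            if agents[i] == goal:
                return True
            succ = successors(agents[i])
            fresh = [s for s in succ if s not in shared_visited]
            # prefer unvisited successors, otherwise step randomly:
            # the "semi-random" part of the walk
            agents[i] = rng.choice(fresh if fresh else succ)
            shared_visited.add(agents[i])
    return False

print(random_walk_search(goal=42))
```

Because the agents are stateless apart from the shared visited set, memory use stays low, which is the property behind January's lower memory threshold.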
|
366 |
Using Live Sequence Chart Specifications for Formal Verification Kumar, Rahul 11 July 2008 (links) (PDF)
Formal methods play an important part in the development as well as the testing stages of software and hardware systems. A significant and often overlooked part of the process is the development of specifications and correctness requirements for the system under test. Traditionally, English has been used as the specification language, which has resulted in verbose, difficult-to-use specification documents that are usually abandoned during product development. This research investigates the use of Live Sequence Charts (LSCs), a graphical and intuitive language directly suited to expressing the communication behaviors of a system, as the specification language for a system under test. The research presents two methods for using LSCs as a specification language: first, by translating LSCs to temporal logic, and second, by translating LSCs to an automaton structure directly suited to the formal verification of systems. For each method, the research presents the translation and then identifies its pros and cons.
|
367 |
Formal Verification of Hardware Peripheral with Security Property / Formell verifikation av extern hårdvara med säkerhetskrav Yao Håkansson, Jonathan, Rosencrantz, Niklas January 2017 (links)
One problem with computers is that the operating system automatically trusts any externally connected peripheral. This can result in abuse, since a trusted peripheral is technically able to violate the security model. Security is therefore an important issue to examine. The aim of our project is to determine in which cases hardware peripherals can be trusted. We built a model of the universal asynchronous receiver/transmitter (UART), a model of the main memory (RAM) and a model of a DMA controller, and analysed the interaction between hardware peripherals, user processes and the main memory. One of our results is that connections with hardware peripherals are secure if the hardware is properly configured. A threat scenario could be an eavesdropper or a man-in-the-middle trying to steal data or change a cryptographic key. We consider the use cases of DMA and of protecting a cryptographic key, and we prove the well-behaviour of the algorithm. Some error traces resulted from incorrect modelling and were resolved by adjusting the models. Benchmarks were done for different memory sizes. The result is that a peripheral can be trusted provided it is properly configured. Our models consist of finite state machines and their corresponding SMV modules; they represent computer hardware with DMA. We verified the SMV models using the model checkers NuSMV and nuXmv. / The goal of our project is to verify various specifications of external devices connected to the computer. We perform formal verification of such hardware and of virtual memory, using temporal logic (LTL). Specifically, we verify 4 use cases and 9 formulas covering serial data communication, DMA and virtual memory. The conclusion is that connecting external hardware is secure if it is properly configured. We compare different memory sizes and measured the time required to verify different systems. We see that verification time grows faster than linearly, and that even relatively small systems take a relatively long time to verify.
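The kind of invariant these models encode can be illustrated with a toy explicit-state check (hypothetical and far simpler than the thesis's SMV modules): a DMA engine must never write outside its configured window.

```python
from itertools import product

# Toy model: a DMA engine copies into RAM starting at 'base' for 'length'
# words. Security invariant: no write outside the window [base, base+length).
# Exhaustive explicit-state check over all accepted configurations, standing
# in for what NuSMV/nuXmv would do symbolically via an INVARSPEC.
RAM_SIZE = 8

def dma_writes(base, length, bug=False):
    """Addresses written by the transfer. With bug=True the engine writes
    one word past the window (an off-by-one misconfiguration)."""
    end = base + length + (1 if bug else 0)
    return list(range(base, end))

def invariant_holds(bug=False):
    for base, length in product(range(RAM_SIZE), repeat=2):
        if base + length > RAM_SIZE:
            continue   # configurations rejected by proper setup code
        for addr in dma_writes(base, length, bug):
            if not (base <= addr < base + length):
                return False   # counterexample: write outside the window
    return True

print(invariant_holds(bug=False))
print(invariant_holds(bug=True))
```

The check only passes when configuration constraints are enforced, mirroring the thesis's conclusion that a peripheral can be trusted provided it is properly configured.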
|
368 |
Facing infinity in model checking expressive specification languages Magnago, Enrico 18 November 2022 (links)
Society relies on increasingly complex software and hardware systems, hence techniques capable of proving that they behave as expected are of great and growing interest. Formal verification procedures employ mathematically sound reasoning to address this need.
This thesis proposes novel techniques for the verification and falsification of expressive specifications on timed and infinite-state systems. An expressive specification language allows the description of the intended behaviour of a system via compact formal statements written at an abstraction level that eases the review process. Falsifying a specification corresponds to identifying an execution of the system that violates the property (i.e. a witness). The capability of identifying witnesses is a key feature in the iterative refinement of the design of a system, since it provides a description of how a certain error can occur. The designer can analyse the witness and take correcting actions by refining either the description of the system or its specification.
The contribution of this thesis is twofold. First, we propose a semantics for Metric Temporal Logic that considers four different models of time (discrete, dense, super-discrete and super-dense). We reduce its verification problem to finding an infinite fair execution (witness) for an infinite-state system with discrete time. Second, we define a novel SMT-based algorithm to identify such witnesses. The algorithm employs a general representation of such executions that is both informative to the designer and provides sufficient structure to automate the search of a witness.
We apply the proposed techniques to benchmarks taken from software, infinite-state, timed and hybrid systems. The experimental results highlight that the proposed approaches compete with, and often outperform, application-tailored techniques that represent the current state of the art.
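In the finite-state case, the witness the thesis searches for - an infinite fair execution - collapses to a lasso: a path to a cycle that contains a fair state. A brute-force sketch on a toy graph (the thesis's SMT-based algorithm handles infinite-state systems symbolically, which this does not attempt):

```python
# An infinite fair execution of a finite system is a lasso: a stem leading
# to a loop that visits a fair state. Brute-force DFS over simple paths.
def find_fair_lasso(graph, init, fair):
    """Return (stem, loop) with a fair state on the loop, or None."""
    def dfs(state, path):
        if state in path:            # closed a cycle
            i = path.index(state)
            loop = path[i:]
            if any(s in fair for s in loop):
                return path[:i], loop
            return None
        for nxt in graph[state]:
            found = dfs(nxt, path + [state])
            if found:
                return found
        return None
    return dfs(init, [])

GRAPH = {0: [1], 1: [2], 2: [1, 3], 3: [3]}
print(find_fair_lasso(GRAPH, 0, fair={2}))   # loop 1->2->1 visits fair state 2
print(find_fair_lasso(GRAPH, 0, fair={0}))   # 0 lies on no cycle
```

Such a (stem, loop) pair is exactly the informative witness shape described above: the designer can read off how the violation starts and which behaviour repeats forever.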
|
369 |
Implementation and evaluation of bounded invariant model checking for a subset of Stateflow / Implementering samt utvärdering av invariant-baserad begränsad modellprovning för en delmängd av Stateflow Ung, Gustav January 2021 (links)
Stateflow models are used for describing logic and implementing state machines in modern safety-critical software. However, the complete Stateflow modelling language is hard to formally define, so a subset relevant for industrial models was developed in previous work. Proving that the execution of Stateflow models satisfies certain safety properties is intractable in general. However, bounded model checking (BMC) can be used either to prove that safety properties are satisfied up to a bounded execution depth, commonly referred to as the reachability diameter, or to find a concrete counterexample. One safety property of particular interest is an invariant property. This thesis project contributes the following. A bounded model checking tool based on symbolic execution, the Stateflow Model Verification Tool (SMVT), has been developed and tested on synthetic models and industrial models. The performance of SMVT has been measured, but not compared against the Simulink Design Verifier (SLDV) due to licensing issues. The study has shown that many industrial models share a similar model structure, and that SMVT performs well on several models. / Stateflow models are used to describe logic and implement state machines in modern safety-critical software. The complete Stateflow language is very complex, so researchers have previously defined a restricted version of the language relevant for industrial models. Proving that the execution of Stateflow models satisfies safety properties is intractable in general. Bounded model checking can be used either to prove that safety properties hold up to a bounded execution depth, or to find a counterexample. A very important safety property is called an invariant. This thesis contributes the following: a bounded model checker based on symbolic execution, called SMVT, has been developed and tested on synthetic as well as industrial models. Its performance has been measured, but due to the Simulink Design Verifier (SLDV) licence no comparison could be made. The study has shown that many industrial models share the same model structure, and the developed tool SMVT has been shown to perform well on most of the models.
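The core BMC loop can be sketched in explicit-state form (toy transition system; SMVT performs the unrolling symbolically over the Stateflow subset):

```python
# Bounded invariant checking, explicit-state sketch: unroll the transition
# relation up to depth k; either every state reachable within k steps
# satisfies the invariant, or we return a concrete counterexample trace.
def bmc_invariant(init, step, invariant, k):
    """Returns (True, None) or (False, trace_to_violation)."""
    frontier = [[s] for s in init]
    seen = set(init)
    for _ in range(k + 1):
        next_frontier = []
        for trace in frontier:
            state = trace[-1]
            if not invariant(state):
                return False, trace          # counterexample found
            for nxt in step(state):
                if nxt not in seen:
                    seen.add(nxt)
                    next_frontier.append(trace + [nxt])
        frontier = next_frontier
    return True, None

# Toy counter that saturates at 5; the invariant "counter <= 4" only
# becomes falsifiable once the unrolling depth reaches 5.
step = lambda s: [min(s + 1, 5)]
print(bmc_invariant([0], step, lambda s: s <= 4, k=3))
print(bmc_invariant([0], step, lambda s: s <= 4, k=5))
```

The example shows why the bound matters: at depth 3 the property appears to hold, while unrolling to depth 5 (here, past the reachability diameter of the violating state) produces the counterexample trace.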
|
370 |
Analytical Exploration and Quantification of Nanowire-based Reconfigurable Digital Circuits Raitza, Michael 22 December 2022 (links)
Integrated circuit development is an industry-driven, high-risk, high-stakes environment. The time from the concept of a new transistor technology to a market-ready product is measured in decades rather than months or years. This increases the risk for any company embarking on the journey of driving a new concept. Not only does the return on investment lie far in the future, it can be expected at all only in high-volume production, which increases the upfront investment. What makes the undertaking worthwhile are the exceptional gains to be expected when the production reaches the market and enables better products. For these reasons, the adoption of new transistor technologies is usually based on small increments with a foreseeable impact on the production process. Emerging semiconductor device development must be able to prove its value to its customers, the chip-producing industry, the earlier the better. With this thesis, I provide a new approach for the early evaluation of emerging reconfigurable transistors in reconfigurable digital circuits. Reconfigurable transistors are a type of MOSFET with controllable conduction polarity, i.e., they can be configured by other input signals to work as PMOS or NMOS devices.
Early device and circuit characterisation poses challenges that are currently largely neglected by the development community. Firstly, to drive transistor development in the right direction, early feedback is necessary, which requires a method that can provide quantitative and qualitative results over a variety of circuit designs and runs mostly automatically. It should also require as little expert knowledge as possible, to enable early experimentation on the device and new circuit designs together. Secondly, to actually run early, its device model should need as little data as possible to provide meaningful results. The approach proposed in this thesis tackles both challenges and employs model checking, a formal method, to provide a framework for automated quantitative and qualitative analysis. It pairs a simple transistor device model with a charge transport model of the electrical network.
In this thesis, I establish the notion of transistor-level reconfiguration and show the kinds of reconfigurable standard cell designs the device facilitates. Early investigation resulted in the discovery of certain modes of reconfiguration that the transistor features and their application to design reconfigurable standard cells. Experiments with device parameters and the design of improved combinational circuits that integrate new reconfigurable standard cells further highlight the need for a thorough investigation and quantification of the new devices and newly available standard cells. As their performance improvements are inconclusive when compared to established CMOS technology, a design space exploration of the possible reconfigurable standard cell variants and a context-aware quantitative analysis turns out to be required.
I show that a charge transport model of the analogue transistor circuit provides the necessary abstraction, precision and compatibility with an automated analysis. Formalised in a DSL, it enables designers to freely characterise and combine parametrised transistor models, circuit descriptions that are device independent, and re-usable experiment setups that enable the analysis of large families of circuit variants. The language is paired with a design space exploration algorithm that explores all implementation variants of a Boolean function that employs various degrees and modes of reconfiguration. The precision of the device models and circuit performance calculations is validated against state-of-the-art FEM and SPICE simulations of production transistors.
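The exploration loop - enumerate implementation variants of a Boolean function and rank them by a quantitative measure - can be sketched as follows (toy two-gate library with made-up delay numbers; the thesis's DSL operates on parametrised transistor models and a charge transport network, none of which appears here):

```python
from itertools import product

# Toy design-space exploration: enumerate all two-level implementations
# g(h1(a,b), h2(a,b)) of a target Boolean function over a small gate
# library and rank them by a crude critical-path delay measure.
GATES = {
    "nand": (lambda a, b: 1 - (a & b), 1.0),   # (function, delay)
    "nor":  (lambda a, b: 1 - (a | b), 1.2),
}

def explore(target_tt):
    """Variants matching the target truth table, sorted by delay."""
    hits = []
    for (gn, (g, gd)), (h1n, (h1, d1)), (h2n, (h2, d2)) in product(
            GATES.items(), GATES.items(), GATES.items()):
        tt = tuple(g(h1(a, b), h2(a, b)) for a, b in product((0, 1), repeat=2))
        if tt == target_tt:
            # critical path: slower inner gate plus the outer gate
            hits.append((gd + max(d1, d2), f"{gn}({h1n},{h2n})"))
    return sorted(hits)

AND_TT = (0, 0, 0, 1)           # truth table of a AND b over (0,0)..(1,1)
for delay, name in explore(AND_TT):
    print(delay, name)
```

Even this toy library yields several functionally equivalent variants with different delays, which is the situation that makes a context-aware quantitative ranking, rather than a single measure, necessary.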
Lastly, I show that the exploration and analysis can be done efficiently using two important Boolean functions. The analysis ranges from worst-case measures, like delay, power dissipation and energy consumption, to the detection and quantification of output hazards and the verification of the functionality of a circuit implementation. It ends by presenting average performance results that depend on the statistical characterisation of application scenarios. This makes the approach particularly interesting for measures like energy consumption, where average results are more relevant, and for asynchronous circuit designs, which depend heavily on average delay performance. I perform the quantitative analysis under various input and output load conditions in over 900 fully automated experiments. It shows that the complexity of the results warrants an extension to electronic design automation flows to fully exploit the capabilities of reconfigurable standard cells. The high degree of automation enables a researcher to start from as little as a Boolean function of interest, a transistor model and a set of experiment conditions and queries to perform a wide range of quantitative analyses and acquire early results.
1 Introduction
1.1 Emerging Reconfigurable Transistor Technology
1.2 Testing and Standard Cell Characterisation
1.3 Research Questions
1.4 Design Space Exploration and Quantitative Analysis
1.5 Contribution
2 Fundamental Reconfigurable Circuits
2.1 Reconfiguration Redefined
2.1.1 Common Understanding of Reconfiguration
2.1.2 Reconfiguration is Computation
2.2 Reconfigurable Transistor
2.2.1 Device geometry
2.2.2 Electrical properties
2.3 Fundamental Circuits
3 Combinational Circuits and Higher-Order Functions
3.1 Programmable Logic Cells
3.1.1 Critical Path Delay Estimation using Logical Effort Method
3.1.2 Multi-Functional Circuits
3.2 Improved Conditional Carry Adder
4 Constructive DSE for Standard Cells Using MC
4.1 Principle Operation of Model Checking
4.1.1 Model Types
4.1.2 Query Types
4.2 Overview and Workflow
4.2.1 Experiment setup
4.2.2 Quantitative Analysis and Results
4.3 Transistor Circuit Model
4.3.1 Direct Logic Network Model
4.3.2 Charge Transport Network Model
4.3.3 Transistor Model
4.3.4 Queries for Quantitative Analysis
4.4 Circuit Variant Generation
4.4.1 Function Expansion
5 Quantitative Analysis of Standard Cells
5.1 Analysis of 3-Input Minority Logic Gate
5.1.1 Circuit Variants
5.1.2 Worst-Case Analysis
5.2 Analysis of 3-Input Exclusive OR Gate
5.2.1 Worst-Case Analysis
5.2.2 Functional Verification
5.2.3 Probabilistic Analysis
6 Conclusion and Future Work
6.1 Future Work
A Notational conventions
B prism-gen Programming Interfaces
Bibliography
Terms & Abbreviations
|