1 |
Explicit or Symbolic Translation of Linear Temporal Logic to Automata. Rozier, Kristin Yvonne. 24 July 2013.
Formal verification techniques are growing increasingly vital for the development of safety-critical software and hardware in practice. Techniques such as requirements-based design and model checking for system verification have been successfully used to verify systems for air traffic control, airplane separation assurance, autopilots, CPU logic designs, life-support, medical equipment, and other functions that ensure human safety.
Formal behavioral specifications written early in the system-design process and communicated across all design phases increase the efficiency, consistency, and quality of the system under development. We argue that to prevent introducing design or verification errors, it is crucial to test specifications for satisfiability. We advocate the adoption of a new sanity check via satisfiability checking for property assurance. Our focus here is on specifications expressed in Linear Temporal Logic (LTL). We demonstrate that LTL satisfiability checking reduces to model checking, and we argue that satisfiability checks of the specification, its complement, and the conjunction of all properties should be performed as a first step in LTL model checking.
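The satisfiability check itself can be pictured with a toy example: an LTL formula is satisfiable iff it holds on some ultimately periodic ("lasso") word, so a brute-force search over bounded lassos can find witnesses for small formulas. The sketch below (plain Python, X/F/G fragment only, all names hypothetical) is purely illustrative and unrelated to the tool chain evaluated in the thesis; exhausting the bound without a witness is inconclusive.

```python
from itertools import product

# LTL formulas as nested tuples over the X/F/G fragment:
# ('ap', 'p'), ('not', f), ('and', f, g), ('or', f, g), ('X', f), ('F', f), ('G', f)

def holds(f, word, loop, i):
    """Evaluate f at position i of the lasso word, with word[loop:]
    repeating forever.  Each position is a set of atomic propositions."""
    L = len(word)
    reach = set(range(i, L)) | set(range(loop, L))  # positions visitable from i
    op = f[0]
    if op == 'ap':
        return f[1] in word[i]
    if op == 'not':
        return not holds(f[1], word, loop, i)
    if op == 'and':
        return holds(f[1], word, loop, i) and holds(f[2], word, loop, i)
    if op == 'or':
        return holds(f[1], word, loop, i) or holds(f[2], word, loop, i)
    if op == 'X':
        return holds(f[1], word, loop, i + 1 if i + 1 < L else loop)
    if op == 'F':  # holds somewhere reachable from i
        return any(holds(f[1], word, loop, j) for j in reach)
    if op == 'G':  # holds everywhere reachable from i
        return all(holds(f[1], word, loop, j) for j in reach)
    raise ValueError(op)

def bounded_sat(f, aps, max_len=3):
    """Search for an ultimately periodic witness with |word| <= max_len.
    A witness proves satisfiability; finding none is inconclusive."""
    letters = [frozenset(a for a, bit in zip(aps, bits) if bit)
               for bits in product([False, True], repeat=len(aps))]
    for n in range(1, max_len + 1):
        for word in product(letters, repeat=n):
            for loop in range(n):
                if holds(f, word, loop, 0):
                    return word, loop
    return None
```

Because the search is only complete up to the bound, a real satisfiability checker instead reduces the question to model checking against a universal model, which is the approach the thesis evaluates.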
We report on an experimental investigation of LTL satisfiability checking. We introduce a large set of rigorous benchmarks to enable objective evaluation of LTL-to-automaton algorithms in terms of scalability, performance, correctness, and size of the automata produced. For explicit model checking, we use the Spin model checker; we tested all LTL-to-explicit automaton translation tools that were publicly available when we conducted our study. For symbolic model checking, we use CadenceSMV, NuSMV, and SAL-SMC for both LTL-to-symbolic automaton translation and to perform the satisfiability check. Our experiments result in two major findings. First, scalability, correctness, and other debilitating performance issues afflict most LTL translation tools. Second, for LTL satisfiability checking, the symbolic approach is clearly superior to the explicit approach.
Ironically, the explicit approach to LTL-to-automata had been heavily studied while only one algorithm existed for LTL-to-symbolic automata. Since 1994, there had been essentially no new progress in encoding symbolic automata for BDD-based analysis. Therefore, we introduce a set of 30 symbolic automata encodings. The set consists of novel combinations of existing constructs, such as different LTL formula normal forms, with a novel transition-labeled symbolic automaton form, a new way to encode transitions, and new BDD variable orders based on algorithms for tree decomposition of graphs. An extensive set of experiments demonstrates that these encodings translate to significant, sometimes exponential, improvement over the current standard encoding for symbolic LTL satisfiability checking.
Building upon these ideas, we return to the explicit-automata domain and focus on the most common type of specification used in industrial practice: safety properties. We show that we can exploit the inherent determinism of safety properties to create a set of 26 explicit automata encodings comprising novel aspects including: state numbers versus state labels versus a state look-up table, finite versus infinite acceptance conditions, forward-looking versus backward-looking transition encodings, assignment-based versus BDD-based alphabet representation, state and transition minimization, edge abbreviation, trap-state elimination, and determinization either on-the-fly or up-front using the subset construction. We conduct an extensive experimental evaluation and identify an encoding that offers the best explicit LTL model checking time and is consistently faster than the previous best explicit automaton encoding algorithm.
|
2 |
Parallel Run-Time Verification. Berkovich, Shay. January 2013.
Run-time verification is a technique for reasoning about program correctness. Given a set of desirable properties and, as input, a program trace from the inspected program, the monitor module verifies that the properties hold on this trace. As this process takes place at run time, one of the major drawbacks of run-time verification is the execution overhead caused by the monitoring activity. In this thesis, we aim to minimize this overhead by presenting a collection of parallel verification algorithms. The algorithms verify the correctness of properties in a parallel fashion, decreasing the verification time by dispersing computationally intensive calculations over multiple cores (the first level of parallelism). We designed the algorithms to exploit data-level parallelism, which makes them specifically suitable for Graphics Processing Units (GPUs), although they can be utilized on multi-core platforms as well. Running the inspected program and the monitor module on separate platforms (the second level of parallelism) yields several advantages: minimal interference between the monitor and the program, faster processing of non-trivial computations, and even a significant reduction in power consumption (when the monitor runs on a GPU).
This work also aims to provide a solution to automated run-time verification of C programs by implementing the aforementioned set of algorithms in the monitoring tool called GPU-based online and offline Monitoring Framework (GooMF). The ultimate goal of GooMF is to supply developers with an easy-to-use and flexible verification API that requires minimal knowledge of formal languages and techniques.
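The data-parallel flavour of such monitors can be sketched without a GPU: a suffix-OR scan computed in logarithmic rounds of independent updates mirrors how a GPU kernel would evaluate "eventually" obligations over a whole trace at once. This is a minimal illustrative sketch, not GooMF's API; all names are hypothetical.

```python
def suffix_or(bits):
    """Suffix OR via a Hillis-Steele scan: in round d each position combines
    with the position d to its right.  All updates within a round are
    independent, which is exactly the shape a GPU kernel launch exploits."""
    out = list(bits)
    d = 1
    while d < len(out):
        out = [out[i] or (out[i + d] if i + d < len(out) else False)
               for i in range(len(out))]
        d *= 2
    return out

def monitor_response(req, ack):
    """Offline monitor for 'every req is eventually followed by ack' on a
    finite boolean trace; returns the indices of violating positions."""
    eventually_ack = suffix_or(ack)
    return [i for i, r in enumerate(req) if r and not eventually_ack[i]]
```

On a trace of length n this takes O(log n) rounds of O(n) independent updates, so verification time shrinks as cores are added, which is the first level of parallelism described above.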
|
3 |
Sveplaserns användning till inventering/befästning/kontroll av vägmarkering (Use of the swiveling laser for inventory, verification, and monitoring of road markings). Ljungberg, Tony; Musoke, Ashraf. January 2012.
The mission of this thesis is to investigate whether the swiveling laser that Vectura (the former Vägverket Konsult) uses to measure road areas and clearance heights can also be used to detect differences in the reflective function of road markings. The aim is to broaden the use of the swiveling laser and eliminate the manual measurements performed today; currently, the reflective function of road markings is measured with both mobile and handheld instruments. The purpose of the thesis is to develop better methods for determining the reflective function of road markings, in order to facilitate the description of, and suggestions for improving, the maintenance of that function. The thesis starts from three questions: (1) Can the swiveling laser detect differences in the reflective function of longitudinal and transverse road markings? (2) What measurement accuracy does the swiveling laser achieve compared with the reference instrument (LTL-2000)? (3) What conversion factor relates swiveling-laser data to the reference-instrument data? To answer questions 2 and 3, field work was carried out in Värnamo, where a road stretch with longitudinal road markings and a zebra crossing (transverse road markings) was surveyed with both the swiveling laser and the LTL-2000. The formulas and calculations in the thesis are drawn from scientific reports produced by VTI in Linköping, and the same formulas used for the optical measurement data were also applied to the swiveling-laser data. The laser measurements were processed in Matlab to compute a gray-tone image of the road markings, and this image was then analyzed in the image-processing software ImageJ, where every shade of gray represents a value of retroreflection. The results and analysis show that the swiveling laser can detect differences in the reflective function of longitudinal and transverse road markings, but its accuracy is poor, and a conversion factor is needed to map swiveling-laser data to optical data.
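The final calibration step can be sketched in miniature, assuming a purely proportional relation between gray tone and retroreflection. The function names and the one-parameter least-squares model below are illustrative assumptions, not the thesis' actual formulas, which come from the VTI reports.

```python
def fit_conversion_factor(gray, reference):
    """One-parameter least-squares fit of paired LTL-2000 reference readings
    against swiveling-laser gray tones: factor = sum(g*r) / sum(g*g).
    Purely illustrative calibration, not the relation derived in the thesis."""
    return sum(g * r for g, r in zip(gray, reference)) / sum(g * g for g in gray)

def gray_to_retroreflection(gray_values, factor, offset=0.0):
    """Map gray-tone values to estimated retroreflection (mcd/(m^2*lx)),
    assuming a linear relation; factor and offset are hypothetical."""
    return [factor * g + offset for g in gray_values]
```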
|
5 |
Probabilistic Logic, Probabilistic Regular Expressions, and Constraint Temporal Logic. Weidner, Thomas. 29 August 2016.
The classic theorems of Büchi and Kleene state the expressive equivalence of finite automata to monadic second order logic and regular expressions, respectively. These fundamental results enjoy applications in nearly every field of theoretical computer science. Around the same time as Büchi and Kleene, Rabin investigated probabilistic finite automata. This equally well established model has applications ranging from natural language processing to probabilistic model checking.
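A probabilistic finite automaton in Rabin's sense assigns each word an acceptance probability by propagating a state distribution through one row-stochastic matrix per letter. A minimal sketch (hypothetical names, list-of-lists matrices):

```python
def acceptance_probability(word, initial, transitions, final):
    """Acceptance probability of a finite word under a probabilistic finite
    automaton: push the state distribution through the row-stochastic matrix
    transitions[a] for each letter a, then sum the mass on accepting states."""
    dist = list(initial)
    n = len(dist)
    for a in word:
        M = transitions[a]
        dist = [sum(dist[i] * M[i][j] for i in range(n)) for j in range(n)]
    return sum(p for q, p in enumerate(dist) if q in final)
```

For example, a two-state automaton that moves from state 0 to the accepting state 1 with probability 0.5 on each 'a' accepts 'a' with probability 0.5 and 'aa' with probability 0.75.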
Here, we extend Büchi's and Kleene's theorems to the probabilistic setting. We obtain a probabilistic MSO logic by adding an expected second-order quantifier. In the scope of this quantifier, membership is determined by a Bernoulli process. This approach turns out to be universal and is applicable to finite and infinite words as well as to finite trees. In order to prove the expressive equivalence of this probabilistic MSO logic to probabilistic automata, we show a Nivat theorem, which decomposes a recognisable function into a regular language, homomorphisms, and a probability measure.
For regular expressions, we build upon existing work to obtain probabilistic regular expressions on finite and infinite words. We show the expressive equivalence between these expressions and probabilistic Muller-automata. To handle Muller-acceptance conditions, we give a new construction from probabilistic regular expressions to Muller-automata. Concerning finite trees, we define probabilistic regular tree expressions using a new iteration operator, called infinity-iteration. Again, we show that these expressions are expressively equivalent to probabilistic tree automata.
On a second track of our research we investigate Constraint LTL over multidimensional data words with data values from the infinite tree. Such LTL formulas are evaluated over infinite words, where every position possesses several data values from the infinite tree. Within Constraint LTL one can compare these values across different positions. We show that the model checking problem for this logic is PSPACE-complete by investigating the emptiness problem of Constraint Büchi automata.
|
6 |
Extended LTLvis Motion Planning Interface. January 2016.
Robots are becoming an important part of our lives and of industry. Although many robot control interfaces have been developed to simplify control and improve the user experience, users still cannot control robots comfortably, and as robot capabilities improve, so do the demands on the universality and ease of use of their control interfaces. This research introduces a graphical interface for Linear Temporal Logic (LTL) specifications for mobile robots. It is a sketch-based interface built on the Android platform, which makes LTL control more accessible to non-expert users. By predefining a set of areas of interest, the interface can quickly and efficiently create plans that satisfy extended plan goals in LTL. It also allows users to customize the paths for a plan by sketching a set of reference trajectories; given the custom paths, the LTL specification, and the environment, the interface generates a plan balancing the customized paths against the LTL specification. We also show experimental results with the implemented interface. (Masters thesis, Computer Science, 2016.)
|
7 |
Compositional Synthesis and Most General Controllers. Klein, Joachim. 18 December 2013.
Given a formal model of the behavior of a system, an objective, and some notion of control, the goal of controller synthesis is to construct a (finite-state) controller that ensures that the system always satisfies the objective. Often, the controller can base its decisions only on limited observations of the system. This notion of limited observability induces a partial-information game between the controller and the uncontrollable part of the system. A successful controller then realizes an observation-based strategy that enforces the objective.
In this thesis we consider the controller synthesis problem in the linear-time setting where the behavior of the system is given as a nondeterministic, labeled transition system A, and the controller can only partially observe and control the behavior of A. The goal of the thesis is to develop a compositional approach for constructing controllers, suitable for treating conjunctive cascades of linear-time objectives P_1, P_2, ..., P_k in an online manner. We iteratively construct a controller C_1 for system A enforcing P_1, then a controller C_2 enforcing P_2 for the parallel composition of the first controller with the system, and so on. It is crucial for this approach that each controller C_i enforces P_i in a most general manner, being as permissive as possible. Otherwise, behavior that is needed to enforce subsequent objectives could be prematurely removed.
Standard notions of strategies and controllers only allow the most general treatment for the limited class of safety objectives. We introduce a novel concept of most general strategies and controllers suited for the compositional treatment of objectives beyond safety. We demonstrate the existence of most general controllers for all enforceable, observation-based omega-regular objectives and provide algorithms for the construction of such most general controllers, with specialized variants for the subclass of safety and co-safety objectives.
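For the safety subclass, the most general controller has a particularly clean shape: compute the greatest fixpoint of "safe and controllable into the winning region", then permit every action whose successors all stay inside it. The sketch below is a generic full-observation safety-game solver, a simplification of the partial-information setting treated in the thesis; all names are hypothetical.

```python
def safety_winning_region(actions, succ, safe):
    """Greatest fixpoint of the states from which the controller can keep
    the system inside `safe` forever.  succ(q, a) returns the set of possible
    successors, capturing the environment's nondeterministic resolution."""
    W = set(safe)
    while True:
        W2 = {q for q in W if any(succ(q, a) <= W for a in actions)}
        if W2 == W:
            return W
        W = W2

def most_general_controller(actions, succ, W):
    """Maximally permissive choices: in each winning state, allow every
    action that cannot leave the winning region W."""
    return {q: {a for a in actions if succ(q, a) <= W} for q in W}
```

Permitting all safe actions, rather than picking one winning strategy, is what keeps behavior available for objectives enforced later in the cascade.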
We furthermore adapt and apply our general framework for the compositional synthesis of most general controllers to the setting of exogenous coordination in the context of the channel-based coordination language Reo and the constraint automata framework and report on our implementation in the verification toolset Vereofy.
The construction of most general controllers in Vereofy for omega-regular objectives relies on our tool ltl2dstar for generating deterministic omega-automata from Linear Temporal Logic (LTL) formulas. We introduce a generic improvement that exploits insensitiveness to stuttering during the determinization construction and evaluate its effectiveness in practice. We further investigate the practical performance of recently proposed variants of Safra's determinization construction.
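Stutter insensitivity means a property cannot distinguish a word from its destuttered version, so maximal blocks of identical letters can be collapsed before determinization. A one-function sketch of destuttering for finite prefixes (the infinite-word case needs extra care with a final letter repeated forever), purely illustrative:

```python
def destutter(word):
    """Collapse each maximal run of identical letters to a single letter.
    A stutter-insensitive property holds on `word` iff it holds on
    destutter(word), which the determinization improvement exploits."""
    out = []
    for letter in word:
        if not out or out[-1] != letter:
            out.append(letter)
    return out
```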
|
10 |
Temporal logic encodings for SAT-based bounded model checking. Sheridan, Daniel. January 2006.
Since its introduction in 1999, bounded model checking (BMC) has quickly become a serious and indispensable tool for the formal verification of hardware designs and, more recently, software. By leveraging propositional satisfiability (SAT) solvers, BMC overcomes some of the shortcomings of more conventional model checking methods. In model checking we automatically verify whether a state transition system (STS) describing a design has some property, commonly expressed in linear temporal logic (LTL). BMC is the restriction to checking only the looping and non-looping runs of the system that have bounded descriptions. The conventional BMC approach is to translate the STS runs and LTL formulae into propositional logic and then into conjunctive normal form (CNF); this CNF expression is then checked by a SAT solver. In this thesis we study the effect on the performance of BMC of changing the translation to propositional logic. One novelty is to use a normal form for LTL which originates in resolution theorem provers. We introduce the normal form conversion early in the encoding process and examine the simplifications that it brings to the generation of propositional logic. We further enhance the encoding by specialising the normal form to take advantage of the types of runs peculiar to BMC. We also improve the conversion from propositional logic to CNF. We investigate the behaviour of the new encodings through a series of detailed experimental comparisons using both hand-crafted and industrial benchmarks from a variety of sources. These reveal that the new normal-form-based encodings can reduce the solving time by half in most cases, and by up to an order of magnitude in some cases, with the size of the improvement corresponding to the complexity of the LTL expression. We also compare our method to the popular automata-based methods for model checking and BMC.
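The propositional-logic-to-CNF step usually relies on the Tseitin construction: introduce a fresh variable for each subformula and add clauses forcing it to agree with that subformula, yielding an equisatisfiable CNF of linear size. A generic sketch (not the thesis' optimised encoding; names hypothetical), together with a brute-force check of equisatisfiability on toy inputs:

```python
import itertools

def tseitin(formula):
    """Tseitin conversion to an equisatisfiable CNF.
    Formulas: ('var', name), ('not', f), ('and', f, g), ('or', f, g).
    Returns (clauses, top_literal); a negative int is a negated literal."""
    clauses, fresh, var_ids = [], itertools.count(1), {}

    def lit(f):
        op = f[0]
        if op == 'var':
            if f[1] not in var_ids:
                var_ids[f[1]] = next(fresh)
            return var_ids[f[1]]
        if op == 'not':
            return -lit(f[1])
        a, b = lit(f[1]), lit(f[2])
        x = next(fresh)
        if op == 'and':   # clauses encoding x <-> (a and b)
            clauses.extend([[-x, a], [-x, b], [x, -a, -b]])
        else:             # clauses encoding x <-> (a or b)
            clauses.extend([[-x, a, b], [x, -a], [x, -b]])
        return x

    top = lit(formula)
    clauses.append([top])  # assert the whole formula
    return clauses, top

def brute_sat(clauses):
    """Tiny truth-table SAT check, only for validating the encoding."""
    vs = sorted({abs(l) for c in clauses for l in c})
    for bits in itertools.product([False, True], repeat=len(vs)):
        val = dict(zip(vs, bits))
        if all(any(val[abs(l)] == (l > 0) for l in c) for c in clauses):
            return True
    return False
```

The quality of exactly this conversion, here done naively per subformula, is what the thesis improves: a contradiction like p AND NOT p yields an unsatisfiable CNF, while a tautology stays satisfiable.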
|