221 |
Higher-order model checking with traversalsNeatherway, Robin Philip January 2014 (has links)
Higher-order recursion schemes are a powerful model of functional computation that grew out of traditional recursive program schemes and generalisations of grammars. It is common to view recursion schemes as generators of possibly-infinite trees, which Ong showed to have a decidable monadic second-order theory, a result that opened the door to applications in verification. Kobayashi later presented an intersection type characterisation of the model checking problem, on which most subsequent applied work is based. In recent work, recursion schemes have been considered to play a role similar to Boolean programs in verification of first-order imperative programs: a natural target for abstraction of programs with very large or infinite data domains. In this thesis we focus on the development of model checking algorithms for variants of recursion schemes. We start our contributions with a model checking algorithm inspired by the fully abstract game semantics of recursion schemes, but specified as a goal-directed approach to intersection type inference, which offers a unification of the views of Ong and Kobayashi. We build on this largely theoretical contribution with two orthogonal extensions and practical implementations. First, we develop a new extension of recursion schemes: higher-order recursion schemes with cases, which add non-determinism and a case construct operating over a finite data domain. These additions provide us with a more natural and succinct target for abstraction from functional programs: encoding data using functions inevitably results in an increase in the order and arity of the scheme, which have a direct impact on the worst-case complexity of the problem. We characterise the model checking problem using a novel intersection and union type system and give a practical algorithm for type inference in this system. 
We have carried out an empirical evaluation of the implementation --- the tool TravMC --- using a variety of problem instances from the literature and a new suite of problem instances derived via an abstraction-refinement procedure from functional programs. Second, we extend our approach from safety properties to all properties expressible in monadic second-order logic, using alternating parity tree automata as our specification language. We again provide an implementation and an empirical evaluation, which shows that, despite the challenges accompanying liveness properties, our tool scales beyond the current state of the art.
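As a minimal illustration of the underlying idea (our own sketch, not code from the thesis), a recursion scheme can be viewed as a set of rewrite rules whose repeated expansion generates a possibly-infinite tree over ranked terminals; a model checker only ever explores finite approximants. The scheme below (S -> F c, F x -> a x (F (b x))) is a standard textbook example, assumed here for illustration.

```python
# A sketch of a higher-order recursion scheme as a tree generator.
# Terminals: 'a' (arity 2), 'b' (arity 1), 'c' (arity 0).
# Nonterminals S and F are rewritten lazily up to a depth budget.

def expand(term, depth):
    """Rewrite nonterminals up to `depth`; beyond that, emit '?' placeholders."""
    if depth == 0:
        return "?"
    head, args = term[0], term[1:]
    if head == "S":                      # rule: S -> F c
        return expand(("F", ("c",)), depth)
    if head == "F":                      # rule: F x -> a x (F (b x))
        x = args[0]
        return expand(("a", x, ("F", ("b", x))), depth)
    # terminal symbol: expand children with one less unit of budget
    return (head,) + tuple(expand(t, depth - 1) for t in args)

tree = expand(("S",), 3)
# tree is a finite prefix of the infinite tree a(c, a(b(c), a(b(b(c)), ...)))
```

The generated value tree is the structure whose monadic second-order (or intersection-type) properties the model checker decides.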
|
222 |
Game semantics based equivalence checking of higher-order programsHopkins, David G. B. January 2012 (has links)
This thesis examines the use of game semantics for the automatic equivalence checking of higher-order programs. Game semantics has proved to be a powerful method for constructing fully abstract models of logics and programming languages. Furthermore, the concrete nature of the semantics lends itself to algorithmic analysis. The game-semantic model can be used to identify fragments of languages which have a decidable observational equivalence problem. We investigate decidability results for different languages as well as the efficiency of these algorithms in practice. First we consider the call-by-value higher-order language with state, RML. This can be viewed as a canonical restriction of Standard ML to ground-type references. The O-strict fragment of RML is the largest set of type sequents for which, in the game-semantic denotation, justification pointers from O-moves are always uniquely reconstructible from the underlying move sequence. The O-strict fragment is surprisingly expressive, including higher-order types and difficult examples from the literature. By representing strategies as Visibly Pushdown Automata (VPA) we show that observational equivalence of O-strict terms is decidable (and in fact is ExpTime-complete). We then consider extensions of the O-strict fragment. Adding general recursion or using most non-O-strict types leads to undecidability. However, a limited form of recursion can be added while still preserving decidability (although the full power of DPDA is required). Next we examine languages with non-local control. This involves adding call/cc to our language and is known to correspond to dropping the game-semantic bracketing condition. In the call-by-name game-semantic model of Idealized Algol (IA), in which answers cannot justify questions, the visibility condition still implies a form of weak bracketing. By making bracketing violations explicit we show that we can still model the entire third-order fragment using VPA. 
We have also implemented tools based on these algorithms. Our model checkers Homer and Hector perform equivalence checking for third-order IA and O-strict RML respectively. Homer uses a naive explicit-state method, whereas Hector takes advantage of on-the-fly model checking. Our tools perform well on small yet challenging examples. On negative instances, the on-the-fly approach allows Hector to outperform Homer. To improve their performance, we also consider using ideas from symbolic execution. We propose a representation of finite automata using transitions labelled with formulas and guards, which aims to take advantage of the symmetries of the game-semantic model so that strategies can be represented compactly. We refer to this representation as Symbolically Executed Automata (SEA). Using SEA allows much larger data types to be handled, but is not as effective on larger examples with small data types.
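The thesis reduces observational equivalence to equivalence of automata representing strategies (visibly pushdown automata in the O-strict case). As a simplified sketch of that reduction, the following checks language equivalence of plain DFAs by a synchronized product search; the VPA case additionally tracks a stack driven by the call/return structure of moves. The example machines are ours, purely for illustration.

```python
# Language equivalence of two complete DFAs via synchronized BFS:
# the DFAs differ iff some reachable product state disagrees on acceptance.
from collections import deque

def dfa_equivalent(d1, d2, alphabet):
    """Each DFA is (start, accepting_set, delta) with delta[(state, sym)] -> state."""
    s1, acc1, t1 = d1
    s2, acc2, t2 = d2
    seen = {(s1, s2)}
    queue = deque([(s1, s2)])
    while queue:
        p, q = queue.popleft()
        if (p in acc1) != (q in acc2):
            return False                 # a distinguishing word exists
        for a in alphabet:
            nxt = (t1[(p, a)], t2[(q, a)])
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return True

# Two DFAs over {0,1} accepting words with an even number of 1s, and one
# accepting words with an odd number of 1s.
even1 = (0, {0}, {(0, '0'): 0, (0, '1'): 1, (1, '0'): 1, (1, '1'): 0})
even2 = (0, {0}, {(0, '0'): 0, (0, '1'): 1, (1, '0'): 1, (1, '1'): 0})
odd   = (0, {1}, {(0, '0'): 0, (0, '1'): 1, (1, '0'): 1, (1, '1'): 0})
```

Two terms are observationally inequivalent exactly when such a search finds a distinguishing play, which is also what makes the on-the-fly approach effective on negative instances.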
|
223 |
Abstraction discovery and refinement for model checking by symbolic trajectory evaluationAdams, Sara Elisabeth January 2014 (has links)
This dissertation documents two contributions to automating the formal verification of hardware – particularly memory-intensive circuits – by Symbolic Trajectory Evaluation (STE), a model checking technique based on symbolic simulation over abstract sets of states. The contributions focus on improvements to the use of BDD-based STE, which uses binary decision diagrams internally. We introduce a solution to one of the major hurdles in using STE: finding suitable abstractions. Our work has produced the first known algorithm that addresses this problem by automatically discovering good, non-trivial abstractions. These abstractions are computed from the specification, and essentially encode partial input combinations sufficient for determining the specification’s output value. They can then be used to verify whether the hardware model meets its specification using a technique based on, and significantly extending, previous work by Melham and Jones [2]. Moreover, we prove that our algorithm delivers correct results by construction. We demonstrate that the abstractions produced by our algorithm can greatly reduce verification costs with three example hardware designs, typical of the kind of problems faced by the semiconductor design industry. We further propose a refinement method for abstraction schemes when over-abstraction occurs, i.e., when the abstraction hides too much information of the original design to determine whether it meets its specification. The refinement algorithm we present is based on previous work by Chockler et al. [3], which selects refinement candidates by approximating which abstracted input is likely the biggest cause of the abstraction being unsuitable. We extend this work substantially, concentrating on three aspects. First, we suggest how the approach can also work for much more general abstraction schemes. This enables refining any abstraction allowed in STE, rather than just a subset. Second, Chockler et al. 
describe how to refine an abstraction once a refinement candidate has been identified. We present three additional variants of refining the abstraction. Third, the refinement at its core depends on evaluating circuit logic gates. The previous work offered solutions for NOT- and AND-gates. We propose a general approach to evaluating arbitrary logic gates, which improves the selection process of refinement candidates. We show the effectiveness of our work by automatically refining an abstraction for a content-addressable memory that exhibits over-abstraction, and by evaluating some common logic gates. These two contributions can be used independently to help automate the hardware verification by STE, but they also complement each other. To show this, we combine both algorithms to create a fully automatic abstraction discovery and refinement loop. The only inputs required are the hardware design and the specification which the design should meet. While only small circuits could be verified completely automatically, this clearly shows that our two contributions allow the construction of a verification framework that does not require any user interaction.
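STE's abstraction rests on simulating circuits over a three-valued domain: 0, 1 and X (unknown). A small sketch (our own illustration, not the dissertation's algorithm) of evaluating an arbitrary logic gate over ternary inputs: the output is determined only when every Boolean completion of the X inputs agrees, which is the kind of general gate evaluation the refinement step relies on.

```python
# Ternary (three-valued) evaluation of an arbitrary Boolean gate.
# An input 'X' stands for "unknown"; the gate's output is 'X' unless
# all completions of the unknown inputs yield the same Boolean value.
from itertools import product

def eval_ternary(gate, inputs):
    """gate: a Boolean function; inputs: list of 0, 1 or 'X'."""
    choices = [[0, 1] if v == 'X' else [v] for v in inputs]
    outs = {gate(*combo) for combo in product(*choices)}
    return outs.pop() if len(outs) == 1 else 'X'

AND = lambda a, b: a & b
OR = lambda a, b: a | b
```

For instance, AND(0, X) is determined (it is 0 regardless of X), while AND(1, X) is not; an over-abstracted verification run is one whose top-level result remains X, triggering refinement.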
|
224 |
Automated quantitative software verificationKattenbelt, Mark Alex January 2010 (has links)
Many software systems exhibit probabilistic behaviour, either added explicitly, to improve performance or to break symmetry, or implicitly, through interaction with unreliable networks or faulty hardware. When employed in safety-critical applications, it is important to rigorously analyse the behaviour of these systems. This can be done with a formal verification technique called model checking, which establishes properties of systems by algorithmically considering all execution scenarios. In the presence of probabilistic behaviour, we consider quantitative properties such as "the worst-case probability that the airbag fails to deploy within 10ms", instead of qualitative properties such as "the airbag eventually deploys". Although many model checking techniques exist to verify qualitative properties of software, quantitative model checking techniques typically focus on manually derived models of systems and cannot directly verify software. In this thesis, we present two quantitative model checking techniques for probabilistic software. The first is a quantitative adaptation of a successful model checking technique called counterexample-guided abstraction refinement which uses stochastic two-player games as abstractions of probabilistic software. We show how to achieve abstraction and refinement in a probabilistic setting and investigate theoretical extensions of stochastic two-player game abstractions. Our second technique instruments probabilistic software in such a way that existing, non-probabilistic software verification methods can be used to compute bounds on quantitative properties of the original, uninstrumented software. Our techniques are the first to target real, compilable software in a probabilistic setting. We present an experimental evaluation of both approaches on a large range of case studies and evaluate several extensions and heuristics. 
We demonstrate that, with our methods, we can successfully compute quantitative properties of real network clients comprising approximately 1,000 lines of complex ANSI-C code — the verification of such software is far beyond the capabilities of existing quantitative model checking techniques.
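The quantitative core of such analyses can be sketched on a toy model (our own, far simpler than the stochastic two-player game abstractions the thesis uses): computing the worst-case probability of reaching a failure state by value iteration over a Markov decision process, where each state's value is the maximum over its available probabilistic actions.

```python
# Value iteration for maximal reachability probability in a small MDP.
# actions[s] is a list of distributions, each a list of (prob, successor).

def max_reach_prob(states, target, actions, iters=1000):
    v = {s: (1.0 if s == target else 0.0) for s in states}
    for _ in range(iters):
        nv = dict(v)
        for s in states:
            if s != target and actions.get(s):
                # worst case: the adversary maximises the failure probability
                nv[s] = max(sum(p * v[t] for p, t in dist)
                            for dist in actions[s])
        v = nv
    return v

# In s0 the adversary chooses: a safe action (stay forever) or a risky one
# that fails with probability 0.1 and otherwise stays. Repeated risky
# choices drive the worst-case failure probability towards 1.
mdp = {'s0': [[(1.0, 's0')], [(0.1, 'fail'), (0.9, 's0')]]}
probs = max_reach_prob(['s0', 'fail'], 'fail', mdp)
```

In the game-based setting one player resolves the abstraction's nondeterminism and the other the program's, yielding lower and upper bounds on the true probability rather than a single value.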
|
225 |
Petri nets, probability and event structuresGhahremani Azghandi, Nargess January 2014 (has links)
Models of true concurrency have attracted considerable interest over recent decades as models of concurrent or distributed systems that avoid the well-known state space explosion problem of interleaving models. In this thesis, we study such models from two perspectives. Firstly, we study the relation between Petri nets and stable event structures. Petri nets can be considered one of the most general and perhaps most widespread models of true concurrency. Event structures, on the other hand, are simpler models of true concurrency with explicit causality and conflict relations. Stable event structures expand the class of event structures by allowing events to be enabled in more than one way. While the relation between Petri nets and event structures is well understood, the relation between Petri nets and stable event structures has not been studied explicitly. We define a new, more compact unfolding of safe Petri nets that is directly translatable to stable event structures. In addition, the notion of complete finite prefix is defined for compact unfoldings, making the existing model checking algorithms applicable to them. We present algorithms for constructing compact unfoldings and their complete finite prefixes. Secondly, we study probabilistic models of true concurrency. We extend the definition of probabilistic event structures as defined by Abbes and Benveniste to a newly defined class of stable event structures, namely jump-free stable event structures arising from Petri nets (characterised and referred to as net-driven). This requires defining the fundamental concept of branching cells of probabilistic event structures for jump-free net-driven stable event structures, and by proving the existence of an isomorphism among the branching cells of these systems, we show that the latter inherit the relevant results of the former models. We then move on to defining a probabilistic logic over probabilistic event structures (PESL). 
To the best of our knowledge, this is the first probabilistic logic of true concurrency. We show examples of the expressivity achieved by PESL, which in particular include properties related to synchronisation in the system. This is followed by a model checking algorithm for PESL over finite event structures. Finally, we present a logic over stable event structures (SEL), along with an account of its expressivity and a model checking algorithm for finite stable event structures.
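The starting point for unfoldings is the token game of a safe (1-bounded) Petri net: a transition is enabled when all of its input places are marked, and firing it moves the tokens from the input places to the output places. A minimal sketch, with an example net of our own invention:

```python
# Token game for a safe Petri net. A marking is a frozenset of marked
# places; a transition is a pair (preset, postset) of frozensets.

def enabled(marking, transition):
    pre, _post = transition
    return pre <= marking            # all input places must hold a token

def fire(marking, transition):
    pre, post = transition
    assert enabled(marking, transition)
    return (marking - pre) | post    # consume inputs, produce outputs

# A tiny net: t1 consumes {p1} and produces {p2}; t2 consumes {p2} and
# produces {p3}. Only the firing sequence t1, t2 is possible from m0.
t1 = (frozenset({'p1'}), frozenset({'p2'}))
t2 = (frozenset({'p2'}), frozenset({'p3'}))
m0 = frozenset({'p1'})
m1 = fire(m0, t1)
```

An unfolding replays this token game into an acyclic structure of event occurrences, which is what makes the translation to (stable) event structures direct.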
|
226 |
State Space Symmetry Reduction for TBP AnalysisČerný, Ondřej January 2011 (has links)
Threaded Behavioral Protocols (TBP) is a specification language for modelling the behavior of software components. This thesis aims at the analysis of TBP specifications in environments that involve unbounded replication of threads. Such a TBP specification - together with a model of the possible environments - induces an infinite state space containing a vast number of symmetries caused by thread replication. A model checking technique is proposed that addresses such a state space and reduces the symmetries by using symbolic counter abstraction. In order to utilize the symbolic counter abstraction, the properties of the TBP specifications (called provisions) are converted into thread-state reachability properties. The proposed analysis is safe in the sense that it discovers all errors in the model. On the other hand, it may yield spurious errors, i.e., errors that do not correspond to any real error in the model. The spurious errors are precisely identified, and further possibilities for reducing them are outlined. Beyond the scope of the specific specifications, this work may also present a small step towards supporting dynamic thread creation in TBP.
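The intuition behind counter abstraction can be sketched in a few lines (our own illustration): instead of tracking which identical thread is in which local state, the abstraction tracks only how many threads occupy each state, so global states that differ merely by a permutation of threads collapse to one abstract state.

```python
# Counter abstraction: a concrete global state is a tuple of per-thread
# local states; the abstraction keeps only occupancy counts per state.
from collections import Counter

def abstract(thread_states):
    """Map a concrete global state to an abstract counter vector."""
    return frozenset(Counter(thread_states).items())

s1 = abstract(('init', 'run', 'init'))
s2 = abstract(('init', 'init', 'run'))   # a permutation of the threads in s1
```

Because the two concrete states above are symmetric, the abstraction identifies them, which is exactly how the symmetries induced by thread replication are factored out of the state space.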
|
227 |
A systems biology approach to multi-scale modelling and analysis of planar cell polarity in Drosophila melanogaster wingGao, Qian January 2013 (has links)
Systems biology aims to describe and understand biology at a global scale, where biological systems function as a result of complex mechanisms that operate at several scales. Modelling and simulation are computational tools that are invaluable for describing, understanding and predicting these mechanisms in a quantitative and integrative way. Multi-scale methods that couple the design, simulation and analysis of models spanning several spatial and temporal scales are thus becoming a new focus of systems biology. This thesis uses an exemplar – planar cell polarity (PCP) signalling – to illustrate a generic approach to modelling biological systems at different spatial scales, using the new concept of Hierarchically Coloured Petri Nets (HCPN). PCP signalling refers to the coordinated polarisation of cells within the plane of various epithelial tissues to generate sub-cellular asymmetry along an axis orthogonal to their apical-basal axes. This polarisation is required for many developmental events in both vertebrates and invertebrates. Defects in PCP in vertebrates are responsible for developmental abnormalities in multiple tissues including the neural tube, the kidney and the inner ear. In the Drosophila wing, PCP is seen in the parallel orientation of hairs that protrude from each of the approximately 30,000 epithelial cells to robustly point toward the wing tip. This work applies HCPN to model a tissue comprising multiple cells hexagonally packed in a honeycomb formation in order to describe the phenomenon of PCP in the Drosophila wing. HCPN facilitate the construction of mathematically tractable, compact and parameterised large-scale models. Different levels of abstraction that can be used to simplify such a complex system are first illustrated. The PCP system is first represented at an abstract level, without modelling details of the cell. 
Each cell is then sub-divided into seven virtual compartments, with adjacent cells coupled via the formation of intercellular complexes. A more detailed model is later developed, describing the intra- and inter-cellular signalling mechanisms involved in PCP signalling. The initial model is for a wild-type organism; a family of related models is then constructed, permitting different hypotheses to be explored regarding the mechanisms underlying PCP. Among them, the largest model consists of 800 cells, which when unfolded yields 164,000 places (each of which is described by an ordinary differential equation). This thesis illustrates the power and validity of the approach by showing how the models can easily be adapted to describe well-documented genetic mutations in the Drosophila wing, and how clustering and model checking over time series of primary and secondary data can be employed to analyse and check such multi-scale models. The HCPN models support the interpretation of biological observations reported in the literature and are able to make sensible predictions. As HCPN model multi-scale systems in a compact, parameterised and scalable way, this modelling approach can be applied to other large-scale or multi-scale systems.
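The tissue-level structure such a coloured Petri net is unfolded over is a hexagonal lattice in which every cell is coupled to its six neighbours. A small sketch of that neighbourhood structure, using axial hexagonal coordinates (our own convention, not notation from the thesis):

```python
# Neighbourhood structure of a hexagonally packed tissue, in axial
# coordinates (q, r): every cell has exactly six adjacent cells.

HEX_DIRS = [(1, 0), (1, -1), (0, -1), (-1, 0), (-1, 1), (0, 1)]

def neighbours(cell):
    """The six cells adjacent to `cell` on the honeycomb lattice."""
    q, r = cell
    return [(q + dq, r + dr) for dq, dr in HEX_DIRS]
```

Intercellular coupling is symmetric on this lattice: each cell appears in the neighbour list of each of its neighbours, which is what lets a single parameterised (coloured) model describe all 800 cells at once.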
|
228 |
Assisted design and analysis of attack treesAudinot, Maxime 17 December 2018 (has links)
In risk analysis, attack trees are used to assess threats to a system. Formal methods allow for their quantitative analysis and synthesis, but the properties expressing the quality of attack trees with respect to the system have not been formalized. In this document, we define a new formal framework for attack trees that takes an operational model of the system into account, and provides the trees with a path semantics. We define the correctness properties of refinements, and study their computational complexity. Given an optimal attack in a quantitative system model, we guide the design of an attack tree, indicating which of its leaves contribute to the considered optimal attack.
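The quantitative analyses such a framework supports typically evaluate a tree bottom-up: OR nodes choose among alternative sub-attacks, AND nodes combine mandatory steps. A sketch (our own example, not one from the document) computing the minimal attack cost, where OR takes the cheapest child and AND sums its children:

```python
# Bottom-up evaluation of an attack tree for minimal attack cost.
# Nodes are ('leaf', name, cost), ('OR', children) or ('AND', children).

def min_cost(node):
    kind = node[0]
    if kind == 'leaf':
        return node[2]
    costs = [min_cost(child) for child in node[1]]
    return min(costs) if kind == 'OR' else sum(costs)

# Goal: enter the building. Either pick the lock while avoiding the guard
# (both steps required), or simply steal a key.
tree = ('OR', [
    ('AND', [('leaf', 'pick lock', 30), ('leaf', 'avoid guard', 20)]),
    ('leaf', 'steal key', 40),
])
```

Under a path semantics, correctness of a refinement then amounts to checking that the attacks denoted by the children really realise (or decompose) the attacks denoted by the parent in the system model.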
|
229 |
Code checking for subway station architecture design approval.Mainardi Neto, Antonio Ivo de Barros 23 May 2016 (has links)
In view of the increasing demand of the São Paulo metropolitan area for fast and efficient public transport, investment in new subway lines and stations is growing, as is the need to accelerate the design and construction of such infrastructure. In this context lies the deployment of BIM ("Building Information Modeling") by Companhia do Metropolitano de São Paulo - Metrô for its stations and rails. Metrô produces designs internally and, in particular, contracts external designs that require analysis for approval. With the development of basic civil designs in BIM, daily work has changed: visual analysis has benefited from the 3D model, which now represents the entire design rather than only sections of it. This has made it possible to anticipate decisions that used to be taken in the shop-drawings phase, thereby acting at a moment with greater capacity for design changes. One important use of BIM in the design review process is code checking. Automated checking allows the analyst to focus only on the list of non-conformities, gaining time for their resolution; the designer, in turn, can verify compliance with the guidelines in advance and provide solutions to the problems identified. The present research aimed to support the implementation of this BIM use through a proposed workflow for deploying automated checking, the identification of existing rule patterns and the development of a rule classification focused on automating their verification. The proposed patterns were used to analyse the architecture basic design instructions document and to translate its rules for application in software for this purpose, with the intention of closing the checking cycle.
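At its core, automated code checking maps each model element through a set of rules, each of which reports non-conformities; the analyst then reviews the findings rather than the whole model. A minimal sketch of one such rule; the element schema and the width threshold are invented for illustration and are not taken from the Metrô design instructions:

```python
# One design rule over BIM model elements: corridors must meet a minimum
# clear width. Each rule returns a (possibly empty) list of findings.

def check_min_width(element, minimum=1.2):
    if element['type'] == 'corridor' and element['width'] < minimum:
        return [f"{element['id']}: corridor width {element['width']}m "
                f"below required {minimum}m"]
    return []

model = [
    {'id': 'C01', 'type': 'corridor', 'width': 1.0},
    {'id': 'C02', 'type': 'corridor', 'width': 2.4},
]
report = [issue for element in model for issue in check_min_width(element)]
```

Classifying rules by how mechanically they can be expressed in this form (fixed thresholds, relations between elements, or judgments needing human review) is precisely what determines how much of the checking cycle can be automated.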
|
230 |
CTL Model RevisionOliveira, Paulo de Tarso Guerra 16 December 2010 (has links)
Model checking is one of the most robust techniques in automated system verification. But, although this technique can handle complex verifications, model checking tools usually do not give any information on how to repair inconsistent system models. In this dissertation, we show that approaches developed for CTL model update cannot deal with all kinds of model changes. We introduce the concept of CTL model revision: an approach based on belief revision to handle system inconsistency in a static context. We relate our proposal to classical works on belief revision. We define an operator for model revision and show that it obeys the classical rationality postulates of belief revision. We propose an algorithm for model revision based on the algorithm used by the model update approach. We discuss problems and limitations of our proposed algorithm and show that this strategy of adaptation is not an appropriate solution.
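The model-checking step that revision builds on labels the states of a Kripke structure satisfying a CTL formula. A sketch (our own, for one connective only) of the standard least-fixpoint computation for EF p, the states from which some path eventually reaches a p-state:

```python
# CTL labelling for EF p over a Kripke structure: start from the states
# satisfying p and repeatedly add predecessors until a fixpoint is reached.

def ef(states, trans, sat_p):
    """States satisfying EF p; trans maps a state to its successor list."""
    result = set(sat_p)
    changed = True
    while changed:
        changed = False
        for s in states:
            if s not in result and any(t in result for t in trans.get(s, [])):
                result.add(s)
                changed = True
    return result

states = {'s0', 's1', 's2'}
trans = {'s0': ['s1'], 's1': ['s2'], 's2': ['s2']}
reach = ef(states, trans, {'s2'})
```

A revision operator works against such labellings: when the model fails the specification, it searches for a minimally changed model whose labelling does satisfy it, subject to the rationality postulates.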
|