About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
41

Verificação formal de sistemas modelados em estados finitos. / Formal verification of systems modeled as finite state machines.

Nelson França Guimarães Ferreira, 09 March 2006
This work reflects efforts devoted to studying the main automatic verification techniques for systems that can be modeled as Finite State Machines, in particular those generally known as model checking. To give the reader an understanding of the advantages and disadvantages of each technique, the theoretical foundations of each are presented and illustrated through examples. Beyond the theory associated with each technique, this dissertation also presents two case studies of considerable practical interest, both solved with so-called symbolic techniques: the verification of properties of a manufacturing system originally modeled as a Petri net, and the verification of properties of the interlocking of a subway section. In the first case study, it is proposed that the invariants obtained from the state equation be added to the model being verified, which yields performance gains in the verification. The second case study is solved using a procedure proposed in this dissertation, which allows some safety properties to be verified without the verification becoming infeasible due to state explosion; with it, the properties of an interlocking section with about 2000 digital variables can be verified in a matter of a few seconds. The main conclusion of this work follows from the positive results observed in both case studies: symbolic model checking appears to have a broad field of applications yet to be better explored.
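To make the state-equation idea concrete, here is a minimal sketch (an invented example, not taken from the dissertation) of how the invariants added to the model can be computed: a P-invariant is any weighting y of the places with y^T C = 0, so y^T M = y^T M0 holds in every marking M reachable from M0.

```python
# Minimal sketch (not from the thesis): computing P-invariants of a Petri net
# from its incidence matrix. The example net is an illustrative assumption.
from sympy import Matrix

# Incidence matrix C (rows = places, columns = transitions) of a tiny
# two-place cycle: p0 -t0-> p1 -t1-> p0.
C = Matrix([
    [-1,  1],   # p0: t0 consumes a token, t1 produces one
    [ 1, -1],   # p1: t0 produces a token, t1 consumes one
])

# A P-invariant is a vector y with y^T C = 0; by the state equation
# M = M0 + C x, it follows that y^T M = y^T M0 in every reachable marking.
invariants = C.T.nullspace()
for y in invariants:
    print("P-invariant:", list(y))   # here: [1, 1] -> tokens in p0+p1 constant
```

Each such invariant is a linear constraint on reachable states that a symbolic model checker can exploit to prune its search, which is the source of the performance gains reported above.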
42

Studying and analysing transactional memory using interval temporal logic and AnaTempura

El-kustaban, Amin Mohammed Ahmed, January 2012
Transactional memory (TM) is a promising lock-free synchronisation technique which offers a high-level abstract parallel programming model for future chip multiprocessor (CMP) systems. Moreover, it adapts the well-established paradigm of transactions and thus provides a general and flexible way for programs to read and modify disparate memory locations atomically, as a single operation. In this thesis, we propose a general framework for validating a TM design, from a formal specification down to a hardware implementation, together with its underpinning theory and refinement. The methodology starts with a high-level, executable specification model of an abstract TM, with verification of various correctness conditions for concurrent transactions. This model is constructed within a flexible transition framework that allows the correctness of a TM system to be verified with animation. We then present a formal executable specification for a dual-processor, single-cycle MIPS chip with a cache coherence protocol, and integrate the verified TM system. Finally, we transform the dual processors with the TM from a high-level description into a hardware description language (VHDL), using proposed refinement and restriction rules. Interval Temporal Logic (ITL) and its programming-language subset AnaTempura are used to build, execute and test the model, since together they provide a powerful framework supporting logical reasoning about time intervals as well as programming and simulation.
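As a rough intuition for the atomicity the thesis specifies formally, the following toy sketch (illustrative only; the names and the lazy-versioning policy are assumptions, not the ITL/AnaTempura model) shows a transaction that buffers its writes and validates its reads at commit time.

```python
# Toy lazy-versioning transaction with commit-time conflict detection,
# mirroring the "read and modify disparate locations atomically" behaviour.
memory = {"x": 0, "y": 0}
version = {"x": 0, "y": 0}          # per-location write counters

class Transaction:
    def __init__(self):
        self.reads, self.writes = {}, {}   # location -> version / new value

    def read(self, loc):
        if loc in self.writes:             # read-your-own-writes
            return self.writes[loc]
        self.reads.setdefault(loc, version[loc])
        return memory[loc]

    def write(self, loc, val):
        self.writes[loc] = val             # buffered until commit

    def commit(self):
        # Validate: abort if any location read has since been overwritten.
        if any(version[loc] != v for loc, v in self.reads.items()):
            return False                   # abort; caller may retry
        for loc, val in self.writes.items():
            memory[loc] = val
            version[loc] += 1
        return True

t = Transaction()
t.write("x", t.read("y") + 1)              # atomically set x from y
print(t.commit(), memory)                  # True {'x': 1, 'y': 0}
```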
43

Abstraction discovery and refinement for model checking by symbolic trajectory evaluation

Adams, Sara Elisabeth, January 2014
This dissertation documents two contributions to automating the formal verification of hardware, particularly memory-intensive circuits, by Symbolic Trajectory Evaluation (STE), a model checking technique based on symbolic simulation over abstract sets of states. The contributions focus on improvements to the use of BDD-based STE, which uses binary decision diagrams internally. We introduce a solution to one of the major hurdles in using STE: finding suitable abstractions. Our work has produced the first known algorithm that addresses this problem by automatically discovering good, non-trivial abstractions. These abstractions are computed from the specification and essentially encode partial input combinations sufficient for determining the specification's output value. They can then be used to verify whether the hardware model meets its specification, using a technique based on, and significantly extending, previous work by Melham and Jones [2]. Moreover, we prove that our algorithm delivers correct results by construction. We demonstrate that the abstractions produced by our algorithm can greatly reduce verification costs on three example hardware designs, typical of the kind of problems faced by the semiconductor design industry. We further propose a refinement method for abstraction schemes when over-abstraction occurs, i.e., when the abstraction hides too much information about the original design to determine whether it meets its specification. The refinement algorithm we present is based on previous work by Chockler et al. [3], which selects refinement candidates by approximating which abstracted input is likely the biggest cause of the abstraction being unsuitable. We extend this work substantially, concentrating on three aspects. First, we suggest how the approach can also work for much more general abstraction schemes, enabling the refinement of any abstraction allowed in STE rather than just a subset. Second, Chockler et al. describe how to refine an abstraction once a refinement candidate has been identified; we present three additional variants of refining the abstraction. Third, the refinement at its core depends on evaluating circuit logic gates. The previous work offered solutions for NOT- and AND-gates; we propose a general approach to evaluating arbitrary logic gates, which improves the selection of refinement candidates. We show the effectiveness of our work by automatically refining an abstraction for a content-addressable memory that exhibits over-abstraction, and by evaluating some common logic gates. These two contributions can be used independently to help automate hardware verification by STE, but they also complement each other. To show this, we combine both algorithms to create a fully automatic abstraction discovery and refinement loop. The only inputs required are the hardware design and the specification which the design should meet. While only small circuits could be verified completely automatically, this clearly shows that our two contributions allow the construction of a verification framework that does not require any user interaction.
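The abstraction at the heart of STE can be illustrated with ternary simulation, a hedged toy version of what the BDD-based machinery does symbolically: a third value X stands for an abstracted, unknown input, and the discovered "partial input combinations" are exactly those assignments that decide an output despite the Xs.

```python
# Minimal illustration (not the thesis's algorithm): gates evaluated over
# {0, 1, X}, where X abstracts away an input. An output stays X only when
# the known inputs cannot already decide it.
X = "X"

def AND(a, b):
    if a == 0 or b == 0:
        return 0            # a single 0 decides the output, even if the
    if a == 1 and b == 1:   # other input is abstracted away as X
        return 1
    return X

def NOT(a):
    return X if a == X else 1 - a

# A "partial input combination" in the spirit of the discovered abstractions:
# knowing b = 0 determines AND(a, b) without knowing a at all.
print(AND(X, 0))   # -> 0: the abstraction suffices for this output
print(AND(X, 1))   # -> X: over-abstraction; refinement would reveal 'a'
```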
44

Automated quantitative software verification

Kattenbelt, Mark Alex, January 2010
Many software systems exhibit probabilistic behaviour, either added explicitly, to improve performance or to break symmetry, or implicitly, through interaction with unreliable networks or faulty hardware. When employed in safety-critical applications, it is important to rigorously analyse the behaviour of these systems. This can be done with a formal verification technique called model checking, which establishes properties of systems by algorithmically considering all execution scenarios. In the presence of probabilistic behaviour, we consider quantitative properties such as "the worst-case probability that the airbag fails to deploy within 10ms", instead of qualitative properties such as "the airbag eventually deploys". Although many model checking techniques exist to verify qualitative properties of software, quantitative model checking techniques typically focus on manually derived models of systems and cannot directly verify software. In this thesis, we present two quantitative model checking techniques for probabilistic software. The first is a quantitative adaptation of a successful model checking technique called counter-example guided abstraction refinement which uses stochastic two-player games as abstractions of probabilistic software. We show how to achieve abstraction and refinement in a probabilistic setting and investigate theoretical extensions of stochastic two-player game abstractions. Our second technique instruments probabilistic software in such a way that existing, non-probabilistic software verification methods can be used to compute bounds on quantitative properties of the original, uninstrumented software. Our techniques are the first to target real, compilable software in a probabilistic setting. We present an experimental evaluation of both approaches on a large range of case studies and evaluate several extensions and heuristics. We demonstrate that, with our methods, we can successfully compute quantitative properties of real network clients comprising approximately 1,000 lines of complex ANSI-C code — the verification of such software is far beyond the capabilities of existing quantitative model checking techniques.
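For a flavour of the quantities involved, the sketch below (a toy MDP with invented transitions, not the thesis's stochastic two-player game abstraction) computes a worst-case reachability probability by value iteration, the kind of fixed-point computation underlying quantitative model checking.

```python
# Hedged sketch: value iteration computing the worst-case (maximum over the
# nondeterminism) probability of reaching a failure state in a tiny MDP.
states = ["init", "retry", "fail", "ok"]
# transitions[s][action] = list of (probability, successor)
transitions = {
    "init":  {"send":  [(0.9, "ok"), (0.1, "retry")]},
    "retry": {"send":  [(0.8, "ok"), (0.2, "retry")],
              "abort": [(1.0, "fail")]},
}

value = {s: 1.0 if s == "fail" else 0.0 for s in states}
for _ in range(1000):  # iterate V(s) = max_a sum_t p * V(t) to a fixed point
    for s, acts in transitions.items():
        value[s] = max(sum(p * value[t] for p, t in succs)
                       for succs in acts.values())

print(round(value["init"], 4))  # 0.1: worst-case probability of "fail"
```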
45

Automatic verification of cryptographic protocols : privacy-type properties / Vérification automatique des protocoles cryptographiques : propriétés d'équivalence

Cheval, Vincent, 03 December 2012
Many tools have been developed to automatically verify security properties of cryptographic protocols. Until recently, most focused on trace properties (or reachability properties) such as authentication and secrecy. However, many security properties cannot be expressed as trace properties but can be written as equivalence properties: anonymity, unlinkability, and strong secrecy are typical examples. Intuitively, two protocols P and Q are equivalent if an adversary (intruder) cannot distinguish P from Q by interacting with these processes. Several notions of equivalence have been studied in the literature, e.g. trace equivalence and the stronger observational equivalence. These equivalences turn out to be very difficult to prove by hand, hence the need for efficient and automatic verification tools. We first worked on an approach that relies on constraint-solving techniques and is well suited to a bounded number of sessions, providing a new algorithm for deciding trace equivalence between processes that may contain negative tests (conditionals with "else" branches) and non-determinism. We applied our results to concrete examples such as anonymity of the Private Authentication protocol and the e-passport protocol. This thesis also proposes composition results for trace equivalence: more precisely, we focused on the parallel composition of protocols sharing secrets, and showed that under certain conditions on the protocols, privacy-type properties are preserved under parallel composition with shared secrets. This result was applied to the e-passport protocol. Finally, this work presents an extension of the automatic protocol verifier ProVerif that proves more observational equivalences; this extension has been implemented in ProVerif and allowed us to automatically prove anonymity for the Private Authentication protocol.
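The definition of trace equivalence can be illustrated by brute force on finite systems; the sketch below (a toy example, nothing like the symbolic constraint-solving algorithm of the thesis) compares the trace sets of two labelled transition systems up to a bounded length.

```python
# Toy illustration: trace equivalence checked by enumerating the traces of
# two finite labelled transition systems. The thesis decides this symbolically
# for cryptographic processes; this brute-force version only conveys the idea.
def traces(lts, state, depth):
    if depth == 0:
        return {()}
    out = {()}
    for label, nxt in lts.get(state, []):
        out |= {(label,) + t for t in traces(lts, nxt, depth - 1)}
    return out

# P and Q: two processes an attacker observes only through action labels.
# Q has an extra internal branch, but the observable traces coincide.
P = {0: [("enc", 1)], 1: [("send", 2)]}
Q = {0: [("enc", 1), ("enc", 3)], 1: [("send", 2)], 3: [("send", 2)]}

k = 5
print(traces(P, 0, k) == traces(Q, 0, k))  # True: indistinguishable up to k
```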
46

Développement et vérification des logiques probabilistes et des cadres logiques / Development and verification of probability logics and logical frameworks

Maksimović, Petar, 15 October 2013
We introduce a Probability Logic with Conditional Operators, LPCP: its syntax, semantics, and a sound and strongly complete axiomatic system, featuring an infinitary inference rule. We prove the obtained formalism decidable and extend it so as to represent evidence, making it the first propositional axiomatisation of reasoning about evidence. We encode the Probability Logics LPP1Q and LPP2Q in the proof assistant Coq and formally verify their key properties: soundness, strong completeness, and non-compactness. Both logics extend Classical Logic with modal-like probability operators, and both feature an infinitary inference rule; LPP1Q allows iterations of probability operators, while LPP2Q does not. In this way, we have formally justified the use of probabilistic SAT-solvers for checking consistency-related questions. We present LFP, a Logical Framework with External Predicates, obtained by introducing into LF a mechanism for locking and unlocking types and terms, allowing the use of external oracles. We prove that LFP satisfies all the main meta-theoretic properties and develop a corresponding canonical framework, allowing for easy proofs of adequacy. We provide a number of encodings: the simple untyped λ-calculus with a call-by-value reduction strategy, the Design-by-Contract paradigm, a small imperative language with Hoare Logic, Modal Logics in Hilbert and Natural Deduction style, and Non-Commutative Linear Logic (encoded for the first time in an LF-like framework), illustrating that in LFP we can elegantly encode side-conditions on the application of rules and achieve a separation between verification and computation, resulting in cleaner and more readable proofs.
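For readers unfamiliar with these logics, the following is one standard formulation of the probability operators and of the infinitary rule, in the general style of such systems; the exact axiomatisation used in the thesis may differ.

```latex
% One standard reading of the probability operators (the exact axiomatisation
% in the thesis may differ). A model carries a probability measure \mu over
% worlds, and for a rational s in [0,1]:
\[
  M \models P_{\geq s}\,\alpha
  \quad\Longleftrightarrow\quad
  \mu\bigl(\{\, w : w \models \alpha \,\}\bigr) \geq s
\]
% The characteristic infinitary rule compensates for non-compactness:
% from infinitely many premises, one for each k, infer the conclusion.
\[
  \frac{\beta \to P_{\geq s - 1/k}\,\alpha
        \quad \text{for every integer } k \geq 1/s}
       {\beta \to P_{\geq s}\,\alpha}
\]
```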
47

Des spécifications en langage naturel aux spécifications formelles via une ontologie comme modèle pivot / From natural language specifications to formal specifications via an ontology as a pivot model

Sadoun, Driss, 17 June 2014
The main objective of system development is to address requirements. As such, success in its realisation depends largely on the requirement specification phase, which aims to describe precisely and unambiguously all the characteristics of the system to be developed. To arrive at a set of requirements, a user needs analysis is carried out involving the different stakeholders. System requirements are generally written in natural language (NL) to guarantee a wider understanding; however, since NL texts can contain semantic ambiguities, implicit information, or other inconsistencies, this can lead to diverse interpretations. It is therefore not easy to specify a complete and consistent set of requirements, and the specified requirements must be formally checked. Specifications written in NL are not considered formal and do not allow a direct application of formal methods, so NL requirements must be transformed into formal specifications. The work presented in this thesis was carried out in this framework; the main difficulty of such a transformation lies in the width of the gap between NL requirements and formal specifications. The objective of this work is to propose an approach for the automatic verification of user requirements which are written in natural language and describe a system's expected behaviour. Our approach uses the potential offered by a representation model based on a logical formalism. Our contribution has three main aspects: 1) an OWL-DL ontology based on description logics, used as a pivot representation model that links NL requirements to formal specifications; 2) an approach for instantiating the pivot ontology, driven by the ontology's semantics, which allows an automatic transformation of NL requirements into their conceptual representations; and 3) an approach exploiting the logical formalism of the ontology to automatically translate the ontology into a formal specification language called Maude.
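The last step of this pipeline can be pictured with a hypothetical sketch: ontology instances (modelled here as plain tuples standing in for OWL-DL individuals, an assumption) rendered as Maude rewrite rules. The module below is invented for illustration and is not the thesis's actual translation.

```python
# Hypothetical sketch of the ontology-to-Maude step: behavioural instances
# become rewrite rules over a State sort. Tuple fields and the generated
# module are illustrative assumptions.
actions = [
    # (rule label, state before, state after)
    ("insertCard", "idle",         "cardInserted"),
    ("enterPin",   "cardInserted", "pinEntered"),
    ("ejectCard",  "pinEntered",   "idle"),
]

def to_maude(name, actions):
    states = sorted({s for _, pre, post in actions for s in (pre, post)})
    lines = [f"mod {name} is", "  sort State ."]
    lines.append(f"  ops {' '.join(states)} : -> State [ctor] .")
    for label, pre, post in actions:
        lines.append(f"  rl [{label}] : {pre} => {post} .")
    lines.append("endm")
    return "\n".join(lines)

print(to_maude("ATM-BEHAVIOUR", actions))
```

Once in this form, the expected behaviour can be checked with Maude's own search and model checking facilities.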
48

Geração de propriedades sobre programas Java a partir de objetivos de teste / Generation of Java program properties from test purposes

Hanazumi, Simone, 29 October 2015
The task of guaranteeing that computational systems do not fail and work correctly has become extremely important with the growing presence of new technologies in people's lives. Ensuring that such systems work properly attests to their quality and avoids losses, from financial losses to loss of life. One of the techniques used for this purpose is formal verification of programs. From the system specification, described in a formal language, we define properties that must be satisfied during system execution to guarantee software quality. These properties are then checked using a verifier, the tool responsible for running the verification and reporting whether each property was satisfied by the program; if a property was violated, it indicates to developers the possible location of faults in the system. The disadvantages of formal verification are the high cost of applying the technique in practice and the need for people experienced in formal methods to derive properties from the system specification and define them in a formal representation readable by a program verifier. This task of deriving a property from the system specification and preparing it for a verifier is complex, time-consuming and error-prone, since it is usually done by hand. To help software developers apply formal verification to Java programs, we propose in this work the generation of formal property representations for direct use in a verifier. The generated properties are test purposes, derived from the system's formal specification, which describe the expected system behavior that must be observed during execution. By establishing that the universe of properties corresponds to the universe of test purposes of a program, we guarantee that the generated properties describe the expected program behavior through execution traces that lead to either an accept state or a refuse state. Thus, when the verifier checks a test purpose, it can give a success or fail verdict for the property, and provide trace coverage data that can be used to analyze the program behavior that led to that verdict.
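The idea of a test purpose as a checkable property can be sketched as a small automaton with accept and refuse states; the example below is hypothetical (invented event names, simplified verdicts), whereas the thesis derives such automata from the formal specification.

```python
# Hedged sketch: a test purpose run over an execution trace, producing the
# success/fail verdict a verifier would report.
class TestPurpose:
    def __init__(self, transitions, accept, refuse):
        self.transitions = transitions      # (state, event) -> state
        self.accept, self.refuse = accept, refuse

    def verdict(self, trace, state="s0"):
        for event in trace:
            state = self.transitions.get((state, event), state)
            if state in self.refuse:
                return "fail"               # forbidden behaviour observed
        return "success" if state in self.accept else "inconclusive"

# Hypothetical property: after 'login', a 'grant' must occur before 'access'.
tp = TestPurpose(
    transitions={("s0", "login"): "s1",
                 ("s1", "grant"): "ok",
                 ("s1", "access"): "bad"},
    accept={"ok"}, refuse={"bad"},
)
print(tp.verdict(["login", "grant", "access"]))  # success
print(tp.verdict(["login", "access"]))           # fail
```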
49

Modelagem e análise de requisitos de sistemas automatizados usando UML e Redes de Petri. / Modeling and analysis of requirements of automated systems using UML and Petri nets.

Salmon, Arianna Zoila Olivera, 18 April 2017
The first stage of system design, which includes the elicitation, modeling, and analysis of requirements, is considered both the most complex and the most strategic for achieving good results. It is therefore important to detect mistakes, both conceptual and in the requirements themselves, before implementation begins, avoiding wasted time and resources. This work proposes a formal method for the modeling, analysis and verification of requirements, starting from a semi-formal representation of the requirements in UML and using the formalism of Petri nets for the modeling, analysis and verification. Petri net properties such as invariants are used to analyze the requirements, allowing them to be validated early in the design process. The objective of this work is to extend the application of Petri nets, as a formal representation, to the requirements phase, which precedes specification building. It is assumed that there is a model-driven design approach that encompasses the initial (requirements) phase and carries through to the modeling and verification of solutions. The approach proposed in this thesis therefore fits fully into a model-driven process that uses the same formal language: Petri nets.
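A hedged sketch of the kind of early validation this enables: exhaustively exploring the marking graph of a small requirements-level Petri net (the net itself is an invented example, not from the thesis) and asserting a safety requirement in every reachable marking.

```python
# Illustrative reachability analysis of a tiny requirements model: check that
# "gate never open while a train is present" holds before anything is built.
pre  = {"open_gate": {"train_gone": 1}, "arrive": {"idle": 1}}
post = {"open_gate": {"gate_open": 1},  "arrive": {"train_in": 1}}
m0 = {"idle": 1, "train_gone": 0, "train_in": 0, "gate_open": 0}

def fire(m, t):
    """Return the successor marking if transition t is enabled, else None."""
    if all(m[p] >= n for p, n in pre[t].items()):
        m2 = dict(m)
        for p, n in pre[t].items():  m2[p] -= n
        for p, n in post[t].items(): m2[p] += n
        return m2

seen, stack = set(), [tuple(sorted(m0.items()))]
while stack:                        # depth-first search of the marking graph
    m = dict(stack.pop())
    key = tuple(sorted(m.items()))
    if key in seen:
        continue
    seen.add(key)
    assert not (m["gate_open"] and m["train_in"]), "requirement violated"
    stack.extend(tuple(sorted(m2.items()))
                 for t in pre if (m2 := fire(m, t)) is not None)

print(f"{len(seen)} reachable markings, requirement holds")
```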
50

An Axiomatic Semantics for Functional Reactive Programming

King, Christopher T., 29 April 2008
Functional reactive programming (FRP) is a paradigm extending functional languages with primitives which operate on state. Typical FRP systems contain many dozens of such primitives. This thesis aims to identify a minimal subset of primitives which captures the same set of behavior as these systems, and to provide an axiomatic semantics for them using first-order linear temporal logic, with the aim of utilizing these semantics in formal verification of FRP programs. Furthermore, we identify several important properties of these primitives and prove that they are satisfied using the Coq proof assistant.
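As a rough executable caricature of what such primitives do (not the thesis's LTL semantics), the sketch below models two classic FRP primitives, lift and hold, over discrete time.

```python
# Tiny discrete-time sketch of two classic FRP primitives over sampled
# streams -- an executable caricature of the kind of primitives axiomatised
# in first-order LTL, not the thesis's actual semantics.
def lift(f, *behaviors):
    """Apply f pointwise: (lift f b1 .. bn)(t) = f(b1(t), ..., bn(t))."""
    return lambda t: f(*(b(t) for b in behaviors))

def hold(v0, event):
    """Step function: value of the most recent occurrence of `event`
    at or before time t, or v0 if none has occurred yet."""
    def b(t):
        past = [v for (te, v) in event if te <= t]
        return past[-1] if past else v0
    return b

clicks = [(2, "A"), (5, "B")]            # event = list of (time, value)
label = hold("none", clicks)
shout = lift(str.upper, label)
print([shout(t) for t in range(7)])
# ['NONE', 'NONE', 'A', 'A', 'A', 'B', 'B']
```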
