221

An Automated Tool For Requirements Verification

Tekin, Yasar 01 September 2004
In today's world, only software organizations that consistently produce high-quality products can succeed. This situation enforces the effective use of defect prevention and detection techniques. One of the most effective defect detection techniques used in the software development life cycle is verification of software requirements, applied at the end of the requirements engineering phase. If the existing verification techniques can be automated to meet today's work environment needs, their effectiveness can be increased. This study focuses on the development and implementation of a tool that automates the verification of software requirements modeled in Aris eEPC and Organizational Chart for automatically detectable defects. The application of reading techniques on a project and a comparison of the results of manual and automated verification techniques applied to the same project are also discussed.
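The kind of "automatically detectable defect" the abstract refers to can be illustrated with a small sketch. The model format and rule set below are hypothetical stand-ins (a plain dict instead of an exported Aris eEPC model), not the tool described in the thesis:

```python
# Hypothetical sketch: a rule-based checker that scans a requirements model
# (a plain dict standing in for an exported requirements model) for
# mechanically detectable defects such as empty text or dangling references.

def check_requirements(reqs):
    """Return a list of (req_id, defect) pairs for detectable defects."""
    defects = []
    ids = set(reqs)
    for rid, req in reqs.items():
        if not req.get("text", "").strip():
            defects.append((rid, "empty requirement text"))
        for ref in req.get("refs", []):
            if ref not in ids:
                defects.append((rid, f"dangling reference to {ref}"))
    return defects

model = {
    "R1": {"text": "The system shall log all logins.", "refs": ["R2"]},
    "R2": {"text": "", "refs": []},
    "R3": {"text": "See R9 for details.", "refs": ["R9"]},
}
print(check_requirements(model))
```

Rules like these are cheap to run after every edit of the model, which is what makes this class of defects a good target for automation.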
222

Information Flow Security in Component-Based Models: From Verification to Implementation

Ben Said, Najah 07 November 2016
The security of information systems is paramount in today's life, especially with the growth of complex and highly interconnected computer systems. For instance, bank systems must guarantee the integrity and confidentiality of their customers' accounts. Electronic voting, auctions, and commerce also need confidentiality and integrity preservation. However, security verification and its distributed implementation are heavy processes in general, and advanced security skills are required, since both security configuration and the coding of distributed systems are complex and error-prone. With the diverse security attacks enabled by the advent of the Internet, how can we be sure that the computer systems we build satisfy the intended security property?

The security property investigated in this thesis is non-interference, a global property that tracks sensitive information through the entire system and ensures confidentiality and integrity. Non-interference is expressed by the requirement that no information about secret data is leaked through the observation of public data variation. This definition is more subtle than a basic specification of legitimate access to sensitive information, as it allows detecting malfunctioning and malicious program intrusions on sensitive data (e.g., a Trojan horse that sends confidential data to untrusted users). However, as a global property, non-interference is hard to verify and implement.

To this end, we propose a model-based design flow that ensures the non-interference property in an application, from its high-level model down to a decentralized secure implementation. We present the secureBIP framework, an extension of the component-based model with multiparty interactions for security. Non-interference is guaranteed in two practical manners: (1) we annotate all variables and ports of the model and then, according to a defined set of sufficient syntactic constraints, check that the property is satisfied; (2) we partially annotate the model and then, by extracting its compositional dependency graphs, apply a synthesis algorithm that computes the least restrictive secure configuration of the model, if it exists. Once information flow security is established on a high-level model of the system, we follow a practical automated method to build a secure distributed implementation. A set of transformations is applied to the abstract model to progressively transform it into low-level distributed models and finally into a distributed implementation, while preserving information flow security. The model transformations replace high-level coordination using multiparty interactions by protocols using asynchronous Send/Receive message passing. The distributed implementation is therefore proven "secure-by-construction", that is, the final code conforms to the desired security policy. To show the usability of our method, we apply and experiment it on real case studies and examples from distinct application domains.
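The observational definition of non-interference used above (no information about secret data leaks through the variation of public data) can be illustrated with a toy check. This sketch is not secureBIP; it simply varies the secret input while holding the public input fixed and watches the public output:

```python
# Illustrative sketch (not secureBIP): non-interference requires that the
# public output does not vary with the secret input. We test two toy
# programs by running them on the same public input with different secrets.

def leaky(secret, public):
    # Public output depends on the secret: interference.
    return public + (1 if secret > 0 else 0)

def secure(secret, public):
    # Public output ignores the secret entirely.
    return public * 2

def interferes(prog, publics, secrets):
    """Detect interference by varying only the secret input."""
    return any(
        len({prog(s, p) for s in secrets}) > 1 for p in publics
    )

print(interferes(leaky, [0, 1], [-1, 1]))   # True: a leak is observable
print(interferes(secure, [0, 1], [-1, 1]))  # False
```

A testing-style check like this can only exhibit interference on the inputs it tries; the thesis's syntactic constraints and synthesis algorithm instead establish the property for all executions.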
223

Precise abstract interpretation of hardware designs

Mukherjee, Rajdeep January 2018
This dissertation shows that bounded property verification of hardware Register Transfer Level (RTL) designs can be performed efficiently by precise abstract interpretation of a software representation of the RTL. The first part of this dissertation presents a novel framework for RTL verification using native software analyzers. To this end, we first present a translation of a hardware circuit expressed in Verilog RTL into C software, called the software netlist. We then present the application of native software analyzers, based on SAT/SMT decision procedures as well as abstraction-based techniques such as abstract interpretation, to the formal verification of the software netlist generated from the hardware RTL. In particular, we show that path-based symbolic execution techniques, commonly used for automatic test case generation in system software, are also effective for proving bounded safety as well as for detecting bugs in software netlist designs. Furthermore, by means of experiments, we show that abstract interpretation techniques, commonly used for static program analysis, can also be used for bounded as well as unbounded safety property verification of software netlist designs. However, the analysis using abstract interpretation shows a high degree of imprecision on our benchmarks, which is handled by manually guiding the analysis with various trace partitioning directives. The second part of this dissertation presents a new theoretical framework and a practical instantiation for automatically refining the precision of abstract interpretation using Conflict Driven Clause Learning (CDCL)-style analysis. The theoretical contribution is an abstract interpretation framework, called Abstract Conflict Driven Learning for Safety (ACDLS), that generalizes CDCL towards precise safety verification with automatic transformer refinement. The practical contribution instantiates ACDLS over a template polyhedra abstract domain for bounded safety verification of software netlist designs. We show experimentally that ACDLS is more efficient than a SAT-based analysis and significantly more precise than a commercial abstract interpreter.
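The idea of running an abstract interpreter over a software netlist can be sketched with a minimal interval domain. The datapath below is a hypothetical stand-in for a translated RTL design, not the dissertation's framework:

```python
# Sketch of interval abstract interpretation over a tiny "software netlist":
# each RTL signal becomes a program variable, and we propagate value
# intervals instead of concrete values to prove a bounded safety property.

class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        prods = [a * b for a in (self.lo, self.hi)
                 for b in (other.lo, other.hi)]
        return Interval(min(prods), max(prods))

# Software-netlist-style datapath: out = a * b + a, with a, b in [0, 3].
a, b = Interval(0, 3), Interval(0, 3)
out = a * b + a

# Safety property: out never exceeds 15 (i.e. fits in 4 bits).
print((out.lo, out.hi))  # abstract bounds over-approximate all runs
print(out.hi <= 15)      # property proven on the abstraction
```

The abstraction is sound but may be imprecise (here the true maximum, 12, happens to be reached); the imprecision the dissertation reports is what trace partitioning and the ACDLS refinement are meant to fight.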
224

CAD Tool for Output Coverage Model Extraction in Functional Verification

Joel Iván Muñoz Quispe 25 October 2011
In current integrated system development environments, the requirements for the design of multi-function systems have increased constantly, generating highly complex projects. Consequently, the number of iterations in the design flow has also grown. A solution for this problem has been the use of IP cores to speed up hardware development. However, to guarantee a high level of reliability for these components, the verification process has to be kept strict in order to prove that all system properties have been satisfied. The mainstream technique used in industry for this is dynamic functional verification, which aims to explore, by test vector injection, as much of the state space of the circuit as possible. The higher the number of possible states, the higher the number of test vectors to be inserted. Since the number of test vectors must nevertheless be kept as low as possible, completion and sufficiency metrics, identified as the coverage model, must be carefully defined. The coverage metrics are established according to the observation strategy for the design under verification (DUV); the black-box approach is very common in industry, stimulating the inputs and observing the events at the DUV outputs. To determine whether the system meets the specifications, the verification engineer must define the output events (s)he considers relevant and the metrics that determine how many times those events must be observed. This type of modeling is known as item coverage. The items and events to be observed may be defined from the engineer's expert knowledge but, in most cases, to simplify this task, a uniform distribution is adopted. Since these forms of modeling do not abstract the functionality of the circuit, the assumed probability distribution of the chosen events (parameters) is, in general, uncorrelated with the real profile observed when the testbenches are run, and simulation time increases as a consequence.

To address this problem, this work develops a methodology to obtain an output coverage model whose distribution profile is similar to the real one, thus assisting the verification engineer in selecting the output points or ranges of interest, in addition to the decisions derived from his or her expert knowledge. The methodology finds the equations that define the outputs of the circuit under verification and, from them, computes the probabilistic distribution per observable event. At the core of the methodology is the PrOCov (Probabilistic Output Coverage) tool, designed with the goals above. The methodology and the tool were tested on three circuits described in a high-level language: an FIR filter, an FFT processor, and an Elliptic filter, all written in SystemC. In all three cases, PrOCov satisfactorily found the respective output profiles. These were compared with the profiles obtained by simulation, showing that excellent precision can be achieved; only small variations were found, due to approximation errors. Variations of precision and simulation time as functions of the resolution of the output parameters (events) were also analyzed in this dissertation.
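The gap between an assumed uniform coverage model and the real output profile is easy to reproduce on a toy design. The sketch below derives the exact output distribution of a hypothetical two-input datapath by enumeration; it does not reproduce PrOCov's analytic approach:

```python
from collections import Counter
from itertools import product

# Sketch of the motivation for output item coverage: instead of assuming a
# uniform distribution over output bins, derive the actual output profile
# of a small design (a 2-tap FIR-like sum y = x0 + 2*x1) by enumerating
# all pairs of 3-bit inputs.

def fir2(x0, x1):
    return x0 + 2 * x1

hist = Counter(fir2(x0, x1) for x0, x1 in product(range(8), repeat=2))
total = sum(hist.values())

# Probability of each output value: clearly non-uniform, so uniform
# coverage bins would over-weight rare outputs (like y = 0) and
# under-weight common ones.
profile = {y: hist[y] / total for y in sorted(hist)}
print(profile[0], profile[7], max(profile.values()))
```

For real designs exhaustive enumeration is infeasible, which is why deriving the profile analytically from the output equations, as the thesis proposes, matters.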
225

Architecture-Based Verification of Software-Intensive Systems

Johnsen, Andreas January 2010
Development of software-intensive systems, such as embedded systems for telecommunications, avionics, and automotive applications, occurs under severe quality, schedule, and budget constraints. As the size and complexity of software-intensive systems increase dramatically, problems originating from the design and specification of the system architecture become increasingly significant. Architecture-based development approaches promise to improve the efficiency of software-intensive system development by reducing costs and time while increasing quality. This promise rests partly on the fact that the system architecture abstracts away unnecessary details, so that developers can concentrate both on the system as a whole and on its individual pieces, whether they are the components, the components' interfaces, or the connections among components. The use of architecture description languages (ADLs) provides an important basis for verification, since an ADL describes how the system should behave in a high-level view and in a form from which automated tests can be generated. Analysis and testing based on architecture specifications allow problems and faults to be detected early in the development process, even before the implementation phase, thereby saving significant cost and time. Furthermore, tests derived from the architecture specification can later be applied to the implementation to check its conformance with respect to the specification. This thesis extends the knowledge base in the area of architecture-based verification. In this thesis report, an airplane control system is specified using the Architecture Analysis and Design Language (AADL). This specification serves as the starting point of a system development process in which the developed architecture-based verification algorithms are applied.
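The step from an architectural behavior specification to derived tests can be sketched on a toy mode automaton. The states, labels, and coverage criterion below are hypothetical illustrations, not the thesis's AADL-based algorithms:

```python
from collections import deque

# Hedged sketch of architecture-based test derivation: the behavioral part
# of an architecture specification (a hypothetical stand-in for an AADL
# mode automaton) is a labelled transition system, and a test suite is a
# set of input sequences that together cover every transition.

spec = {  # state -> [(input, next_state)]
    "off":    [("power_on", "idle")],
    "idle":   [("engage", "cruise"), ("power_off", "off")],
    "cruise": [("disengage", "idle")],
}

def transition_tests(spec, init):
    """Shortest input sequence reaching each transition (BFS)."""
    tests, seen = [], set()
    queue = deque([(init, [])])
    while queue:
        state, path = queue.popleft()
        for label, nxt in spec.get(state, []):
            if (state, label) not in seen:
                seen.add((state, label))
                tests.append(path + [label])
                queue.append((nxt, path + [label]))
    return tests

for t in transition_tests(spec, "off"):
    print(t)
```

Each derived sequence can be replayed against the implementation; a mismatch at any step is a conformance fault of exactly the kind the thesis aims to detect before and after the implementation phase.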
226

Algorithmic multiparameterised verification of safety properties: process algebraic approach

Siirtola, A. (Antti) 28 September 2010
Due to the increasing amount of concurrency, systems have become difficult to design and analyse. In this effort, formal verification, which means proving the correctness of a system, has turned out to be useful. Unfortunately, the application domain of formal verification methods is often indefinite, tools are typically unavailable, and most of the techniques are not especially well suited to the verification of software systems. These are the questions addressed in this thesis. A typical approach to modelling systems and specifications is to consider them parameterised by the restrictions of the execution environment, which results in an (infinite) family of finite-state verification tasks. The thesis introduces a novel approach to the verification of such infinite specification-system families represented as labelled transition systems (LTSs). The key idea is to exploit the algebraic properties of the correctness relation: they allow the correctness of large system instances to be derived from that of smaller ones and, in the best case, an infinite family of finite-state verification tasks to be reduced to a finite one, which can then be solved using existing tools. The main contribution of the thesis is an algorithm that automates the reduction method. A specification and a system are given as parameterised LTSs, and the allowed parameter values are encoded using first-order logic. Parameters are sets and relations over these sets, typically used to denote, respectively, the identities of replicated components and the relationships between them. Because the number of parameters is not limited and parameters can be nested, one can express multiply parameterised systems with a parameterised substructure, an essential property from the viewpoint of modelling software systems. The algorithm terminates on all inputs, so its application domain is explicit in this sense. Other proposed parameterised verification methods do not have both of these features. Moreover, some of the earlier results on the verification of parameterised systems are obtained as a special case of the results presented here. Finally, several natural and significant extensions to the formalism are considered, and it is shown that the problem becomes undecidable in each of these cases. Therefore, the algorithm cannot be significantly extended in any direction without simultaneously restricting some other aspect.
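After a cut-off reduction of the kind described above, what remains is a finite set of finite-state refinement checks. The sketch below shows one such check, a bounded trace-inclusion test between an instance LTS and its specification (assuming a deterministic specification); it is an illustration, not the thesis's algorithm:

```python
# Illustrative sketch of the finite checks left after a cut-off reduction:
# each remaining instance is a finite-state refinement check, here a
# bounded test that traces(impl) are included in traces(spec) over
# labelled transition systems with a deterministic spec.

def traces_subset(impl, spec, impl_init, spec_init, depth=6):
    """Bounded check that every trace of impl is a trace of spec."""
    stack = [(impl_init, spec_init, 0)]
    while stack:
        si, ss, d = stack.pop()
        if d == depth:
            continue
        for label, ni in impl.get(si, []):
            matches = [ns for l, ns in spec.get(ss, []) if l == label]
            if not matches:
                return False  # impl can do something spec cannot
            for ns in matches:
                stack.append((ni, ns, d + 1))
    return True

# A single instance of a token protocol versus a mutual-exclusion spec.
spec = {"free": [("acquire", "busy")], "busy": [("release", "free")]}
impl = {0: [("acquire", 1)], 1: [("release", 0)]}
print(traces_subset(impl, spec, 0, "free"))  # True
```

The point of the reduction is that finitely many checks like this one, on small instances, suffice to conclude correctness for every instance of the parameterised family.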
227

Computation of Barrier Certificates for Hybrid Dynamical Systems Using Interval Analysis

Djaballah, Adel 03 July 2017
This thesis addresses the problem of proving the safety of systems described by non-linear dynamical models and hybrid dynamical models. A system is said to be safe if no trajectory of its state reaches an unsafe region, for any initial state in an admissible set of initial states and any disturbance remaining within an admissible domain. Proving the safety of a system by explicitly computing all its trajectories, when its dynamics are non-linear or when its behavior is described by a hybrid model with non-linear dynamics, remains a very difficult task. This thesis therefore approaches the problem with parameterised barrier functions. A barrier function, when it exists, partitions the state space and isolates the set of possible trajectories of the system, starting from any admissible initial state, from the unsafe part of the state space. The constraints the barrier function has to satisfy, which involve the system dynamics, the set of possible initial states, and the unsafe set, are in general non-convex, making the search for satisfying barrier functions hard. Previously, only polynomial barrier functions for systems with polynomial dynamics had been considered. This thesis considers relatively general dynamical systems with generic non-linear barrier functions. The proposed solutions are based on template barrier functions, constraint satisfaction over continuous domains, and interval analysis.

The first part of the thesis focuses on non-linear continuous-time dynamical systems. The barrier function design problem is formulated as a constraint satisfaction problem over real domains, with existentially and universally quantified variables. The CSC-FPS algorithm has been adapted to solve this barrier synthesis problem: it combines exploration of the space of barrier parameters with a verification phase for the barrier properties, and contractors are used to significantly accelerate the search for solutions. The second part extends these results to systems described by hybrid dynamical models. The safety property has to be proven during the continuous-time evolution of the system, but also during transitions. This requires additional constraints that link the barrier functions associated with the different continuous-time modes. Synthesizing the barrier functions of all modes simultaneously is tractable only for very small systems with few modes, so a sequential approach is proposed: transition constraints are introduced progressively between the modes for which a barrier has already been obtained. When some transition constraints are not satisfied, a backtracking method is used to synthesize barriers that better account for the unsatisfied transition constraints. These approaches have been evaluated and compared with state-of-the-art techniques on systems described by continuous-time and hybrid models.
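For reference, one standard formulation of the constraints a barrier function must satisfy can be stated explicitly. For a system x' = f(x) with initial set X_0 and unsafe set X_u, a barrier certificate B guarantees safety when (notation assumed here, not taken from the thesis):

```latex
\forall x \in X_0,\; B(x) \le 0, \qquad
\forall x \in X_u,\; B(x) > 0, \qquad
\forall x \text{ such that } B(x) = 0,\; \nabla B(x) \cdot f(x) \le 0 .
```

The first two conditions separate the initial states from the unsafe set by the zero level set of B; the third forbids trajectories from crossing that level set outward, so no trajectory starting in X_0 can ever reach X_u.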
228

Relational properties for specification and verification of C programs in Frama-C

Blatter, Lionel 26 September 2019
Deductive verification techniques provide powerful methods for the formal verification of properties expressed in Hoare logic. In this formalization, also known as axiomatic semantics, a program is seen as a predicate transformer, where each program c executed on a state satisfying a property P leads to a state satisfying another property Q. Relational properties, on the other hand, link a set of programs to two properties. More precisely, a relational property is a property about n programs c1, ..., cn stating that if each program ci starts in a state si and ends in a state s'i such that P(s1, ..., sn) holds, then Q(s'1, ..., s'n) holds. Thus, relational properties involve any finite number of executions of possibly dissimilar programs. Such properties cannot be expressed directly in the traditional setting of modular deductive verification, as axiomatic semantics cannot refer to two distinct executions of a program c, or to different programs c1 and c2. This thesis brings two solutions to the deductive verification of relational properties. Both make it possible to prove a relational property and to use it as a hypothesis in subsequent verifications. We model our solutions using a small imperative language containing procedure calls. Both solutions are implemented in the context of the C programming language, the FRAMA-C platform, the ACSL specification language, and the deductive verification plugin WP. The new tool, called RPP, allows one to specify a relational property, to prove it using classic deductive verification, and to use it as a hypothesis in the proof of other properties. The tool is evaluated over a set of illustrative examples. Experiments have also been made on runtime checking of relational properties and on counterexample generation when a property cannot be proved.
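One classical way to reduce a relational property to a standard verification problem is self-composition: run two copies of the program in a single harness and state the relational pre/postcondition over both executions. The sketch below applies that idea, in a testing style, to a hypothetical 2-property (monotonicity of a pricing function); it is an illustration of the general technique, not RPP:

```python
# Hedged sketch of self-composition for a relational property with n = 2:
# to check that qty1 <= qty2 implies price(qty1) <= price(qty2), a harness
# runs two copies of the program and evaluates the relational
# postcondition over both final states.

def price(qty):
    # Hypothetical program under analysis: bulk discount at 10 items.
    return qty * (8 if qty >= 10 else 10)

def self_composed_check(inputs):
    """Monotonicity: P(s1, s2) = (qty1 <= qty2), Q = (out1 <= out2)."""
    return all(
        price(q1) <= price(q2)
        for q1 in inputs for q2 in inputs if q1 <= q2
    )

print(self_composed_check(range(20)))  # False: price(9)=90 > price(10)=80
```

Here the harness finds a genuine relational bug at the discount threshold. A deductive tool proves the same kind of property for all inputs rather than an enumerated set, which is the setting the thesis addresses.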
229

Function Verification of Combinational Arithmetic Circuits

Liu, Duo 17 July 2015 (has links)
Hardware design verification is the most challenging part of the overall hardware design process, because design size and complexity are growing very fast while the requirements for performance are ever higher. Conventional simulation-based verification methods cannot keep up with the rapid increase in design size, since it is impossible to exhaustively test all input vectors of a complex design. An important part of hardware verification is combinational arithmetic circuit verification. It draws a lot of attention because flattening the design into bit level, known as the bit-blasting problem, hinders the efficiency of many current formal techniques. The goal of this thesis is to introduce a robust and efficient formal verification method for combinational integer arithmetic circuits, based on an in-depth analysis of recent advances in computer algebra. The method proposed here solves the verification problem at bit level while avoiding the bit-blasting problem. It also avoids the expensive Gröbner basis computation typically employed by symbolic computer algebra methods. The proposed method verifies the gate-level implementation of the design by representing the design components (logic gates and arithmetic modules) as polynomials in Z_2^n. It then transforms the polynomial representing the output bits (called the "output signature") into a unique polynomial in the input signals (called the "input signature") using gate-level information of the design. The computed input signature is then compared with the reference input signature (golden model) to determine whether the circuit behaves as anticipated. If the reference input signature is not given, our method can be used to compute (or extract) the arithmetic function of the design by computing its input signature. Additional tools, based on canonical word-level design representations (such as TED or BMD), can be used to determine the function that the computed input signature represents.
We demonstrate the applicability of the proposed method to arithmetic circuit verification on a large number of designs.
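The signature-rewriting idea can be made concrete with a toy example. The sketch below is illustrative only and makes simplifying assumptions not in the thesis: single-bit signals, a one-bit full adder instead of a multi-bit design, and idempotent monomials (x·x = x for Boolean x) standing in for reduction modulo 2^n. The output signature 2·cout + s, rewritten backward through standard integer gate polynomials, collapses to the input signature a + b + c of an adder.

```python
from collections import defaultdict

def poly(var=None, const=0):
    """Polynomial over Boolean signals, stored as
    {frozenset of variable names: integer coefficient}.
    Since x*x = x for a Boolean x, a monomial is just a set of variables."""
    p = defaultdict(int)
    if var is not None:
        p[frozenset([var])] = 1
    if const:
        p[frozenset()] = const
    return p

def add(p, q, k=1):
    """Return p + k*q."""
    r = defaultdict(int, p)
    for m, c in q.items():
        r[m] += k * c
    return r

def mul(p, q):
    r = defaultdict(int)
    for m1, c1 in p.items():
        for m2, c2 in q.items():
            r[m1 | m2] += c1 * c2   # set union encodes idempotence
    return r

# Standard integer encodings of logic gates:
def XOR(x, y): return add(add(x, y), mul(x, y), -2)  # x + y - 2xy
def AND(x, y): return mul(x, y)
def OR(x, y):  return add(add(x, y), mul(x, y), -1)  # x + y - xy

# Gate-level full adder: s = a ^ b ^ cin, cout = ab | ((a ^ b) & cin)
a, b, cin = poly('a'), poly('b'), poly('c')
t = XOR(a, b)
s = XOR(t, cin)
cout = OR(AND(a, b), AND(t, cin))

# Rewriting the output signature 2*cout + s yields the input signature:
sig = {m: c for m, c in add(s, cout, 2).items() if c}
# all cross terms cancel, leaving a + b + c
assert sig == {frozenset({'a'}): 1, frozenset({'b'}): 1, frozenset({'c'}): 1}
```

Comparing the computed `sig` against a reference input signature (here, a + b + c for an adder) is exactly the equivalence check the abstract describes, without ever enumerating input vectors.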
230

Formal Modeling and Verification Methodologies for Quasi-Delay Insensitive Asynchronous Circuits

Sakib, Ashiq Adnan January 2019 (has links)
Pre-Charge Half Buffers (PCHB) and NULL Convention Logic (NCL) are two major commercially successful Quasi-Delay Insensitive (QDI) asynchronous paradigms, known for their low-power performance and inherent robustness. In industry, QDI circuits are synthesized from their synchronous counterparts using custom synthesis tools. Validation of the synthesized QDI implementation is a critical design prerequisite before fabrication. At present, validation schemes are mostly based on extensive simulation, which is good enough to detect shallow bugs but may fail to detect corner-case bugs. Hence, the development of formal verification methods for QDI circuits has long been desired. The very few formal verification methods that exist in the related field have major limiting factors. This dissertation presents different formal verification methodologies applicable to PCHB and NCL circuits, and aims at addressing the limitations of previous verification approaches. The developed methodologies can guarantee both safety (full functional correctness) and liveness (absence of deadlock), and are demonstrated using several increasingly larger sequential and combinational PCHB and NCL circuits, along with various ISCAS benchmarks. / National Science Foundation (Grant No. CCF-1717420)
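A core NCL ingredient that any such verification must model is the threshold gate with hysteresis. The sketch below is a minimal illustrative model of THmn gate behavior, written for this summary rather than taken from the dissertation: the output asserts once m of the n inputs are asserted, deasserts only after all inputs return to NULL, and otherwise holds state. That state-holding behavior is what gives NCL its delay insensitivity, and also what makes its state space (and hence liveness checking) nontrivial.

```python
def th_gate(m, inputs, prev_out):
    """THmn threshold gate with hysteresis, the NCL primitive.
    Asserts when at least m of the n inputs are asserted; deasserts
    only when ALL inputs are deasserted; otherwise holds its state."""
    asserted = sum(inputs)
    if asserted >= m:
        return 1
    if asserted == 0:
        return 0
    return prev_out  # hysteresis: hold previous output

# TH23 gate: set by any 2 of 3 asserted inputs, cleared only by all-NULL.
assert th_gate(2, [1, 1, 0], prev_out=0) == 1   # threshold reached
assert th_gate(2, [1, 0, 0], prev_out=1) == 1   # holds through partial NULL
assert th_gate(2, [0, 0, 0], prev_out=1) == 0   # full NULL wavefront resets
```

Because the gate's output depends on `prev_out`, a combinational NCL netlist is really a sequential system, which is one reason simulation alone struggles to cover its corner cases.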
