1

Testing and Verification Strategies for Enhancing Trust in Third Party IPs

Banga, Mainak 17 December 2010 (has links)
Globalization of the semiconductor industry has accelerated the trend of outsourcing component design and manufacturing across geographical boundaries. While cost reduction and short time to market drive this trend, the authenticity of the final product remains a major question. Third-party deliverables rest solely on mutual trust, and a manufacturer with malicious intent can tamper with the original design so that it behaves otherwise than expected in specific situations. Should such a backfire occur, the consequences can be disastrous, especially for mission-critical systems such as space exploration, defense equipment such as missiles, and life-saving equipment such as medical devices, where a single failure can translate into a loss of lives or millions of dollars. Thus, outsourcing brings with it the question of trustworthy design: how to ensure that the integrity of a product manufactured by a third party has not been compromised. This dissertation aims at developing verification methodologies and implementing non-destructive testing strategies to ensure the authenticity of a third-party IP. This can be accomplished at various levels of the IC product life cycle. At the design stage, special testability features can be incorporated into the circuit to enhance its overall testability, making the otherwise hard-to-test portions of the design testable at the post-silicon stage. We propose two different approaches to enhance the testability of the overall circuit. The first enables improved at-speed testing of the design, while the second aims to exaggerate the effect of unwanted tampering (if present) on the IC. At the verification level, techniques such as sequential equivalence checking can be employed to compare the third-party IP against a genuine specification and filter out components showing any deviation from the intended behavior. At the post-silicon stage, power discrepancies beyond a certain threshold between two otherwise identical ICs can indicate the presence of a malicious insertion in one of them. We address all of these stages in this dissertation and suggest techniques that can be employed at each. Our experiments show promising results for detecting such alterations/insertions in the original design. / Ph. D.
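As a rough illustration of the post-silicon power-comparison idea in this abstract, the sketch below is a minimal, hypothetical example only: the trace format, the mean-absolute-difference metric, and the threshold value are assumptions for illustration and not the dissertation's actual detection method.

```python
# Hypothetical sketch: flag a possible malicious insertion by comparing the
# power traces of two supposedly identical ICs driven with the same test vectors.
# Trace format, metric, and threshold are illustrative assumptions only.

def power_discrepancy(trace_golden, trace_suspect):
    """Mean absolute difference between two equal-length power traces (in mW)."""
    assert len(trace_golden) == len(trace_suspect)
    return sum(abs(g - s) for g, s in zip(trace_golden, trace_suspect)) / len(trace_golden)

def is_suspicious(trace_golden, trace_suspect, threshold_mw=0.2):
    """Return True if the average discrepancy exceeds the chosen threshold."""
    return power_discrepancy(trace_golden, trace_suspect) > threshold_mw

# Example usage with made-up data: the suspect IC draws extra power on some cycles.
golden = [1.0, 1.2, 0.9, 1.1, 1.0]
suspect = [1.0, 1.9, 0.9, 1.8, 1.0]
print(is_suspicious(golden, suspect))  # True here (mean difference is 0.28 mW)
```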
2

Sufficiency-based Filtering of Invariants for Sequential Equivalence Checking

Hu, Wei 14 February 2011 (has links)
Verification, as opposed to testing and post-silicon validation, is a critical step in Integrated Circuit (IC) design, answering the question "Are we designing the right function?" before the chips are manufactured. One of the core areas of verification is Equivalence Checking (EC), a special yet independent case of Model Checking (MC). Equivalence checking aims to prove that two circuits, when fed with the same inputs, produce exactly the same outputs. There are broadly two ways to conduct equivalence checking: simulation and formal equivalence checking. Simulation requires one to try out different input combinations and observe whether the two circuits produce the same output; since it is not possible to enumerate all input combinations, completeness cannot be guaranteed. Formal equivalence checking, on the other hand, can achieve 100% confidence. As the number of gates, and in particular the number of flip-flops, in circuits has grown tremendously in recent years, the problem of formal equivalence checking has become much harder: a recent evaluation of a general-case formal equivalence checking engine [1] shows that about 15% of industrial designs cannot be verified after a typical sequential synthesis flow. As a result, formal equivalence checking has drawn a lot of attention both academically and industrially. For years, Combinational Equivalence Checking (CEC) has been the pervasive framework for Formal Equivalence Checking (FEC) in industry. However, because it can verify circuits only with a 1:1 flip-flop pairing, a pure CEC-based methodology requires a full regression of the verification process, meaning that sequential optimizations such as retiming or FSM re-encoding become somewhat of a bottleneck in the design cycle [2]. Therefore, a more powerful framework, Sequential Equivalence Checking (SEC), has gradually been adopted in industry. In this thesis, we target sequential equivalence checking by finding an efficient yet powerful group of relationships (invariants) among the signals of the two circuits being compared. To achieve a high success rate on some extremely hard-to-verify circuits, we are interested in both two-node and multi-node (up to 4 nodes) invariants, and in invariants among both flip-flops and internal signals. For large circuits, there can be too many potential invariants, requiring much time to prove; yet we observed that a large portion of them may not even contribute to equivalence checking. Moreover, equivalence checking can be helped significantly if there is a method to check whether a subset of the potential invariants is sufficient (e.g., whether two-node invariants are enough or multi-node invariants are also needed) prior to the verification step. Therefore, we propose two sufficiency-based approaches to identify useful invariants out of the initial potential invariants for SEC. Experimental results show that our approach can either demonstrate insufficiency of the invariants or select a small portion of them that successfully proves the equivalence property. Our approaches are largely case-independent and flexible: they can be applied to circuits produced by different synthesis techniques and combined with other methods. / Master of Science
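A minimal sketch of the invariant-candidate mining idea described above, assuming signal values are available as 0/1 vectors from simulation of the two circuits. The candidate relation classes shown (equivalences and implications between signal pairs) are illustrative, not the thesis's exact invariant classes, and every surviving candidate would still have to be proved (e.g., inductively) before use in SEC.

```python
from itertools import combinations

# Hypothetical sketch: mine candidate two-node invariants from simulation traces.
# `traces` maps a signal name to its 0/1 value in each simulated cycle; the
# signals may come from either of the two circuits being compared.
def mine_two_node_candidates(traces):
    """Return candidate two-node relations that hold on every simulated cycle."""
    candidates = []
    for a, b in combinations(sorted(traces), 2):
        va, vb = traces[a], traces[b]
        if all(x == y for x, y in zip(va, vb)):
            candidates.append((a, "==", b))        # candidate equivalence
        elif all(x <= y for x, y in zip(va, vb)):
            candidates.append((a, "implies", b))   # candidate implication: a=1 -> b=1
    return candidates

# Example usage with a tiny made-up trace (4 simulated cycles per signal).
traces = {
    "g1": [0, 1, 1, 0],   # signal in the golden design
    "d1": [0, 1, 1, 0],   # matching signal in the design under verification
    "d2": [1, 1, 1, 0],
}
for relation in mine_two_node_candidates(traces):
    print(relation)
```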
3

Sequential Equivalence Checking with Efficient Filtering Strategies for Inductive Invariants

Nguyen, Huy 24 May 2011 (has links)
Powerful sequential optimization techniques can drastically change the Integrated Circuit (IC) design paradigm. Due to the limited capability of sequential verification tools, aggressive sequential optimization is shunned nowadays, as there is no efficient way to prove the preservation of equivalence after optimization. Because the number of transistors fitting on a single fixed-size die increases with Moore's law, the problem gets harder over time at an exponential rate. It is no surprise that functional verification has become a major bottleneck in the time-to-market of a product; in fact, the literature reports that 70% of design time is spent on making sure the design is bug-free and operating correctly. One of the core verification tasks in achieving high-quality products is equivalence checking. Essentially, equivalence checking ensures that the optimized product's functionality is preserved with respect to the unoptimized model. This is important for industry because products are modified constantly to meet different goals such as low power, high performance, etc. The mainstream approaches to equivalence checking are simulation and formal verification. In the simulation approach, the golden design and the design under verification (DUV) are fed with the same input stimuli, and the outputs are expected to produce identical responses; in case of discrepancy, traces are generated and the DUV undergoes modification. With the increase in input pins and state elements in designs, exhaustive simulation becomes infeasible, so the completeness of the approach is not guaranteed and must be accompanied by notions of coverage. Formal verification, on the other hand, incorporates mathematical proofs and guarantees completeness over the search space. However, formal verification has problems of its own: it is usually resource intensive, and not every design can be verified after optimization, since the golden model and the DUV may be so different in structure that modern checkers give an inconclusive result. For these reasons, this thesis focuses on improving the strength and efficiency of sequential equivalence checking (SEC) using a formal approach. While great strides have been made in the verification of combinational circuits, SEC still remains rather rudimentary. Without powerful SEC as a backbone, aggressive sequential synthesis and optimization are often avoided if the optimized design cannot be proved equivalent to the original one. In an attempt to take on the challenges of SEC, we propose two frameworks that successfully determine equivalence for hard-to-verify circuits. The first framework utilizes arbitrary relations between any two nodes within the two sequential circuits in question. The two nodes can reside in the same circuit or across the two circuits; likewise, they can be from the same time-frame or across time-frames. The merit of this approach is that it uses the global structure of the circuits to speed up the verification process. The second framework introduces techniques to identify a small yet powerful subset of multi-node relations (involving more than two nodes), which help to prune a large don't-care search space and result in a successful SEC framework. In contrast with previous approaches, in which an exponential number of multi-node relations are mined and learned, we alleviate the computational cost by selecting far fewer invariants to reach the desired conclusion.
Although independent, the two frameworks can be used in sequence to complement each other. Experimental results demonstrate that our frameworks can take on many hard-to-verify cases and show a significant speedup over previous approaches. / Master of Science
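To make the "same stimuli, compare outputs" simulation flow described in this abstract concrete, here is a small, hypothetical sketch with two toy single-bit Moore machines standing in for the golden model and the DUV; it also shows why bounded simulation alone cannot guarantee completeness. The machine definitions and the bound are assumptions for illustration only.

```python
import itertools

# Hypothetical sketch of the simulation approach: drive the golden model and the
# design under verification (DUV) with the same input sequences and compare
# outputs cycle by cycle. Both toy machines are given as step functions that
# return (next_state, output).

def golden_step(state, inp):
    return (state ^ inp, state)          # toggles its 1-bit state when inp=1

def duv_step(state, inp):
    return ((state + inp) % 2, state)    # re-encoded but behaviorally identical

def simulate(step, init_state, inputs):
    state, outputs = init_state, []
    for inp in inputs:
        state, out = step(state, inp)
        outputs.append(out)
    return outputs

def bounded_equiv_check(depth=4):
    """Exhaustively simulate all input sequences up to `depth` cycles.

    This only covers bounded behavior; unlike formal SEC it cannot, by itself,
    conclude equivalence for all reachable states and all sequence lengths.
    """
    for inputs in itertools.product([0, 1], repeat=depth):
        if simulate(golden_step, 0, inputs) != simulate(duv_step, 0, inputs):
            return False, inputs          # counterexample input trace
    return True, None

print(bounded_equiv_check())  # (True, None) for these two toy machines
```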
4

Sequential Equivalence Checking of Circuits with Different State Encodings by Pruning Simulation-based Multi-Node Invariants

Yuan, Zeying 05 October 2015 (has links)
Verification is an important step in Integrated Circuit (IC) design. In fact, the literature reports that up to 70% of the design effort is spent on checking whether the design is functionally correct. One of the core verification tasks is Equivalence Checking (EC), which attempts to check whether two structurally different designs are functionally equivalent for all reachable states. Powerful equivalence checking can also provide opportunities for more aggressive logic optimizations, meeting different goals such as smaller area, better performance, etc. The success of Combinational Equivalence Checking (CEC) has laid the foundation for industry-level combinational logic synthesis and optimization. However, Sequential Equivalence Checking (SEC) still faces many challenges, especially for complex circuits that have different state encodings and few internal signal equivalences. In this thesis, we propose a novel simulation-based multi-node inductive invariant generation and pruning technique to check the equivalence of sequential circuits that have different state encodings and very few equivalent signals between them. We first group flip-flops into smaller subsets to make the approach scalable for large designs, and then propose a constrained logic synthesis technique to prune potential multi-node invariants without inadvertently losing important constraints. Our pruning technique guarantees the same conclusion for each instance (proving SEC or not) as previous approaches, in which merging such potential invariants might lose important relations if the merged relation does not turn out to be a true invariant. Experimental results show that the smaller invariant set can be very effective for sequential equivalence checking of such hard SEC instances. Our approach is up to 20x faster than previous mining-based methods for larger circuits. / Master of Science
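The sketch below illustrates only the simulation-based candidate-generation step mentioned above: for each small group of flip-flops, the observed value combinations form a candidate multi-node invariant. The group size, data format, and filtering rule are assumptions for illustration; the thesis's constrained-logic-synthesis pruning is not shown, and each candidate would still need to be proved inductively.

```python
from itertools import combinations

# Hypothetical sketch: generate candidate multi-node invariants over small groups
# of flip-flops from simulated states. A candidate asserts that the group only
# takes the value combinations observed in simulation.

def candidate_group_invariants(states, flop_names, group_size=3):
    """Map each flip-flop group to the set of value combinations observed.

    `states` is a list of simulated state vectors (dicts: flop name -> 0/1).
    Each returned entry is only a candidate invariant and must still be proved.
    """
    candidates = {}
    for group in combinations(flop_names, group_size):
        seen = {tuple(s[f] for f in group) for s in states}
        if len(seen) < 2 ** group_size:        # only useful if it excludes something
            candidates[group] = seen
    return candidates

# Example usage with made-up simulation data over four flip-flops.
flops = ["q0", "q1", "q2", "q3"]
states = [
    {"q0": 0, "q1": 0, "q2": 1, "q3": 0},
    {"q0": 1, "q1": 0, "q2": 1, "q3": 1},
    {"q0": 0, "q1": 1, "q2": 0, "q3": 0},
]
for group, seen in candidate_group_invariants(states, flops).items():
    print(group, "restricted to", sorted(seen))
```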
5

Checagem de equivalência de sequências de estados de projetos digitais em RTL com modelos de referência em alto nível e de protocolo de comunicação. / Equivalence checking of digital RTL design state sequences with high-level reference and communication protocol models.

Castro Márquez, Carlos Iván 20 February 2014 (has links)
Functional verification is the set of tasks aimed at discovering bugs introduced during integrated circuit design, and it represents an important challenge because it strongly influences the efficiency of the entire production cycle. It is estimated that up to 80% of total design costs are due to verification, making it the main bottleneck in reducing time-to-market. This problem has given rise to a variety of strategies to reduce the effort or to increase the coverage capability of verification. On the one hand, simulation can find a reasonable number of design bugs, but the slowness of cycle-accurate RTL simulation keeps the real state coverage minimal. On the other hand, formal verification methods provide high state coverage. One of them is model checking, which checks the validity of a set of properties over all states of the design under verification; however, this technique suffers from the state explosion problem and from the difficulty of specifying a robust set of properties. Another formal alternative is equivalence checking which, instead of verifying properties, compares the design against a reference model. Traditional equivalence checking, however, is applicable only to descriptions at the same level of abstraction and with identical interfaces. Notably, no works were found in the literature on the formal verification of RTL descriptions that consider both the computational aspects (present in the high-level reference model) and the interface communication aspects (derived from the functional specification of the protocol). This work presents a formal verification methodology that uses equivalence checking techniques to determine the validity of an RTL implementation by comparing it with a high-level reference model and with a formal model of the communication protocol. To enable this check, the methodology is based on the concept of state sequences, rather than individual states as in traditional equivalence checking. The natural discrepancies between different levels of abstraction are taken into account, including different alphabets, state mapping, and timing mismatches. The characterization and solution of the problem are developed within a theoretical framework in which concepts and definitions are presented and their validity is formally proved. A tool for the practical application of the methodology was developed and applied to different types of RTL descriptions, written in VHDL and SystemC. The results demonstrate effectiveness and efficiency in the formal verification of digital circuits that include, but are not limited to, error correction, encryption, image processing, and mathematical functions. The tool also proved able to discover purposely injected combinational and sequential bugs, related both to the functionality of the reference model and to the communication protocol specification, within times and iteration counts that are practical for real cases.
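A minimal sketch of the state-sequence comparison idea described above: abstract each RTL state through a mapping, collapse consecutive repeats to absorb the extra cycles an RTL implementation may take, and compare against the reference model's state sequence. The mapping function, the stuttering treatment, and the state encodings are illustrative assumptions, not the thesis's formal definitions.

```python
# Hypothetical sketch: compare an RTL state sequence against a high-level
# reference sequence through a user-supplied abstraction mapping.

def collapse_stutter(sequence):
    """Remove consecutive duplicates: [A, A, B, B, C] -> [A, B, C]."""
    out = []
    for item in sequence:
        if not out or out[-1] != item:
            out.append(item)
    return out

def sequences_match(rtl_states, reference_states, abstraction):
    """Check that the abstracted, stutter-free RTL trace equals the reference trace."""
    abstracted = [abstraction(s) for s in rtl_states]
    return collapse_stutter(abstracted) == collapse_stutter(reference_states)

# Example usage: the RTL spends several cycles per high-level step and carries
# extra implementation registers that the abstraction mapping ignores.
rtl_trace = [
    {"phase": "IDLE", "cnt": 0}, {"phase": "LOAD", "cnt": 0}, {"phase": "LOAD", "cnt": 1},
    {"phase": "EXEC", "cnt": 0}, {"phase": "EXEC", "cnt": 1}, {"phase": "DONE", "cnt": 0},
]
reference_trace = ["IDLE", "LOAD", "EXEC", "DONE"]
print(sequences_match(rtl_trace, reference_trace, lambda s: s["phase"]))  # True
```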
