1

Multiple Scan Trees Synthesis for Test Time/Data and Routing Length Reduction under Output Constraint

Hung, Yu-Chen 29 July 2009 (has links)
A synthesis methodology for multiple scan trees that simultaneously considers output pin limitations, scan chain routing length, test application time, and test data compression rate is proposed in this thesis. Multiple scan trees, also known as a scan forest, greatly reduce test data volume and test application time in SOC testing. However, previous research on scan tree synthesis rarely considered issues such as routing length and output port limitations, and hence created scan trees with a large number of scan output ports and excessively long routing paths. The proposed algorithm provides a mechanism that effectively reduces test time, test data volume, and routing length under an output port constraint. As a result, no output compressors are required, which significantly reduces the hardware overhead.
2

Test Generation Guided Design for Testability

Wu, Peng 01 July 1988 (has links)
This thesis presents a new approach to building a design for testability (DFT) system. The system takes a digital circuit description, identifies the problems in testing it, and suggests circuit modifications to correct those problems. The key contributions of the thesis research are (1) setting design for testability in the context of test generation (TG), (2) using failures during TG to focus on testability problems, and (3) relating circuit modifications directly to the failures. A natural functionality set is used to represent the maximum functionality that a component can have. The current implementation has only primitive domain knowledge and needs further work. However, armed with the knowledge of TG, it has already demonstrated its ability and produced some interesting results on a simple microprocessor.
3

Low-Cost IP Core Test Using Tri-Template-Based Codes

ITO, Hideo, ZENG, Gang 01 January 2007 (has links)
No description available.
4

Interconnect-Driven Layout-Aware Multiple Scan Tree Synthesis Simultaneously for Test Time, Compression and Routing

Huang, Jr-Yang 29 July 2008 (has links)
An interconnect-driven, layout-aware multiple scan tree synthesis methodology is proposed in this paper. Multiple scan trees, also known as a scan forest, greatly reduce test data volume and test application time in SOC testing. However, previous research on scan tree synthesis rarely considered routing length issues, and hence created scan trees with excessively long routing paths. The proposed algorithm effectively considers both test compression rate and routing length, and hence produces better results than all previously known methods in both respects. In this method, a density-driven dynamic clustering algorithm is applied to determine the scan cells in each scan tree. A compatibility-based clique partition algorithm is used to determine the tree topology, and a Voronoi diagram is then used to establish physical connections. Compared with previous work on scan tree synthesis, the proposed method reduces test data volume by 1.4X to 2.1X, while the reduction in test application time ranges from 15.9X to 24.6X. The significant improvement in test application time is mainly due to the multiple scan tree architecture. The final routing structure is also better, as a 1.3X to 3.2X reduction in routing length is achieved.
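The compatibility-based clique partition step mentioned in this abstract can be illustrated with a minimal sketch (not the authors' implementation): scan cells whose required test values never conflict across the pattern set may be grouped onto a common scan-tree branch. The names, the don't-care encoding, and the greedy grouping heuristic below are illustrative assumptions only.

```python
# Illustrative sketch: scan cells whose per-pattern values never conflict are
# "compatible" and can share a scan-tree branch. A simple greedy clique
# partition groups such cells; a real flow would also weigh placement density
# and routing cost, as the abstract describes.

def compatible(col_a, col_b):
    # Two scan cells are compatible if, in every test pattern, their required
    # values agree or at least one of them is a don't-care ('X').
    return all(a == b or a == 'X' or b == 'X' for a, b in zip(col_a, col_b))

def greedy_clique_partition(test_columns):
    # test_columns: {cell_name: string of per-pattern values, e.g. "01X1"}
    groups = []
    for cell, col in test_columns.items():
        for group in groups:
            if all(compatible(col, test_columns[other]) for other in group):
                group.append(cell)
                break
        else:
            groups.append([cell])
    return groups

if __name__ == "__main__":
    cells = {"c0": "01X", "c1": "0XX", "c2": "10X", "c3": "X0X"}
    print(greedy_clique_partition(cells))  # [['c0', 'c1'], ['c2', 'c3']]
```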
5

Testing and Verification Strategies for Enhancing Trust in Third Party IPs

Banga, Mainak 17 December 2010 (has links)
Globalization in the semiconductor industry has accelerated the trend of outsourcing component design and manufacturing across geographical boundaries. While cost reduction and short time to market are the driving factors behind this trend, the authenticity of the final product remains a major question. Third-party deliverables are based solely on mutual trust, and any manufacturer with malicious intent can tamper with the original design to make it behave other than expected in certain specific situations. If such tampering occurs, the consequences can be disastrous, especially for mission-critical systems such as space exploration, defense equipment such as missiles, and life-saving equipment such as medical devices, where a single failure can translate into a loss of lives or millions of dollars. Thus, along with outsourcing comes the question of trustworthy design: "how to ensure that the integrity of the product manufactured by a third party has not been compromised". This dissertation aims at developing verification methodologies and implementing non-destructive testing strategies to ensure the authenticity of a third-party IP. This can be accomplished at various levels in the IC product life cycle. At the design stage, special testability features can be incorporated in the circuit to enhance its overall testability, thereby making the otherwise hard-to-test portions of the design testable at the post-silicon stage. We propose two different approaches to enhance the testability of the overall circuit. The first allows improved at-speed testing of the design, while the second aims to exaggerate the effect of unwanted tampering (if present) on the IC. At the verification level, techniques like sequential equivalence checking can be employed to compare the third-party IP against a genuine specification and filter out components showing any deviation from the intended behavior. At the post-silicon stage, power discrepancies beyond a certain threshold between two otherwise identical ICs can indicate the presence of a malicious insertion in one of them. We address all of these stages in this dissertation and suggest techniques that can be employed at each one. Our experiments show promising results for detecting such alterations/insertions in the original design. / Ph. D.
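As a rough illustration of the post-silicon power-discrepancy check described in this abstract, the following hedged sketch compares per-vector power measurements from two supposedly identical ICs against a calibrated threshold. The function names, data, and 5% threshold are hypothetical and not taken from the dissertation.

```python
# Hypothetical sketch of the post-silicon idea above: if the power profiles of
# two supposedly identical ICs differ by more than a calibrated threshold,
# flag a possible malicious insertion. Values are illustrative only.
def flag_suspect(power_trace_a, power_trace_b, threshold=0.05):
    # power_trace_*: per-test-vector average power samples (same stimulus order)
    deviations = [abs(a - b) / max(abs(a), 1e-12)
                  for a, b in zip(power_trace_a, power_trace_b)]
    return max(deviations) > threshold

if __name__ == "__main__":
    golden  = [1.00, 1.02, 0.98, 1.01]
    suspect = [1.00, 1.02, 1.10, 1.01]    # extra switching activity on vector 3
    print(flag_suspect(golden, suspect))  # True: ~12% deviation exceeds 5%
```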
6

Testability versus Security: New scan-based attacks & countermeasures

Joaquim da Rolt, Jean 14 December 2012 (has links)
Dans cette thèse, nous analysons les vulnérabilités introduites par les infrastructures de test, comme les chaines de scan, utilisées dans les circuits intégrés digitaux dédiés à la cryptographie sur la sécurité d'un système. Nous développons de nouvelles attaques utilisant ces infrastructures et proposons des contre-mesures efficaces. L'insertion des chaînes de scan est la technique la plus utilisée pour assurer la testabilité des circuits numériques car elle permet d'obtenir d'excellents taux de couverture de fautes. Toutefois, pour les circuits intégrés à vocation cryptographique, les chaînes de scan peuvent être utilisées comme une porte dérobée pour accéder à des données secrètes, devenant ainsi une menace pour la sécurité de ces données. Nous commençons par décrire une série de nouvelles attaques qui exploitent les fuites d'informations sur des structures avancées de conception en vue du test telles que le compacteur de réponses, le masquage de valeur inconnues ou le scan partiel, par exemple. Au travers des attaques que nous proposons, nous montrons que ces structures ne protégent en rien les circuits à l'inverse de ce que certains travaux antérieurs ont prétendu. En ce qui concerne les contre-mesures, nous proposons trois nouvelles solutions. La première consiste à déplacer la comparaison entre réponses aux stimuli de test et réponses attenduesde l'équipement de test automatique vers le circuit lui-même. Cette solution entraine un surcoût de silicium négligeable, n'aucun impact sur la couverture de fautes. La deuxième contre-mesure viseà protéger le circuit contre tout accès non autorisé, par exemple au mode test du circuit, et d'assurer l'authentification du circuit. A cet effet, l'authentification mutuelle utilisant le protocole de Schnorr basé sur les courbes elliptiques est mis en oeuvre. Enfin, nous montronsque les contre-mesures algorithmiques agissant contre l'analyse différentielle peuvent être également utilisées pour se prémunir contre les attaques par chaine de scan. Parmi celles-ci on citera en particulier le masquage de point et le masquage de scalaire. / In this thesis, we firstly analyze the vulnerabilities induced by test infrastructures onto embedded secrecy in digital integrated circuits dedicated to cryptography. Then we propose new scan-based attacks and effective countermeasures. Scan chains insertion is the most used technique to ensure the testability of digital cores, providing high-fault coverage. However, for ICs dealing with secret information, scan chains can be used as back doors for accessing secret data, thus becominga threat to device's security. We start by describing a series of new attacks that exploit information leakage out of advanced Design-for-Testability structures such as response compaction, X-Masking and partial scan. Conversely to some previous works that proposed that these structures are immune to scan-based attacks, we show that our new attacks can reveal secret information that is embedded inside the chip boundaries. Regarding the countermeasures, we propose three new solutions. The first one moves the comparison between test responses and expected responses from the AutomaticTest Equipment to the chip. This solution has a negligible area overhead, no effect on fault coverage. The second countermeasure aims to protect the circuit against unauthorized access, for instance to the test mode, and also ensure the authentication of the circuit. For thatpurpose, mutual-authentication using Schnorr protocol on Elliptic Curves is implemented. 
As the last countermeasure, we propose that Differential Analysis Attacks algorithm-level countermeasures, suchas point-blinding and scalar-blinding can be reused to protect the circuit against scan-based attacks.
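A minimal behavioral sketch of the first countermeasure, on-chip comparison of test responses, is given below. It assumes the chip stores (or regenerates) the expected responses and exposes only a pass/fail flag; the names and data are illustrative, not from the thesis.

```python
# Minimal behavioral sketch (assumed, not from the thesis): the comparison
# between captured test responses and expected responses happens on-chip, so
# only a pass/fail bit ever leaves the device and scan-out data cannot be used
# as a side channel for extracting secrets.
def on_chip_compare(captured_responses, expected_responses):
    # Both arguments are per-pattern response words; only the aggregate
    # pass/fail result is made observable externally.
    passed = all(c == e for c, e in zip(captured_responses, expected_responses))
    return "PASS" if passed else "FAIL"

if __name__ == "__main__":
    print(on_chip_compare([0b1011, 0b0100], [0b1011, 0b0100]))  # PASS
    print(on_chip_compare([0b1011, 0b0110], [0b1011, 0b0100]))  # FAIL
```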
7

Integrated Enhancement of Testability and Diagnosability for Digital Circuits

Rahagude, Nikhil Prakash 29 November 2010 (has links)
While conventional test point insertions commonly used in design for testability can improve fault coverage, the test points selected may not necessarily be the best candidates to aid silicon diagnosis. In this thesis, test point insertions are conducted with the aim to detect more faults and also synergistically distinguish currently indistinguishable fault-pairs. We achieve this by identifying those points in the circuit which are not only hard to test but also lie on distinguishable frontiers, as Testability-Diagnosability (TD) points. To this end, we propose a novel low-cost metric to identify such TD points. Further, we propose a new DFT + DFD architecture, which adds just one pin (to identify test/functional mode) and a small amount of additional combinational logic to the circuit under test. Our experiments indicate that the proposed architecture can distinguish 4x more previously indistinguishable fault-pairs than existing DFT architectures while maintaining similar fault coverage. Further, the experiments illustrate that quality results can be achieved with an area overhead of around 5%. Additional experiments conducted on hard-to-test circuits show an increase in fault coverage of 48% while maintaining similar diagnostic resolution. Built-in Self-Test (BIST) is a technique of adding additional blocks of hardware to circuits to allow them to perform self-testing. This enables the circuits to test themselves, thereby reducing dependency on expensive external automated test equipment (ATE). At the end of a test session, BIST generates a signature, which is a compaction of the output responses of the circuit for that session. Comparison of this signature with the reference signature categorizes the circuit as error-free or buggy. While BIST provides a quick and low-cost alternative for checking a circuit's correctness, diagnosis in a BIST environment remains poor because of the limited information present in the lossily compacted final signature. The signature does not give any information about the possible defect location in the circuit. To facilitate diagnosis, researchers have proposed the use of two additional on-chip embedded memories: a response memory to store reference responses and a fail memory to store failing responses. We propose a novel architecture in which only one additional memory is required. Experimental results conducted on benchmark circuits substantiate that the same fault coverage can be maintained using just 5% of the available test vectors. This reduces the size of the memory required to store responses, which in turn reduces area overhead. Further, by adding test points to the circuit using our proposed architecture, we can improve the diagnostic resolution by 60% with respect to external testing. / Master of Science
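The signature-based pass/fail decision that BIST relies on can be sketched as follows. This is an assumed, simplified multiple-input signature register (MISR) with an arbitrary width and feedback taps, shown only to illustrate how output responses are lossily compacted into one signature; the thesis's actual compactor and memory organization may differ.

```python
# Illustrative sketch (assumptions: 16-bit MISR, arbitrary feedback taps) of
# how a BIST signature compacts a stream of output responses into one word.
def misr_signature(responses, width=16, taps=(0, 2, 3, 5)):
    # Multiple-input signature register: shift, XOR feedback, XOR new response.
    sig = 0
    mask = (1 << width) - 1
    for r in responses:
        feedback = 0
        for t in taps:
            feedback ^= (sig >> t) & 1
        sig = ((sig << 1) | feedback) & mask
        sig ^= r & mask
    return sig

if __name__ == "__main__":
    good = [0xA5, 0x3C, 0x7E, 0x81]
    bad  = [0xA5, 0x3C, 0x7F, 0x81]        # single-bit error in one response
    print(hex(misr_signature(good)))       # reference signature
    print(misr_signature(good) == misr_signature(bad))  # False: fail detected
```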
8

An 8 bit Serial Communication module Chip Design Using Synopsys tools and ASIC Design Flow Methodology

Munugala, Anvesh 23 May 2018 (has links)
No description available.
9

On Detection, Analysis and Characterization of Transient and Parametric Failures in Nano-scale CMOS VLSI

Sanyal, Alodeep 01 May 2010 (has links)
As we move deep into the nanometer regime of CMOS VLSI (the 45nm node and below), the device noise margin is sharply eroded because of the continuous lowering of device threshold voltage together with the ever-increasing rate of signal transitions driven by the consistent demand for higher performance. Sharp erosion of the device noise margin vastly increases the likelihood of intermittent failures (also known as parametric failures) during device operation, as opposed to permanent failures caused by physical defects introduced during the manufacturing process. The major sources of intermittent failures are capacitive crosstalk between neighboring interconnects, abnormal drops in power supply voltage (also known as droop), localized thermal gradients, and soft errors caused by the impact of high-energy particles on the semiconductor surface. In nanometer technology, these intermittent failures largely outnumber the permanent failures caused by physical defects. Therefore, it is of paramount importance to come up with efficient test generation and test application methods to accurately detect and characterize these classes of failures. Soft error rate (SER) is an important design metric used in the semiconductor industry, represented by the number of such errors encountered per billion hours of device operation, known as the Failure-In-Time (FIT) rate. Soft errors are rare events. Traditional techniques for SER characterization involve testing multiple devices in parallel, or testing the device while keeping it in a high-energy neutron bombardment chamber to artificially accelerate the occurrence of single events. Motivated by the fact that measurement of SER incurs high time and cost overhead, in this thesis we propose a two-step approach: (i) a new filtering technique based on the amplitude of the noise pulse, which significantly reduces the set of soft-error-susceptible nodes to be considered for a given design; followed by (ii) an Integer Linear Program (ILP)-based pattern generation technique that accelerates the SER characterization process by 1-2 orders of magnitude compared to the current state-of-the-art. During test application, it is important to distinguish between an intermittent failure and a permanent failure. Motivated by the fact that most intermittent failures are temporally sparse in nature, we present a novel design-for-testability (DFT) architecture which facilitates application of the same test vector twice in a row. The underlying assumption here is that a soft fail will not manifest its effect in two consecutive test cycles, whereas the error caused by a physical defect will produce an identically corrupt output signature in both test cycles. Therefore, comparing the output signatures for two consecutive applications of the same test vector accurately distinguishes between a soft fail and a hard fail. We show application of this DFT technique in measuring soft error rate as well as other circuit-marginality-related parametric failures, such as thermal hot-spot induced delay failures. A major contribution of this thesis lies in investigating the effect of multiple sources of noise acting together in exacerbating the noise effect even further. The existing literature on signal integrity verification and test falls short of taking the combined noise effects into account. We particularly focus on capacitive crosstalk on long signal nets. A typical long net is capacitively coupled with multiple aggressors and also tends to have multiple fanout gates.
Gate leakage current that originates in the fanout receivers flows backward and terminates in the driver, causing a shift in the driver output voltage. This effect becomes more prominent as the gate oxide is scaled more aggressively. In this thesis, we first present a dynamic simulation-based study to establish the significance of the problem, followed by an automatic test pattern generation (ATPG) solution which uses a 0-1 Integer Linear Program (ILP) to maximize the cumulative voltage noise at a given victim net due to crosstalk and gate leakage loading, in conjunction with propagating the fault effect to an observation point. Pattern pairs generated by this technique are useful both for manufacturing test application and for signal integrity verification of nanometer designs. This research opens up a new direction for studying nanometer noise effects and motivates us to extend the study to other noise sources in tandem, including voltage drop and temperature effects.
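The double-application idea for separating soft fails from hard fails, described in the abstract above, can be sketched as follows; `apply_vector` is a hypothetical stand-in for the tester/DUT interface, and the example values are invented.

```python
# Hedged sketch of the double-application idea: apply the same vector twice in
# a row. A hard (permanent) defect corrupts both signatures identically, while
# a transient (soft) fail is unlikely to repeat in consecutive test cycles.
def classify_failure(apply_vector, vector, expected):
    first = apply_vector(vector)
    second = apply_vector(vector)
    if first == expected and second == expected:
        return "pass"
    if first == second:
        return "hard fail"       # identical corruption both times -> defect
    return "soft fail"           # mismatch only once -> transient upset

if __name__ == "__main__":
    responses = iter([0b1101, 0b1111])     # transient flip on first application
    print(classify_failure(lambda v: next(responses), 0b1010, expected=0b1111))
    # -> "soft fail"
```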
10

A Design Methodology for Physical Design for Testability

Almajdoub, Salahuddin A. 01 July 1996 (has links)
Physical design for testability (PDFT) is a strategy to design circuits in a way to avoid or reduce realistic physical faults. The goal of this work is to define and establish a specific methodology for PDFT. The proposed design methodology includes techniques to reduce potential bridging faults in complementary metal-oxide-semiconductor (CMOS) circuits. To compare faults, the design process utilizes a new parameter called the fault index. The fault index for a particular fault is the probability of occurrence of the fault divided by the testability of the fault. Faults with the highest fault indices are considered the worst faults and are targeted by the PDFT design process to eliminate them or reduce their probability of occurrence. An implementation of the PDFT design process is constructed using several new tools in addition to other "off-the-shelf" tools. The first tool developed in this work is a testability measure tool for bridging faults. Two other tools are developed to eliminate or reduce the probability of occurrence of bridging faults with high fault indices. The row enhancer targets faults inside the logic elements of the circuit, while the channel enhancer targets faults inside the routing part of the circuit. To demonstrate the capabilities and test the effectiveness of the PDFT design process, this work conducts an experiment which includes designing three CMOS circuits from the ISCAS 1985 benchmark circuits. Several layouts are generated for every circuit. Every layout, except the first one, utilizes information from the previous layout to minimize the probability of occurrence for faults with high fault indices. Experimental results show that the PDFT design process successfully achieves two goals of PDFT, providing layouts with fewer faults and minimizing the probability of occurrence of hard-to-test faults. Improvement in the total fault index was about 40 percent in some cases, while improvement in total critical area was about 30 percent in some cases. However, virtually all the improvements came from using the row enhancer; the channel enhancer provided only marginal improvements. / Ph. D.
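The fault-index metric defined in this abstract lends itself to a short worked sketch: the index of each fault is its probability of occurrence divided by its testability, and faults are ranked so that the highest-index (worst) faults are addressed first. The numbers below are invented for illustration.

```python
# Worked sketch of the fault-index metric as defined above; data is invented.
def fault_index(p_occurrence, testability):
    # Higher index = more likely to occur and/or harder to test.
    return p_occurrence / testability if testability > 0 else float("inf")

def rank_faults(faults):
    # faults: {fault_name: (probability_of_occurrence, testability)}
    return sorted(faults, key=lambda f: fault_index(*faults[f]), reverse=True)

if __name__ == "__main__":
    faults = {
        "bridge_a_b": (0.020, 0.10),   # likely and hard to test -> index 0.20
        "bridge_c_d": (0.050, 0.90),   # likely but easy to test -> index ~0.056
        "bridge_e_f": (0.001, 0.05),   # rare and hard to test   -> index 0.02
    }
    print(rank_faults(faults))  # ['bridge_a_b', 'bridge_c_d', 'bridge_e_f']
```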
