1

Power supply partitioning for placement of built-in current sensors for IDDQ testing

Prasad, Abhijit 30 September 2004 (has links)
IDDQ testing has been a very useful test screen for CMOS circuits. However, with each technology node, the background leakage of chips increases rapidly, making it more difficult to distinguish between faulty and fault-free chips using IDDQ testing. Power supply partitioning has been proposed to increase test resolution by partitioning the power supply network such that each partition has a relatively small defect-free IDDQ level. However, at present no practical partitioning strategy is available. The contribution of this thesis is a practical power supply partitioning strategy. We formulate several versions of the power supply partitioning problem that are likely to be of interest depending on the constraints of the chip design, and present solutions to all of them. The basic idea behind all the solutions is to abstract the power topology of the chip as a flow network. We then use flow techniques to find the min-cut of the transformed network, yielding solutions to our various problem formulations. Experimental results for benchmark circuits verify the feasibility of our solution methodology. The problem formulations give a test engineer complete flexibility to decide which factors cannot be compromised for a particular design (e.g., BICS area, test quality) and to choose the appropriate formulation accordingly. This work serves as the first step in the placement of Built-In Current Sensors (BICS) for IDDQ testing.
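The flow-network abstraction lends itself to a compact illustration. Below is a minimal sketch, assuming the networkx library; the node names, capacities, and source/sink choice are invented for illustration and are not the thesis's actual power-grid data.

```python
# Sketch: abstract a chip's power grid as a flow network and find a min-cut
# partition. Node names and capacities (modeling defect-free IDDQ levels)
# are illustrative assumptions, not data from the thesis.
import networkx as nx

G = nx.DiGraph()
# Edges: (from, to, capacity), where capacity models the IDDQ current that
# could flow across that segment of the power topology.
edges = [
    ("vdd_pad", "grid_a", 8.0),
    ("vdd_pad", "grid_b", 9.0),
    ("grid_a", "block_1", 5.0),
    ("grid_a", "block_2", 4.0),
    ("grid_b", "block_2", 3.0),
    ("grid_b", "block_3", 6.0),
    ("block_1", "gnd_pad", 5.0),
    ("block_2", "gnd_pad", 4.0),
    ("block_3", "gnd_pad", 6.0),
]
for u, v, cap in edges:
    G.add_edge(u, v, capacity=cap)

# The min-cut splits the network into two partitions, each with a bounded
# defect-free IDDQ level; a built-in current sensor would monitor each side.
cut_value, (part_s, part_t) = nx.minimum_cut(G, "vdd_pad", "gnd_pad")
print(f"cut value (IDDQ across partition boundary): {cut_value}")
print(f"partition 1: {sorted(part_s)}")
print(f"partition 2: {sorted(part_t)}")
```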
2

Automated Software Solutions to Logic Restructuring and Silicon Debug

Yang, Yu-Shen 17 February 2011 (has links)
With the growing size of modern integrated circuit designs, automated design tools have taken on an important role in the development flow. Through the use of these tools, designers can develop circuits in a robust and systematic manner. However, due to the high complexity of the designs and strict resource constraints, it is inevitable that mistakes will be made during the design process. Today, a significant amount of development time is dedicated to pre- and post-silicon verification and debug, which increases production cost and jeopardizes the future growth of the industry. Hence, there is an urgent need for scalable automated verification and debugging techniques, as well as new methodologies that improve circuits in order to reduce errors. This dissertation presents a set of methodologies to automate three important processes in the VLSI design flow related to improving design quality. The first contribution, automated logic restructuring, is a systematic methodology for devising transformations in logic designs. It can be used in a wide range of post-synthesis applications, such as logic optimization, debugging, and engineering change orders, which modify synthesized designs to accommodate their goals; better results can be achieved when those applications have multiple candidate transformations to choose from. Experiments demonstrate that the proposed technique can restructure designs at locations where other methods fail, and identifies multiple transformations for each location. The second contribution is a logic-level, technology-independent soft error rate mitigation technique. Soft errors are transient logic pulses induced by radiation from the environment or released from the silicon chip package materials. The technique identifies conditions under which soft errors can cause discrepancies at the primary outputs of the design, and eliminates those conditions through wire replacement. Experimental results confirm the effectiveness of the technique and show that the soft error rate can be reduced with little or no overhead in other design parameters. The final contribution of the dissertation is a software environment for post-silicon debug. The proposed algorithms analyze data collected while operating silicon chips at speed on the test system and provide useful information to expedite the debugging process. Experiments show that the proposed techniques eliminate one third of the design modules suspected as the root cause of a failure; such a reduction saves engineers manual inspection time and shortens the turnaround time of silicon debug.
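To make the second contribution concrete: the central question is whether a transient flip on an internal net ever produces a discrepancy at a primary output. The sketch below estimates this by random simulation on a toy three-gate netlist; the netlist, net names, and trial count are illustrative assumptions, not the dissertation's actual method.

```python
# Sketch: estimate how often a transient flip on an internal net causes a
# discrepancy at the primary output -- the kind of condition a soft-error
# mitigation technique looks for. The netlist is a toy example.
import random

def evaluate(a, b, c, flip_net=None):
    """Toy combinational netlist; optionally invert one internal net."""
    n1 = a & b                      # AND gate
    if flip_net == "n1":
        n1 ^= 1                     # injected transient flip
    n2 = b | c                      # OR gate
    if flip_net == "n2":
        n2 ^= 1
    return n1 | n2                  # primary output (OR gate)

random.seed(0)
trials = 10_000
for net in ("n1", "n2"):
    mismatches = 0
    for _ in range(trials):
        a, b, c = (random.getrandbits(1) for _ in range(3))
        if evaluate(a, b, c) != evaluate(a, b, c, flip_net=net):
            mismatches += 1
    # A flip on n1 reaches the output only when n2 == 0 (~25% of inputs);
    # a flip on n2 reaches it whenever n1 == 0 (~75% of inputs).
    print(f"flip on {net} reaches the output for {mismatches / trials:.1%} of inputs")
```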
3

VLSI testing for high reliability: Mixing IDDQ and logic testing

Hwang, Suntae January 1993 (has links)
No description available.
4

Scalable algorithms for software based self test using formal methods

Prabhu, Mahesh 09 July 2014 (has links)
Transistor scaling has kept pace with Moore's law, doubling the number of transistors on a chip with each generation. More logic on a chip means more opportunities for manufacturing defects to slip in. This, in turn, has made post-manufacturing processor testing a significant challenge. At-speed functional testing, being completely non-intrusive, has been seen as the ideal way of testing chips. For processor testing, however, generating instruction-level tests that cover all faults is a challenge because of scalability. Data-path faults are relatively easy to control and observe compared to control-path faults. In this research we present a novel method to generate instruction-level tests for hard-to-detect control-path faults in a processor. We first map the gate-level stuck-at fault to the Register Transfer Level (RTL) and build an equivalent faulty RTL model. The fault activation and propagation constraints are captured, using Control and Data Flow Graphs of the RTL, as a Linear Temporal Logic (LTL) property. This LTL property is then negated and given to a Bounded Model Checker based on a bit-vector Satisfiability Modulo Theories (SMT) solver. From the counterexample to the property we extract a sequence of instructions that activates the gate-level fault and propagates its effect to one of the observable points in the design. Apart from user-supplied instruction constraints, the approach is completely automatic and requires no manual intervention. Not all design behaviors are required to generate a test for a fault; we use this insight to scale the methodology further. Under-approximations are design abstractions that capture only a subset of the original design behaviors. The use of RTL for test generation affords us two types of under-approximation, bit-width reduction and operator approximation, which perform reductions based on the semantics of the RTL design. We also explore structural reductions of the RTL, called path-based search, where we search through error propagation paths incrementally. This approach grows the test generation problem step by step, so the SMT solver searches the state space piecewise rather than all at once. Experimental results show that our methods are robust and scalable for generating functional tests for hard-to-detect faults.
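The good-model-versus-faulty-model idea can be sketched in a few lines. The toy example below, assuming the z3 SMT solver's Python API, unrolls a 2-bit counter for a fixed bound and asks the solver for an input sequence exposing a stuck-at fault; the "RTL", the fault site, and the bound are invented for illustration.

```python
# Sketch: bounded model checking for a stuck-at fault, in the spirit of the
# thesis's flow (good vs. faulty model, find an exposing input sequence).
# The 2-bit counter "RTL" and the stuck-at-0 enable are toy assumptions.
from z3 import BitVec, BitVecVal, Solver, If, sat

K = 4  # unrolling bound
s = Solver()

# Free input at each time step (enable signal).
en = [BitVec(f"en_{t}", 1) for t in range(K)]

# Good and faulty state, unrolled over K steps; both start at 0.
good = [BitVecVal(0, 2)]
faulty = [BitVecVal(0, 2)]
for t in range(K):
    good.append(If(en[t] == 1, good[t] + 1, good[t]))
    # Fault: the enable seen by the faulty counter is stuck-at-0.
    stuck_en = BitVecVal(0, 1)
    faulty.append(If(stuck_en == 1, faulty[t] + 1, faulty[t]))

# Property: the observable output (counter value) differs at the last step.
s.add(good[K] != faulty[K])

if s.check() == sat:
    m = s.model()
    seq = [m.eval(e, model_completion=True).as_long() for e in en]
    print("exposing input sequence (en per cycle):", seq)
else:
    print(f"fault not detectable within {K} cycles")
```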
5

Compaction mechanism to reduce test pattern counts and segmented delay fault testing for path delay faults

Jha, Sharada 01 May 2013 (has links)
With rapid advances in technology and shrinking transistor feature sizes, the complexity of VLSI designs is constantly increasing. With increasing density and complexity, the probability of defects also increases, so testing becomes essential to guarantee fault-free operation of devices. Testing of VLSI designs involves test pattern generation, test pattern application, and identification of defects in the design. In scan-based designs, the test set size directly impacts both the test application time, which is determined by the number of memory elements in the design, and the test storage requirements. The literature addresses large test set sizes with methods classified as static or dynamic compaction, depending on whether the compaction algorithm runs as a post-processing step after test generation or is integrated within it. In general, there is a trade-off between achievable compaction and run time: computationally intensive methods may compact better but run longer owing to algorithmic complexity. In the first part of the thesis we address the problem of large test set size in partially scanned designs by proposing an incremental dynamic compaction method. Typically, the fault coverage curve of a design ramps up quickly at first, then slows, and ultimately flattens toward the tail. In the initial phase of test generation we use a greedy compaction method, because the easy-to-detect faults targeted early offer good scope for compaction. In the later portion of the curve, where hard-to-detect faults hinder compaction, we propose a dynamic compaction approach, together with a novel mechanism to identify redundant faults during dynamic compaction so that they are not targeted later. The effectiveness of the method is demonstrated on industrial designs, achieving a 30% reduction in test set size. As device complexity increases, delay defects are also increasing, and speed path debug is necessary to meet performance requirements. Speed paths are the frequency-limiting paths in a design identified during debug. They can be tested using functional patterns, transition n-detect patterns, or path delay patterns. However, using functional patterns for speed path debug is costly: generating them is expensive, the pattern counts are large, and applying them requires functional testers. In the second part of the dissertation we propose a simple path sensitization approach to generate pseudo-robust tests, which are near-robust tests, usable for designs with multiple clock domains. The fault coverage of path delay fault ATPG can be further improved by dividing paths that are not testable under pseudo-robust conditions into shorter sub-paths. The effectiveness of the method is demonstrated on industrial designs.
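As a point of reference for what compaction does, here is a minimal sketch of the greedy baseline the thesis improves on: merging test cubes whose specified bits do not conflict. The '0'/'1'/'X' cube encoding is standard; the example cubes are invented.

```python
# Sketch: greedy static compaction by merging compatible test cubes.
# Cubes use '0'/'1'/'X' (don't-care); example cubes are illustrative.
def compatible(c1, c2):
    """Two cubes are compatible if no bit position has a 0/1 conflict."""
    return all(a == b or a == "X" or b == "X" for a, b in zip(c1, c2))

def merge(c1, c2):
    """Merge compatible cubes, filling don't-cares from either cube."""
    return "".join(a if a != "X" else b for a, b in zip(c1, c2))

def greedy_compact(cubes):
    compacted = []
    for cube in cubes:
        for i, existing in enumerate(compacted):
            if compatible(cube, existing):
                compacted[i] = merge(cube, existing)
                break
        else:
            compacted.append(cube)  # no compatible cube found
    return compacted

tests = ["1X0X", "10XX", "X11X", "XX01", "0X1X"]
result = greedy_compact(tests)
print(f"{len(tests)} cubes compacted to {len(result)}: {result}")
```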
6

Testability considerations for implementing an embedded memory subsystem

Seok, Geewhun 01 February 2012 (has links)
There are a number of testability considerations in VLSI design, but test coverage, test time, accuracy of test patterns, and correctness of design information for DFD (design for debug) are the most important ones in designs with embedded memories. The goal of DFT (design for test) is to achieve zero defects. For the memory subsystem in SoCs (systems on chip), many flavors of memory BIST (built-in self test) achieve high test coverage within the memory itself, but often no proper attention is given to the memory interface logic (shadow logic). Functional testing and BIST are the most prevalent tests for this logic, but functional testing is impractical for complicated SoC designs. As a result, industry has widely adopted at-speed scan testing to detect delay-induced defects. Compared with functional testing, scan-based testing for delay faults reduces overall pattern generation complexity and cost by enhancing both the controllability and observability of flip-flops. However, without proper modeling of the memories, Xs (unknown values) are generated at memory outputs, and when the design has on-chip compression logic, the number of ATPG patterns increases significantly due to these Xs. In this dissertation, a register-based testing method and X-prevention logic are presented to tackle these problems. An important design stage for scan-based testing with memory subsystems is creating a gate-level model and verifying against it; the flow needs to provide a robust ATPG netlist model. Most industry-standard CAD tools used to analyze fault coverage and generate test vectors require gate-level models, but custom embedded memories are typically designed using a transistor-level flow, so an abstraction step is needed to generate gate-level models equivalent to the actual (transistor-level) design. A contribution of this research is a framework to verify that the gate-level representation of a custom design is equivalent to the transistor-level design. The number of patterns for at-speed testing is much larger than for basic stuck-at fault testing, so reducing test data volume is important; a new scan reordering method is introduced to reduce test data with an optimal routing solution. Finally, drawing on the in-depth understanding of embedded memories and flows developed during the study of custom memory DFT, a custom embedded memory bit-mapping method using a symbolic simulator is presented in the last chapter to achieve high memory yield.
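A tiny three-valued simulation illustrates why X-prevention logic matters: an un-modeled memory output poisons downstream logic with X, while a bypass mux in test mode restores a known value. The gate and mux below are illustrative stand-ins, not the dissertation's actual circuitry.

```python
# Sketch: three-valued (0/1/X) logic showing an unknown memory output
# corrupting a scan observation, and X-prevention logic blocking it.
def and3(a, b):
    """Three-valued AND over {'0', '1', 'X'}."""
    if a == "0" or b == "0":
        return "0"
    if a == "1" and b == "1":
        return "1"
    return "X"

def bypass_mux(mem_out, safe_value, test_mode):
    """X-prevention: in test mode, force a known value instead of mem_out."""
    return safe_value if test_mode else mem_out

mem_out = "X"     # un-modeled memory read during scan test
scan_bit = "1"
print("without X-prevention:", and3(mem_out, scan_bit))                         # 'X'
print("with X-prevention:   ", and3(bypass_mux(mem_out, "0", True), scan_bit))  # '0'
```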
7

Statistical methods for rapid system evaluation under transient and permanent faults

Mirkhani, Shahrzad 10 February 2015 (has links)
Traditional solutions for test and reliability do not scale well for modern designs, whose size and complexity increase with every technology generation. Therefore, to meet time-to-market requirements as well as acceptable product quality, it is imperative that new methodologies be developed for quickly evaluating a system in the presence of faults. In this research, statistical methods have been employed and implemented to 1) estimate the stuck-at fault coverage of a test sequence and evaluate the given test vector set without the need for complete fault simulation, and 2) analyze design vulnerabilities in the presence of radiation-induced (soft) errors. Experimental results show that these statistical techniques can evaluate a system under test orders of magnitude faster than state-of-the-art methods, with a small margin of error. In this dissertation, I introduce novel methodologies that utilize information from fault-free simulation and partial fault simulation to predict the fault coverage of a long sequence of test vectors for a design under test. These methodologies are practical for functional testing of complex designs under long test sequences, a challenging problem for which industry is currently seeking efficient solutions. The last part of the dissertation discusses a statistical methodology for detailed vulnerability analysis of systems under soft errors. It runs orders of magnitude faster than traditional fault injection, and the vulnerability factors it calculates are closer to those of complete fault injection (the ideal form of soft error vulnerability analysis) than those of statistical fault injection. Performing such a fast soft error vulnerability analysis is crucial for companies that design and build safety-critical systems.
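The core statistical idea — simulate a random sample of faults and bound the estimate — can be sketched briefly. In the code below, the fault universe and the per-fault detection oracle are synthetic stand-ins; the dissertation's estimators are more sophisticated than this plain binomial confidence interval.

```python
# Sketch: estimating fault coverage from a random fault sample instead of
# complete fault simulation. The fault universe and the detection oracle
# are synthetic stand-ins for a real fault simulator.
import math
import random

random.seed(1)
NUM_FAULTS = 50_000
# Stand-in oracle: pretend ~88% of faults are detected by the test sequence.
detected = {f: random.random() < 0.88 for f in range(NUM_FAULTS)}

sample = random.sample(range(NUM_FAULTS), k=1_000)  # simulate only 2% of faults
hits = sum(detected[f] for f in sample)
p_hat = hits / len(sample)

# 95% confidence interval from the normal approximation to the binomial.
margin = 1.96 * math.sqrt(p_hat * (1 - p_hat) / len(sample))
print(f"estimated coverage: {p_hat:.1%} +/- {margin:.1%}")
print(f"true coverage:      {sum(detected.values()) / NUM_FAULTS:.1%}")
```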
8

Low Power Test Methodology For SoCs : Solutions For Peak Power Minimization

Tudu, Jaynarayan Thakurdas 07 1900 (has links) (PDF)
Power dissipated during scan testing is becoming increasingly important for today's very complex sequential circuits. It is shown that the power dissipated during test mode operation is in general higher than during functional mode operation: average test power can reach 3x, and peak test power 30x, that of normal mode operation. The power dissipated during scan operation is primarily due to the switching activity that arises in scan cells during the shift and capture operations. This switching propagates to the combinational block of the circuit, creating many transitions and hence higher dynamic power dissipation. Excessive average power during scan operation causes circuit damage through elevated temperature, and excessive peak power causes yield loss due to IR drop and crosstalk; high peak power also causes thermal issues if it lasts for a sufficiently large number of cycles. Hence, reducing peak power during scan testing is very important to avoid all of these issues. Further, in multi-module SoC testing, reduced peak power helps shorten test application time by allowing many test sessions to be scheduled in parallel. In this dissertation we address all of the above issues, proposing three different techniques to deal with excessive peak power dissipation during test. The first is an efficient graph-theoretic methodology for test vector reordering that achieves the minimum peak power supported by the given test vector set. Three graph-theoretic problems are formulated and algorithms to solve them are proposed. The methodology also minimizes average power for the given minimum peak power, and a lower bound on the minimum achievable peak power for a given test set is defined. Results on several benchmarks show that the methodology reduces peak power significantly. To address peak power during the scan test cycle (the cycle between the launch and capture pulses), we propose a scan chain reordering technique, with a new formulation of scan chain reordering as a TSP (Traveling Salesperson) problem and a corresponding solution. Experimental results show that the methodology reduces peak power considerably compared to earlier proposals. The capture power (power dissipated during the capture cycle) problem in testing multi-chip modules (MCMs) is also addressed: we propose a methodology to schedule the test set to reduce capture power. The scheduling algorithm consists of test vector reordering and idle cycle insertion to prevent the capture cycles of scheduled cores from coinciding. Experimental results show a significant reduction in capture power without an increase in test application time.
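To illustrate the cost model behind test vector reordering: the number of bit flips between adjacent vectors is a proxy for scan shift power, and reordering lowers the worst adjacent-pair flip count. The thesis uses a graph-theoretic formulation with optimality guarantees; the sketch below is only a nearest-neighbor heuristic over the same cost model, with invented vectors.

```python
# Sketch: reorder test vectors so that consecutive vectors differ in few
# bits (Hamming distance as a proxy for shift power). Nearest-neighbor
# heuristic only -- the thesis's graph-theoretic methods are stronger.
def hamming(u, v):
    return sum(a != b for a, b in zip(u, v))

def reorder_nearest_neighbor(vectors):
    remaining = list(vectors)
    order = [remaining.pop(0)]
    while remaining:
        nxt = min(remaining, key=lambda v: hamming(order[-1], v))
        remaining.remove(nxt)
        order.append(nxt)
    return order

def peak_transitions(seq):
    """Worst-case bit flips between any pair of adjacent vectors."""
    return max(hamming(a, b) for a, b in zip(seq, seq[1:]))

tests = ["0000", "1111", "0011", "1100", "0111", "1000"]
ordered = reorder_nearest_neighbor(tests)
print(f"original order: peak transitions = {peak_transitions(tests)}")    # 4
print(f"reordered:      peak transitions = {peak_transitions(ordered)}")  # 2
```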
9

VLSI testing techniques focused on low power dissipation

Μπέλλος, Μάτσιεϊ 25 June 2007 (has links)
This dissertation focuses on VLSI testing while also taking power dissipation into account. The proposed techniques concern: a) compression of a test set in an embedded test environment using external testers, b) test set embedding in a built-in self-test environment, and c) reduction of power and energy consumption in an external testing environment. Test data compression is based on the observation that a test vector can be produced from the previous one by replacing some of its parts. Compression is even higher when the test vectors are reordered and the scan chain flip-flops are also reordered; if the scan cell reordering is based on the transition frequency of neighboring flip-flops, a reduction in power dissipation is achieved as well. For built-in self test, the problem of test set embedding was studied, and efficient test pattern generation circuits were proposed, based on linear feedback shift registers combined with XOR gate trees, and on shift registers combined with OR gate trees. When the circuits under test have a regular structure, such as residue number system adders, circuits that exploit the regular form of the test set are proposed. Finally, for external testing, we propose test vector ordering methods with vector repetition that reduce power consumption. These methods are based on selecting appropriate minimum spanning trees and on modifying the repeated vectors, achieving considerable savings in energy and in average and peak power dissipation.
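The compression idea — deriving each vector from its predecessor by replacing only the parts that changed — can be sketched directly. The block size, vector length, and vectors below are illustrative choices, not the dissertation's parameters.

```python
# Sketch: segment-replacement test data compression -- encode each vector
# as the list of fixed-size blocks that differ from the previous vector.
BLOCK = 4

def encode(prev, curr):
    """Emit (bit_offset, new_block) only for blocks that changed."""
    return [
        (i, curr[i:i + BLOCK])
        for i in range(0, len(curr), BLOCK)
        if curr[i:i + BLOCK] != prev[i:i + BLOCK]
    ]

def decode(prev, patches):
    """Rebuild the current vector from the previous one plus the patches."""
    blocks = [prev[i:i + BLOCK] for i in range(0, len(prev), BLOCK)]
    for offset, block in patches:
        blocks[offset // BLOCK] = block
    return "".join(blocks)

v1 = "1010" "0110" "0001" "1111"
v2 = "1010" "0111" "0001" "1111"   # differs only in the second block
patches = encode(v1, v2)
print("patches:", patches)          # [(4, '0111')]
assert decode(v1, patches) == v2
```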
