About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
31

Identification and Analysis of Illegal States in the Apoptotic Discrete Transition System Model using ATPG and SAT-based Techniques

Shrivastava, Anupam 14 November 2008 (has links)
Programmed Cell Death, or Apoptosis, plays a critical role in human embryonic development and in adult tissue homeostasis. Recent research efforts in Bioinformatics and Computational Biology focus on gaining deep insight into the Apoptosis process, allowing researchers to study the relation between the dysregulation of apoptosis and the development of cancer. Research in this highly interdisciplinary field has become much more quantitative, using tools from the computational sciences to understand the behavior of biological systems. Previously, an abstracted model was developed to study the Apoptosis process as a finite-state discrete transition model. This model facilitates the reuse of the digital design verification and testing techniques developed in the Electronic Design Automation domain, techniques that have become robust over the past few decades. Simulation is usually the cornerstone of the design verification industry, and the bulk of states are covered by simulation; formal verification techniques are then used to analyze the remaining corner-case states. Techniques such as Genetic Algorithm guided Logic Simulation (GALS) and SAT-based induction have already been applied to the Apoptosis discrete transition model. However, the Apoptosis model presents some unique problems. The simulation techniques have been shown to be unable to cover most of the states of the model. When SAT-based induction is applied to the Apoptosis model, in particular to find illegal states, very few illegal states are identified; the approach suffers from the fact that the Apoptosis model is rather complex and its testing and verification formulation is hard to tackle at bounds greater than about 20. Consequently, the state space of the Apoptosis model largely lies in the unknown region, meaning that we are unable either to reach those states or to prove that they are illegal. Unless we know whether these states are reachable or illegal, it is not feasible to infer information about the model, such as which protein concentrations can be reached under which input stimuli. Questions such as whether certain protein concentrations can be reached in this model can only be answered if we have a clear picture of the reachability of the state space. In this thesis, we propose techniques based on ATPG- and SAT-based image computation of the Apoptosis finite transition model. Our method leverages the results obtained in previous research work: it uses the reachable states obtained from the simulation traces of the previous work as initial states for our technique. This enables us to identify more illegal states in fewer iterations; in other words, we are able to reach the fixed point in image computation faster. Our experimental analysis illustrates that the proposed techniques can prove most of the formerly unknown states to be illegal. We are able to extend our analysis to obtain a clearer picture of the interaction of any two proteins in the system considered together. / Master of Science
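A minimal explicit-state sketch of the fixed-point image computation idea described above: starting from states already reached by simulation, successor states are added until the image yields nothing new, and everything outside the fixed point is provably unreachable (illegal). The 3-bit state encoding, transition function, and seed states are hypothetical placeholders; the thesis itself works symbolically with ATPG- and SAT-based image computation on the full apoptosis model.

```python
# Toy fixed-point image computation on a finite transition system: states from
# earlier simulation traces seed the reachable set, successors are added until
# no new states appear, and states never reached are reported as unreachable
# (illegal). The 3-bit state space and transition function are hypothetical.

def successors(state):
    """Hypothetical next-state function for a 3-bit state vector."""
    a, b, c = state
    return {(b, a & c, a ^ b)}          # placeholder transition relation

def reachable_fixed_point(seed_states):
    reached = set(seed_states)          # states from simulation traces
    frontier = set(seed_states)
    while frontier:                     # iterate until the image adds nothing new
        image = set()
        for s in frontier:
            image |= successors(s)
        frontier = image - reached
        reached |= frontier
    return reached

all_states = {(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)}
seeds = {(0, 0, 0), (1, 1, 0)}          # hypothetical simulation-derived states
reached = reachable_fixed_point(seeds)
illegal = all_states - reached          # provably unreachable under this model
print(f"reachable: {len(reached)}, illegal: {len(illegal)}")
```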
32

Testinių rinkinių atrinkimo programinės įrangos sudarymas ir tyrimas / Construction and research of software for test patterns selection

Drovnenkov, Aleksej 16 August 2007 (has links)
The automated test pattern generation (ATPG) problem has been studied for a relatively long time. Its goal is to find optimal test vector sequences that cover the faults introduced at the circuit manufacturing stage in the minimum amount of time. One approach to testing digital circuits and generating test sets is the functional test method. Its advantage is that the test generation program does not need to know the internal structure of the circuit: it tests only an ideal model of the circuit, treated as a black box, from which it can obtain the ideal circuit's response to any given input vector. The functional test method is the one adopted in this work. This paper covers the theoretical basis of the test pattern selection software, a brief historical overview of automated test pattern generation, and a comparison of test generation algorithms based on white-box and black-box models. The static structure of the software system, its components, and the system's typical use cases are also described. The research part of the paper describes the research methodology and proposes methods for improving the quality of the program, and the experimental part presents the results of the experiments.
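A small sketch of the black-box (functional) selection idea described above: the selector sees only input vectors and the responses of an ideal model, and keeps a vector only if some not-yet-detected faulty variant responds differently. The golden model, the fault universe, and the candidate vectors are illustrative assumptions, not the software described in the thesis.

```python
# Black-box test pattern selection: only input vectors and output responses are
# visible, never the circuit structure. A vector is kept if the response of
# some not-yet-detected faulty variant differs from the golden response.

def golden(v):                     # ideal model response (black box), hypothetical
    a, b, c = v
    return (a & b) | c

faulty_models = {                  # hypothetical fault universe
    "f1": lambda v: (v[0] & v[1]),          # output missing the OR with c
    "f2": lambda v: v[2],                   # output stuck on c
    "f3": lambda v: (v[0] | v[1]) | v[2],   # AND replaced by OR
}

def select_patterns(candidates):
    selected, undetected = [], set(faulty_models)
    for v in candidates:
        newly = {f for f in undetected if faulty_models[f](v) != golden(v)}
        if newly:                  # keep the vector only if it adds coverage
            selected.append(v)
            undetected -= newly
    return selected, undetected

cands = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]
tests, escaped = select_patterns(cands)
print("selected vectors:", tests, "undetected:", escaped)
```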
33

FORMAL: A SEQUENTIAL ATPG-BASED BOUNDED MODEL CHECKING SYSTEM FOR VLSI CIRCUITS

Qiang, Qiang 10 April 2006 (has links)
No description available.
34

Amélioration des solutions de test fonctionnel et structurel des circuits intégrés / Improving Functional and Structural Test Solutions for Integrated Circuits

Touati, Aymen 21 October 2016 (has links)
In light of the aggressive scaling and increasing complexity of today's integrated circuits, meeting the demands for designing, testing, and fabricating high-quality devices is extremely challenging. Higher performance needs to be achieved while respecting the constraints of low power consumption, required reliability levels, acceptable defect rates, and low cost. With these advances in the semiconductor industry, the manufacturing process is becoming more and more difficult to control, making today's chips more prone to physical defects. Test was and remains the only way to cope with manufacturing defects, and it has become a dominant factor in the overall manufacturing cost of integrated circuits. Even though existing test solutions have been able to satisfy the cost-quality trade-off in recent years, some failure mechanisms remain uncontrolled: some are intrinsically related to the manufacturing process itself, while others stem from test practices, especially when the rate of detected defects and the achieved reliability level are considered. The main goal of this thesis is to implement robust and effective test strategies that address the shortcomings of classical test techniques, rely on more realistic fault models, and better meet industrial expectations. With the objective of further improving test efficiency in terms of cost and fault-coverage capability, we present significant contributions in several areas, including in-field test, power-aware at-speed test, and scan-chain testing. A major part of this thesis was devoted to developing new functional test techniques for processor-based systems. The applied methodologies cover both in-field and end-of-manufacturing test issues. In the former, the adopted technique merges and compacts an initial set of functional test programs in order to achieve satisfactory fault coverage while respecting the in-field constraints of reduced test time and limited memory resources. In the latter, since structural information about the design is available, we propose a new test scheme that exploits the existing scan architecture; in this context we validate the complementary relationship between functional and structural testing, while keeping the test within functional power limits and thus avoiding both over- and under-testing. The last contribution of this thesis focuses on improving the test of the most widely used DFT structure, the scan chain. We present an intra-cell aware testing approach that targets physical defects inside the cell itself, achieving higher intra-cell defect coverage and shorter test length than conventional cell-aware ATPG targeting the same defects. As a major result of this test solution, we observe a defect coverage improvement of up to 7.22% together with a test time reduction of up to 33.5% compared with the coverage and test time achieved by cell-aware ATPG.
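A greedy sketch of the merge-and-compact idea for the in-field functional test programs mentioned above: programs are selected by how many new faults they cover per unit of test time, subject to assumed in-field time and memory budgets. The coverage sets, runtimes, sizes, and budgets are hypothetical.

```python
# Greedy compaction of a functional test program set for in-field test:
# repeatedly pick the program that adds the most not-yet-covered faults per
# millisecond of extra test time, while respecting in-field budgets on test
# time and memory. All numbers below are hypothetical.

programs = {                      # name: (covered fault ids, runtime ms, size KB)
    "p1": ({1, 2, 3, 4}, 40, 12),
    "p2": ({3, 4, 5},     25,  8),
    "p3": ({5, 6, 7, 8},  60, 20),
    "p4": ({2, 6, 8},     15,  6),
}
TIME_BUDGET_MS, MEM_BUDGET_KB = 100, 30   # assumed in-field constraints

def compact(programs):
    chosen, covered, time_used, mem_used = [], set(), 0, 0
    remaining = dict(programs)
    while remaining:
        # score = newly covered faults per millisecond of extra test time
        name, (faults, t, m) = max(
            remaining.items(),
            key=lambda kv: len(kv[1][0] - covered) / kv[1][1])
        del remaining[name]
        if not faults - covered:
            continue                      # adds no coverage, skip it
        if time_used + t > TIME_BUDGET_MS or mem_used + m > MEM_BUDGET_KB:
            continue                      # would violate an in-field budget
        chosen.append(name)
        covered |= faults
        time_used += t
        mem_used += m
    return chosen, covered, time_used, mem_used

print(compact(programs))
```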
35

On low power test and DFT techniques for test set compaction

Remersaro, Santiago 01 January 2008 (has links)
The objective of manufacturing test is to separate the faulty circuits from the good circuits after they have been manufactured. Three problems encompassed by this task are addressed here. First, the reduction of the power consumed during test. The behavior of the circuit during test is modified due to scan insertion and other testing techniques; as a result, the power consumed during test can be abnormally large, up to several times the power consumed in functional mode. This can cause a good circuit to fail the test or to be damaged by heating. Second, how to modify the design so that it is easily testable. Since not every possible digital circuit can be tested properly, it is necessary to modify the design to alter its behavior during test; this modification should not alter the functional behavior of the circuit. An example of this is test point insertion, a technique aimed at reducing test time and decreasing the number of faulty circuits that pass the test. Third, the creation of a test set for a given design that both properly accomplishes the task and requires the least amount of time possible to apply. The precision with which faulty circuits are separated from good circuits depends on the application for which the circuit is intended and, where possible, must be maximized. The test application time should be as low as possible to reduce test cost. This dissertation contributes to the discipline of manufacturing test and encompasses advances in the aforementioned areas. First, a method to reduce the power consumed during test is proposed. Second, in the design modification area, a new algorithm to compute test points is proposed. Third, in the test set creation area, a new algorithm to reduce test set application time is introduced. The three algorithms are scalable to current industrial design sizes. Experimental results for the three methods show their effectiveness.
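As one concrete illustration of cutting test power, the sketch below uses adjacent (repeat) fill of don't-care bits, a common way to lower scan-shift switching activity. It is a generic example under an assumed test cube, not necessarily the specific power-reduction method proposed in this dissertation.

```python
# Adjacent (repeat) fill: each 'X' in a partially specified test cube is filled
# with the last specified value, which keeps neighbouring scan bits equal and
# thus reduces transitions (a proxy for shift power) while the pattern is
# shifted through the scan chain.

def adjacent_fill(cube):
    filled, last = [], '0'        # assume a default fill of 0 for leading Xs
    for bit in cube:
        if bit in '01':
            last = bit
        filled.append(last)
    return ''.join(filled)

def shift_transitions(pattern):
    """Count adjacent value changes, a simple proxy for scan-shift power."""
    return sum(a != b for a, b in zip(pattern, pattern[1:]))

cube = "1XX0XXXX1X0XXX1"          # hypothetical partially specified test cube
filled = adjacent_fill(cube)
print(filled, "transitions:", shift_transitions(filled))
```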
36

Design for test methods to reduce test set size

Liu, Yingdi 01 August 2018 (has links)
With rapid development in semiconductor technology, today's large and complex integrated circuits require a large amount of test data to achieve desired test coverage. Test cost, which is proportional to the size of the test set, can be reduced by generating a small number of highly effective test patterns. Automatic Test Pattern Generators (ATPGs) generate effective deterministic test patterns for different fault models and can achieve high test coverage. To reduce ATPG-produced test set size, design for test (DFT) methods can be used to further improve the ATPG process and apply generated test patterns in more efficient ways. The first part of this dissertation introduces a test point insertion (TPI) technique that reduces the test pattern counts and test data volume of a design by adding additional hardware called control points. These dedicated control points are inserted at internal nodes of the design to resolve large internal conflicts during ATPG. Therefore, more faults can be detected by a single test pattern. To minimize silicon area needed to implement these control points, we propose a method that reuses some existing functional flip-flops as drivers of the control points, instead of inserting dedicated flip-flops for the control points. Experimental results on industrial designs indicate that the proposed technique can achieve significant test pattern reductions, similar to the control points using dedicated flip-flops. The second part of this dissertation proposes a staggered ATPG scheme that produces deterministic test-per-clock-based staggered test patterns by using dedicated compactor scan chains to capture additional test responses during scan shift cycles that are used for regular scan cells to completely load each test pattern. These compactor scan chains are formed by dedicated capture-per-cycle observation test points inserted at suitable locations of the design. By leveraging this new scan infrastructure, more compacted test patterns can be generated, and more faults can also be systematically detected during the simulation process, thus reducing the overall test pattern count. To meet the stringent test requirements for in-system test (especially for automotive test), a built-in self-test (BIST) approach, called Stellar BIST, is introduced in the last part of this dissertation. Stellar BIST employs a dedicated BIST infrastructure with additional on-system memory to store some parent test patterns (seeds). Derivative test patterns can be obtained by complementing selected bits of corresponding parent patterns through an on-chip Stellar BIST controller. A dedicated ATPG process is also proposed for generating a minimal set of test patterns that need to be stored and their effective derivative patterns that require short test application time. Furthermore, the proposed scheme can provide flexible trade-offs between stored test data volume and test application time.
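A small software model of the derivative-pattern idea described for Stellar BIST above: each stored parent pattern (seed) expands into derivative patterns by complementing selected bit positions, so only the seeds and the flip positions need on-system storage. The seeds and flip lists below are hypothetical.

```python
# Derivative patterns from stored parent seeds: each derivative is obtained by
# complementing selected bit positions of its parent pattern, mimicking what an
# on-chip controller would do. Seeds and flip-position lists are hypothetical.

def derive(parent, flip_positions):
    bits = list(parent)
    for p in flip_positions:
        bits[p] = '0' if bits[p] == '1' else '1'   # complement selected bits
    return ''.join(bits)

stored = {
    "seed0": ("1010011010", [[0, 3], [5], [1, 7, 9]]),
    "seed1": ("0111000110", [[2, 4, 8]]),
}

applied = []
for name, (parent, derivations) in stored.items():
    applied.append(parent)                         # the parent itself is applied
    applied.extend(derive(parent, flips) for flips in derivations)

print(f"{len(stored)} stored seeds expand to {len(applied)} test patterns")
for p in applied:
    print(p)
```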
37

Automated Test Grading and Pattern Selection for Small-Delay Defects

Yilmaz, Mahmut January 2009 (has links)
Timing-related defects are becoming increasingly important in nanometer-technology integrated circuits (ICs). Small delay variations induced by crosstalk, process variations, power-supply noise, as well as resistive opens and shorts can potentially cause timing failures in a design, thereby leading to quality and reliability concerns. All these effects are noticeable in today's technologies and they are likely to become more prominent in next-generation process technologies (ITRS 2007).

The detection of small-delay defects (SDDs) is difficult because of the small size of the introduced delay. Although the delay introduced by each SDD is small, the overall impact can be significant if the target path is critical, has low slack, or includes many SDDs. The overall delay of the path may become larger than the clock period, causing circuit failure or temporarily incorrect results. As a result, the detection of SDDs typically requires fault excitation through least-slack paths. However, widely used automatic test-pattern generation (ATPG) techniques are not effective at exciting small-delay defects. On the other hand, the usage of commercially available timing-aware tools is expensive in terms of pattern count inflation and very high test-generation times. Furthermore, these tools do not target real physical defects.

SDDs are induced not only by physical defects, but also by run-time variations such as crosstalk and power-supply noise. These variations are ignored by today's commercial ATPG tools. As a result, new methods are required for comprehensive coverage of SDDs.

Test data volume and test application time are also major concerns for large industrial circuits. In recent years, many compression techniques have been proposed and evaluated using industrial designs. However, these methods do not target sequence- or timing-dependent failures while compressing the test patterns. Since timing-related failures in high-performance integrated circuits are now increasingly dominated by SDDs, it is necessary to develop timing-aware compression techniques.

This thesis addresses the problem of selecting the most effective test patterns for detecting SDDs. A new gate and interconnect delay-defect probability measure is defined to model delay variations for nanometer technologies. The proposed technique intelligently selects the best set of patterns for SDD detection from a large pattern set generated using timing-unaware ATPG. It offers significantly lower computational complexity and it excites a larger number of long paths compared to previously proposed timing-aware ATPG methods. It is shown that, for the same pattern count, the selected patterns are more effective than timing-aware ATPG for detecting small-delay defects caused by resistive shorts, resistive opens, process variations, and crosstalk. The proposed technique also serves as the basis for an efficient SDD-aware test compression scheme. The effectiveness of the proposed technique is highlighted for industrial circuits.

In summary, this research is targeted at the testing of SDDs caused by various underlying reasons. The proposed techniques are expected to generate high-quality and compact test patterns for various types of defects in nanometer ICs. The results of this research are expected to provide low-cost and effective test methods for nanometer devices, and they will lead to higher shipped-product quality. / Dissertation
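A simplified sketch of the grading-and-selection idea: each pattern from a timing-unaware ATPG set is scored by the low-slack paths it sensitizes, weighted by a delay-defect probability, and only the top-ranked patterns are kept. The path slacks, probabilities, and sensitization table are hypothetical placeholders, not the measure defined in the dissertation.

```python
# Pattern grading for small-delay defects: score each pattern by the low-slack
# paths it sensitizes, weighting each path by a delay-defect probability, then
# keep the top-ranked patterns from a timing-unaware ATPG set.

CLOCK_PERIOD = 10.0              # ns, assumed

paths = {                        # path id: (slack in ns, defect probability) -- assumed
    "pA": (0.4, 0.30),
    "pB": (1.1, 0.20),
    "pC": (3.5, 0.05),
    "pD": (0.2, 0.25),
}

sensitized = {                   # pattern id -> paths it sensitizes (assumed)
    "t1": ["pA", "pC"],
    "t2": ["pB"],
    "t3": ["pA", "pD"],
    "t4": ["pC"],
}

def grade(pattern_paths):
    score = 0.0
    for pid in pattern_paths:
        slack, prob = paths[pid]
        # low slack (long path) and high defect probability raise the grade
        score += prob * (1.0 - slack / CLOCK_PERIOD)
    return score

budget = 2                       # keep only the top-2 patterns
ranked = sorted(sensitized, key=lambda t: grade(sensitized[t]), reverse=True)
print("selected:", ranked[:budget])
```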
38

AN EFFICIENT APPROACH TO REDUCE TEST APPLICATION TIME THROUGH LIMITED SHIFT OPERATIONS IN SCAN CHAINS

Kuchi, Jayasurya 01 August 2017 (has links)
Scan chains in DFT have gained more prominence in recent years due to the increase in the complexity of sequential circuits. As the test time increases along with the number of memory elements in the circuit, new and improved methods have come into prominence. Even though a scan chain increases observability and controllability, a big portion of the test time is wasted while shifting test patterns in and out through the scan chain. This thesis focuses on reducing the number of clock cycles needed to test the circuit. The proposed algorithm uses modified shift procedures based on 1) finding hard-to-detect faults in the circuit, 2) a productive way to generate test patterns for the combinational blocks between the flip-flops, and 3) rearranging test patterns and changing the shift procedures to achieve fault coverage in a reduced number of clock cycles. In this model, the selection process is based on calculating the fault value of each fault and the total fault value of each vector, which are used to find the hard faults and the order in which the vectors are applied. This method reduces the number of shifts required to detect the faults, thereby reducing the testing time. This thesis concentrates on the appropriate utilization of scan chains for testing sequential circuits, and in this context the proposed method shows promising results in reducing the number of shifts and thereby the test time. The experimental results are based on the widely cited ISCAS 89 benchmark circuits.
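A sketch of the fault-value-based ordering idea mentioned above, under the assumption that a fault's value grows as fewer vectors detect it and that a vector's total fault value is the sum over the faults it detects; the exact definitions used in the thesis may differ, and the detection table is hypothetical.

```python
# Fault-value-based vector ordering: rare (hard-to-detect) faults get a high
# fault value, each vector's total fault value is the sum over the faults it
# detects, and vectors are applied in decreasing order of that total so the
# hard faults are targeted first.

detects = {                       # vector -> faults it detects (hypothetical)
    "v1": {"f1", "f2", "f3"},
    "v2": {"f3", "f4"},
    "v3": {"f4"},
    "v4": {"f1", "f5"},
}

# fault value: the fewer vectors that detect a fault, the harder it is
detected_by = {}
for vec, faults in detects.items():
    for f in faults:
        detected_by.setdefault(f, set()).add(vec)
fault_value = {f: 1.0 / len(vecs) for f, vecs in detected_by.items()}

total_value = {vec: sum(fault_value[f] for f in faults)
               for vec, faults in detects.items()}

order = sorted(detects, key=total_value.get, reverse=True)
print("application order:", order)
```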
39

Generation of compact test sets and a design for the generation of tests with low switching activity

Kumar, Amit 01 December 2014 (has links)
Test generation procedures for large VLSI designs are required to achieve close to 100% fault coverage using a small number of tests. They must also accommodate on-chip test compression circuits, which are widely used in modern designs. To obtain test sets with small sizes, one can use extra hardware such as test points or use software techniques. An important aspect impacting test generation is the number of specified positions, which facilitates the encoding of test cubes when using test compression logic. Fortuitous detection, i.e., generating tests so that they also facilitate the detection of faults not yet targeted, is another important goal for test generation procedures. First, we consider the generation of compact test sets for designs using on-chip test compression logic. We introduce two new measures to guide automatic test generation procedures (ATPGs) in balancing these two contradictory requirements of fortuitous detection and number of specifications. One of the new measures is meant to facilitate detection of yet-undetected faults, and its value is periodically updated. The second measure reduces the number of specified positions, which is crucial when using high compression. Additionally, we introduce a way to randomly choose between the two measures. We also propose an ATPG methodology tailored for BIST-ready designs with X-bounding logic and test points. X-bounding logic and test points have a significant impact on test data compression by reducing the number of specified positions. We propose a new ATPG guidance mechanism that balances the reduced specifications required in BIST-ready designs against the detection of yet-undetected faults. We also found that compact test generation for BIST-ready designs is influenced by the order in which faults are targeted, and we propose a new fault ordering technique based on fault location in a fanout-free region (FFR). Transition faults are difficult to test and often result in longer test lengths; for these faults we propose a new fault ordering technique based on test enumeration, together with a new guidance approach. Test set sizes were reduced significantly for both stuck-at and transition fault models. In addition to reducing data volume, test time, and test pin counts, test compression schemes have been used successfully to limit test power dissipation. Indisputably, the toggling of scan cells in the scan chains that are universally used to facilitate testing of industrial designs can consume much more power than a circuit is rated for. Balancing test set sizes against the power consumption of a given design is therefore a challenge. We propose a new Design for Test (DFT) scheme that deploys an on-chip power-aware test data decompressor, the corresponding test cube encoding method, and a compression-constrained ATPG that allows loading scan chains with patterns having low transition counts, while encoding a significant number of the specified bits produced by ATPG in a compression-friendly manner. Moreover, the new scheme avoids periods of elevated toggling in scan chains and reduces scan unload switching activity due to the unique test stimuli produced by the new technique, leading to a significantly reduced power envelope for the entire circuit under test.
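A toy illustration of guiding decisions by two competing measures with a random choice between them, as described above: one measure favors fortuitous detection of still-undetected faults, the other favors fewer specified positions. The candidate decisions and their scores are hypothetical, not the actual ATPG cost functions.

```python
import random

# Guidance by two competing measures: each decision randomly picks whether to
# follow the fortuitous-detection measure (more potential fault detections) or
# the specification measure (fewer specified scan positions, better for
# compression). Candidate data below are hypothetical.

random.seed(1)

candidates = [            # (decision id, extra faults likely detected, bits specified)
    ("d1", 9, 14),
    ("d2", 6,  5),
    ("d3", 3,  2),
]

def fortuitous_measure(c):
    return c[1]           # more potential fault detections is better

def specification_measure(c):
    return -c[2]          # fewer specified positions is better

def pick(candidates, p_fortuitous=0.5):
    measure = (fortuitous_measure if random.random() < p_fortuitous
               else specification_measure)
    return max(candidates, key=measure)

for _ in range(5):
    print(pick(candidates)[0])   # mixes coverage-driven and specification-driven picks
```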
40

Techniques for Enhancing Test and Diagnosis of Digital Circuits

Prabhu, Sarvesh P. 10 January 2015 (has links)
Test and diagnosis are critical areas in semiconductor manufacturing. Every chip manufactured using a new or immature technology or process needs to be tested for manufacturing defects to ensure that defective chips are not sold to the customer. Conventionally, test is done by mounting the chip on automated test equipment (ATE) and applying test patterns to test for different faults. With shrinking feature sizes, the complexity of the circuits on a chip is increasing, which in turn increases the number of test patterns needed to test the chip comprehensively. This increases the test application time, which further increases the cost of test, ultimately leading to an increase in the cost per device. Furthermore, chips that fail during test need to be diagnosed to determine the cause of the failure so that the manufacturing process can be improved to increase the yield. With the increase in the size and complexity of circuits, diagnosis is becoming an even more challenging and time-consuming process. Fast diagnosis of failing chips can help reduce the ramp-up to the high-volume manufacturing stage and thus reduce the time to market. To reduce the time needed for diagnosis, efficient diagnostic patterns have to be generated that can distinguish between several faults. However, in order to reduce the test application time, the total number of patterns should be minimized. We propose a technique for generating diagnostic patterns that are inherently compact. Experimental results show up to a 73% reduction in the number of diagnostic patterns needed to distinguish all faults. Logic Built-In Self-Test (LBIST) is an alternative test methodology in which all components needed to test the chip are on the chip itself. This eliminates the need for expensive ATE and allows at-speed testing of chips. However, hardware overhead is incurred in storing deterministic test patterns on chip, and failing chips are hard to diagnose in this LBIST architecture due to limited observability. We propose a technique to reduce the number of patterns that need to be stored on chip and thus reduce the hardware overhead. We also propose a new LBIST architecture that increases diagnosability in LBIST with minimal hardware overhead. These two techniques overcome the disadvantages of LBIST and can make LBIST a more popular solution for chip testing. Modern designs may contain a large number of small embedded memories. Memory Built-In Self-Test (MBIST) is the conventional technique for testing memories, but it incurs hardware overhead. Using MBIST for small embedded memories is impractical because the hardware overhead would be significantly high. Test generation for such circuits is difficult because the fault effect needs to be propagated through the memory. We propose a new technique for testing circuits with embedded memories. By using an SMT solver, we model the memory at a high level of abstraction using the theory of arrays, while keeping the surrounding logic at the gate level. This effectively converts the test generation problem into a combinational test generation problem and makes test generation easier than with conventional techniques. / Ph. D.
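A minimal sketch of modeling an embedded memory with the SMT theory of arrays while keeping the surrounding logic at the Boolean gate level, using the z3-solver Python package (an assumption; the dissertation does not name a specific solver). The tiny circuit, the stuck-at fault, and the variable names are hypothetical; the solver is asked for input values that propagate the fault effect through the memory.

```python
from z3 import (And, Array, BitVec, BitVecSort, Bool, BoolSort, BoolVal,
                Select, Solver, Store, Xor, sat)

# Gate-level logic writes to and reads from an embedded memory abstracted with
# the theory of arrays; the solver searches for primary-input values under
# which the fault-free and faulty circuits read back different values, i.e. a
# test that propagates the fault effect through the memory.

a, b = Bool('a'), Bool('b')                     # primary inputs (gate level)
addr = BitVec('addr', 2)                        # write/read address

mem = Array('mem', BitVecSort(2), BoolSort())   # abstract embedded memory

good_data = And(a, b)                           # write data in the fault-free circuit
faulty_data = And(a, BoolVal(False))            # same gate with input b stuck-at-0

good_read = Select(Store(mem, addr, good_data), addr)
faulty_read = Select(Store(mem, addr, faulty_data), addr)

s = Solver()
s.add(Xor(good_read, faulty_read))              # require an observable difference
if s.check() == sat:
    m = s.model()
    print({d.name(): m[d] for d in m.decls()})  # e.g. a and b both True
else:
    print("fault effect cannot be propagated in this model")
```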
