About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Generování testovacích vzorů / Test pattern generation

Hašek, Martin January 2010 (has links)
This thesis focuses on the development of an application for simulating lenses' optical distortions and for creating custom test patterns. The first part discusses common optical distortion problems and the concept behind the software analysis. The realization and implementation of the application's individual modules are then described. Finally, the graphical user interface and its functionality are presented.
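The abstract includes no code, but distortion simulation of this kind is typically built on the Brown radial distortion model. Below is a minimal Python sketch of that model applied to a generated grid pattern; the coefficients k1, k2 and the grid itself are illustrative assumptions, not taken from the thesis.

```python
# A minimal sketch of radial (barrel/pincushion) distortion; the
# coefficients and pattern are invented for illustration.
import numpy as np

def distort(points, k1=-0.15, k2=0.05):
    """Apply the Brown radial distortion model to normalized 2-D points."""
    r2 = np.sum(points**2, axis=1, keepdims=True)  # squared radius per point
    factor = 1.0 + k1 * r2 + k2 * r2**2            # radial gain
    return points * factor

# Generate a simple grid-corner test pattern on [-1, 1] x [-1, 1].
xs, ys = np.meshgrid(np.linspace(-1, 1, 9), np.linspace(-1, 1, 9))
grid = np.column_stack([xs.ravel(), ys.ravel()])
warped = distort(grid)
print(warped[:3])  # distorted positions of the first few grid corners
```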
12

SINGLE TROJAN INJECTION MODEL GENERATION AND DETECTION

Bhamidipati, Harini January 2009 (has links)
No description available.
13

An automatic test pattern generation in the logic gate level circuits and MOS transistor circuits at Ohio University

Lee, Hoon-Kyeu January 1986 (has links)
No description available.
14

Techniques for Enhancing Test and Diagnosis of Digital Circuits

Prabhu, Sarvesh P. 10 January 2015 (has links)
Test and diagnosis are critical areas in semiconductor manufacturing. Every chip manufactured using a new or premature technology or process needs to be tested for manufacturing defects to ensure defective chips are not sold to the customer. Conventionally, testing is done by mounting the chip on Automated Test Equipment (ATE) and applying test patterns to test for different faults. With shrinking feature sizes, the complexity of the circuits on a chip is increasing, which in turn increases the number of test patterns needed to test the chip comprehensively. This increases the test application time, which further increases the cost of test, ultimately leading to an increase in the cost per device. Furthermore, chips that fail during test need to be diagnosed to determine the cause of the failure so that the manufacturing process can be improved to increase the yield. With the increase in the size and complexity of circuits, diagnosis is becoming an even more challenging and time-consuming process. Fast diagnosis of failing chips can help reduce the ramp-up to the high-volume manufacturing stage and thus reduce the time to market. To reduce the time needed for diagnosis, efficient diagnostic patterns have to be generated that can distinguish between several faults. However, in order to reduce the test application time, the total number of patterns should be minimized. We propose a technique for generating diagnostic patterns that are inherently compact. Experimental results show up to 73% reduction in the number of diagnostic patterns needed to distinguish all faults. Logic Built-in Self-Test (LBIST) is an alternative testing methodology wherein all components needed to test the chip are on the chip itself. This eliminates the need for expensive ATEs and allows for at-speed testing of chips. However, hardware overhead is incurred in storing deterministic test patterns on chip, and failing chips are hard to diagnose in this LBIST architecture due to limited observability. We propose a technique to reduce the number of patterns that need to be stored on chip and thus reduce the hardware overhead. We also propose a new LBIST architecture which increases the diagnosability in LBIST with minimal hardware overhead. These two techniques overcome the disadvantages of LBIST and can make LBIST a more popular solution for testing chips. Modern designs may contain a large number of small embedded memories. Memory Built-in Self-Test (MBIST) is the conventional technique for testing memories, but it incurs hardware overhead. Using MBIST for small embedded memories is impractical, as the hardware overhead would be significantly high. Test generation for such circuits is difficult because the fault effect needs to be propagated through the memory. We propose a new technique for testing circuits with embedded memories. By using an SMT solver, we model the memory at a high level of abstraction using the theory of arrays, while keeping the surrounding logic at the gate level. This effectively converts the test generation problem into a combinational test generation problem and makes test generation easier than with conventional techniques. / Ph. D.
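The memory-modeling idea in the final sentences can be illustrated with a toy sketch. Assuming an SMT solver such as Z3 and its theory of arrays (the abstract names neither a specific solver API nor this circuit; both are assumptions here), the fragment below finds an input pattern that propagates a stuck-at fault through an abstracted embedded memory to an observable output.

```python
# A toy miter-style sketch (not the thesis' actual tool) of test generation
# through an embedded memory, using z3's theory of arrays.
from z3 import (Array, BitVec, BitVecSort, BitVecVal, Store, Select,
                Solver, sat)

addr = BitVec('addr', 2)                           # primary inputs
data = BitVec('data', 4)
mem = Array('mem', BitVecSort(2), BitVecSort(4))   # memory as abstract array

line_good = data ^ BitVecVal(0b0101, 4)            # some internal logic
line_faulty = line_good & BitVecVal(0b1110, 4)     # stuck-at-0 on bit 0

out_good = Select(Store(mem, addr, line_good), addr)     # write then read back
out_faulty = Select(Store(mem, addr, line_faulty), addr)

s = Solver()
s.add(out_good != out_faulty)   # require the fault effect to reach the output
if s.check() == sat:
    m = s.model()
    print('test pattern:', m[addr], m[data])
```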
15

Testinių rinkinių atrinkimo programinės įrangos sudarymas ir tyrimas / Construction and research of software for test patterns selection

Drovnenkov, Aleksej 16 August 2007 (has links)
The automated test pattern generation (ATPG) problem has been studied for a relatively long time. Its goal is to find optimal test vector sequences that cover as many production-caused digital circuit faults as possible in the minimum amount of time. One way to test digital circuits and generate test vectors is the functional test method. Its benefit is that the system does not need to know the circuit's inner logical model; it deals only with the inputs, so the ideal model of the circuit can be used as a "black box": the algorithm can obtain the ideal circuit's reaction to a given input test vector. This thesis mostly covers the functional-model approach to ATPG. It presents the basic theory of automated test vector generation software with a brief historical review, and compares white-box and black-box testing and test vector generation algorithms. The software's static structure and its components, as well as the system's typical use cases, are also described. The research part focuses on the algorithms used and the research methods that produce the results presented in the experimental part.
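As a rough illustration of the black-box functional approach described above, the sketch below treats the ideal model as an opaque callable and keeps the candidate vectors on which a defective device disagrees with it. The 2-bit adder and the injected defect are invented for the example; they are not from the thesis.

```python
# A minimal black-box sketch: compare a device against an ideal model
# without any knowledge of the device's internal structure.
import random

def golden(v):                         # ideal model: 2-bit adder (assumed)
    return (v[0] + v[1]) & 0b11

def dut(v):                            # device with an injected defect
    return (v[0] + v[1] + (1 if v[0] == 3 else 0)) & 0b11

random.seed(0)
candidates = [(random.randrange(4), random.randrange(4)) for _ in range(64)]
failing = [v for v in candidates if golden(v) != dut(v)]
print('vectors exposing the defect:', sorted(set(failing)))
```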
16

On Detection, Analysis and Characterization of Transient and Parametric Failures in Nano-scale CMOS VLSI

Sanyal, Alodeep 01 May 2010 (has links)
As we move deep into the nanometer regime of CMOS VLSI (the 45nm node and below), the device noise margin gets sharply eroded because of the continuous lowering of device threshold voltage together with an ever increasing rate of signal transitions driven by the consistent demand for higher performance. Sharp erosion of the device noise margin vastly increases the likelihood of intermittent failures (also known as parametric failures) during device operation, as opposed to permanent failures caused by physical defects introduced during the manufacturing process. The major sources of intermittent failures are capacitive crosstalk between neighboring interconnects, abnormal drops in power supply voltage (also known as droop), localized thermal gradients, and soft errors caused by the impact of high-energy particles on the semiconductor surface. In nanometer technology, these intermittent failures largely outnumber the permanent failures caused by physical defects. Therefore, it is of paramount importance to come up with efficient test generation and test application methods to accurately detect and characterize these classes of failures. Soft error rate (SER) is an important design metric used in the semiconductor industry, expressed as the number of such errors encountered per billion hours of device operation, known as the Failure-In-Time (FIT) rate. Soft errors are rare events. Traditional techniques for SER characterization involve testing multiple devices in parallel, or testing the device while keeping it in a high-energy neutron bombardment chamber to artificially accelerate the occurrence of single events. Motivated by the fact that measurement of SER incurs high time and cost overhead, in this thesis we propose a two-step approach: (i) a new filtering technique based on the amplitude of the noise pulse, which significantly reduces the set of soft-error-susceptible nodes to be considered for a given design; followed by (ii) an Integer Linear Program (ILP)-based pattern generation technique that accelerates the SER characterization process by 1-2 orders of magnitude compared to the current state of the art. During test application, it is important to distinguish between an intermittent failure and a permanent failure. Motivated by the fact that most intermittent failures are temporally sparse in nature, we present a novel design-for-testability (DFT) architecture which facilitates application of the same test vector twice in a row. The underlying assumption here is that a soft fail will not manifest its effect in two consecutive test cycles, whereas the error caused by a physical defect will produce an identically corrupt output signature in both test cycles. Therefore, comparing the output signatures for two consecutive applications of the same test vector will accurately distinguish between a soft fail and a hard fail. We show the application of this DFT technique in measuring soft error rate as well as other circuit-marginality-related parametric failures, such as thermal hot-spot induced delay failures. A major contribution of this thesis lies in investigating the effect of multiple sources of noise acting together to exacerbate the noise effect even further. The existing literature on signal integrity verification and test falls short of taking the combined noise effects into account. We particularly focus on capacitive crosstalk on long signal nets. A typical long net is capacitively coupled with multiple aggressors and also tends to have multiple fanout gates. Gate leakage current that originates in the fanout receivers flows backward and terminates in the driver, causing a shift in the driver output voltage. This effect becomes more prominent as the gate oxide is scaled more aggressively. In this thesis, we first present a dynamic simulation-based study to establish the significance of the problem, followed by an automatic test pattern generation (ATPG) solution which uses a 0-1 Integer Linear Program (ILP) to maximize the cumulative voltage noise at a given victim net due to crosstalk and gate leakage loading, in conjunction with propagating the fault effect to an observation point. Pattern pairs generated by this technique are useful both for manufacturing test application and for signal integrity verification of nanometer designs. This research opens up a new direction for studying nanometer noise effects and motivates us to extend the study to other noise sources in tandem, including voltage drop and temperature effects.
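The 0-1 ILP formulation the abstract outlines can be illustrated with a hedged toy model using the PuLP library; the coupling weights and the single logic-implication constraint below are placeholders, not data from the thesis.

```python
# A 0-1 ILP sketch: choose which aggressors switch against the victim to
# maximize a weighted noise proxy, subject to a logic-exclusion constraint.
from pulp import LpProblem, LpMaximize, LpVariable, lpSum, PULP_CBC_CMD

weights = {'agg0': 3.0, 'agg1': 2.5, 'agg2': 1.0}       # coupling-cap weights
x = {a: LpVariable(a, cat='Binary') for a in weights}   # 1 = switches against victim

prob = LpProblem('max_victim_noise', LpMaximize)
prob += lpSum(weights[a] * x[a] for a in weights)       # cumulative noise proxy
# Example side-input constraint: shared logic forbids agg0 and agg1 together.
prob += x['agg0'] + x['agg1'] <= 1

prob.solve(PULP_CBC_CMD(msg=0))
print({a: int(x[a].value()) for a in weights})
```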
17

FORMAL: A SEQUENTIAL ATPG-BASED BOUNDED MODEL CHECKING SYSTEM FOR VLSI CIRCUITS

Qiang, Qiang 10 April 2006 (has links)
No description available.
18

Online Techniques for Enhancing the Diagnosis of Digital Circuits

Tanwir, Sarmad 05 April 2018 (has links)
The test process for semiconductor devices involves generation and application of test patterns, failure logging and diagnosis. Traditionally, most of these activities cater for all possible faults without making any assumptions about the actual defects present in the circuit. As the size of circuits continues to increase (following Moore's Law), the size of the test sets is also increasing exponentially. It follows that the cost of testing has already surpassed that of design and fabrication. The central idea of our work in this dissertation is that we can have substantial savings in test cost if we bring the actual hardware under test inside the test process's various loops -- in particular: failure logging, diagnostic pattern generation and diagnosis. Our first work, which we describe in Chapter 3, applies this idea to failure logging. We modify the existing failure logging process, which logs only the first few failure observations, into an intelligent one that logs failures on the basis of their usefulness for diagnosis. To enable the intelligent logging, we propose some lightweight metrics that can be computed in real-time to grade the diagnosability of the observed failures. On the basis of this grading, we select the failures to be logged dynamically according to the actual defects in the circuit under test. This means that the failures may be logged in a different manner for devices having different defects. This is in contrast with the existing method, which has the same logging scheme for all failing devices. With the failing devices in the loop, we are able to optimize the failure log in accordance with every particular failing device, thereby improving the quality of subsequent diagnosis. In Chapter 4, we investigate the most lightweight of these metrics for failure log optimization for the diagnosis of multiple simultaneous faults and provide the results of our experiments. Often, in spite of exploiting the entire potential of a test set, we might not be able to meet our diagnosis goals. This is because the manufacturing tests are generated to meet the fault coverage goals using as few tests as possible. In other words, they are optimized for 'detection count' and 'test time' and not for 'diagnosis'. In our second work, we leverage real-time measures of diagnosability, similar to the ones used for failure log optimization, to generate additional diagnostic patterns. These additional patterns help diagnose the existing failures beyond the power of the existing tests. Again, since the failing device is inside the test generation loop, we obtain highly specific tests for each failing device that are optimized for its diagnosis. Using our proposed framework, we are able to diagnose devices better and faster than the state-of-the-art industrial tools. Chapter 5 provides a detailed description of this method. Our third work extends the hardware-in-the-loop framework to the diagnosis of scan chains. In this method, we define a different metric that is applicable to scan chain diagnosis. Again, this method provides additional tests that are specific to the diagnosis of the particular scan chain defects in individual devices. We achieve two further advantages in this approach compared to the online diagnostic pattern generator for logic diagnosis. Firstly, we do not need a known-good device for generating or knowing the good response, and secondly, besides the generation of additional tests, we also perform the final diagnosis online, i.e. on the tester during test application. We explain this in detail in Chapter 6. In our research, we observe that feedback from a device is very useful for enhancing the quality of root-cause investigations of failures in its logic and test circuitry, i.e. the scan chains. This leads to the question of whether some primitive signals from the devices can be indicative of the fault coverage of the applied tests. In other words, can we estimate the fault coverage without the costly activities of fault modeling and simulation? By conducting further research into this problem, we found that entropy measurements at the circuit outputs do indeed have a high correlation with fault coverage and can be used to estimate it with good accuracy. We find that these predictions are accurate not only for random tests but also for high-coverage ATPG-generated tests. We present the details of our fourth contribution in Chapter 7. This work is of significant importance because it suggests that high-coverage tests can be learned by continuously applying random test patterns to the hardware and using the measured entropy as a reward function. We believe that this lays down a foundation for further research into gate-level sequential test generation, which is currently intractable for industrial-scale circuits with the existing techniques. / Ph. D.
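The entropy measurement mentioned in the last paragraph reduces, in its simplest form, to the Shannon entropy of an output's observed response distribution. A small self-contained sketch follows; the response data are synthetic, not from the dissertation.

```python
# Estimate the Shannon entropy of an output's empirical response
# distribution over a set of applied test patterns.
import math
from collections import Counter

def entropy(samples):
    """Shannon entropy (in bits) of the empirical distribution of samples."""
    n = len(samples)
    return -sum((c / n) * math.log2(c / n) for c in Counter(samples).values())

# Synthetic output signatures captured over eight applied patterns.
responses = [0b1011, 0b0010, 0b1011, 0b1110, 0b0010, 0b0111, 0b1011, 0b0001]
print(f'output entropy: {entropy(responses):.3f} bits')
```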
19

Power Modeling and Scheduling of Tests for Core-based System Chips

Samii, Soheil January 2005 (has links)
The technology today makes it possible to integrate a complete system on a single chip, called a "System-on-Chip" (SOC). Nowadays SOC designers use previously designed hardware modules, called cores, together with their user-defined logic (UDL), to form a complete system on a single chip. The manufacturing process may result in defective chips, for instance due to the base material, and therefore testing chips after production is important in order to ensure fault-free chips.

The testing time for a chip will affect its final cost. Thus it is important to minimize the testing time for each chip. For core-based SOCs this can be done by testing several cores at the same time, instead of testing the cores sequentially. However, this will result in higher activity in the chip, and hence higher power consumption. Due to several factors in the manufacturing process there are limits on the power consumption of a chip. Therefore, the power limitations should be carefully considered when planning the testing of a chip; otherwise it can be damaged during test due to overheating. This leads to the problem of minimizing testing time under such power constraints.

In this thesis we discuss test power modeling and its application to SOC testing. We present previous work in this area and conclude that current power modeling techniques in SOC testing are rather pessimistic. We therefore propose a more accurate power model that is based on the analysis of the test data. Furthermore, we present techniques for test pattern reordering, with the objective of partitioning the test power consumption into low parts and high parts.

The power model is included in a tool for SOC test architecture design and test scheduling, where the scheduling heuristic is designed for SOCs with fixed-width test bus architectures. Several experiments have been conducted in order to evaluate the proposed approaches. The results show that, by using the presented power modeling techniques in test scheduling algorithms, we get lower testing times and thus lower test cost.
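A power-constrained test schedule of the kind discussed above can be approximated with a simple greedy packer. The sketch below is illustrative only, assuming each core test has a fixed duration and peak power and that every single test fits the power budget; it is not the thesis' scheduling heuristic.

```python
# Greedy power-constrained scheduling: at each event time, start any waiting
# core test whose peak power fits under the chip budget. Core data invented.
def schedule(tests, power_budget):
    """tests: dict core -> (test_time, peak_power). Returns start times.
    Assumes each individual test's peak power is within the budget."""
    pending = sorted(tests, key=lambda c: -tests[c][0])  # longest tests first
    running, start, t = [], {}, 0
    while pending or running:
        running = [(c, end) for c, end in running if end > t]  # retire done
        used = sum(tests[c][1] for c, _ in running)
        for c in list(pending):
            if used + tests[c][1] <= power_budget:
                start[c] = t
                running.append((c, t + tests[c][0]))
                used += tests[c][1]
                pending.remove(c)
        t = min(end for _, end in running) if running else t  # next event

    return start

cores = {'cpu': (10, 4.0), 'dsp': (7, 3.5), 'sram': (5, 2.0), 'udl': (3, 1.5)}
print(schedule(cores, power_budget=6.0))
```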
20

Otimização genética de sequências de padrões de teste para circuitos VLSI / Genetic optimization of test pattern sequences for VLSI circuits

Dias, Leonardo Alves 29 February 2016 (has links)
An integrated circuit (IC) in test mode has higher energy consumption than in normal operating mode, due to the increased number of transitions at the circuit nodes caused by the test patterns applied to stimulate the IC during test execution. The resulting high power dissipation can damage the IC, leading to higher costs for manufacturers. In this work we propose a genetic algorithm that optimizes test pattern sequences for low energy consumption during test execution while maintaining adequate fault coverage. We also propose using the Berlekamp-Massey algorithm to synthesize an integrated test pattern generator with low hardware area overhead, based on a Linear Feedback Shift Register (LFSR), capable of generating the optimized sequences. The sequences are optimized by reducing the number of transitions at the circuit nodes, evaluated by a computer program developed in this research in C++. Finally, simulations were performed with the genetic algorithm to examine its behavior with respect to the number of transitions, fault coverage, and hardware area overhead.
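The reordering idea can be illustrated with a small evolutionary sketch: evolve a permutation of a fixed pattern set so consecutive patterns differ in as few bits as possible, a proxy for node transitions. For brevity this uses mutation only (no crossover), and the patterns are random stand-ins rather than the thesis' test sets.

```python
# Evolve a low-transition ordering of a fixed set of test patterns.
import random

random.seed(1)
PATTERNS = [random.getrandbits(16) for _ in range(12)]  # stand-in patterns

def transitions(order):
    """Total bit flips between consecutive patterns in the given order."""
    return sum(bin(PATTERNS[a] ^ PATTERNS[b]).count('1')
               for a, b in zip(order, order[1:]))

def mutate(order):
    i, j = random.sample(range(len(order)), 2)   # swap two positions
    child = order[:]
    child[i], child[j] = child[j], child[i]
    return child

pop = [random.sample(range(len(PATTERNS)), len(PATTERNS)) for _ in range(30)]
for _ in range(200):                             # evolutionary loop
    pop.sort(key=transitions)                    # rank by fitness (fewer flips)
    pop = pop[:10] + [mutate(random.choice(pop[:10])) for _ in range(20)]
print('best transition count:', transitions(min(pop, key=transitions)))
```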
