  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Improving encoding efficiency in test compression using sequential linear decompressors with retained free variables

Muthyala Sudhakar, Sreenivaas 23 October 2013 (has links)
This thesis proposes an approach to improve test compression with sequential linear decompressors by retaining free variables. Sequential linear decompressors are inherently efficient and attractive for encoding test vectors with high percentages of don't-cares (i.e., test cubes). The encoding of these test cubes is done by solving a system of linear equations. In streaming decompression, a fixed number of free variables is used to encode each test cube. The non-pivot free variables arising during Gaussian elimination are wasted when the decompressor is reset before encoding the next test cube, which is conventionally done to keep computational complexity manageable. This thesis explores a technique for retaining the non-pivot free variables used in encoding one test cube and applying them to the encoding of subsequent test cubes. The approach retains most of the non-pivot free variables with a minimal increase in the runtime for solving the equations, and no additional control information is needed. Experimental results show that the encoding efficiency, and hence the compression, can be significantly boosted.
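The encoding step described above can be sketched as plain Gaussian elimination over GF(2); this is an illustration of the general technique, not the thesis's implementation. Columns that never gain a pivot correspond to the "non-pivot free variables" the thesis proposes to retain rather than discard at reset.

```python
def solve_gf2(A, b):
    """Solve A·x = b over GF(2). A is a list of rows (lists of 0/1), b a list
    of 0/1 specified care bits. Returns (x, free_cols) with one solution and
    the non-pivot columns, or None if the cube cannot be encoded."""
    m, n = len(A), len(A[0])
    rows = [row[:] + [bi] for row, bi in zip(A, b)]   # augmented matrix
    pivot_cols = []
    r = 0
    for c in range(n):
        # find a row at or below r with a 1 in column c
        pr = next((i for i in range(r, m) if rows[i][c]), None)
        if pr is None:
            continue                                  # column c stays free
        rows[r], rows[pr] = rows[pr], rows[r]
        for i in range(m):
            if i != r and rows[i][c]:                 # eliminate column c
                rows[i] = [a ^ p for a, p in zip(rows[i], rows[r])]
        pivot_cols.append(c)
        r += 1
        if r == m:
            break
    if any(rows[i][n] for i in range(r, m)):          # 0 = 1: inconsistent
        return None
    x = [0] * n                                       # free variables -> 0
    for i, c in enumerate(pivot_cols):
        x[c] = rows[i][n]
    free_cols = [c for c in range(n) if c not in pivot_cols]
    return x, free_cols
```

Carrying the columns in `free_cols` over to the system built for the next test cube, instead of resetting the decompressor, is the gist of the retention idea.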
2

An efficient approach to reduce test application time through limited shift operations in scan chains

Kuchi, Jayasurya 01 August 2017 (has links)
Scan chains in DFT have gained prominence in recent years due to the increasing complexity of sequential circuits. As test time grows along with the number of memory elements in a circuit, new and improved methods have come into prominence. Even though a scan chain increases observability and controllability, a large portion of the test time is wasted shifting test patterns in and out through the chain. This thesis focuses on reducing the number of clock cycles needed to test the circuit. The proposed algorithm uses modified shift procedures based on 1) finding hard-to-detect faults in the circuit, 2) a productive way to generate test patterns for the combinational blocks between the flip-flops, and 3) rearranging test patterns and changing the shift procedures to achieve fault coverage in a reduced number of clock cycles. In this model, the selection process is based on calculating the fault value of each fault and the total fault value of each vector, which are used to find the hard faults and the order in which the vectors are applied. This method reduces the number of shifts required to detect the faults, thereby reducing testing time. The thesis concentrates on the appropriate utilization of scan chains for testing sequential circuits, and in this context the proposed method shows promising results in reducing the number of shifts and hence the test time. The experimental results are based on the widely cited ISCAS 89 benchmark circuits.
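A hedged sketch of the vector-ordering idea described above: the names "fault value" and "total fault value" follow the abstract, but the concrete scoring rule here (rarely detected faults score higher) is an illustrative assumption, not the thesis's formula.

```python
from collections import Counter

def order_vectors(detects):
    """detects: {vector_id: set of fault ids the vector detects}. Rank faults
    as harder when fewer vectors detect them, score each vector by the summed
    value of its still-undetected faults, and greedily order vectors,
    stopping once no vector adds new detection."""
    hit = Counter(f for fs in detects.values() for f in fs)
    fault_value = {f: 1.0 / n for f, n in hit.items()}   # rare fault -> high value
    remaining = set(hit)
    order = []
    pool = dict(detects)
    while remaining and pool:
        vid = max(pool, key=lambda v: sum(fault_value[f] for f in pool[v] & remaining))
        gained = pool.pop(vid) & remaining
        if not gained:
            break                                        # nothing new to detect
        order.append(vid)
        remaining -= gained
    return order
```

Applying high-value vectors first lets later, redundant vectors be dropped or applied with fewer shift operations.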
3

Diagnosis of VLSI circuit defects: defects in scan chains and circuit logic

Tang, Xun 01 December 2010 (has links)
Given a logic circuit that fails a test, diagnosis is the process of narrowing down the possible locations of the defects. Diagnosis to locate defects in VLSI circuits has become very important during the yield ramp-up process, especially for 90 nm and below technologies, where physical failure analysis machines are less successful due to the reduced defect visibility caused by smaller feature sizes and larger leakage currents. Successful defect isolation relies heavily on guidance from fault diagnosis and will depend on it even more in future technologies. To assist a designer or a failure analysis engineer, a diagnosis tool tries to identify the possible locations of the failure effectively and quickly. While many defects reside in the logic part of a chip, defects in scan chains have become more and more common, as typically 30%-50% of the logic gates in a scan design impact the operation of the scan chains. Logic diagnosis and scan chain diagnosis are thus the two main fields of diagnosis research. The quality of diagnosis directly impacts the time-to-market and the total product cost: volume diagnosis with statistical learning is important for discovering systematic defects, and an accurate diagnosis tool is required to diagnose large numbers of failing devices to aid statistical yield learning. In this work, we propose techniques to improve diagnosis accuracy and resolution, as well as techniques to improve run-time performance, and we consider the problem of determining the location of defects in both scan chains and logic. We investigate a method to improve the diagnosability of production compressed test patterns for multiple scan chain failures. We then propose a method to generate special diagnostic patterns for scan chain failures; it tries to generate a complete test pattern set that pinpoints the exact faulty scan cell once flush tests have identified which scan chain is faulty. Next, we study the diagnosis of multiple faults in circuit logic: we first propose a method to diagnose multiple practical physical defects using simple logic fault models, and finally a method based on fault-tuple equivalence trees to further improve diagnosis quality.
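A classic baseline for the logic-diagnosis problem sketched above is fault-dictionary matching: simulate each candidate fault, record which tests it would fail, and rank candidates by how well they explain the observed failures. This illustrates the general approach only; the thesis's algorithms (fault-tuple equivalence trees, multiple-defect handling) are more sophisticated.

```python
def rank_candidates(observed_fail, signatures):
    """observed_fail: set of failing test ids seen on the tester.
    signatures: {fault: set of tests that fault would fail in simulation}.
    Returns candidate faults ordered from best to worst explanation."""
    def score(fault):
        fails = signatures[fault]
        matched = len(observed_fail & fails)        # failures the fault explains
        mispredicted = len(fails - observed_fail)   # predicted fails not observed
        unexplained = len(observed_fail - fails)    # observed fails not predicted
        return matched - mispredicted - unexplained
    return sorted(signatures, key=score, reverse=True)
```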
4

Contributions to guillochage and photograph authentication

Rivoire, Audrey 29 October 2012 (has links)
This work aims to develop a new type of guilloché pattern to be inserted into a photograph (guillochage), inspired by in-line digital holography and able to encode a robust hash value of the image (the method of Mihçak and Venkatesan). Such a combination can allow authentication of the guilloché-marked image in the digital domain and, where applicable, after printing. This approach constrains the image hash to be robust to the guillochage. The hash value is encoded as a cloud of shapes that is virtually diffracted to form the mark (called "Fresnel guilloches") inserted into the original image. The insertion is a trade-off: the high-density mark should be barely visible, or even invisible, so as not to disturb the perception of the photograph's content, yet remain readable so that the encoded hash can later be extracted and compared with the hash of the photograph under verification. The print-and-scan channel makes this task harder. Both the Fresnel guillochage and the associated authentication are tested on a (small) image database.
5

New tests and test methodologies for scan cell internal faults

Yang, Fan 01 December 2009 (has links)
Semiconductor industry goals for the quality of shipped products continue to rise to satisfy customer requirements, and higher quality of shipped electronic devices can only be obtained by thorough tests of the manufactured components. Scan chains are universally used in large industrial designs to cost-effectively test manufactured electronic devices, and they contain nearly half of the logic transistors in such designs. Yet faults internal to the scan cells are not directly targeted by existing tests. The main objective of this thesis is to investigate the detectability of faults internal to scan cells. We analyze the detection of line stuck-at, transistor stuck-on, resistive open, and bridging faults in scan cells, considering both synchronous and asynchronous scan cells. We define the notion of a half-speed flush test and demonstrate that such new tests increase the coverage of internal faults in scan cells. A new set of flush tests is proposed; applied at higher temperatures, these tests detect scan cell internal opens with a wider range of resistances. We also propose new scan-based tests to further increase the coverage of those opens; the proposed tests are shown to achieve the maximum possible coverage of opens in transistors internal to scan cells. For the asynchronous scan cell considered, two new flush tests are added to cover faults that are not detected by the tests for synchronous scan cells. We describe an analysis of the detection of a set of scan cell internal bridging faults, considering both zero-resistance and nonzero-resistance bridging fault models, and show that the detection of some zero-resistance non-feedback bridging faults requires two-pattern tests. We classify the undetectable faults by the reasons for their undetectability. We also propose an enhanced logic BIST architecture that accommodates the new flush tests for detecting scan cell internal opens. The effectiveness of these new methods is demonstrated by experimental results using standard scan cells from a large industrial design.
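The basic flush test underlying the abstract can be sketched in a few lines: shift a known pattern through the scan chain in shift mode and check that the same sequence emerges at scan-out. This toy model treats each scan cell as a per-bit transfer function (identity when fault-free) and ignores timing, so the half-speed and temperature aspects studied in the thesis are deliberately not modeled.

```python
def flush_test(chain, pattern):
    """chain: list of per-cell transfer functions (bit -> bit); a fault-free
    cell is the identity. Shifts `pattern` through the chain and returns True
    iff the shifted-out stream matches the shifted-in pattern."""
    length = len(chain)
    state = [0] * length
    out = []
    stream = pattern + [0] * length          # pad so the whole pattern flushes out
    for bit in stream:
        out.append(state[-1])                # observe scan-out before the clock edge
        # one shift clock: each cell captures (a possibly faulted copy of)
        # its upstream neighbor's value
        state = [chain[0](bit)] + [chain[i](state[i - 1]) for i in range(1, length)]
    return out[length:] == pattern           # skip the initial flush-out latency
```

A stuck-at cell anywhere in the chain corrupts the stream, which is why flush tests localize faulty chains cheaply before cell-level diagnosis begins.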
6

Power Modeling and Scheduling of Tests for Core-based System Chips

Samii, Soheil January 2005 (has links)
Today's technology makes it possible to integrate a complete system on a single chip, called a "System-on-Chip" (SOC). Nowadays SOC designers use previously designed hardware modules, called cores, together with their user-defined logic (UDL), to form a complete system on a single chip. The manufacturing process may result in defective chips, for instance due to the base material, and therefore testing chips after production is important in order to ensure fault-free products.

The testing time for a chip affects its final cost, so it is important to minimize it. For core-based SOCs this can be done by testing several cores at the same time instead of testing the cores sequentially. However, this results in higher activity in the chip and hence higher power consumption. Due to several factors in the manufacturing process, there are limits on the power consumption of a chip, so power limitations must be carefully considered when planning its testing; otherwise the chip can be damaged during test due to overheating. This leads to the problem of minimizing testing time under such power constraints.

In this thesis we discuss test power modeling and its application to SOC testing. We review previous work in this area and conclude that current power modeling techniques in SOC testing are rather pessimistic. We therefore propose a more accurate power model based on analysis of the test data. Furthermore, we present techniques for test pattern reordering, with the objective of partitioning the test power consumption into low and high parts.

The power model is included in a tool for SOC test architecture design and test scheduling, where the scheduling heuristic is designed for SOCs with fixed-width test bus architectures. Several experiments have been conducted to evaluate the proposed approaches. The results show that using the presented power modeling techniques in test scheduling algorithms yields lower testing times and thus lower test cost.
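The scheduling problem described above can be illustrated with a simple greedy packer: run compatible core tests concurrently as long as their summed power stays under the cap. The names and the longest-first rule here are illustrative assumptions; the thesis's heuristic and power model are more refined.

```python
def schedule_tests(tests, power_cap):
    """tests: {name: (duration, power)}. Greedily packs tests into concurrent
    sessions whose summed power stays under the cap. Returns a list of
    sessions (lists of test names); a session takes as long as its longest
    test."""
    if any(p > power_cap for _, p in tests.values()):
        raise ValueError("a single test exceeds the power cap")
    pending = sorted(tests, key=lambda t: -tests[t][0])   # longest tests first
    sessions = []
    while pending:
        session, used = [], 0.0
        for name in list(pending):
            power = tests[name][1]
            if used + power <= power_cap:
                session.append(name)
                used += power
                pending.remove(name)
        sessions.append(session)
    return sessions

def total_time(tests, sessions):
    """Total test application time: sessions run back to back."""
    return sum(max(tests[n][0] for n in s) for s in sessions)
```

Even this crude packing beats sequential testing whenever two tests fit under the cap together, which is exactly the trade-off the abstract describes between concurrency and power.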
8

Test Application Methodology Based on the Identification of Testable Blocks

Herrman, Tomáš Unknown Date (has links)
The PhD thesis deals with the analysis of digital systems described at RT level. A methodology for data-path analysis is described; analysis of the data-path controller is not addressed in the thesis. The methodology is built on the concept of a Testable Block (TB), which makes it possible to divide a digital component into segments that can be tested through their inputs/outputs; border registers and primary inputs/outputs are used for this purpose. As a result, fewer registers need to be included in the scan chain, since border registers are the only ones that are scanned. The segmentation also reduces the volume of test vectors, because tests are generated for segments rather than for the complete component. To identify TBs, two evolutionary algorithms are used; they operate on a formal model of TBs, which is also defined in the thesis.
9

Scan chain fault identification using weight-based codes for SoC circuits

Ghosh, Swaroop 02 July 2004 (has links)
No description available.
10

A Test Interface for Integrated Circuits with a Small Number of Pins

Tománek, Jakub January 2017 (has links)
This study explores possibilities for reducing the number of pins needed for the scan-mode interface. The first part of the paper describes existing solutions and methods usable for this purpose. In the second part, specific four-pin, three-pin, two-pin, one-pin, and zero-pin interfaces are designed. The conclusion summarizes the advantages and disadvantages of the existing solutions and methods as well as of the proposed interfaces.
