1

On-Chip Diagnosis of Generalized Delay Failures using Compact Fault Dictionaries

Beckler, Matthew Layne 01 April 2017
Integrated Circuits (ICs) are an essential part of nearly every electronic device. From toys to appliances, spacecraft to power plants, modern society truly depends on the reliable operation of billions of ICs around the world. The steady shrinking of IC transistors over past decades has enabled drastic improvements in IC performance while reducing area and power consumption. However, with continued scaling of semiconductor fabrication processes, failure sources of many types are becoming more pronounced and are increasingly affecting system operation. Additionally, increasing variation during fabrication makes it more difficult to yield chips in a cost-effective manner. Finally, phenomena such as early-life and wear-out failures pose new challenges to ensuring robustness. One approach to ensuring robustness centers on performing test during run-time, identifying the location of any defects, and repairing, replacing, or avoiding the affected portion of the system. Leveraging existing design-for-testability (DFT) structures, thorough tests that target delay defects are applied using the scan logic. Testing is performed periodically to minimize user-perceived performance loss, and if testing detects any failures, on-chip diagnosis is performed to localize the defect to the level of repair, replacement, or avoidance. In this dissertation, an on-chip diagnosis solution using a fault dictionary is described and validated through a large variety of experiments. Conventional fault dictionary approaches can be used to locate failures but are limited to simplistic fail behaviors due to the significant computational resources required for dictionary generation and storage. To capture the misbehaviors expected from scaled technologies, including early-life and wear-out failures, the Transition-X (TRAX) fault model is introduced. Similar to a transition fault, a TRAX fault is activated by a signal-level transition or glitch, and produces the unknown value X when activated. Recognizing that the limited options for runtime recovery of defective hardware relax the conventional requirements for defect localization, a new fault dictionary is developed that localizes a defect only to the required level of the design hierarchy. On-chip diagnosis using such a hierarchical dictionary is performed by a new scalable hardware architecture. To reduce the computation time required to generate the TRAX hierarchical dictionary for large designs, the massive parallelism of graphics processing units (GPUs) is harnessed to provide an efficient fault simulation engine for dictionary construction. Finally, the on-chip diagnosis process is evaluated for its ability to provide accurate diagnosis results even when multiple defects affect a circuit concurrently.
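A minimal sketch of the TRAX idea in three-valued (0/1/X) simulation: unlike a transition fault, which delays one specific transition, a TRAX fault replaces any transition or glitch at the fault site with X. The two-gate netlist and function names below are invented for illustration and are not from the dissertation.

```python
# Three-valued (0/1/X) simulation of a TRAX fault on a toy two-gate circuit:
# n1 = AND(a, b), out = AND(n1, c). A TRAX fault at n1 replaces any transition
# at n1 between two consecutive patterns with the unknown value X.
X = 'X'

def and3(a, b):
    """Three-valued AND: 0 dominates, 1 AND 1 = 1, anything else is X."""
    if a == 0 or b == 0:
        return 0
    if a == 1 and b == 1:
        return 1
    return X

def second_pattern_output(v1, v2, trax_at_n1=False):
    n1_first = and3(v1['a'], v1['b'])
    n1_second = and3(v2['a'], v2['b'])
    if trax_at_n1 and n1_first != n1_second:   # fault activated by a transition
        n1_second = X
    return and3(n1_second, v2['c'])

# Fault-free: the rising transition on n1 yields out = 1.
print(second_pattern_output({'a': 0, 'b': 1, 'c': 1}, {'a': 1, 'b': 1, 'c': 1}))
# TRAX fault: the same transition produces X, which propagates to the output.
print(second_pattern_output({'a': 0, 'b': 1, 'c': 1}, {'a': 1, 'b': 1, 'c': 1},
                            trax_at_n1=True))
```

Captured X values like this one are what a dictionary-based diagnosis would match against observed failing outputs.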
2

Enhancement of defect diagnosis based on the analysis of CMOS DUT behaviour

Arumí i Delgado, Daniel 11 July 2008
Transistor dimensions are scaled down with every new CMOS technology. This high level of integration has increased the complexity of the integrated circuit (IC) manufacturing process, giving rise to new, complex failure mechanisms. Present diagnosis methodologies, however, cannot meet the challenges posed by future technologies. Furthermore, physical failure analysis, although indispensable, is not feasible on its own, since it requires expensive equipment, tools, and qualified personnel. For this reason, a detailed understanding of defect behaviour is a key factor in developing improved diagnosis methodologies that can overcome the challenges of nanometre technologies. In this context, this thesis analyses existing and new failure mechanisms and proposes new diagnosis methodologies, focused on bridging and open faults.

IDDQ testing is a well-known technique for the diagnosis of bridging faults. However, previous works have not considered the impact of the downstream current on the diagnosis of such faults. In this thesis, the impact of the downstream current and its dependence on the power supply voltage (VDD) are analysed and experimentally measured. Furthermore, a multiple-level IDDQ-based diagnosis technique is presented, which benefits from the currents generated by the different network excitations. This technique has been successfully applied to real defective devices from 0.18 µm and 90 nm technologies.

As an alternative to current-based techniques, shmoo plots can also be useful for diagnosis. Low voltage has traditionally been considered an advantageous condition for detecting bridging faults. However, this thesis demonstrates that in the presence of bridges connecting balanced n- and p-networks, high VDD values are also advantageous for detection, which translates directly into diagnosis practice. Experimental evidence of this fact is presented.

Regarding open faults, an experimental chip containing intentionally inserted full and resistive open defects was designed and fabricated in a 0.35 µm technology. Different experiments quantified the impact of the coupling capacitances of neighbouring lines. Furthermore, for resistive opens, experiments demonstrated the influence of the history effect and of the defect location on the delay. Traditionally, it has been reported that the highest delay is obtained when a resistive open is located at the beginning of the net. Nevertheless, this thesis demonstrates that this does not hold for opens of low resistance, for which the highest delay is obtained at an intermediate location. Experimental measurements confirm this behaviour.

Building on the results obtained with the fabricated chip, a new methodology for diagnosing full open defects in interconnect lines is developed. The FOS (Full Open Segment) method divides the interconnect line into segments based on the topology of the faulty line. Knowing the logic states of the neighbouring lines, the floating-net voltage is predicted and compared with the experimental results obtained on the tester. This method has been successfully applied to a set of 0.18 µm defective devices.

Finally, the impact of gate tunnelling leakage currents on the behaviour of full open defects is analysed. As technology dimensions are scaled down, the gate oxide becomes thin enough that tunnelling leakage currents influence the behaviour of floating lines. They cause transient evolutions of the floating node until it reaches a steady state, which is technology dependent. It is experimentally demonstrated that these evolutions are on the order of seconds for a 0.18 µm technology; simulations show that they decrease to a few µs for future technologies. Exploiting this effect, several full open defects in 0.18 µm devices are diagnosed.
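The FOS prediction step can be illustrated with a simple charge-sharing estimate: a fully open net floats, so its voltage is set by the capacitive divider formed with its neighbours. The sketch below is a minimal illustration of that idea; the capacitance values, VDD, and the two-level (0/VDD) neighbour model are assumptions for the example, not data from the thesis.

```python
# Estimate the voltage of a floating (fully open) net as a capacitive divider
# over its coupling capacitances: V_float ~ sum(Ci * Vi) / sum(Ci). The
# capacitance values and VDD below are invented for the example.
def floating_node_voltage(couplings, vdd=1.8):
    """couplings: list of (capacitance_fF, neighbour_logic_value) pairs."""
    total_c = sum(c for c, _ in couplings)
    charge = sum(c * (vdd if v == 1 else 0.0) for c, v in couplings)
    return charge / total_c

# Two aggressor neighbours driven to '1', one larger capacitance to a '0' net:
print(f"{floating_node_voltage([(5.0, 1), (3.0, 1), (8.0, 0)]):.2f} V")  # 0.90 V
```

Comparing predictions like this one against tester measurements for different neighbour states is what localizes the open to a segment.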
3

Untestable Fault Identification Using Implications

Syal, Manan 12 December 2002
Untestable faults in circuits are defects/faults for which there exists no test pattern that can either excite the fault or propagate the fault effect to an observable point, which could be either a primary output (PO) or a scan flip-flop. Current state-of-the-art automatic test pattern generators (ATPGs) spend considerable time trying to generate test sequences for the detection of untestable faults before aborting on them or, given enough time, identifying them as untestable. It would therefore be beneficial to quickly identify redundant/untestable faults, so that tools such as ATPG engines and fault simulators do not waste time targeting them. Our work focuses on identifying untestable faults at low cost in terms of both memory and execution time. A powerful, memory-efficient implication engine, used to identify the effects of asserting logic values in a circuit, is the basic building block of our tool. Using the knowledge provided by this implication engine, we identify untestable faults through a fault-independent, conflict-based analysis. We evaluated our tool against several benchmark circuits (ISCAS '85, ISCAS '89 and ISCAS '93) and found that it identifies considerably more untestable faults in sequential circuits than similar conflict-based algorithms proposed earlier. / Master of Science
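A minimal sketch of the conflict-based idea, under the simplifying assumption that implications are given as a precomputed lookup table (the thesis derives them with its implication engine): exciting a stuck-at fault requires a specific value at the fault site, and if closing that requirement under the implications yields contradictory assignments, the fault can be declared untestable without any ATPG effort. The netlist and implications are illustrative.

```python
# Conflict-based untestability check: close a set of net assignments under a
# table of implications; a contradiction means no pattern can satisfy the
# fault's excitation requirement.
def imply(initial, implications):
    """implications: dict mapping (net, value) -> list of forced (net, value)."""
    stack = list(initial.items())
    closed = {}
    while stack:
        net, val = stack.pop()
        if net in closed:
            if closed[net] != val:
                return None               # conflict: requirement unsatisfiable
            continue
        closed[net] = val
        stack.extend(implications.get((net, val), []))
    return closed

# Exciting stuck-at-0 on net g requires g=1; suppose the circuit structure
# implies g=1 -> a=1 and a=1 -> g=0. The requirement contradicts itself.
impl = {("g", 1): [("a", 1)], ("a", 1): [("g", 0)]}
print(imply({"g": 1}, impl))              # None: the fault is untestable
```

Because the analysis never enumerates test patterns, it scales to faults that would otherwise make an ATPG engine abort.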
4

Design and analysis of functional tests for digital devices

Narvilas, Rolandas 31 August 2011
Project objective: to develop a system that generates functional tests for non-scan synchronous sequential circuits based on functional delay models. During the project, an analysis of design and technology solutions was performed. The architecture of the developed software is based on the requirement to use models of the benchmark circuits that are written in the C programming language. The work analysed the effectiveness of model file integration, possibilities for improving random test sequence generation, and the influence of the distribution of 1s in randomly generated test patterns. The results of the analysis were:
• The type of model file integration has little effect when using large circuit models.
• The implementation of semi-deterministic algorithms showed that optimising separate steps by constructing test subsequences does not improve the final outcome.
• The distribution of 1s in randomly generated test patterns affects fault coverage and can be used to improve the test generation process (see the sketch after this list).
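A small sketch of the biased-random generation the third finding points to: sweep the per-bit probability of a 1 and, in a real flow, fault-simulate each batch to compare coverage. The input width, batch size, and weights are illustrative assumptions; the thesis measures actual fault coverage on benchmark circuit models.

```python
import random

# Biased-random pattern generation with a tunable per-bit probability of '1'.
def weighted_patterns(n_inputs, n_patterns, p_one, seed=42):
    rng = random.Random(seed)
    return [[1 if rng.random() < p_one else 0 for _ in range(n_inputs)]
            for _ in range(n_patterns)]

for p_one in (0.25, 0.50, 0.75):
    batch = weighted_patterns(n_inputs=8, n_patterns=1000, p_one=p_one)
    share = sum(map(sum, batch)) / (8 * 1000)
    print(f"p_one={p_one:.2f}: observed share of 1s = {share:.3f}")
```

In practice, each batch would be fault-simulated and the weight giving the best coverage retained.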
5

Physical cryptanalysis of security chips using LASER sources

Roscian, Cyril 08 October 2013
Cryptographic circuits, because they contain confidential information, are targets for malicious manipulation, commonly known as attacks. Several attacks have been published and analysed. One of the most effective, Differential Fault Analysis (DFA), exploits faults deliberately injected by the attacker during computation, for example with a laser. However, the fault models used by these attacks can be restrictive and determine their effectiveness. It is therefore important to know which fault model is relevant or feasible for the targeted device and the injection means (in our case, the laser).

A first study of the fault types injected into SRAM memory cells (bit-set, bit-reset, or bit-flip) highlighted the strong data dependency of the injected faults and the near absence of bit-flip faults. This last result enables Safe Error attacks and therefore poses a real security issue. These results were obtained from laser sensitivity maps of an isolated SRAM cell and of the RAM of an 8-bit micro-controller. To confirm the experimental results, SPICE simulations of laser fault injection were performed with a model developed in our group that takes the topology of the target into account.

Tests were then carried out on an ASIC implementing the AES algorithm. Fault analysis showed the presence of all three fault types but also a low injection rate. In contrast, fault repeatability was particularly high. This allowed us to improve an existing attack and obtain one more effective than conventional attacks, requiring fewer faulted ciphertexts and a simpler analysis to recover the secret key. Finally, an assessment of the circuit's embedded countermeasures showed their ineffectiveness against laser fault attacks. Improvements were then proposed.
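The data dependency of bit-set and bit-reset faults is easy to see in a toy model: forcing a bit to a value only produces an observable error when the cell held the opposite value. The sketch below is illustrative; the byte values and the targeted bit are arbitrary.

```python
# The three SRAM fault types studied for laser injection. Bit-set and
# bit-reset are data dependent: they only cause an observable error when the
# stored bit differs from the forced value.
def inject(byte, bit, fault):
    mask = 1 << bit
    if fault == "bit-set":
        return byte | mask
    if fault == "bit-reset":
        return byte & ~mask
    if fault == "bit-flip":
        return byte ^ mask
    raise ValueError(fault)

for stored in (0b00000000, 0b00000100):
    faulty = inject(stored, bit=2, fault="bit-set")
    print(f"stored={stored:08b} faulty={faulty:08b} error={stored != faulty}")
# A bit-set on a cell already holding 1 is silent, the Safe Error scenario.
```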
6

Tailoring Software Inspections for Aspect-Oriented Programs

Watkins, Charlette Ward 01 January 2009
Aspect-Oriented Software Development (AOSD) is a new approach that addresses limitations inherent in conventional programming, especially with respect to the principle of separation of concerns, by emphasizing the encapsulation and modularization of crosscutting concerns through a new abstraction, the "aspect." Aspect-oriented programming is an emerging AOSD paradigm that focuses on modularizing concerns as appropriate for the host language, providing a mechanism for describing concerns that crosscut each other by congealing into a single textual structure behavior that conventional programming would otherwise distribute throughout the code. AspectJ, the most widely used aspect-oriented programming language to date, extends the Java language with several new concepts and constructs that differ from those in procedural and object-oriented programs: join points, pointcuts, advice, inter-type declarations, introduction, and aspects. In AspectJ, as in other aspect-oriented languages, "aspects" package pointcuts and advice into functional units in much the same way that object-oriented programming uses classes to package fields and methods into cohesive units, but they introduce a unique set of problems. Software inspections are considered a software engineering "best practice" for ensuring quality, but the introduction of new aspect-oriented language mechanisms means they must be tailored, just as they were previously tailored to support object-oriented and procedural programs. Identifying faults unique to aspect-oriented programming allowed the design of an aspect fault model and associated software inspection checklist criteria that describe the typical faults associated with aspects and the clues that betray their presence. The methodology for this research entailed a mixed-methods approach combining descriptive and exploratory research methodologies in a normative case study. It produced an understanding of the AspectJ primitive pointcut construct, identification of the typical faults associated with this construct, and the subsequent development of a fault model, a set of programming rules, and a tailored software inspection checklist. A case study compared the defects detected by an inspection checklist tailored for AspectJ with those detected by an untailored one. The results demonstrated that using untailored software inspection checklists would leave many faults unique to aspect-oriented programming undetected.
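A pointcut-like mechanism can be mimicked in Python with a name-pattern matcher applied to class methods; the sketch below illustrates one fault in the checklist's territory, a pattern that silently matches no join point after a rename. This is an analogy only: AspectJ's actual pointcut language is far richer, and all class and method names here are hypothetical.

```python
import fnmatch

# A pointcut analogue: apply 'before' advice to every method of a class whose
# name matches a wildcard pattern.
def apply_logging_aspect(cls, pointcut_pattern):
    for name in list(vars(cls)):
        attr = getattr(cls, name)
        if callable(attr) and fnmatch.fnmatch(name, pointcut_pattern):
            def advised(self, *args, _wrapped=attr, _name=name, **kwargs):
                print(f"before {_name}")        # the advice body
                return _wrapped(self, *args, **kwargs)
            setattr(cls, name, advised)
    return cls

class Account:
    def deposit_funds(self, amount):  return amount
    def withdraw_funds(self, amount): return -amount

apply_logging_aspect(Account, "deposit*")     # matches one join point
apply_logging_aspect(Account, "withdrawal*")  # fault: silently matches nothing
Account().deposit_funds(10)                   # prints "before deposit_funds"
Account().withdraw_funds(5)                   # runs un-advised, with no warning
```

The second pattern fails without any diagnostic, which is exactly the kind of clue a tailored checklist tells an inspector to look for.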
7

Fault Attacks on Embedded Software: New Directions in Modeling, Design, and Mitigation

Yuce, Bilgiday 16 January 2018
This research investigates an important class of hardware attacks against embedded software that uses fault injection as a hacking tool. Fault attacks combine well-chosen, targeted fault injection with clever system response analysis to break the security of a system. In the case of a fault attack on embedded software, faults are injected into the underlying processor hardware and their effects are observed in the executed software's output. This introduces an additional difficulty in mitigating fault attack risk. Designing efficient countermeasures requires first understanding the software-, instruction-set-, and hardware-level components of fault attacks, and then systematically addressing the vulnerabilities at each level. This research first proposes an instruction fault sensitivity model to capture the effects of fault injection on embedded software. Based on this model, a novel fault attack method called MAFIA (Micro-architecture Aware Fault Injection Attack) is introduced. MAFIA exploits vulnerabilities in multiple abstraction layers, enabling an adversary to determine the best points to attack during execution as well as to pinpoint the desired fault effects. It has been shown that MAFIA breaks existing countermeasures with significantly fewer fault injections than traditional fault attacks. Another contribution of the research is a fault attack simulator, MESS (Micro-architectural Embedded System Simulator). MESS enables a user to model the hardware-, instruction-set-, and software-level components of fault attacks in a simulation environment, so software designers can use it to evaluate their programs against several real-world fault attack scenarios. The final contribution of this research is the fault-attack-resistant FAME (Fault-attack Aware Microprocessor Extensions) processor, which is suited for embedded, constrained systems. FAME combines fault detection in hardware with fault response in software, allowing low-cost, performance-efficient, flexible, and backward-compatible integration of hardware and software techniques to mitigate fault attack risk. FAME has been designed as an architectural concept and implemented as a chip prototype, which, in addition to protection mechanisms, includes fault injection and analysis features to ease fault attack research. The findings of this research indicate that considering multiple abstraction layers together is essential for efficient fault attacks, countermeasures, and evaluation techniques. / Ph. D. / Today, we trust a range of embedded computers to process and protect our sensitive data. For instance, credit cards process sensitive financial data during electronic payment. Similarly, smartphones use and store private user data. This research investigates fault attacks, a serious threat to the security of embedded computers. In a fault attack, an adversary breaches security by injecting intentional faults into an embedded computer. To induce faults, the adversary deliberately manipulates the computer's operating conditions, such as the supply voltage and ambient temperature. These faults interfere with the correct operation of the computer and cause temporary malfunctions in its hardware, which the adversary then exploits to break the security. Although fault injection is a powerful hacking tool that can affect any security mechanism, there is no generic technique to deal with the security threat of faults.
This research seeks a broader, deeper understanding of fault attacks and appropriate countermeasures for them. Our contributions include a novel fault modeling method, efficient fault attacks, a fault attack simulator, and a low-cost fault-attack-aware microprocessor. This research also provides a deeper understanding of causes and effects of faults, which can be utilized in the design of fault attacks, countermeasures, and metrics.
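To make the instruction-level view concrete, the toy simulator below injects an instruction-skip fault into a four-instruction program and shows a security check being bypassed. The mini-ISA, the program, and the single-skip fault model are assumptions for illustration; they are not the MESS simulator's interface.

```python
# A four-instruction toy program on a minimal ISA; skipping the comparison
# instruction bypasses the security check.
def run(program, skip_index=None):
    regs = {"r0": 0, "r1": 0}
    for i, (op, dst, src) in enumerate(program):
        if i == skip_index:
            continue                      # fault effect: instruction skipped
        if op == "mov":
            regs[dst] = src
        elif op == "add":
            regs[dst] += regs[src]
        elif op == "check_or_die":
            if regs[dst] != src:
                raise SystemExit("auth failed")
    return regs

prog = [("mov", "r0", 7), ("mov", "r1", 0),
        ("check_or_die", "r0", 42), ("add", "r1", "r0")]
try:
    run(prog)                             # fault-free: the check rejects r0=7
except SystemExit as e:
    print(e)
print(run(prog, skip_index=2))            # skip the check: {'r0': 7, 'r1': 7}
```

An instruction fault sensitivity model, in this spirit, asks which instruction and which fault effect give the attacker the most leverage.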
8

The Thermo-Mechanical Dynamics of DNA Self-Assembled Nanostructures

Mao, Vincent Chi Ann January 2010 (has links)
The manufacturing of molecular-scale computing systems requires a scalable, reliable, and economic approach to create highly interconnected, dense arrays of devices. As a candidate substrate for nanoscale logic circuits, DNA self-assembled nanostructures have the potential to fulfill these requirements. However, a number of open challenges remain, including the scalability of DNA self-assembly, long-range signal propagation, and precise patterning of functionalized components. These challenges motivate the development of theory and experimental techniques to illuminate the connections among the physical, optical, and thermodynamic properties of DNA self-assembled nanostructures.

In this thesis, three tools are developed, validated, and applied to study the thermo-mechanical properties of DNA nanostructures: 1) a method to quantitatively measure the quality of DNA grid self-assembly, 2) a spectrofluorometer capable of capturing fluorescence and absorbance data under simultaneous multi-wavelength excitation, and 3) a Monte Carlo simulator that models the ensemble response of DNA nanostructures as simple harmonic oscillators.

The broad contributions of this dissertation are as follows: 1) insight into the thermo-mechanical properties of DNA grid nanostructures, and 2) a categorization of self-assembly defects and their impact on proposed logic circuits.

The results of the work presented in this dissertation show that: 1) the quality of self-assembly of DNA grid nanostructures can be quantitatively calculated to demonstrate the impact of changes in temperature or structure, 2) the optical absorbance of complex DNA nanostructures can be modeled to capture their thermo-mechanical properties (i.e., worst case within 10% of experimental melting temperatures and 70% of experimental thermodynamic parameters), 3) the structural resilience of DNA nanostructures can be quantifiably improved by chemical cross-linking, with up to 60% retaining their original structure, and 4) DNA self-assembly introduces structural defects which create new fault models with respect to conventional technologies for logic circuits. / Dissertation
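As a worked illustration of the kind of thermodynamic modeling that melting-temperature extraction from absorbance data relies on, the sketch below evaluates a simple unimolecular two-state melting curve; the enthalpy and entropy values are placeholders, not parameters from the dissertation.

```python
import math

# Unimolecular two-state melting model: duplex <-> melted, with the melted
# fraction set by the hybridization free energy dG = dH - T*dS.
R = 1.987e-3                 # gas constant in kcal/(mol*K)

def melted_fraction(t_kelvin, dh=-80.0, ds=-0.22):
    dg = dh - t_kelvin * ds                    # kcal/mol; negative favors duplex
    k_duplex = math.exp(-dg / (R * t_kelvin))  # equilibrium constant, duplex side
    return 1.0 / (1.0 + k_duplex)

tm = -80.0 / -0.22           # dG = 0 at the melting temperature, ~363.6 K
for t in (330.0, 350.0, tm, 380.0):
    print(f"T = {t:6.1f} K   melted fraction = {melted_fraction(t):.2f}")
```

The melted fraction crosses 0.5 at Tm; fitting such a curve to measured absorbance is how melting temperatures and thermodynamic parameters are recovered.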
9

ALU Built-In Self Test

Bednář, Jaroslav January 2010 (has links)
This work deals with the faults, errors, and failures that can occur during manufacturing and long-term operation. It describes the various failure mechanisms and fault models, and surveys approaches to building fault-tolerant systems, mainly in hardware. The thesis continues with a summary of methods for software-based ALU testing. The last chapter covers test generation for the MSP430 microcontroller.
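A minimal sketch of the software-based self-test idea the thesis summarizes: exercise ALU operations with corner-case operands, compact the results into a signature, and compare against a golden value captured on known-good hardware. The operand patterns and the rotate-XOR compactor are illustrative choices, not the MSP430 tests from the thesis.

```python
# Exercise the ALU with corner-case operands and compact all results into one
# signature for an in-field pass/fail decision.
OPS = {
    "add": lambda a, b: (a + b) & 0xFFFF,
    "sub": lambda a, b: (a - b) & 0xFFFF,
    "and": lambda a, b: a & b,
    "xor": lambda a, b: a ^ b,
}
PATTERNS = [(0x0000, 0xFFFF), (0xAAAA, 0x5555), (0x8000, 0x8000), (0x7FFF, 0x0001)]

def alu_signature():
    sig = 0
    for op in OPS.values():
        for a, b in PATTERNS:
            sig = ((sig << 1) | (sig >> 15)) & 0xFFFF   # 16-bit rotate left
            sig ^= op(a, b)                             # fold result into signature
    return sig

GOLDEN = alu_signature()       # recorded once on a fault-free device
print("self-test", "passed" if alu_signature() == GOLDEN else "FAILED")
```

On a real microcontroller the same routine would run periodically during operation, with the golden signature stored in flash.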
10

Incorporating the effect of delay variability in path based delay testing

Tayade, Rajeshwary G. 19 October 2009 (has links)
Delay variability poses a formidable challenge in both design and test of nanometer circuits. While process parameter variability is increasing with technology scaling, dynamic, or vector-dependent, variability is also increasing steadily as circuits become more complex. In this research, we develop solutions to incorporate the effect of delay variability in delay testing, focusing on two applications. In the first, delay testing is used to test the timing performance of a circuit using path-based fault models. We show that if dynamic delay variability is not accounted for during the path selection phase, a wrong set of paths can be targeted for test. We have developed efficient techniques to model the effect of two dynamic effects, namely multiple-input switching noise and coupling noise. The basic strategy for incorporating dynamic delay variability is to estimate the maximum vector delay of a path without being too pessimistic. In the second application, the objective is to increase the coverage of reliability defects in the presence of process variations. Such defects cause very small delay changes and hence can easily escape regular tests. We develop a circuit that facilitates accurate control over the capture edge and thus enables faster-than-at-speed testing. We further develop an efficient path selection algorithm that selects, for any node, the path that detects the smallest detectable defect in the presence of process variations.
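A small sketch of the path-selection point: if each gate delay carries a vector-dependent increment for multiple-input switching and coupling noise, the ranking of candidate paths can differ from the nominal ranking, so ignoring the increments targets the wrong path. All netlist names and delay values below are invented for illustration.

```python
# Each path is a list of (gate, nominal_delay_ns, dynamic_increment_ns), where
# the increment bounds multiple-input switching and coupling noise effects.
paths = {
    "P1": [("g1", 0.10, 0.02), ("g4", 0.12, 0.06), ("g7", 0.09, 0.02)],
    "P2": [("g2", 0.11, 0.01), ("g4", 0.12, 0.02), ("g8", 0.10, 0.00)],
}

def nominal_delay(path):
    return sum(nom for _, nom, _ in path)

def max_vector_delay(path):
    return sum(nom + inc for _, nom, inc in path)

for name, path in paths.items():
    print(f"{name}: nominal {nominal_delay(path):.2f} ns, "
          f"worst vector {max_vector_delay(path):.2f} ns")
# Nominal ranking picks P2 (0.33 > 0.31 ns), but with dynamic increments P1 is
# the longer path (0.41 > 0.36 ns), so ignoring them targets the wrong path.
print("target:", max(paths, key=lambda p: max_vector_delay(paths[p])))
```

The research's actual models bound these increments per gate without resorting to worst-case pessimism on every input.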
