1

Impact of charge collection mechanisms on single event effects in SiGe HBT circuits

Zhang, Tong, Niu, Guofu, January 2009 (has links)
Thesis--Auburn University, 2009. / Abstract. Vita. Includes bibliographical references (p. 77-81).
2

Sensitivity of Feedforward Neural Networks to Harsh Computing Environments

Arechiga, Austin Podoll 08 August 2018 (has links)
Neural networks have proven themselves very adept at solving a wide variety of problems; in particular, they excel at image processing. However, it remains unknown how well they perform under memory errors. This thesis focuses on the robustness of neural networks under memory errors, specifically single-event-upset-style errors in which single bits flip in a network's trained parameters. The main goal of these experiments is to determine whether some neural network architectures are more robust than others. Initial experiments show that MLPs are more robust than CNNs. Within MLPs, deeper MLPs are more robust, and for CNNs, larger kernels are more robust. Additionally, the CNNs displayed bimodal failure behavior, where memory errors would either not affect the performance of the network or degrade its performance to be on par with random guessing. VGG16, ResNet50, and InceptionV3 were also tested for their robustness. ResNet50 and InceptionV3 were both more robust than VGG16. This could be due to their use of batch normalization or to the fact that ResNet50 and InceptionV3 both use shortcut connections in their hidden layers. After determining which networks were most robust, estimated error rates from neutrons were calculated for space environments to determine whether these architectures were robust enough to survive. It was determined that large MLPs, ResNet50, and InceptionV3 could survive in Low Earth Orbit on commercial memory technology using only software error correction. / Master of Science / Neural networks are a new kind of algorithm that is revolutionizing the field of computer vision. Neural networks can be used to detect and classify objects in pictures or videos with accuracy on par with human performance. Neural networks achieve such good performance after a long training process during which many parameters are adjusted until the network can correctly identify objects such as cats, dogs, trucks, and more. These trained parameters are then stored in a computer's memory and recalled whenever the neural network is used for a computer vision task. Some computer vision tasks are safety-critical, such as a self-driving car's pedestrian detector. An error in that detector could lead to loss of life, so neural networks must be robust against a wide variety of errors. This thesis focuses on a specific kind of error: bit flips in the parameters of a neural network stored in a computer's memory. The main goal of these bit-flip experiments is to determine whether certain kinds of neural networks are more robust than others. Initial experiments show that MLP (multilayer perceptron) style networks are more robust than CNNs (convolutional neural networks). For MLP-style networks, making the network deeper with more layers increases both the accuracy and the robustness of the network. However, for the CNNs, increasing the depth increased only the accuracy, not the robustness. The robustness of the CNNs displayed an interesting trend of bimodal failure behavior, where memory errors would either not affect the performance of the network or degrade its performance to be on par with random guessing. A second set of experiments was run to focus on CNN robustness, because CNNs are much more capable than MLPs. This second set of experiments focused on the robustness of VGG16, ResNet50, and InceptionV3, all very large CNNs with very good performance on real-world datasets such as ImageNet.
Bit-flip experiments showed that ResNet50 and InceptionV3 were both more robust than VGG16. This could be due to their use of batch normalization or to the fact that ResNet50 and InceptionV3 both use shortcut connections within their network architecture. However, all three networks still displayed the bimodal failure mode seen previously. After determining which networks were most robust, estimated error rates were calculated for a real-world environment. The chosen environment was the space environment, because it naturally causes a high rate of bit flips in memory; if NASA were to use neural networks on any rovers, it would need to make sure the networks are robust enough to survive. It was determined that large MLPs, ResNet50, and InceptionV3 could survive in Low Earth Orbit on commercial memory technology using only software error correction. Relying on software error correction alone would allow satellite makers to build more advanced satellites without paying extra for radiation-hardened electronics.
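The bit-flip fault model described above is straightforward to reproduce. A minimal sketch (assuming NumPy and a contiguous float32 parameter array; this is not the author's actual test harness) of injecting a single-event-upset-style error into trained weights:

```python
import numpy as np

def flip_random_bit(weights: np.ndarray, rng: np.random.Generator) -> tuple[int, int]:
    """Flip one random bit of a float32 weight array in place; return (index, bit)."""
    flat = weights.reshape(-1).view(np.uint32)   # reinterpret the float bits, no copy
    idx = int(rng.integers(flat.size))
    bit = int(rng.integers(32))
    flat[idx] ^= np.uint32(1) << np.uint32(bit)  # the upset: XOR exactly one bit
    return idx, bit

rng = np.random.default_rng(0)
w = rng.normal(scale=0.05, size=(128, 64)).astype(np.float32)  # stand-in for a trained layer
idx, bit = flip_random_bit(w, rng)
print(f"flipped bit {bit} of weight {idx}: new value {w.reshape(-1)[idx]}")
```

A flip in a high exponent bit (bits 23 to 30 of an IEEE 754 float32) can turn a small weight into an enormous one, while most other flips barely change the value; that asymmetry is one plausible mechanism for the bimodal failure behavior the thesis reports.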
3

3D device simulation of SEU-induced charge collection in 200 GHz SiGe HBTs

Yang, Hua, January 2005 (has links) (PDF)
Thesis--Auburn University, 2005. / Abstract. Vita. Includes bibliographic references.
4

Soft error rate determination for nanometer CMOS VLSI circuits

Wang, Fan, Agrawal, Vishwani D., January 2008 (has links)
Thesis--Auburn University, 2008. / Abstract. Vita. Includes bibliographical references (p. 78-89).
5

Hardware Assertions for Mitigating Single-Event Upsets in FPGAs

Dumitrescu, Stefan January 2020 (has links)
The memory cells used in modern field programmable gate arrays (FPGAs) are highly susceptible to single event upsets (SEUs). The typical mitigation strategy in industry is some form of hardware redundancy, such as duplication with comparison (DWC) or triple modular redundancy (TMR). While this strategy is highly effective in masking the effect of faults, it incurs a large hardware cost. In this thesis, we explore a different approach to hardware redundancy. The core idea of our approach is to exploit the conflict-driven clause learning (CDCL) mechanism in modern Boolean satisfiability (SAT) solvers to provide us with invariants which can be realized as hardware checkers. Furthermore, we develop the algorithms required to select a subset of these invariants to be included as part of a checker circuit. This checker circuit then augments the original FPGA design. We find which look-up table (LUT) memory cells are sensitive to bitflips, then automatically generate a checker circuit consisting of hardware invariants targeted at those faults, aiming for 100% coverage of sensitizable faults. After extensive experimentation, we conclude that this approach is not competitive with DWC with respect to hardware area. However, we demonstrate that many bitflips are detected with lower latency than with DWC. / Thesis / Master of Applied Science (MASc)
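As a rough illustration of the invariant-checker idea (a hypothetical sketch; the thesis derives its invariants from CDCL learned clauses over the actual netlist), a clause that holds in every fault-free reachable state can be realized as a checker that raises an alarm whenever the clause evaluates false, which can only happen if a fault has pushed the circuit outside its proven behavior:

```python
# Hypothetical sketch: a learned clause used as a runtime checker.
Clause = list[tuple[str, bool]]  # (signal name, True for a positive literal)

def checker(clause: Clause, signals: dict[str, bool]) -> bool:
    """Return True (alarm) when the invariant clause is violated."""
    satisfied = any(signals[name] == polarity for name, polarity in clause)
    return not satisfied

# Invariant proven offline by the SAT solver: a OR (NOT b) OR c
inv: Clause = [("a", True), ("b", False), ("c", True)]

# Fault-free cycle: the negative literal on b satisfies the clause, no alarm.
print(checker(inv, {"a": False, "b": False, "c": False}))  # False
# Faulty cycle: an SEU drives the circuit into a state proven unreachable.
print(checker(inv, {"a": False, "b": True, "c": False}))   # True -> flag the upset
```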
6

Evaluation of the SEE sensitivity and a methodology for error-rate prediction of applications implemented in multi-core and many-core processors

Ramos Vargas, Pablo Francisco 18 April 2017 (has links)
The present thesis aims at evaluating the SEE static and dynamic sensitivity of three different COTS multi-core and many-core processors. The first is the Freescale P2041 multi-core processor, manufactured in 45nm SOI technology, which implements ECC and parity in its cache memories. The second is the Kalray MPPA-256 many-core processor, manufactured in 28nm TSMC CMOS technology, which integrates 16 compute clusters of 16 processor cores each, and implements ECC in its static memories and parity in its cache memories. The third is the Adapteva Epiphany E16G301 microprocessor, manufactured in a 65nm CMOS process, which integrates 16 processor cores and does not implement protection mechanisms. The evaluation was accomplished through radiation experiments with 14 MeV neutrons in particle accelerators, to emulate a harsh radiation environment, and through fault injection in cache memories, shared memories, and processor registers, to simulate the consequences of SEUs on program execution. A deep analysis of the observed errors was carried out to identify vulnerabilities in the protection mechanisms; critical zones such as address tags and general-purpose registers were affected during the radiation experiments. In addition, the Code Emulating Upset (CEU) approach, developed at TIMA Laboratory, was extended to multi-core and many-core processors to predict the application error rate by combining the results of fault-injection campaigns with those of radiation experiments.
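The final step, combining fault-injection results with beam data, follows the general CEU recipe. A back-of-the-envelope sketch with made-up numbers (the thesis's exact formulation and values may differ): the static cross-section measured under the beam is scaled by the fraction of injected upsets that actually corrupted the application.

```python
# Hypothetical numbers illustrating CEU-style error-rate prediction.
sigma_static = 2.0e-9    # cm^2/device: per-device upset cross-section from beam testing
tau_inj = 0.12           # fraction of injected SEUs that caused an application error
flux = 5.0e3             # n/cm^2/s: particle flux of the target environment

sigma_app = sigma_static * tau_inj   # application cross-section
error_rate = sigma_app * flux        # predicted application errors per second
print(f"sigma_app = {sigma_app:.2e} cm^2, error rate = {error_rate:.2e} errors/s")
print(f"mean time between application errors ~ {1 / error_rate / 3600:.1f} hours")
```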
7

Methods and tools for early analysis, in the design flow, of the soft-error sensitivity of applications and integrated circuits

Mansour, Wassim 31 October 2012 (has links)
Shrinking transistor geometries increase the sensitivity of integrated circuits to soft errors induced by the energetic particles present in the environments in which they operate. An experiment presented in this thesis, studying the soft-error sensitivity, in a real environment, of SRAM memories from two successive technology generations, put in evidence the criticality of this issue. This shows the need to evaluate the sensitivity of circuits to radiation effects, especially commercial circuits, which are used more and more in space and avionic applications and even at high altitudes, in order to find appropriate hardening methodologies. Several fault-injection methods aiming at evaluating the soft-error sensitivity of integrated circuits have been the subject of much research. The work carried out in this thesis developed an automatable method, and its corresponding tool, for emulating the effects of radiation on circuits for which HDL code is available. This method, called NETFI (NETlist Fault Injection), is based on manipulating the netlist of the synthesized circuit to allow injecting faults of the SEU, SET, and stuck-at types. NETFI was applied to several architectures to assess its efficiency and demonstrate its capabilities. A study of a fault-tolerant, so-called self-convergent algorithm, executed by a LEON3 processor, is also presented in order to compare the results obtained with NETFI against those of a state-of-the-art method called CEU (Code Emulated Upset).
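To give a flavor of netlist-level fault injection (a toy sketch only; NETFI itself rewrites synthesized HDL netlists), the same three fault types can be emulated on a miniature gate-level netlist by intercepting the value of one net:

```python
# Toy gate-level netlist: net name -> (gate type, input net names).
netlist = {
    "n1": ("AND", ["a", "b"]),
    "n2": ("OR",  ["n1", "c"]),
    "out": ("NOT", ["n2"]),
}

GATES = {
    "AND": lambda v: all(v),
    "OR":  lambda v: any(v),
    "NOT": lambda v: not v[0],
}

def evaluate(inputs: dict[str, bool], fault: tuple[str, str] | None = None) -> bool:
    """Evaluate the netlist; fault = (net, kind) with kind in {"sa0", "sa1", "seu"}."""
    values = dict(inputs)
    for net, (gate, ins) in netlist.items():   # assumes nets listed in topological order
        v = GATES[gate]([values[i] for i in ins])
        if fault and fault[0] == net:
            v = {"sa0": False, "sa1": True, "seu": not v}[fault[1]]
        values[net] = v
    return values["out"]

stim = {"a": True, "b": True, "c": False}
print(evaluate(stim))                        # golden run: False
print(evaluate(stim, fault=("n1", "seu")))   # with a bit-flip on net n1: True
```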
8

Reliability-centric probabilistic analysis of VLSI circuits

Rejimon, Thara 01 June 2006 (has links)
Reliability is one of the most serious issues confronting the microelectronics industry as feature sizes scale down from deep submicron to the sub-100-nanometer and nanometer regime. Due to processing defects and increased noise effects, it is almost impractical to produce error-free circuits. As we move beyond 22nm, devices will operate very close to their thermal limit, making gates error-prone: every gate will have a finite propensity to produce erroneous outputs. Low operating voltages and extremely high frequencies further increase this erroneous behavior. These types of errors are not captured by current defect- and fault-tolerance mechanisms, as they might not be present during testing and reconfiguration. Hence, reliability-centric CAD analysis tools are becoming essential, not only to combat defects and hard faults but also errors that are transient and probabilistic in nature. In this dissertation, we address three broad categories of errors. First, we focus on the random-pattern testability of logic circuits with respect to hard or permanent faults. Second, we model the effect of a single-event upset (SEU) at an internal node on the primary outputs, capturing the temporal nature of SEUs by adding timing information to our model. Finally, we model dynamic errors in nano-domain computing, where reliable computation has to be achieved with devices that are "systemically" unreliable, making the entire computation process probabilistic rather than deterministic in nature. Our central theoretical scheme relies on Bayesian belief networks: compact, efficient models representing a joint probability distribution in a minimal graphical structure that not only uses conditional independencies to model the underlying probabilistic dependence but also exploits them for computational advantage. We used both exact and approximate inference, which has let us achieve order-of-magnitude improvements in both accuracy and speed and has enabled us to study larger benchmarks than the state of the art. We are also able to study error sensitivities, explore the design space, characterize the input space with respect to errors, and evaluate the effect of redundancy schemes.
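As a minimal flavor of the SEU-to-output modeling problem (a brute-force sketch, not the dissertation's Bayesian-network machinery, and assuming uniform independent inputs), the probability that an upset at an internal node is observable at a primary output can be estimated by comparing faulty and fault-free evaluations:

```python
from itertools import product

# Tiny circuit: out = (a AND b) OR c, with internal node n = a AND b.
def circuit(a: bool, b: bool, c: bool, flip_n: bool = False) -> bool:
    n = a and b
    if flip_n:       # single-event upset at internal node n
        n = not n
    return n or c

# P(an SEU at n is observable at the output), over all 8 input patterns.
observable = sum(
    circuit(a, b, c) != circuit(a, b, c, flip_n=True)
    for a, b, c in product([False, True], repeat=3)
) / 8
print(f"P(SEU at n propagates to output) = {observable}")  # 0.5: c=True logically masks the flip
```

Exhaustive enumeration like this explodes with circuit size, which is exactly why the dissertation turns to graphical models and approximate inference for large benchmarks.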
9

Trickle flow multiple hydrodynamic states: the effect of flow history, surface tension and transient upsets

Van der Westhuizen, Ina 05 May 2008 (has links)
The existence of multiple hydrodynamic states (MHS) in trickle bed operation has been demonstrated by hysteresis observed in flow loops, as well as by variation between different prewetting modes. The most common theory presented as an explanation for the existence of MHS is the film-vs.-rivulet concept. Based on this concept, it was suspected that in-situ upsets might promote the formation of films, thereby providing a method through which the hydrodynamic states of the Dry and Levec modes could be manipulated to perform like the Kan Liquid and Super modes. Large performance enhancements can be obtained by altering the prewetting procedure, even for systems with a low surface tension. For the water system, the gas-liquid mass transfer coefficient of the Kan Liquid and Super modes could be as much as 6 times greater than that of the Dry mode; for the low-surface-tension system, up to 8 times greater. Through a thorough investigation of various types of transient upsets and manipulation strategies, it was confirmed that prewetting is indeed the only way by which drastic variation in hydrodynamic states may be obtained. None of the investigated upsets (hysteresis, periodic operation or surface tension doping) resulted in changes in the liquid morphology that could compare to the significant variation observed by varying the prewetting mode. Two methods were identified by which the hydrodynamic gaps between the less uniform modes (Dry and Levec) and the more uniform modes (Kan Liquid and Super) could be bridged. The first is to reduce the Levec draining time; the second may be seen as an in-situ type of Kan Liquid prewetting, obtained during doping with a low-surface-tension liquid at a flow rate associated with the high-interaction regime for the low-surface-tension system. Though the hysteresis cycles did not drastically alter the predominant flow type, interesting trends were observed, some of which raised doubt about the applicability of the film-vs.-rivulet concept. One mode in particular, the Kan Gas mode, displayed behaviour which contributed to this doubt:
• Gas-liquid mass transfer in this mode decreased with an increase in liquid flow rate.
• Relatively low pressure drops in this mode corresponded to relatively high liquid holdup.
• It was the only mode that exhibited no hysteresis with gas flow variation, on any of the hydrodynamic parameters.
The various trends and variations observed with the different types of upsets lead to the conclusion that the film-vs.-rivulet concept simply does not adequately explain the observed results. In general, two flow types may be distinguished: that caused by an initial increase in liquid flow rate, as opposed to that caused by an initial increase in gas flow rate. An investigation of the behaviour of each parameter near the transition boundaries in all the modes, as well as a repetition of this study with non-intrusive visual techniques, is recommended. / Dissertation (MEng (Chemical Engineering))--University of Pretoria, 2008. / Chemical Engineering / unrestricted
10

Techniques for evaluating and improving the behavior of emerging technologies in the face of random errors

Costenaro, Enrico 09 December 2015 (has links)
The main objective of this thesis is to develop analysis and mitigation techniques to counter the effects of radiation-induced soft errors: external and internal disturbances produced by radioactive particles that affect the reliability and operational safety of complex microelectronic circuits. The thesis aims to provide industrial solutions and methodologies for terrestrial application domains requiring the utmost reliability (telecommunications, medical devices, ...), complementing previous work on soft errors, traditionally oriented toward aerospace, nuclear, and military applications. The work presented uses a decomposition of the error sources inside current circuits to highlight the most important contributors.

Single-event upsets (SEUs) in sequential logic cells are the current target for analysis and improvement efforts in both industry and academia. This thesis presents a state-aware analysis methodology that improves the accuracy of soft error rate (SER) data for individual sequential instances based on the circuit and application. Furthermore, the intrinsic imbalance between the SEU susceptibility of different flip-flop states is exploited to implement a low-cost SER improvement strategy.

Single-event transients (SETs) affecting combinational logic are considerably more difficult to model, simulate, and analyze than the closely related SEUs. The working environment may cause a myriad of distinctive transient pulses in the various cell types, which are used in widely different configurations. This thesis presents a practical approach to an exhaustive SET evaluation flow in an industrial setting. The main steps of this process are: a) fully characterize the standard cell library using a process- and library-aware SER tool, b) evaluate SET effects in the logic networks of the circuit using a variety of dynamic (simulation-based) and static (probabilistic) methods, and c) compute overall SET figures taking into account the particularities of the circuit's implementation and environment.

Fault injection remains the primary method for analyzing the effects of soft errors. This document presents the results of a functional analysis of a complex CPU in the presence of faults, over three representative benchmarks. Accelerated simulation techniques (probabilistic calculations, clustering, parallel simulations) were proposed and evaluated in order to develop an industrial validation environment able to handle very complex circuits. The results obtained allowed the development and evaluation of a hypothetical mitigation scenario that significantly improves the reliability of the circuit at the lowest cost: they show that the SDC (Silent Data Corruption) and DUE (Detectable Uncorrectable Error) rates can be substantially reduced by hardening only a small part of the circuit (selective mitigation). In addition to the main axis of research, some tangential topics were studied in collaboration with other teams, among them the mitigation of flip-flop soft errors through an optimization of the Temporal De-Rating (TDR) by selectively inserting delay on the input or output of flip-flops, and biasing the circuit toward its less sensitive states. The methodologies, algorithms, and CAD tools proposed and validated as part of this work are intended for industrial use and have been included in a commercial CAD framework offering a complete solution for assessing the reliability of complex electronic circuits and systems.
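The selective-mitigation result can be illustrated with a small calculation (illustrative numbers only, not the thesis's data): rank flip-flop instances by their derated failure contribution and harden only the top few.

```python
# Hypothetical per-instance SER model: raw FIT scaled by derating factors.
# All values below are illustrative placeholders, not measured data.
instances = [
    # (name, raw FIT, temporal derating, logic derating)
    ("ff_ctrl_state", 1.0e-3, 0.9, 0.8),
    ("ff_fsm_mode",   1.0e-3, 0.8, 0.7),
    ("ff_data_buf",   1.0e-3, 0.2, 0.1),
    ("ff_dbg_cnt",    1.0e-3, 0.1, 0.01),
]

def derated_fit(inst) -> float:
    _, fit, tdr, ldr = inst
    return fit * tdr * ldr

total = sum(derated_fit(i) for i in instances)
ranked = sorted(instances, key=derated_fit, reverse=True)

# Harden only the top 2 contributors (e.g., swap in hardened flip-flop cells).
hardened = {name for name, *_ in ranked[:2]}
residual = sum(derated_fit(i) for i in instances if i[0] not in hardened)
print(f"total SER {total:.2e} FIT -> {residual:.2e} FIT "
      f"({100 * (1 - residual / total):.0f}% reduction, {len(hardened)}/4 cells hardened)")
```

With these numbers, hardening 2 of 4 cells removes about 98% of the derated SER, which is the shape of the argument behind selective mitigation: failure contributions are highly skewed, so a small hardened subset captures most of the risk.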
