1 |
Impact of charge collection mechanisms on single event effects in SiGe HBT circuits. Zhang, Tong; Niu, Guofu. January 2009
Thesis--Auburn University, 2009. / Abstract. Vita. Includes bibliographical references (p. 77-81).
|
2 |
Sensitivity of Feedforward Neural Networks to Harsh Computing Environments. Arechiga, Austin Podoll. 08 August 2018
Neural networks have proven themselves very adept at solving a wide variety of problems; in particular, they excel at image processing. However, it remains unknown how well they perform under memory errors. This thesis focuses on the robustness of neural networks under memory errors, specifically single event upset style errors in which single bits flip in a network's trained parameters. The main goal of these experiments is to determine whether some neural network architectures are more robust than others. Initial experiments show that MLPs are more robust than CNNs; within MLPs, deeper networks are more robust, and for CNNs, larger kernels are more robust. Additionally, the CNNs displayed bimodal failure behavior, where memory errors would either not affect the performance of the network or would degrade its performance to be on par with random guessing. VGG16, ResNet50, and InceptionV3 were also tested for their robustness. ResNet50 and InceptionV3 were both more robust than VGG16. This could be due to their use of Batch Normalization or the fact that ResNet50 and InceptionV3 both use shortcut connections in their hidden layers. After determining which networks were most robust, estimated error rates from neutrons were calculated for space environments to determine whether these architectures were robust enough to survive there. It was determined that large MLPs, ResNet50, and InceptionV3 could survive in Low Earth Orbit on commercial memory technology using only software error correction. / Master of Science / Neural networks are a new kind of algorithm that is revolutionizing the field of computer vision. Neural networks can be used to detect and classify objects in pictures or videos with accuracy on par with human performance. Neural networks achieve such good performance after a long training process during which many parameters are adjusted until the network can correctly identify objects such as cats, dogs, trucks, and more. These trained parameters are then stored in a computer's memory and recalled whenever the neural network is used for a computer vision task. Some computer vision tasks are safety critical, such as a self-driving car's pedestrian detector. An error in that detector could lead to loss of life, so neural networks must be robust against a wide variety of errors. This thesis focuses on a specific kind of error: bit flips in the parameters of a neural network stored in a computer's memory. The main goal of these bit flip experiments is to determine whether certain kinds of neural networks are more robust than others. Initial experiments show that MLP (Multilayer Perceptron) style networks are more robust than CNNs (Convolutional Neural Networks). For MLP style networks, making the network deeper with more layers increases both the accuracy and the robustness of the network. However, for CNNs, increasing the depth only increased the accuracy, not the robustness. The robustness of the CNNs displayed an interesting trend of bimodal failure behavior, where memory errors would either not affect the performance of the network or would degrade its performance to be on par with random guessing. A second set of experiments was run to focus on CNN robustness, because CNNs are much more capable than MLPs. This second set focused on the robustness of VGG16, ResNet50, and InceptionV3, all very large CNNs with very good performance on real-world datasets such as ImageNet.
Bit flip experiments showed that ResNet50 and InceptionV3 were both more robust than VGG16. This could be due to their use of Batch Normalization or the fact that ResNet50 and InceptionV3 both use shortcut connections within their network architectures. However, all three networks still displayed the bimodal failure mode seen previously. After determining which networks were most robust, estimated error rates were calculated for a real-world environment. The chosen environment was the space environment, because it naturally causes a high rate of bit flips in memory; if NASA were to use neural networks on any rovers, it would need to make sure the networks are robust enough to survive. It was determined that large MLPs, ResNet50, and InceptionV3 could survive in Low Earth Orbit on commercial memory technology using only software error correction. Relying on software error correction alone would allow satellite makers to build more advanced satellites without paying extra for radiation-hardened electronics.
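The bit flip fault model used in experiments like these is easy to reproduce in software. The following is a minimal sketch, assuming NumPy and a float32 parameter array, of injecting one single-event-upset-style bit flip; it illustrates the fault model only and is not the thesis's actual test harness.

```python
import numpy as np

def flip_random_bit(weights, rng):
    """Flip one random bit in a float32 weight array, in place.

    Returns (flat_index, bit_position) so the flip can be logged or undone.
    """
    flat = weights.reshape(-1).view(np.uint32)   # reinterpret the bits, no copy
    idx = int(rng.integers(flat.size))
    bit = int(rng.integers(32))
    flat[idx] ^= np.uint32(1) << np.uint32(bit)
    return idx, bit

# Inject one upset into a toy layer and observe the perturbation.
rng = np.random.default_rng(0)
w = rng.standard_normal((64, 32)).astype(np.float32)
before = w.copy()
idx, bit = flip_random_bit(w, rng)
print(f"flipped bit {bit} of weight {idx}: "
      f"{before.reshape(-1)[idx]} -> {w.reshape(-1)[idx]}")
```

A flip in one of the exponent bits of a float32 value can change its magnitude by orders of magnitude, while a low mantissa bit barely moves it, which is consistent with the bimodal failure behavior reported above.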
|
3 |
3D device simulation of SEU-induced charge collection in 200 GHz SiGe HBTs. Yang, Hua. January 2005
Thesis--Auburn University, 2005. / Abstract. Vita. Includes bibliographic references.
|
4 |
Soft error rate determination for nanometer CMOS VLSI circuits. Wang, Fan; Agrawal, Vishwani D. January 2008
Thesis--Auburn University, 2008. / Abstract. Vita. Includes bibliographical references (p. 78-89).
|
5 |
Hardware Assertions for Mitigating Single-Event Upsets in FPGAs. Dumitrescu, Stefan. January 2020
The memory cells used in modern field programmable gate arrays (FPGAs) are highly susceptible to single event upsets (SEUs). The typical mitigation strategy in industry is some form of hardware redundancy, such as duplication with comparison (DWC) or triple modular redundancy (TMR). While this strategy is highly effective in masking the effect of faults, it incurs a large hardware cost. In this thesis, we explore a different approach to hardware redundancy. The core idea of our approach is to exploit the conflict-driven clause learning (CDCL) mechanism in modern Boolean satisfiability (SAT) solvers to provide invariants which can be realized as hardware checkers. Furthermore, we develop the algorithms required to select a subset of these invariants to be included in a checker circuit, which then augments the original FPGA design. We find which look-up table (LUT) memory cells are sensitive to bitflips, then automatically generate a checker circuit consisting of hardware invariants targeted at those faults. We aim to reach 100% coverage of sensitizable faults. After extensive experimentation, we conclude that this approach is not competitive with DWC with respect to hardware area. However, we demonstrate that many bitflips will have a reduced detection latency compared to DWC. / Thesis / Master of Applied Science (MASc)
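The checker idea can be pictured outside of hardware: each learned clause is an invariant that holds in fault-free operation, and the checker raises an error flag whenever the current signal values falsify a clause. The Python sketch below is a hypothetical software analogue of such a checker, not the thesis's circuit generator; the net names and clauses are invented for illustration.

```python
# Each invariant is a clause over named nets: a list of (net, polarity)
# literals. A clause is violated only when every literal evaluates false,
# which in hardware is a small AND of (possibly inverted) taps on those nets.

def clause_violated(clause, nets):
    """True when no literal in the clause is satisfied by current net values."""
    return not any(nets[name] == polarity for name, polarity in clause)

def checker(invariants, nets):
    """OR together the per-clause violation signals into one error flag."""
    return any(clause_violated(c, nets) for c in invariants)

# Hypothetical invariants a CDCL solver might have learned for a design:
# (q1 or not q2) and (not q1 or q3) hold in every fault-free reachable state.
invs = [[("q1", True), ("q2", False)],
        [("q1", False), ("q3", True)]]
good  = {"q1": True, "q2": True, "q3": True}
upset = {"q1": True, "q2": True, "q3": False}   # a bitflip broke q3
print(checker(invs, good))    # False: no invariant violated
print(checker(invs, upset))   # True: error flag raised
```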
|
6 |
Evaluation of the SEE sensitivity and methodology for error rate prediction of applications implemented in multi-core and many-core processors. Ramos Vargas, Pablo Francisco. 18 April 2017
The present thesis evaluates the static and dynamic SEE sensitivity of three different COTS multi-core and many-core processors. The first is the Freescale P2041 multi-core processor, manufactured in 45nm SOI technology, which implements ECC and parity in its cache memories. The second is the Kalray MPPA-256 many-core processor, manufactured in 28nm TSMC CMOS technology, which integrates 16 compute clusters of 16 processor cores each, and implements ECC in its static memories and parity in its cache memories. The third is the Adapteva Epiphany E16G301 microprocessor, manufactured in a 65nm CMOS process, which integrates 16 processor cores and does not implement protection mechanisms. The evaluation was accomplished through radiation experiments with 14 MeV neutrons in particle accelerators, to emulate a harsh radiation environment, and through fault injection in cache memories, shared memories, and processor registers, to simulate the consequences of SEUs on program execution. A deep analysis of the observed errors was carried out to identify vulnerabilities in the protection mechanisms; critical zones such as address tags and general-purpose registers were affected during the radiation experiments. In addition, the Code Emulating Upset (CEU) approach developed at TIMA Laboratory was extended to multi-core and many-core processors to predict the application error rate by combining the results of fault injection campaigns with those of radiation experiments.
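One common way to combine the two kinds of measurement mentioned above is to multiply a per-upset failure probability (from fault injection) by a static cross-section and a particle flux (from beam testing and the target environment). The sketch below shows only that arithmetic, under made-up placeholder numbers; it is an illustration of the combination, not the thesis's calibrated model.

```python
# A minimal sketch of combining fault-injection and radiation-test results
# into an application error rate. Every number below is a placeholder.

failures = 137            # injected upsets that corrupted the application result
injected = 10_000         # total upsets injected into registers/caches
tau_inj = failures / injected          # P(application failure | one upset)

sigma_bit_cm2 = 1.0e-14   # assumed static cross-section per bit (beam test)
bits = 4 * 1024 * 1024 * 8             # sensitive bits used by the application
flux = 5.0                             # assumed particles / (cm^2 * s) in orbit

error_rate = tau_inj * sigma_bit_cm2 * bits * flux   # failures per second
print(f"estimated rate: {error_rate:.2e} failures/s, "
      f"about one failure every {1 / error_rate / 86_400:.0f} days")
```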
|
7 |
Methods and tools for the early analysis in the design flow of the sensitivity to soft-errors of applications and integrated circuits. Mansour, Wassim. 31 October 2012
Shrinking transistor dimensions increases the soft-error sensitivity of integrated circuits to the energetic particles present in the environments in which they operate. An experiment presented in this thesis, studying the soft-error sensitivity, in a real environment, of SRAM memories from two successive technology generations, puts the criticality of this issue in evidence. This demonstrates the need to evaluate circuits' sensitivity to radiation effects, especially for commercial circuits, which are increasingly used in space and avionics applications and even at high altitudes, in order to find appropriate hardening methodologies. Several fault-injection methods aiming at evaluating the soft-error sensitivity of integrated circuits have been the subject of much research. In this thesis, an automated method and its corresponding tool were developed to emulate radiation effects on HDL-described circuits. This method, called NETFI (NETlist Fault Injection), modifies the netlist of the synthesized circuit to allow injecting faults of different types (SEU, SET, and stuck-at). NETFI was applied to several architectures to assess its efficiency and demonstrate its capabilities. A study of a fault-tolerant algorithm, called self-convergent, executed by a LEON3 processor is also presented, in order to perform an objective comparison between the results issued from NETFI and those issued from a state-of-the-art method called CEU (Code Emulated Upset).
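The fault types named above can be pictured on a toy netlist: an SEU-style fault flips a stored value once, while a stuck-at fault pins a net for the whole evaluation. The following sketch is a hypothetical illustration of these fault models, not the NETFI tool itself.

```python
# A two-gate toy netlist (AND feeding an inverter) evaluated with an
# optional injected fault: ("q", "seu") flips the register output once,
# ("n1", "sa0") pins the internal net n1 to 0.

def eval_netlist(ff_q, a, fault=None):
    nets = {"q": ff_q, "a": a}
    if fault == ("q", "seu"):
        nets["q"] = not nets["q"]             # transient bitflip on the register
    nets["n1"] = nets["q"] and nets["a"]      # AND gate
    if fault == ("n1", "sa0"):
        nets["n1"] = False                    # stuck-at-0 on internal net n1
    return not nets["n1"]                     # output inverter

print(eval_netlist(True, True))                       # fault-free: False
print(eval_netlist(True, True, fault=("q", "seu")))   # SEU on q: True
print(eval_netlist(True, True, fault=("n1", "sa0")))  # stuck-at-0 on n1: True
```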
|
8 |
Reliability-centric probabilistic analysis of VLSI circuits. Rejimon, Thara. 01 June 2006
Reliability is one of the most serious issues confronting the microelectronics industry as feature sizes scale down from deep submicron to the sub-100-nanometer and nanometer regime. Due to processing defects and increased noise effects, it is almost impractical to produce error-free circuits. As we move beyond 22nm, devices will operate very close to their thermal limit, making gates error-prone: every gate will have a finite propensity to produce erroneous outputs. Additional factors increasing erroneous behavior are low operating voltages and extremely high frequencies. These types of errors are not captured by current defect- and fault-tolerant mechanisms, as they might not be present during testing and reconfiguration. Hence, a reliability-centric CAD analysis tool is becoming essential, not only to combat defects and hard faults but also errors that are transient and probabilistic in nature. In this dissertation, we address three broad categories of errors. First, we focus on random-pattern testability of logic circuits with respect to hard or permanent faults. Second, we model the effect of a single-event-upset (SEU) at an internal node on the primary outputs; we capture the temporal nature of SEUs by adding timing information to our model. Finally, we model dynamic errors in nano-domain computing, where reliable computation has to be achieved with "systemic" unreliable devices, making the entire computation process probabilistic rather than deterministic in nature. Our central theoretical scheme relies on Bayesian belief networks, which are compact, efficient models representing a joint probability distribution in a minimal graphical structure that not only uses conditional independencies to model the underlying probabilistic dependence but also uses them for computational advantage. We used both exact and approximate inference, which has let us achieve order-of-magnitude improvements in both accuracy and speed and has enabled us to study larger benchmarks than the state-of-the-art. We are also able to study error sensitivities, explore the design space, characterize the input space with respect to errors, and, finally, evaluate the effect of redundancy schemes.
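To make the probabilistic framing concrete, the sketch below computes an exact output error probability for a two-gate circuit in which each gate independently errs with a small propensity. It is a toy enumeration rather than the dissertation's Bayesian-network inference, and the propensity value is an assumed placeholder.

```python
# Exact enumeration of per-gate error variables for a 2-gate circuit,
# computing the probability that the primary output deviates from the
# fault-free value. EPS is an assumed per-gate error propensity.

from itertools import product

EPS = 0.01

def circuit(a, b, errs):
    g1 = (a and b) ^ errs[0]       # AND gate, possibly erroneous
    return (not g1) ^ errs[1]      # inverter, possibly erroneous

def output_error_prob(a, b):
    golden = circuit(a, b, (False, False))
    p = 0.0
    for e1, e2 in product([False, True], repeat=2):
        weight = (EPS if e1 else 1 - EPS) * (EPS if e2 else 1 - EPS)
        if circuit(a, b, (e1, e2)) != golden:
            p += weight
    return p

print(output_error_prob(True, True))   # 2*EPS*(1-EPS) = 0.0198
```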
|
9 |
Frame-level redundancy scrubbing technique for SRAM-based FPGAs. Seclen, Jorge Lucio Tonfat. January 2015
Reliability is an important design constraint for critical applications at ground level and in aerospace. SRAM-based FPGAs are attractive for critical applications due to their high performance and flexibility. However, they are susceptible to radiation effects such as soft errors in the configuration memory. Furthermore, aging and voltage scaling increase the sensitivity of SRAM-based FPGAs to soft errors. Experimental results show that aging and voltage scaling can at least double the soft error rate (SER) susceptibility of SRAM-based FPGAs. These findings are innovative because they combine three real effects that occur in SRAM-based FPGAs, and they can guide designers in predicting soft error effects during the lifetime of devices operating at different power supply voltages. Memory scrubbing is an effective method to correct soft errors in SRAM memories, but it imposes an overhead in terms of silicon area and energy consumption. In this work, a novel scrubbing technique using internal frame redundancy, called Frame-Level Redundancy Scrubbing (FLR-scrubbing), is proposed, with minimal energy consumption overhead and no compromise in correction capability. As a case study, the FLR-scrubbing controller was implemented on a mid-size Xilinx Virtex-5 FPGA device, occupying 8% of the available slices and consuming six times less energy per scrubbed frame than a classic blind scrubber. The technique also reduces repair time by avoiding the use of an external golden memory for reference. As another contribution, this work presents the details of a multiple-fault injection platform that emulates configuration memory upsets of an FPGA using dynamic partial reconfiguration. Results of fault injection campaigns are presented and compared with accelerated ground-level radiation experiments. Finally, the proposed fault injection platform made it possible to analyze the effectiveness of the FLR-scrubbing technique, and accelerated radiation tests confirmed these results.
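The repair principle behind scrubbing from internal redundancy can be shown in a few lines: when redundant frames are known to hold identical data, a corrupted frame can be restored by bitwise majority vote, with no external golden copy. The sketch below assumes three identical redundant frames and is a simplified software illustration, not the FLR-scrubbing controller itself.

```python
# Bitwise 2-of-3 vote across three redundant configuration frames.

def majority_repair(f0, f1, f2):
    """Return the bitwise majority of three equal-length frames."""
    return bytes((a & b) | (a & c) | (b & c) for a, b, c in zip(f0, f1, f2))

frame = bytes([0b10110100, 0b01011010])
upset = bytes([0b10110100 ^ 0b00001000, 0b01011010])   # one bitflip in copy 0
print(majority_repair(upset, frame, frame) == frame)   # True: bit voted back
```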
|
10 |
Analysis and Design of Resilient VLSI Circuits. Garg, Rajesh. May 2009
The reliable operation of Integrated Circuits (ICs) has become increasingly difficult to achieve in the deep sub-micron (DSM) era. With continuously decreasing device feature sizes, combined with lower supply voltages and higher operating frequencies, the noise immunity of VLSI circuits is decreasing alarmingly. Thus, VLSI circuits are becoming more vulnerable to noise effects such as crosstalk, power supply variations, and radiation-induced soft errors. Among these noise sources, soft errors (errors caused by radiation particle strikes) have become an increasingly troublesome issue for memory arrays as well as combinational logic circuits. Also, in the DSM era, process variations are increasing at an alarming rate, making it more difficult to design reliable VLSI circuits. Hence, it is important to efficiently design robust VLSI circuits that are resilient to radiation particle strikes and process variations. The work presented in this dissertation comprises several analysis and design techniques with the goal of realizing VLSI circuits which are tolerant to radiation particle strikes and process variations.

This dissertation consists of two parts. The first part proposes four analysis and two design approaches to address radiation particle strikes. The analysis techniques include an approach to analytically determine the pulse width and pulse shape of a radiation-induced voltage glitch in combinational circuits, a technique to model the dynamic stability of SRAMs, and a 3D device-level analysis of the radiation tolerance of voltage-scaled circuits. Experimental results demonstrate that the proposed techniques for analyzing radiation particle strikes in combinational circuits and SRAMs are fast and accurate compared to SPICE. Therefore, these analysis approaches can be easily integrated into a VLSI design flow to analyze the radiation tolerance of such circuits and harden them early in the design flow. From the 3D device-level analysis of the radiation tolerance of voltage-scaled circuits, several non-intuitive observations are made and, correspondingly, a set of guidelines is proposed that is important to consider in realizing radiation-hardened circuits. Two circuit-level hardening approaches are also presented to harden combinational circuits against radiation particle strikes. These hardening approaches significantly improve the tolerance of combinational circuits against low and very high energy radiation particle strikes, respectively, with modest area and delay overheads.

The second part of this dissertation addresses process variations. A technique is developed to perform sensitizable statistical timing analysis of a circuit and thereby improve the accuracy of timing analysis under process variations. Experimental results demonstrate that this technique significantly reduces the pessimism due to two sources of inaccuracy which plague current statistical static timing analysis (SSTA) tools. Two design approaches are also proposed to improve the process variation tolerance of combinational circuits and of voltage level shifters (which are used in circuits with multiple interacting power supply domains), respectively. The variation-tolerant design approach for combinational circuits significantly improves the resilience of these circuits to random process variations, with a reduction in the worst-case delay and a low area penalty. The proposed voltage level shifter is faster, requires lower dynamic power and area, has lower leakage currents, and is more tolerant to process variations than the best known previous approach.

In summary, this dissertation presents several analysis and design techniques which significantly augment the existing work in the area of resilient VLSI circuit design.
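For context on the first analysis technique, radiation-induced transients are commonly modeled as a double-exponential current pulse, I(t) = Q/(tau_a - tau_b) * (exp(-t/tau_a) - exp(-t/tau_b)). The dissertation derives glitch widths and shapes analytically; the sketch below merely evaluates such a pulse and checks that it integrates back to the deposited charge. The charge and time constants are assumed illustrative values.

```python
import math

Q_COLL = 150e-15    # assumed collected charge, 150 fC
TAU_A  = 200e-12    # assumed collection time constant, 200 ps
TAU_B  = 50e-12     # assumed ion-track establishment constant, 50 ps

def i_seu(t):
    """Double-exponential strike current I(t), in amperes."""
    return Q_COLL / (TAU_A - TAU_B) * (math.exp(-t / TAU_A) - math.exp(-t / TAU_B))

# Peak current and a coarse check that the pulse integrates back to ~Q_COLL.
ts = [i * 1e-12 for i in range(2001)]             # 0..2 ns in 1 ps steps
peak = max(i_seu(t) for t in ts)
charge = sum(i_seu(t) * 1e-12 for t in ts)        # rectangle-rule integral
print(f"peak ~ {peak * 1e6:.0f} uA, integrated charge ~ {charge * 1e15:.0f} fC")
```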
|