  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Soft Error Resistant Design of the AES Cipher Using SRAM-based FPGA

Ghaznavi, Solmaz January 2011 (has links)
This thesis presents a new architecture for the reliable implementation of the symmetric-key algorithm Advanced Encryption Standard (AES) in Field Programmable Gate Arrays (FPGAs). Since FPGAs are prone to soft errors caused by radiation, and AES is highly sensitive to errors, reliable architectures are of significant concern. Energetic particles hitting a device can flip bits in the FPGA SRAM cells that control all aspects of the implementation. Unlike previous research, heterogeneous error detection techniques based on properties of the circuit and its functionality are used to provide adequate reliability at the lowest possible cost. The use of dual-ported block memory for SubBytes, duplication for the control circuitry, and a new enhanced parity technique for MixColumns is proposed. Previous parity techniques cover single errors in datapath registers; however, soft errors can also occur in the control circuitry as well as in the SRAM cells forming the combinational logic and routing. In this research, the propagation of single errors is investigated in the routed netlist, and weaknesses of the previous parity techniques are identified. Architectural redesign at the register-transfer level is introduced to resolve undetected single errors in both the routing and the combinational logic. Reliability of the AES implementation is a critical issue not only in large-scale FPGA-based systems but also at higher altitudes and in space applications, where the flux of energetic particles is greater. Thus, this research is important for providing efficient soft-error-resistant designs in many current and future secure applications.
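The thesis's enhanced parity scheme works on the routed FPGA netlist and also covers control logic and routing; as a rough, hedged illustration of the underlying idea only (not the thesis's exact scheme), the sketch below shows classical byte-level parity prediction for one AES MixColumns column, where each output parity is predicted purely from input parities and input MSBs, so a single-bit datapath upset is flagged without recomputing the transform.

```python
# Illustrative sketch only: byte-level parity prediction for one AES MixColumns
# column (a known technique); the thesis's enhanced scheme additionally covers
# control, combinational logic, and routing, which this toy model does not.

def parity(b):                      # XOR of the 8 bits of a byte
    b ^= b >> 4; b ^= b >> 2; b ^= b >> 1
    return b & 1

def xtime(b):                       # multiply by 2 in GF(2^8), AES polynomial 0x11B
    return ((b << 1) ^ 0x1B) & 0xFF if b & 0x80 else (b << 1) & 0xFF

def mix_column(col):                # matrix [2 3 1 1; 1 2 3 1; 1 1 2 3; 3 1 1 2]
    s0, s1, s2, s3 = col
    return [xtime(s0) ^ xtime(s1) ^ s1 ^ s2 ^ s3,
            s0 ^ xtime(s1) ^ xtime(s2) ^ s2 ^ s3,
            s0 ^ s1 ^ xtime(s2) ^ xtime(s3) ^ s3,
            xtime(s0) ^ s0 ^ s1 ^ s2 ^ xtime(s3)]

def predicted_parities(col):
    # parity(2*b) = parity(b) ^ msb(b) and parity(3*b) = msb(b), so each output
    # parity depends only on input parities p[] and input MSBs m[].
    p = [parity(b) for b in col]
    m = [(b >> 7) & 1 for b in col]
    return [p[0] ^ m[0] ^ m[1] ^ p[2] ^ p[3],
            p[0] ^ p[1] ^ m[1] ^ m[2] ^ p[3],
            p[0] ^ p[1] ^ p[2] ^ m[2] ^ m[3],
            m[0] ^ p[1] ^ p[2] ^ p[3] ^ m[3]]

if __name__ == "__main__":
    col = [0xDB, 0x13, 0x53, 0x45]              # sample AES state column
    out = mix_column(col)
    assert [parity(b) for b in out] == predicted_parities(col)
    out[2] ^= 0x08                              # inject a single-bit upset
    print("error detected:",
          [parity(b) for b in out] != predicted_parities(col))
```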
3

RADIATION INDUCED TRANSIENT PULSE PROPAGATION USING THE WEIBULL DISTRIBUTION FUNCTION

Watkins, Adam Christopher 01 May 2012 (has links)
In recent years, studying soft errors has become an issue of growing importance. Many methods, either deterministic or statistical, have been developed to estimate the Soft Error Rate. The proposed deterministic model aims to improve Soft Error Rate estimation by accurately approximating the generated pulse and all subsequent pulses. The generated pulse is approximated by a piecewise function consisting of two Weibull cumulative distribution functions. This method is an improvement over existing methods, as it offers high accuracy while requiring less pre-characterization. The proposed algorithm reduces pre-characterization by allowing the beta Weibull parameter to be calculated at runtime from gate parameters such as the gate delay.
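As a hedged sketch of the pulse model the abstract describes (all parameter names, values, and the beta-from-delay relation below are illustrative assumptions, not taken from the thesis), the piecewise approximation can be written as a rising Weibull CDF joined to a falling complementary Weibull CDF:

```python
# Illustrative only: a radiation-induced transient modeled as a piecewise
# function of two Weibull CDFs (rising edge, then falling edge). Parameter
# names/values are assumptions for demonstration, not the thesis's calibration.
import math

def weibull_cdf(t, lam, beta):
    """F(t) = 1 - exp(-(t/lam)^beta) for t > 0, else 0."""
    return 1.0 - math.exp(-((t / lam) ** beta)) if t > 0.0 else 0.0

def transient_pulse(t, amp, t_split, lam_r, beta_r, lam_f, beta_f):
    """Voltage of the transient at time t (time units match lam_r/lam_f)."""
    if t < t_split:                                # rising edge
        return amp * weibull_cdf(t, lam_r, beta_r)
    plateau = amp * weibull_cdf(t_split, lam_r, beta_r)
    return plateau * (1.0 - weibull_cdf(t - t_split, lam_f, beta_f))  # falling edge

# Hypothetical runtime estimate of beta from a gate parameter such as delay;
# the actual fitting relation is defined in the thesis and not reproduced here.
def beta_from_gate_delay(gate_delay_ps, k=0.02, beta0=1.5):
    return beta0 + k * gate_delay_ps

if __name__ == "__main__":
    beta_f = beta_from_gate_delay(35.0)            # assumed 35 ps gate delay
    samples = [transient_pulse(t, amp=0.9, t_split=60.0,
                               lam_r=15.0, beta_r=2.0,
                               lam_f=40.0, beta_f=beta_f)
               for t in range(0, 201, 20)]         # 0..200 ps in 20 ps steps
    print([round(v, 3) for v in samples])
```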
4

A Tool For Run Time Soft Error Fault Injection Into FPGA Circuits

Zuzarte, Marvin 06 1900 (has links)
Safety- and mission-critical systems are currently deployed in many fields with a greater presence of high-energy particles (e.g., aerospace). The use of field programmable gate arrays (FPGAs) within safety-critical systems is becoming more prevalent due to the design and cost benefits their use provides. The effects of externally caused faults on these safety-critical systems cannot be neglected. In particular, a high-energy particle striking a circuit can cause a voltage change in the circuit known as a soft error. The effects these soft errors have on the circuit need to be understood in order to ensure that the systems will function properly in the event soft errors do occur. In this thesis, a tool is designed to facilitate the run-time injection of soft errors into a hardware circuit running on an FPGA. The tool allows control over the number of injections performed and the rate at which they occur. Additionally, the tool records timestamps of when injections occur and when errors are detected. This recorded data allows for the analysis of designs in conditions prone to soft errors. The implemented tool allows for design-time parametrization and run-time configuration, allowing a multitude of tests to be run for a single compiled design. The tool also eliminates the need for a host computer after configuration by generating the injection locations and times on the FPGA. Eliminating the host computer allows for faster testing compared to other methods, as data transfer times are greatly reduced. The implemented tool was run on classical examples of redundant structures, such as duplication with comparison and triple modular redundancy, as well as a non-redundant structure to establish a baseline. The results of multiple tests run on each structure are analyzed to illustrate the uses of the tool and how it may be used to test other designs. / Thesis / Master of Applied Science (MASc)
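The tool itself is FPGA logic, but as a hedged, software-level sketch of its flow (all names, parameters, and the one-cycle detection latency below are illustrative assumptions), the following models on-chip generation of injection times at a configured rate, bit flips into one copy of a duplication-with-comparison register, and timestamped logging of injections and detections:

```python
# Conceptual model only: approximates the tool's flow (on-chip generation of
# injection times, bit flips, timestamped logs) in Python; the real tool runs
# in hardware alongside the design under test on the FPGA.
import random

def run_campaign(num_injections, mean_gap_cycles, word_bits=32, seed=1):
    rng = random.Random(seed)
    reg_a = reg_b = 0xDEADBEEF & ((1 << word_bits) - 1)   # duplicated register (DWC)
    log, cycle = [], 0
    for _ in range(num_injections):
        cycle += 1 + rng.randrange(2 * mean_gap_cycles)   # pseudo-random gap
        bit = rng.randrange(word_bits)
        reg_a ^= 1 << bit                                  # inject into one copy
        log.append(("inject", cycle, bit))
        if reg_a != reg_b:                                 # comparator flags mismatch
            log.append(("detect", cycle + 1, bit))         # assume 1-cycle latency
            reg_a = reg_b                                  # repair before next shot
    return log

if __name__ == "__main__":
    for event in run_campaign(num_injections=5, mean_gap_cycles=100):
        print(event)
```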
5

Circuit-level approaches to mitigate the process variability and soft errors in FinFET logic cells / Approches au niveau du circuit pour atténuer la variabilité de fabrication et les soft errors dans les cellules logiques FinFET

Lackmann-Zimpeck, Alexandra 24 September 2019 (has links)
Process variability mitigation and radiation hardness are relevant reliability requirements as chip manufacturing advances deeper into the nanometer regime. Parametric yield loss and critical failures in system behavior are the major consequences of these issues. Some related works explore the influence of process variability and single event transients (SET) on circuits based on FinFET technologies, but there is a lack of approaches to mitigate them. For these reasons, from a design standpoint, considerable efforts should be made to understand and reduce the impacts introduced by these reliability challenges. In this regard, the main contributions of this PhD thesis are to: 1) investigate the behavior of FinFET logic cells under process variations and radiation effects; 2) evaluate four circuit-level approaches to attenuate the impact caused by work-function fluctuations (WFF) and soft errors (SE); 3) provide an overall comparison between all techniques applied in this work; 4) trace a trade-off between the gains and penalties of each approach regarding performance, power, area, SET cross-section, and SET pulse width. Transistor reordering, decoupling cells, Schmitt Triggers, and sleep transistors are the four circuit-level mitigation techniques explored in this work. The potential of each one to make logic cells more robust to process variability and radiation-induced soft errors is assessed by comparing the results of the standard version with the design using each approach. This PhD thesis also establishes the mitigation tendency when different levels of variation, transistor sizing, and radiation particle characteristics such as linear energy transfer (LET) are applied to the design with these techniques. The process variability is evaluated through Monte Carlo (MC) simulations with the WFF modeled as a Gaussian function using SPICE simulation, while the SE susceptibility is estimated using the radiation event generator tool MUSCA SEP3 (developed at ONERA), also based on an MC method that accounts for radiation environment characteristics, layout features, and the electrical properties of the devices. In general, the proposed approaches improve the state of the art by providing circuit-level options to reduce process variability effects and SE susceptibility at lower penalties and design complexity. The transistor reordering technique can increase the robustness of logic cells under process variations by up to 8%, but this method is not favorable for SE mitigation. The insertion of decoupling cells shows interesting outcomes for power variability control at levels of variation above 4%, and it can attenuate delay variability by up to 10% for a manufacturing process with 3% WFF. Depending on the LET, the design with decoupling cells can decrease the SE susceptibility of logic cells by up to 10%. The use of Schmitt Triggers at the output of FinFET cells can improve the variability sensitivity by up to 50%. The sleep transistor approach improves power variability by around 12% for a WFF of 5%, but its benefit to delay variability depends on how the transistors are arranged with the sleep transistor in the pull-down network. The addition of a sleep transistor makes all of the studied logic cells fault-free, even in the near-threshold regime. Overall, the best approach to mitigate process variability is the use of Schmitt Triggers, while the sleep transistor technique is the most efficient for SE mitigation. However, the Schmitt Trigger technique presents the highest penalties in area, performance, and power. Therefore, depending on the application, the sleep transistor technique can offer the best overall compromise.
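As a hedged illustration of the Monte Carlo variability flow described above (the Gaussian WFF model is stated in the abstract, but the delay model and every numeric constant below are simplifying assumptions standing in for the SPICE simulations actually used), one can sample the fluctuation, fold it into a threshold-voltage spread, and collect delay statistics:

```python
# Toy Monte Carlo sketch: Gaussian work-function fluctuation (WFF) propagated
# to gate delay through a simplified alpha-power-law model. Stands in for the
# SPICE-level flow in the thesis; all constants are illustrative assumptions.
import random, statistics

VDD, VTH_NOM, ALPHA, K_DELAY = 0.7, 0.25, 1.3, 1.0   # assumed FinFET-like values

def sample_delay(wff_sigma_pct, rng):
    # WFF modeled as a Gaussian; here its effect is folded directly into a
    # proportional threshold-voltage spread (illustrative simplification).
    vth = rng.gauss(VTH_NOM, wff_sigma_pct / 100.0 * VTH_NOM)
    return K_DELAY * VDD / (VDD - vth) ** ALPHA       # alpha-power-law delay

def mc_delay_variability(wff_sigma_pct, runs=10000, seed=0):
    rng = random.Random(seed)
    delays = [sample_delay(wff_sigma_pct, rng) for _ in range(runs)]
    mu, sigma = statistics.mean(delays), statistics.pstdev(delays)
    return mu, 100.0 * sigma / mu                     # sigma/mu in percent

if __name__ == "__main__":
    for wff in (3.0, 5.0):
        mu, var_pct = mc_delay_variability(wff)
        print(f"WFF {wff:.0f}%: mean delay {mu:.3f} (a.u.), sigma/mu {var_pct:.1f}%")
```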
6

Reliability evaluation and error mitigation in pedestrian detection algorithms for embedded GPUs / Validação da confiabilidade e tolerância a falhas em algoritmos de detecção de pedestres para GPUs embarcadas

Santos, Fernando Fernandes dos January 2017 (has links)
Pedestrian detection reliability is a fundamental problem for autonomous or assisted driving. Methods that use object detection algorithms such as Histogram of Oriented Gradients (HOG) or Convolutional Neural Networks (CNNs) are very popular in automotive applications today. Embedded Graphics Processing Units (GPUs) are exploited to perform object detection efficiently. Unfortunately, GPU architectures have been shown to be particularly vulnerable to radiation-induced failures. This work presents an experimental evaluation and analytical study of the reliability of two types of object detection algorithms: HOG and CNNs. The aim of this research is not just to quantify but also to qualify radiation-induced errors in object detection applications executed on embedded GPUs. 
HOG experimental results were obtained using two different embedded GPU architectures (Tegra and AMD APU), each exposed for about 100 hours to a controlled neutron beam at Los Alamos National Lab (LANL). Precision and Recall metrics are considered to evaluate error criticality. The reported analysis shows that, while HOG is intrinsically resilient (65% to 85% of output errors only slightly impact detection), it experienced some particularly critical errors that could result in undetected pedestrians or unnecessary vehicle stops. This work also evaluates the reliability of two Convolutional Neural Networks for object detection: You Only Look Once (YOLO) and Faster RCNN. Three different GPU architectures (Kepler, Maxwell, and Pascal) were exposed to controlled neutron beams while detecting objects in the Caltech and Visual Object Classes data sets. By analyzing the corrupted neural network output, it is possible to distinguish between tolerable errors and critical errors, i.e., errors that could impact detection. Additionally, extensive GDB-level and architectural-level fault-injection campaigns were performed to identify critical HOG and YOLO procedures. Results show that not all stages of object detection algorithms are critical to the reliability of the final classification. Thanks to the fault-injection analysis, it is possible to identify HOG and Darknet portions that, if hardened, are more likely to increase reliability without introducing unnecessary overhead. The proposed HOG hardening strategy is able to detect up to 70% of errors with a 12% execution time overhead.
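As a hedged sketch of how corrupted detections can be split into tolerable and critical errors (the IoU threshold and the "missed or spurious box" criticality rule below are illustrative assumptions, not the exact criteria used in the thesis), one can compare the beam-run output against a fault-free golden run:

```python
# Illustrative only: classify a corrupted detection output against a golden
# (fault-free) run. Boxes are (x, y, w, h); the 0.5 IoU threshold and the
# "critical = missed or spurious pedestrian" rule are assumptions.
def iou(a, b):
    ax0, ay0, ax1, ay1 = a[0], a[1], a[0] + a[2], a[1] + a[3]
    bx0, by0, bx1, by1 = b[0], b[1], b[0] + b[2], b[1] + b[3]
    iw = max(0.0, min(ax1, bx1) - max(ax0, bx0))
    ih = max(0.0, min(ay1, by1) - max(ay0, by0))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

def classify_error(golden, corrupted, thr=0.5):
    matched = [any(iou(g, c) >= thr for c in corrupted) for g in golden]
    missed = matched.count(False)                 # golden boxes with no match
    spurious = sum(not any(iou(c, g) >= thr for g in golden) for c in corrupted)
    if missed == 0 and spurious == 0:
        return "tolerable"                        # detections only slightly shifted
    return "critical"                             # pedestrian missed or phantom stop

if __name__ == "__main__":
    golden = [(100, 80, 40, 90), (300, 85, 38, 88)]
    corrupted = [(102, 82, 40, 90)]               # second pedestrian lost
    print(classify_error(golden, corrupted))      # -> critical
```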
7

Software Techniques For Dependable Execution

January 2018 (has links)
abstract: Advances in semiconductor technology have brought computer-based systems into virtually all aspects of human life. This unprecedented integration of semiconductor-based systems in our lives has significantly increased the domain and the number of safety-critical applications, i.e., applications with unacceptable consequences of failure. Software-level error resilience schemes are attractive because they can provide commercial off-the-shelf microprocessors with adaptive and scalable reliability. Among all software-level error resilience solutions, in-application instruction-replication-based approaches have been widely used and are deemed to be the most effective. However, existing instruction-based replication schemes only protect part of the computation, i.e., arithmetic and logical instructions, and leave the rest unprotected. To improve the efficacy of instruction-level redundancy-based approaches, we developed several error detection and error correction schemes. nZDC (near Zero silent Data Corruption) is an instruction duplication scheme which protects the execution of the whole application. Rather than detecting errors on the register operands of memory and control flow operations, nZDC checks the results of such operations. nZDC ensures the correct execution of memory write instructions by reloading the stored value and checking it against the redundantly computed value. nZDC also introduces a novel control flow checking mechanism which replicates compare and branch instructions and detects both wrong-direction branches and unwanted jumps. Fault injection experiments show that nZDC can improve the error coverage of state-of-the-art schemes by more than 10x, without incurring any additional performance penalty. Furthermore, we introduced two error recovery solutions. InCheck is our backward recovery solution, which makes lightweight, error-free checkpoints at the basic block granularity. In the case of an error, InCheck reverts program execution to the beginning of the last executed basic block and resumes execution with the aid of the preserved information. NEMESIS is our forward recovery scheme, which runs three versions of the computation and detects errors by checking the results of all memory write and branch operations. In the case of a mismatch, the NEMESIS diagnosis routine decides whether the error is recoverable. If so, the NEMESIS recovery routine removes the effect of the error from the program state and resumes normal program execution from the error detection point. / Dissertation/Thesis / Doctoral Dissertation Computer Engineering 2018
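The scheme itself operates at the instruction level on duplicated master/shadow registers; purely as a hedged, high-level illustration of the two checks the abstract names (the load-back store check and the duplicated compare-and-branch), the Python sketch below expresses the same control flow in ordinary code. It is a conceptual analogy, not nZDC's actual compiler transformation.

```python
# Conceptual sketch of the load-back store check and duplicated branch check
# described above, written in Python for illustration only; the real scheme
# works on duplicated "master"/"shadow" registers at the instruction level.
def checked_store(memory, addr, compute):
    master = compute()                      # original computation
    shadow = compute()                      # redundant computation
    memory[addr] = master                   # the (possibly faulty) store
    if memory[addr] != shadow:              # reload and compare with shadow copy
        raise RuntimeError(f"soft error detected at store to {addr:#x}")

def checked_branch(cond_master, cond_shadow):
    # Duplicated compare-and-branch: both copies must agree on the direction.
    if cond_master != cond_shadow:
        raise RuntimeError("soft error detected in branch condition")
    return cond_master

if __name__ == "__main__":
    mem = {}
    checked_store(mem, 0x40, lambda: 3 * 7)          # passes the load-back check
    if checked_branch(mem[0x40] > 10, mem[0x40] > 10):
        print("branch taken, memory verified:", mem)
```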
8

Equipment for measuring cosmic-ray effects on DRAM

Jonsson, Per-Axel January 2007 (has links)
Nuclear particles hitting the silicon in an electronic device can cause a change in the data stored in a memory bit cell or in a flip-flop. The device is still working, but the data is corrupted; this is called a soft error. A soft error caused by a single nuclear particle is called a single event upset and is a growing problem. Research is ongoing at Saab into how susceptible random access memories are to protons and neutrons. This thesis describes the development of equipment for measuring cosmic-ray effects on DRAM in laboratories. The system is built on existing hardware with an FPGA as the core unit. A short history of soft errors and their causes is also given. The basic operation of a DRAM, and how it differs from an SRAM, is explained. The result is a working system ready to be used.
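As a hedged sketch of the kind of test such equipment performs (the data pattern and the list standing in for the memory are assumptions; the real equipment drives the DRAM from the FPGA), the basic flow is: write a known pattern, expose the device, read it back, and count bit flips:

```python
# Illustrative model of a DRAM upset test: write a known pattern, let the beam
# (or altitude exposure) act on the device, read back, and count single event
# upsets. The `dram` object here is a plain list standing in for real hardware.
def write_pattern(dram, pattern=0x55):
    for addr in range(len(dram)):
        dram[addr] = pattern

def count_upsets(dram, pattern=0x55):
    upsets = []
    for addr, value in enumerate(dram):
        diff = value ^ pattern                      # flipped bits at this address
        if diff:
            upsets.append((addr, bin(diff).count("1")))
    return upsets

if __name__ == "__main__":
    dram = [0] * 1024                               # 1 KiB stand-in for the DUT
    write_pattern(dram)
    dram[37] ^= 0x04                                # pretend a neutron flipped bit 2
    for addr, nbits in count_upsets(dram):
        print(f"upset at address {addr:#06x}: {nbits} bit(s) flipped")
```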
9

Circuit Level Techniques for Power and Reliability Optimization of CMOS Logic

Diril, Abdulkadir Utku 21 April 2005 (has links)
Technology scaling trends lead to the shrinking of individual elements such as transistors and wires in digital systems. The main driving force behind this is cutting system cost while adding extra functionality; this is why a 3 GHz Intel processor is now priced lower than a 50 MHz processor was 10 years ago. As in most cases, this comes with a price: a more complex design process and problems that stem from the reduction in physical dimensions. As transistors became smaller and systems became faster, issues like power consumption, signal integrity, soft error tolerance, and testing became serious challenges. There is an increasing demand to put CAD tools in the design flow to address these issues at every step of the design process. The first part of this research investigates circuit-level techniques to reduce power consumption in digital systems. In the second part, improving the soft error tolerance of digital systems is treated as a trade-off between power and reliability, and a power-aware dynamic soft error tolerance control strategy is developed. The objective of this research is to provide CAD tools and circuit design techniques to optimize power consumption and to increase the soft error tolerance of digital circuits. Multiple supply and threshold voltages are used to reduce power consumption. Variable supply and threshold voltages are used together with variable capacitances to develop a dynamic soft error tolerance control scheme.
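As a hedged sketch of the power/reliability trade-off such a dynamic control strategy manages (the candidate operating points and every power/SER number below are invented placeholders, not values from the thesis), a controller can pick the lowest-power supply/threshold setting that still meets the current soft-error-rate target:

```python
# Illustrative only: choose the lowest-power (Vdd, Vth) operating point that
# meets a soft-error-rate target. The table values are invented placeholders;
# a real controller would derive them from characterization of the circuit.
OPERATING_POINTS = [
    # (vdd_V, vth_V, power_mW, soft_error_rate_FIT)
    (1.2, 0.30, 12.0,  80.0),
    (1.0, 0.30,  8.5, 150.0),
    (1.0, 0.35,  7.0, 220.0),
    (0.9, 0.35,  5.5, 400.0),
]

def select_operating_point(ser_target_fit):
    feasible = [p for p in OPERATING_POINTS if p[3] <= ser_target_fit]
    if not feasible:
        return min(OPERATING_POINTS, key=lambda p: p[3])   # most reliable fallback
    return min(feasible, key=lambda p: p[2])               # lowest power meeting target

if __name__ == "__main__":
    for target in (100.0, 500.0):       # e.g., tighten the target at high altitude
        vdd, vth, power, ser = select_operating_point(target)
        print(f"target {target:>5.0f} FIT -> Vdd={vdd} V, Vth={vth} V, "
              f"power={power} mW, SER={ser} FIT")
```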
10

Hierarchical Optimization of Digital CMOS Circuits for Power, Performance and Reliability

Dhillon, Yuvraj Singh 20 April 2005 (has links)
Power consumption and soft-error tolerance have become major constraints in the design of deep-submicron (DSM) CMOS circuits. With continued technology scaling, the impact of these parameters is expected to gain in significance. Furthermore, design complexity continues to increase rapidly due to the tremendous increase in the number of components (gates/transistors) on an IC every technology generation. This research describes an efficient and general CAD framework for the optimization of critical circuit characteristics, such as power consumption and soft-error tolerance, under delay constraints with supply/threshold voltages and/or gate sizes as variables. A general technique called Delay-Assignment-Variation (DAV) based optimization was formulated for the delay-constrained optimization of directed acyclic graphs. Exact mathematical conditions on the supply and threshold voltages of circuit modules were developed that lead to minimum overall dynamic and static power consumption of the circuit under delay constraints. A DAV-search-based method was used to obtain the optimal supply and threshold voltages that minimize power consumption. To handle the complexity of designing reliable, low-power circuits at the gate level, a hierarchical application of DAV-based optimization was explored. The effectiveness of the hierarchical approach in reducing circuit power and unreliability, while remaining highly efficient, is demonstrated. The usage of the technique for improving upon already-optimized designs is described. An accurate and efficient model for analyzing the soft-error tolerance of CMOS circuits is also developed.
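As a hedged sketch of delay-constrained power minimization of the kind the DAV framework performs (the tiny module chain, its delay/power options, and the exhaustive search below are illustrative stand-ins, not the actual Delay-Assignment-Variation algorithm, which searches delay assignments on a DAG far more efficiently), each module is assigned one of several supply/threshold options so that total power is minimized while the path delay constraint is met:

```python
# Toy stand-in for delay-constrained power optimization: each module on a path
# has a few (Vdd, Vth) options with different delay/power; pick the assignment
# with minimum total power whose total delay meets the constraint.
from itertools import product

# (delay_ns, power_mW) options per module: invented placeholder numbers
MODULE_OPTIONS = [
    [(1.0, 4.0), (1.4, 2.5), (1.8, 1.8)],   # module A
    [(0.8, 3.0), (1.1, 2.0), (1.5, 1.3)],   # module B
    [(1.2, 5.0), (1.6, 3.2), (2.0, 2.4)],   # module C
]

def minimize_power(delay_budget_ns):
    best = None
    for assignment in product(*MODULE_OPTIONS):       # exhaustive for a toy chain
        delay = sum(opt[0] for opt in assignment)
        power = sum(opt[1] for opt in assignment)
        if delay <= delay_budget_ns and (best is None or power < best[1]):
            best = (assignment, power, delay)
    return best

if __name__ == "__main__":
    result = minimize_power(delay_budget_ns=4.5)
    if result:
        assignment, power, delay = result
        print(f"chosen options: {assignment}, power={power} mW, delay={delay} ns")
```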
