111

Estimating the Dynamic Sensitive Cross Section of an FPGA Design through Fault Injection

Johnson, Darrel E. 15 April 2005 (has links) (PDF)
A fault injection tool has been created to emulate single event upset (SEU) behavior within the configuration memory of an FPGA. The tool rapidly and accurately determines the dynamic sensitive cross section of the configuration memory for a given FPGA design, enabling the reliability of FPGA designs and fault tolerance schemes to be tested quickly and accurately. The validity of testing performed with this fault injection tool has been confirmed through radiation testing: a radiation test was conducted at Crocker Nuclear Laboratory using a proton accelerator to determine the actual dynamic sensitive cross section of specific FPGA designs, and the results were analyzed and compared with similar fault injection tests, suggesting that the fault injection tool's behavior is indeed accurate and valid. The tool can be used to determine the sensitivity of an FPGA design to configuration memory upsets. Additionally, fault mitigation techniques designed to keep an FPGA design reliable in spite of upsets within the configuration memory can be thoroughly tested through fault injection. Fault injection testing should help increase the feasibility of reconfigurable computing in space: FPGAs are well suited to the computational demands of space-based signal processing applications, but without appropriate mitigation or redundancy techniques they are unreliable in a radiation environment. Because the fault injection tool has been shown to reliably model the effects of single event upsets within the configuration memory, it can be used to accurately evaluate the effectiveness of fault tolerance techniques in FPGAs.
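As a rough illustration of how such a campaign estimates the dynamic sensitive cross section (a sketch under assumed interfaces, not the thesis' actual tool): flip random configuration bits, observe which upsets cause output errors, and scale the sensitive fraction by the per-bit physical cross section. The `inject_and_run` callable and the constants are hypothetical stand-ins for the board-control harness and device data.

```python
import random

def estimate_dynamic_cross_section(inject_and_run, total_config_bits,
                                   per_bit_cross_section, num_trials=10_000):
    """Estimate the dynamic sensitive cross section by random fault injection.

    inject_and_run(bit_index) -> bool is a user-supplied callable (hypothetical
    here) that reloads the golden bitstream, flips one configuration bit to
    emulate an SEU, runs the design, and returns True if the output was wrong.
    """
    sensitive = sum(inject_and_run(random.randrange(total_config_bits))
                    for _ in range(num_trials))
    fraction_sensitive = sensitive / num_trials
    # Scale the per-bit physical cross section (cm^2/bit) by the estimated
    # number of design-sensitive bits to get the dynamic sensitive cross section.
    return fraction_sensitive * total_config_bits * per_bit_cross_section
```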
112

A Study on Fault Tolerance of Image Sensor-based Object Detection in Indoor Navigation / En studie om feltolerans för bildsensorbaserad objektdetektering i inomhusnavigering

Wang, Yang January 2022 (has links)
With the fast development of embedded deep-learning computing systems, applications powered by deep learning are moving from the cloud to the edge. When a neural network (NN) is deployed on devices operating in complex environments, various types of faults are possible: soft errors caused by cosmic radiation and radioactive impurities, voltage instability, aging, temperature variations, etc. As a result, more attention is being drawn to the reliability of embedded NN systems. In this project, we build a virtual simulation system in Gazebo to simulate and test the operation of an embedded NN system for indoor navigation in a virtual environment. The system detects objects in the virtual environment with the help of a virtual camera (the image sensor) and an object detection module based on YOLO v3, and makes the corresponding control decisions. We also designed and simulated a fault injection module according to the working principle of the image sensor, and tested the functionality and fault tolerance of the YOLO network. In addition, a network-pruning algorithm is introduced to study the relationship between different degrees of network pruning and the network's fault tolerance to sensor faults.
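A minimal sketch of what an image-sensor fault injection module can look like (my illustration based on common sensor fault models — stuck pixels and single-bit upsets — not the module built in the thesis); the corrupted frames would then be fed to the YOLO v3 detector and its detections compared against a fault-free run:

```python
import numpy as np

def inject_sensor_faults(image, stuck_frac=0.001, bitflip_frac=0.001, seed=None):
    """Corrupt an HxWxC uint8 camera frame with sensor-style faults:
    a fraction of pixels stuck at 0/255 and a fraction with one bit flipped."""
    rng = np.random.default_rng(seed)
    faulty = image.copy()
    flat = faulty.reshape(-1)
    # Stuck-at faults: dead (0) or hot (255) pixels.
    stuck = rng.choice(flat.size, int(stuck_frac * flat.size), replace=False)
    flat[stuck] = rng.choice([0, 255], size=stuck.size)
    # Single-bit upsets in the 8-bit pixel value.
    flips = rng.choice(flat.size, int(bitflip_frac * flat.size), replace=False)
    flat[flips] ^= (1 << rng.integers(0, 8, size=flips.size)).astype(np.uint8)
    return faulty
```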
113

Etude de la vulnérabilité des circuits cryptographiques à l'injection de fautes par laser. / Study of the vulnerability of cryptographic circuits to laser fault injection.

Mirbaha, Amir-Pasha 20 December 2011 (has links)
Cryptographic circuits can fall victim to fault attacks targeting their hardware implementation. These attacks consist of creating intentional faults during cryptographic computations in order to infer confidential information. In the context of the security characterization of circuits, we examined the practical feasibility of some theoretical fault attack models, using a laser bench as the fault injection means. First, we performed laser-based DFA (Differential Fault Analysis) attacks on a microcontroller implementing the AES cryptographic algorithm. By precisely tuning the time and location of injection, we were able to exclude the logical effect of faults that did not match the attack model; in addition, we identified new, broader DFA attacks. We then extended our research to the discovery and implementation of new fault attack models: thanks to the precision obtained in our first experiments, we developed new round modification analysis (RMA) attacks. In conclusion, this work is a warning about the proven feasibility of the laser attacks described in the scientific literature. Our tests showed that single-byte or single-bit attacks remain feasible with a laser beam that hits several bytes of the circuit, provided the injection is accurate and combined with other techniques; they also revealed new attack possibilities. This led us to study appropriate countermeasures.
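To make the fault-model filtering concrete, here is a small worked sketch (standard DFA background, not code from the thesis): in AES-128, a single-byte fault at the input of round 9's MixColumns corrupts one state column, which round 10's ShiftRows spreads to four fixed ciphertext byte positions. The differential pattern between correct and faulty ciphertexts therefore reveals whether an injected fault matched the single-byte model.

```python
# Ciphertext byte positions (0..15, column-major AES state) affected by a
# single-byte fault in state column c at the input of round-9 MixColumns:
# after round-10 ShiftRows, row r of column c lands in column (c - r) mod 4.
PATTERNS = [frozenset(4 * ((c - r) % 4) + r for r in range(4)) for c in range(4)]
# PATTERNS == [{0,7,10,13}, {1,4,11,14}, {2,5,8,15}, {3,6,9,12}]

def matches_single_byte_model(correct_ct: bytes, faulty_ct: bytes) -> bool:
    """True iff the ciphertext differential fits the round-9 single-byte
    DFA fault model (exactly one of the four 4-byte patterns differs)."""
    diff = frozenset(i for i in range(16) if correct_ct[i] != faulty_ct[i])
    return diff in PATTERNS
```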
114

Techniques pour l'évaluation et l'amélioration du comportement des technologies émergentes face aux fautes aléatoires / Techniques for the evaluation and the improvement of emergent technologies’ behavior facing random errors

Costenaro, Enrico 09 December 2015 (has links)
The main objective of this thesis is to develop analysis and mitigation techniques to counter the effects of radiation-induced soft errors: external and internal disturbances produced by radioactive particles that affect the reliability and operational safety of complex microelectronic circuits. The thesis aims to provide industrial solutions and methodologies for terrestrial application domains requiring the highest reliability (telecommunications, medical devices, ...), complementing previous work on soft errors that was traditionally oriented toward aerospace, nuclear, and military applications. The work presented uses a decomposition of the error sources inside current circuits to highlight the most important contributors. Single Event Upsets (SEUs) in sequential logic cells are the current target for analysis and improvement efforts in both industry and academia. This thesis presents a state-aware analysis methodology that improves the accuracy of Soft Error Rate (SER) data for individual sequential instances based on the circuit and application. Furthermore, the intrinsic imbalance between the SEU susceptibility of different flip-flop states is exploited to implement a very low-cost SER improvement strategy. Single Event Transients (SETs) affecting combinational logic are considerably more difficult to model, simulate, and analyze than the closely related SEUs: the radiation environment may cause a myriad of distinctive transient pulses in the various cell types, which are used in widely different configurations. This thesis presents a practical approach to an exhaustive SET evaluation flow in an industrial setting, whose main steps are: a) fully characterizing the standard cell library using a process- and library-aware SER tool, b) evaluating SET effects in the logic networks of the circuit using a variety of dynamic (simulation-based) and static (probabilistic) methods, and c) computing overall SET figures that take into account the particularities of the circuit implementation and its environment. Fault injection remains the primary method for analyzing the impact of faults, errors, and malfunctions caused by single events. This document presents the results of a functional analysis of a complex CPU in the presence of faults, using three representative benchmarks. Accelerated simulation techniques (probabilistic calculations, clustering, parallel simulations) were proposed and evaluated in order to develop an industrial validation environment able to handle very complex circuits. The results obtained allowed the development and evaluation of a hypothetical mitigation scenario that aims to significantly improve the reliability of the circuit under test at the lowest cost: they show that the SDC (Silent Data Corruption) and DUE (Detectable Uncorrectable Error) rates can be considerably reduced by hardening a small part of the circuit (selective protection). Additional techniques were also deployed: mitigation of flip-flop soft error rates through an optimization of the Temporal De-Rating (TDR) by selectively inserting delays on flip-flop inputs or outputs, and biasing the circuit toward its less sensitive states. The methodologies, algorithms, and CAD tools proposed and validated in this work are intended for industrial use and have been integrated into a commercial CAD framework offering a complete solution for assessing the reliability of complex electronic circuits and systems.
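The state-aware SER idea can be summarized with a small worked example (illustrative formula and numbers, not the thesis' characterization data): the effective FIT rate of a flip-flop is its raw rate weighted by how often it holds each logic state and how susceptible each state is.

```python
def state_aware_ser(raw_fit, p_state1, susceptibility0, susceptibility1):
    """Effective soft error rate (FIT) of one flip-flop.

    raw_fit         -- technology-level FIT assuming worst-case susceptibility
    p_state1        -- fraction of time the flip-flop holds logic '1'
                       (from simulation traces of the target application)
    susceptibility0 -- relative SEU susceptibility in state '0' (0..1)
    susceptibility1 -- relative SEU susceptibility in state '1' (0..1)
    """
    return raw_fit * ((1 - p_state1) * susceptibility0
                      + p_state1 * susceptibility1)

# Example: a flip-flop far more vulnerable when holding '1', but holding
# '1' only 20% of the time, contributes much less than its raw figure.
print(state_aware_ser(raw_fit=100.0, p_state1=0.2,
                      susceptibility0=0.3, susceptibility1=1.0))  # -> 44.0
```

The very low-cost improvement strategy mentioned in the abstract exploits exactly this imbalance, for instance by preferring the less sensitive state for long-lived values.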
115

The transactional HW/SW stack for fault tolerant embedded computing / Pilha HW/SW transacional para computação embarcada tolerante a falhas

Ferreira, Ronaldo Rodrigues January 2015 (has links)
Fault tolerance implementation in embedded systems is challenging because of the physical constraints of area occupation, power dissipation, and energy consumption of these systems. The need for optimizing these three physical constraints while doing computation within the available performance goals and real-time deadlines creates a conundrum that is hard to solve. Classical fault tolerance solutions such as triple and dual modular redundancy are not feasible due to their high power overhead or lack of efficient and deterministic error recovery. Existing techniques, although some of them reduce the power and area overhead, incur heavy performance penalties and most of the time do not assume a feasible fault model. This dissertation introduces the Transactional HW/SW Stack, or simply Stack, to efficiently manage the area, power, fault coverage, and performance conundrum. The Stack introduces a new compilation strategy that assembles programs into Transactional Basic Blocks (TBBs), together with a novel microprocessor, the TransactiOnal Basic Block Architecture (ToBBA), which provides fine-grained error detection and deterministic error rollback and elimination using the TBB both as a container for errors and as a small unit of data checkpointing. Two solutions to sustain the TBB semantics in hardware are introduced: software- and hardware-based. The Stack's area, power, performance, and coverage were evaluated using ToBBA's hardware implementation model. The Stack attains an error correction coverage of 99.35% with a power overhead of 2.05 and an area overhead of 2.65. It also presents a performance overhead of 1.33 or 1.54, depending on the hardware model adopted to support the TBB.
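A schematic sketch of the transactional execution semantics described above (my own simplification, not ToBBA's actual microarchitecture): each basic block runs against a checkpoint of its live state, and an error detected inside the block triggers a rollback and deterministic re-execution. The `detect_error` callable stands in for the hardware's fine-grained error detection.

```python
import copy

def run_transactional(blocks, state, detect_error, max_retries=3):
    """Execute a list of basic blocks with checkpoint/rollback semantics.

    blocks       -- list of functions: block(state) -> state (one per TBB)
    detect_error -- function(state) -> bool; hypothetical stand-in for
                    ToBBA's fine-grained hardware error detection
    """
    for block in blocks:
        for attempt in range(max_retries + 1):
            checkpoint = copy.deepcopy(state)   # TBB = unit of checkpointing
            new_state = block(checkpoint)
            if not detect_error(new_state):     # commit only error-free blocks
                state = new_state
                break
            # Error contained inside the TBB: discard and re-execute.
        else:
            raise RuntimeError("persistent fault: retries exhausted")
    return state
```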
116

Etude de la fiabilité des algorithmes self-convergeants face aux soft-erreurs / Study of reliability of self-convergent algorithms with respect to soft errors

Marques, Greicy Costa 24 October 2014 (has links)
This thesis is devoted to the study of the robustness/sensitivity of a self-converging algorithm with respect to SEUs. These phenomena, also called bit flips, may modify the content of memory elements as a result of the silicon ionization produced by the impact of a charged particle. This study may have a significant impact given that ongoing miniaturization will soon yield circuits with hundreds to thousands of processing cores on a single chip, which will require the cores to communicate in an effective and robust manner. In this context, so-called self-converging algorithms can be used to ensure that communication between cores is reliable and requires no external intervention. A fault injection study of the robustness of the algorithm was performed; the algorithm was initially executed by a LEON3 processor implemented in an FPGA embedded in a dedicated test platform. Preliminary fault injection campaigns based on a state-of-the-art method called CEU (Code Emulated Upset) showed that the algorithm has some sensitivity to SEUs. To cope with this, the software was modified and fault tolerance techniques were implemented at the software level in the program implementing the self-converging algorithm. Fault injection experiments were then performed to assess the robustness of the modified algorithm to SEUs and to expose its potential Achilles' heels. The impact of SEUs was also explored on a hardware version of the self-converging algorithm implemented in an FPGA; this implementation was evaluated by fault injection experiments at the RTL level of the circuit. The results obtained with this method showed a significant improvement in the robustness of the algorithm compared with its software version.
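To make "self-converging" concrete, here is a toy sketch (my illustration, not the algorithm studied in the thesis): an averaging-consensus loop between cores that absorbs a single injected bit flip in one core's value and still reaches agreement, because each iteration pulls every value back toward its neighborhood mean.

```python
import struct

def flip_bit(x: float, bit: int) -> float:
    """Flip one bit of an IEEE-754 double to emulate an SEU in a register."""
    (raw,) = struct.unpack("<Q", struct.pack("<d", x))
    (y,) = struct.unpack("<d", struct.pack("<Q", raw ^ (1 << bit)))
    return y

values = [10.0, 20.0, 30.0, 40.0]          # one value per core
for it in range(60):
    if it == 5:                            # inject an SEU into core 2 mid-run
        values[2] = flip_bit(values[2], 52)
    # Each core averages with its ring neighbors (self-converging update).
    values = [(values[i - 1] + values[i] + values[(i + 1) % len(values)]) / 3
              for i in range(len(values))]
print(values)   # all cores agree on a common value despite the upset
```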
119

Teste de robustez de uma infraestrutura confiável para arquiteturas baseadas em serviços Web / Robustness testing of a reliable infrastructure for web service-based architectures

Maja, Willian Yabusame, 1986- 19 August 2018 (has links)
Web service-based systems are subject to several types of faults, among them those caused by the environment in which they operate, the Internet, which suffers from problems such as message delays, connection loss, and invalid messages. To keep these faults from becoming a bigger problem for clients interacting with a Web service, solutions such as Archmeds exist: a reliable infrastructure that improves the reliability and availability of Web service-based systems. However, for Archmeds to be a trustworthy solution it must itself be tested, since it is also a system that may contain defects. This work therefore proposes an approach for robustness testing of Archmeds, supported by the development of a fault injection tool called WSInject, which injects communication faults and invalid inputs into the parameters of service calls. These faults aim to emulate the real faults that affect Web services in their actual operational environment and thereby reveal failures of the system under test. Since Archmeds is a Web service composition, the work also proposes an approach for testing service compositions. Based on the results of this case study, the robustness testing approach is expected to be reusable for other Web service-based systems.
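A rough sketch of the kind of fault operators such a tool applies (illustrative only; WSInject's real operators and API are not reproduced here): mutate or delay a SOAP/XML message on its way to the service, to emulate both communication faults and invalid inputs.

```python
import random
import time

def corrupt_message(soap_xml: str) -> str:
    """Apply one randomly chosen fault operator to an outgoing SOAP message."""
    op = random.choice(["delay", "truncate", "empty", "mutate"])
    if op == "delay":                  # communication fault: late delivery
        time.sleep(random.uniform(1.0, 5.0))
        return soap_xml
    if op == "truncate":               # communication fault: partial message
        return soap_xml[: len(soap_xml) // 2]
    if op == "empty":                  # invalid input: empty payload
        return ""
    # Invalid input: inject an out-of-domain token before the first close tag.
    return soap_xml.replace("</", ">>>INVALID<<</", 1)

# A proxy sitting between client and service would pass each request through
# corrupt_message() and log how the infrastructure under test reacts.
```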
120

Dependability-driven Strategies to Improve the Design and Verification of Safety-Critical HDL-based Embedded Systems

Tuzov, Ilya 25 January 2021 (has links)
Embedded systems are steadily extending their application areas, dealing with increasing requirements in performance, power consumption, and area (PPA). Whenever embedded systems are used in safety-critical applications, they must also meet rigorous dependability requirements to guarantee their correct operation during an extended period of time. Meeting these requirements is especially challenging for systems based on Field Programmable Gate Arrays (FPGAs), since they are very susceptible to Single Event Upsets; this leads to increased dependability threats, especially in harsh environments. Dependability should therefore be considered one of the primary criteria for decision making throughout the whole design flow, complemented by several dependability-driven processes. First, dependability assessment quantifies the robustness of hardware designs against faults and identifies their weak points. Second, dependability-driven verification ensures the correctness and efficiency of fault mitigation mechanisms. Third, dependability benchmarking allows designers to select, from a dependability perspective, the most suitable IP cores, implementation technologies, and electronic design automation (EDA) tools. Finally, dependability-aware design space exploration (DSE) makes it possible to optimally configure the selected IP cores and EDA tools so as to improve as much as possible the dependability and PPA features of the resulting implementations. The aforementioned processes rely on fault injection testing to quantify the robustness of the designed systems. Although a wide variety of fault injection solutions exists nowadays, several important problems still need to be addressed to better cover the needs of a dependability-driven design flow. In particular, simulation-based fault injection (SBFI) should be adapted to implementation-level HDL models to take into account the architecture of diverse logic primitives, while keeping the injection procedures generic and low-intrusive. Likewise, the granularity of FPGA-based fault injection (FFI) should be refined to enable the accurate identification of weak points in FPGA-based designs. Another important challenge that dependability-driven processes face in practice is the reduction of SBFI and FFI experimental effort: the high complexity of modern designs raises the experimental effort beyond the available time budgets, even in simple dependability assessment scenarios, and it becomes prohibitive in the presence of alternative design configurations. Finally, dependability-driven processes lack instrumental support covering the semicustom design flow in all its variety of description languages, implementation technologies, and EDA tools; existing fault injection tools only partially cover the individual stages of the design flow, usually being specific to a particular design representation level and implementation technology. This work addresses these challenges by efficiently integrating dependability-driven processes into the design flow. First, it proposes new SBFI and FFI approaches that enable an accurate and detailed dependability assessment at different levels of the design flow. Second, it improves the performance of dependability-driven processes by defining new techniques for accelerating SBFI and FFI experiments. Third, it defines two DSE strategies that enable the optimal dependability-aware tuning of IP cores and EDA tools while reducing the robustness evaluation effort as much as possible. Fourth, it proposes a new toolkit (DAVOS) that automates and seamlessly integrates the aforementioned dependability-driven processes into the semicustom design flow. Finally, it illustrates the usefulness and efficiency of these proposals through a case study consisting of three soft-core embedded processors implemented on a Xilinx 7-series SoC FPGA.

Tuzov, I. (2020). Dependability-driven Strategies to Improve the Design and Verification of Safety-Critical HDL-based Embedded Systems [Doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/159883
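A condensed sketch of a dependability-aware DSE loop like the one the abstract describes (all flow functions here are hypothetical stand-ins, not DAVOS's API): enumerate tool/IP configurations, implement each, estimate its robustness by fault injection, and keep the Pareto-optimal set over failure rate and implementation cost.

```python
import itertools

# Hypothetical configuration space for EDA tool flags / IP parameters.
SPACE = {"synth_effort": ["low", "high"], "retiming": [False, True],
         "tmr": [False, True]}

def implement(cfg):          # placeholder: run synthesis/P&R, return area (a.u.)
    return 1.0 + 0.3 * cfg["retiming"] + 2.1 * cfg["tmr"]

def failure_rate(cfg):       # placeholder: fault injection campaign result
    return 0.20 * (0.5 if cfg["tmr"] else 1.0) * (0.9 if cfg["retiming"] else 1.0)

def pareto(points):
    """Keep points not strictly dominated in (failure_rate, area)."""
    return [p for p in points
            if not any(q[0] <= p[0] and q[1] <= p[1] and
                       (q[0] < p[0] or q[1] < p[1]) for q in points)]

configs = [dict(zip(SPACE, vals)) for vals in itertools.product(*SPACE.values())]
results = [(failure_rate(c), implement(c), c) for c in configs]
for fr, area, cfg in pareto(results):
    print(f"failure_rate={fr:.3f}  area={area:.2f}  cfg={cfg}")
```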
