51 |
Exploring the use of multiple modular redundancies for masking accumulated faults in SRAM-based FPGAs / Explorando redundância modular múltipla para mascarar falhas acumuladas em FPGAs baseados em SRAM
Olano, Jimmy Fernando Tarrillo, January 2014
Soft errors in the configuration memory bits of SRAM-based FPGAs are an important issue due to their persistence and the possibility of generating functional failures in the implemented circuit. Whenever a configuration memory bit cell is flipped, the soft error is corrected only by reloading the correct configuration memory bitstream. If the correct bitstream is not reloaded, persistent soft errors can accumulate in the configuration memory bits, provoking a functional failure of the user's design and, consequently, a catastrophic situation.
This scenario gets worse in the event of multiple-bit upsets, whose probability of occurrence keeps increasing in new nanometric technologies. Traditional strategies for dealing with soft errors in configuration memory are based on some form of triple modular redundancy (TMR) combined with scrubbing of the memory to repair faults and avoid their accumulation. The high reliability of this technique has been demonstrated in many studies; however, TMR is aimed only at masking single faults. The technology trend of shrinking transistor dimensions leads to increased susceptibility to faults. In this new scenario, multiple faults in the configuration memory of the FPGA become more common than single faults, so TMR can be inappropriate for high-reliability applications. Furthermore, since the fault rate is increasing, the scrubbing rate also needs to be increased, which raises power consumption. Aiming to cope with massive upsets occurring between sparse scrubbing cycles, this work proposes the use of a multiple-redundancy system composed of n identical modules operating in tandem, known as n-modular redundancy (nMR), together with an innovative self-adaptive voter able to mask multiple upsets in the system. The main drawback of modular redundancy is its high cost in terms of area and power consumption. However, the area overhead is less and less of a problem given the higher component density of new technologies. On the other hand, high power consumption has always been a handicap of FPGAs. This work also proposes a model to predict the power overhead caused by the use of multiple redundancy in SRAM-based FPGAs. The capacity of the proposal to tolerate multiple faults has been evaluated through radiation experiments and fault-injection campaigns on case-study circuits implemented in a commercial 65 nm FPGA. Finally, it is shown that the power overhead generated by the use of nMR in FPGAs is much lower than reported in the literature, and that nMR is an attractive and feasible solution in terms of power, area, and reliability measured in FIT and Mean Time Between Failures (MTBF).
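To make the voting scheme concrete, here is a minimal software model of an nMR majority voter with a simple self-adaptive rule: a module whose output disagrees with the majority several times in a row is excluded from subsequent votes. This is only an illustration of the concept described in the abstract, not the thesis's hardware voter; the AdaptiveVoter class, the disagreement threshold, and the example values are assumptions.

```python
from collections import Counter

class AdaptiveVoter:
    """Minimal software model of an nMR voter with self-adaptation:
    a module that disagrees with the majority `threshold` times in a row
    is excluded from subsequent votes."""

    def __init__(self, n_modules, threshold=3):
        self.enabled = [True] * n_modules   # modules still allowed to vote
        self.streak = [0] * n_modules       # consecutive disagreements per module
        self.threshold = threshold

    def vote(self, outputs):
        active = [i for i, ok in enumerate(self.enabled) if ok]
        majority, _ = Counter(outputs[i] for i in active).most_common(1)[0]
        for i in active:
            if outputs[i] != majority:
                self.streak[i] += 1
                if self.streak[i] >= self.threshold:
                    self.enabled[i] = False  # mask the presumably faulty module
            else:
                self.streak[i] = 0
        return majority

# Example: a 5MR system in which module 2 has accumulated faults.
voter = AdaptiveVoter(n_modules=5)
for _ in range(4):
    print(hex(voter.vote([0xA5, 0xA5, 0x00, 0xA5, 0xA5])))  # prints 0xa5 each time
```

After the third disagreement, module 2 no longer participates, so the vote keeps producing the correct value even if further modules later accumulate faults.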
|
54 |
Flexible Fault Tolerance for the Robot Operating System
Marok, Sukhman S., 01 June 2020
The introduction of autonomous vehicles has the potential to reduce the number of accidents and save countless lives. These benefits can only be realized if autonomous vehicles can prove to be safer than human drivers. There is a large amount of active research around developing robust algorithms for all parts of the autonomous vehicle stack including sensing, localization, mapping, perception, prediction, planning, and control. Additionally, some of these research projects have involved the use of the Robot Operating System (ROS). However, another key aspect of realizing an autonomous vehicle is a fault-tolerant design that can ensure the safe operation of the vehicle under unfavorable conditions.
The goal of this thesis is to evaluate the feasibility of adding a dedicated fault tolerance module into a ROS-based architecture. The fault tolerance module is used to implement a safety controller that can take over safety-critical operations of the system when a fault is detected in the main computer. A Xilinx Zynq-7000 SoC with a dual-core ARM Cortex-A9 and an FPGA programmable logic region is chosen as the platform. The platform works in the Asymmetric Multiprocessing (AMP) configuration, with a Linux-based operating system on one core and a real-time operating system (RTOS) on the other. Results are gathered from an implementation done on a ROS-based mobile robot platform.
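As a sketch of how a fault in the main computer might be detected so that a safety controller can take over, the following ROS 1 (rospy) node monitors a heartbeat topic and publishes a takeover flag when heartbeats stop. The topic names, timeout, and message types are hypothetical choices for illustration and are not taken from the thesis.

```python
#!/usr/bin/env python
# Hypothetical heartbeat watchdog: if the main computer stops publishing
# heartbeats, raise a flag so a safety controller can take over.
import rospy
from std_msgs.msg import Empty, Bool

HEARTBEAT_TIMEOUT = 0.5  # seconds without a heartbeat before declaring a fault

class Watchdog:
    def __init__(self):
        self.last_beat = rospy.Time.now()
        self.takeover_pub = rospy.Publisher('/safety/takeover', Bool, queue_size=1)
        rospy.Subscriber('/main_computer/heartbeat', Empty, self.on_heartbeat)
        rospy.Timer(rospy.Duration(0.1), self.check)

    def on_heartbeat(self, _msg):
        self.last_beat = rospy.Time.now()

    def check(self, _event):
        faulted = (rospy.Time.now() - self.last_beat).to_sec() > HEARTBEAT_TIMEOUT
        self.takeover_pub.publish(Bool(data=faulted))

if __name__ == '__main__':
    rospy.init_node('safety_watchdog')
    Watchdog()
    rospy.spin()
```

On the Zynq AMP setup described above, the analogous check could run on the RTOS core so that it keeps working if the Linux side fails.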
|
55 |
Scalable Byzantine State Machine Replication: Designs, Techniques, and Implementations
Arun, Balaji, 02 July 2021
State machine replication (SMR) is one of the most widely studied and used methodologies for building highly available distributed applications and services. SMR replicates a service across a set of computing hosts and executes client operations on the replicas in an agreed-upon total order, ensuring linearizability of the replicated shared state. The problem of determining a total order reduces to one of computing consensus.
State-of-the-art consensus protocols are inadequate for newer classes of applications such as Blockchains and for geographically distributed infrastructures. The widely used Crash Fault Tolerance (CFT) fault model of consensus protocols is vulnerable to malicious and adversarial behaviors as well as to non-crash faults such as software bugs. The Byzantine fault tolerance (BFT) model and its trust-based variant, the hybrid model, permit stronger failure adversaries. However, state-of-the-art Byzantine and hybrid consensus protocols have performance limitations in geographically distributed environments: they designate a primary replica for proposing total orders, which becomes a bottleneck and yields sub-optimal latencies for faraway clients. Additionally, they neither scale to hundreds of replicas nor provide consistent performance as the system size grows.
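A worked example of what the fault models imply for system size: classic CFT protocols such as Paxos and Raft need n ≥ 2f+1 replicas to tolerate f crash faults, BFT protocols need n ≥ 3f+1 to tolerate f Byzantine faults, and hybrid protocols that rely on a small trusted component bring the requirement back down to 2f+1. The helper below simply evaluates these standard formulas; it is illustrative and is not code from the dissertation.

```python
def replicas_needed(f, model):
    """Minimum replicas needed to tolerate f faults under the standard assumptions."""
    if model == 'cft':      # crash faults only (e.g., Paxos, Raft)
        return 2 * f + 1
    if model == 'bft':      # arbitrary (Byzantine) faults (e.g., PBFT)
        return 3 * f + 1
    if model == 'hybrid':   # Byzantine faults plus a trusted component per replica
        return 2 * f + 1
    raise ValueError(model)

# Tolerating f = 2 faults:
for m in ('cft', 'hybrid', 'bft'):
    print(m, replicas_needed(2, m))   # cft 5, hybrid 5, bft 7
```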
To overcome these limitations and develop highly scalable SMR solutions, this dissertation presents two leaderless consensus protocols, namely ezBFT and Dester, for the Byzantine and hybrid models, respectively. These protocols enable every replica to receive and order client commands. Additionally, they exchange command dependencies to collectively order commands without relying on a primary. Our experimental evaluations in a 7-node geographically distributed setup reveal that ezBFT improves client-side latency by as much as 40% over state-of-the-art BFT protocols including PBFT, FaB, and Zyzzyva. Dester, for the hybrid model, reduces latency by as much as 30% over ezBFT.
Next, the dissertation presents a new paradigm called DQBFT for designing consensus protocols that can scale to hundreds of nodes in geographically distributed environments. Since leaderless protocols exchange command dependencies, they do not scale to hundreds of nodes. DQBFT overcomes this scalability limitation by decentralizing only the heavy task of replicating commands and centralizing the process of ordering the commands. While DQBFT can be used to enhance existing primary-based protocols, Destiny is a hybrid instantiation of the DQBFT paradigm that uses linear communication for better scalability than naive instantiations. Experimental evaluations in a 193-node geographically distributed setup reveal that Destiny achieves ≈3× better throughput and ≈50% better latency than state-of-the-art BFT protocols including HotStuff, SBFT, and Hybster.
Lastly, the dissertation presents two techniques for designing and implementing BFT protocols with reduced development costs. The dissertation presents Bumblebee, a methodology for manually transforming CFT protocols to tolerate Byzantine faults using trusted execution environments that are increasingly available in commodity hardware. Bumblebee is based on the observation that CFT protocols are incapable of tolerating non-malicious non-crash faults, but they are nevertheless deployed in many production systems. Bumblebee provides a Generic Algorithm that can represent protocols in both CFT and hybrid fault models, thus allowing easy construction of hybrid protocols using CFT protocols as baselines. The dissertation constructs hybrid instantiations of CFT protocols including Paxos, Raft, and M2Paxos. Experimental evaluations of the hybrid variants reveal that they perform on par with native hybrid protocols, but incur a 30% overhead over their CFT counterparts.
Hybrid protocols rely on the integrity of trusted execution environments, which are increasingly subject to security exploits. To withstand exploits, the dissertation presents DuoBFT, a protocol that exposes both the BFT and hybrid fault models within a single consensus protocol. This enables consensus under both fault models within the same protocol and without additional redundancy, allowing DuoBFT to achieve the performance of hybrid protocols and the security of BFT protocols. Experimental evaluations reveal that DuoBFT achieves the best of both the hybrid and BFT fault models with less than 10% overhead. / Doctor of Philosophy / Computers are ubiquitous; they perform some of the most complex and safety-critical tasks, such as controlling aircraft, managing the financial markets, and maintaining sensitive medical records. The undeniable fact is that computers are faulty. They are prone to crashes and can behave arbitrarily. Even the most robust computers, such as those that are sent to outer space, eventually fail. External phenomena such as power outages and network disruptions affect their operation.
To make computing systems reliable, researchers and practitioners have long focused on interconnecting many individual computers and programming them to effectively be duplicates of one another. This way, when one computer fails in a system, the rest of the computers still ensure that the system as a whole is operational. Duplication requires that multiple computers effectively perform the same task. In order for multiple computers to perform the same task together, they should first agree on the task. More generally, since computing systems perform multiple tasks, they should agree on the sequence of tasks that they will individually perform and follow the agreement. This is what is known as the State Machine Replication technique.
State Machine Replication (SMR) is a powerful technique that is applicable to numerous computing applications. Blockchain systems, the technology behind cryptocurrencies such as Bitcoin and Ethereum, use the SMR technique. In the context of Blockchain, the added challenge is that some of the computers involved in SMR can be programmed by adversarial parties and could act in a way that jeopardizes the integrity of the whole system. For Bitcoin and Ethereum, this could mean embezzlement of hundreds or even millions of dollars worth of cryptocurrencies. Certain SMR systems are capable of tolerating such intrusions and ensuring system integrity. Such systems are deemed to be Byzantine tolerant.
This dissertation presents designs, techniques, and implementations of Byzantine State Machine Replication systems. The problems addressed in this dissertation are those that plague existing Byzantine SMR systems, making them suboptimal for newer applications such as Blockchains. First, when the computers that participate in SMR are spread around the world, their performance depends on the communication latencies between each pair of computers. Second, the number of computers required is proportional to the number of adversarial computers that need to be tolerated. Consequently, certain SMR systems for Blockchains require hundreds of computers to tolerate heavy adversarial behavior. Many existing SMR techniques perform poorly under these scenarios. The techniques presented in this dissertation address various permutations of these challenges.
|
56 |
Bringing Fault Tolerance to Hardware Managers in PESNet
Lee, Yoon-Soo, 25 September 2006
The goal of this research is to improve the communications protocol for Dual Ring Power Electronics Systems, called PESNet. The thesis will focus on making the protocol operate more reliably by tolerating Hardware Manager failures and allowing failover among duplicate Hardware Managers within PEBB-based systems. In order to make this possible, two new features must be added to PESNet: utilization of the secondary ring for fault-tolerant communication, and dynamic reconfiguration of the network. Many ideas for supporting fault tolerance have been discussed in previous work, and the hardware for PEBB-based systems was designed to support fault tolerance. However, in spite of the capabilities of the hardware, fault tolerance is not yet supported by existing firmware or software. Improving the PESNet protocol to tolerate Hardware Manager failures will increase the reliability of power electronics systems. Moreover, the additional features that are needed to perform failover also allow recovery from link failures and make hot-swap or plug-and-play of PEBBs possible. Since power electronics systems are real-time systems, it is critical that packets be delivered to their destination as soon as possible. The network latency will limit the granularity of time that the control application can operate on. As a result, methods to implement the required features while meeting real-time system requirements are discussed and changes to the protocol are proposed. Changing PESNet will provide reliability gains, depending on the reliability of the components that are used to construct the system. / Master of Science
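The following toy model illustrates the failover behavior that the secondary ring makes possible: a node keeps transmitting on the primary ring while heartbeats from the active Hardware Manager arrive, and switches to the secondary ring when they stop. The class names, the 10 ms timeout, and the stub ring objects are assumptions for illustration; the real PESNet protocol operates at the firmware level and is not shown here.

```python
import time

HEARTBEAT_TIMEOUT = 0.01  # 10 ms, an assumed real-time budget

class StubRing:
    def __init__(self, name):
        self.name = name
    def transmit(self, packet):
        print(self.name, packet)

class DualRingNode:
    """Toy model of failover between the primary and secondary rings."""

    def __init__(self, primary, secondary):
        self.rings = {'primary': primary, 'secondary': secondary}
        self.active = 'primary'
        self.last_heartbeat = time.monotonic()

    def on_heartbeat(self):
        # Called whenever a heartbeat from the active Hardware Manager arrives.
        self.last_heartbeat = time.monotonic()

    def send(self, packet):
        # If the active manager has gone silent, switch to the other ring.
        if time.monotonic() - self.last_heartbeat > HEARTBEAT_TIMEOUT:
            self.active = 'secondary' if self.active == 'primary' else 'primary'
            self.last_heartbeat = time.monotonic()  # grace period for the new ring
        self.rings[self.active].transmit(packet)

node = DualRingNode(StubRing('ring A'), StubRing('ring B'))
node.send(b'\x01')      # sent on the primary ring
time.sleep(0.02)        # heartbeats missed
node.send(b'\x02')      # failover: sent on the secondary ring
```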
|
57 |
Metamori: A library for Incremental File Checkpointing
Jeyakumar, Ashwin Raju, 21 June 2004
The advent of cluster computing has resulted in a thrust towards providing software mechanisms for reliability on clusters. The prevalent model for such mechanisms is to take a snapshot of the state of an application, called a checkpoint, and commit it to stable storage. This checkpoint has sufficient metadata that, if the application fails, it can be restarted from the checkpoint. This operation is called a restore. In order to record a process's complete state, both its volatile and persistent state must be checkpointed. Several libraries exist for checkpointing volatile state. Some of these libraries feature incremental checkpointing, where only the changes since the last checkpoint are recorded in the next checkpoint. Such incremental checkpointing is advantageous because, otherwise, the time taken for each successive checkpoint grows larger and larger. Also, when checkpointing is done in increments, state can be restored to any of the previous checkpoints, a vital feature for adaptive applications. This thesis presents Metamori, a user-level incremental checkpointing library for files. It brings the advantages of incremental memory checkpointing to files as well, thereby providing a low-overhead approach to checkpointing persistent state. Thus, the complete state of an application can now be incrementally checkpointed, as compared to earlier approaches where volatile state was checkpointed incrementally but persistent state had no such facilities. / Master of Science
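The core mechanism of incremental file checkpointing can be sketched as: split the file into fixed-size blocks, hash each block, and persist only the blocks whose hashes changed since the previous checkpoint, along with the full hash table as metadata. The sketch below is a simplified, user-level illustration of that idea; the function names, block size, and JSON format are assumptions and do not reflect Metamori's actual interface or on-disk layout.

```python
import hashlib
import json
import os

BLOCK_SIZE = 4096  # assumed block granularity

def checkpoint(path, ckpt_dir, prev_hashes=None):
    """Write an incremental checkpoint of `path` and return the new block hashes."""
    prev_hashes = prev_hashes or {}
    new_hashes, changed_blocks = {}, {}
    with open(path, 'rb') as f:
        index = 0
        while True:
            block = f.read(BLOCK_SIZE)
            if not block:
                break
            digest = hashlib.sha256(block).hexdigest()
            new_hashes[str(index)] = digest
            if prev_hashes.get(str(index)) != digest:   # block changed (or is new)
                changed_blocks[str(index)] = block.hex()
            index += 1
    ckpt_id = len(os.listdir(ckpt_dir))                 # naive increment numbering
    with open(os.path.join(ckpt_dir, 'ckpt_%d.json' % ckpt_id), 'w') as out:
        json.dump({'blocks': changed_blocks, 'hashes': new_hashes}, out)
    return new_hashes

# Example: the second checkpoint stores only the blocks that changed.
# os.makedirs('ckpts', exist_ok=True)
# h1 = checkpoint('data.bin', 'ckpts')
# h2 = checkpoint('data.bin', 'ckpts', prev_hashes=h1)
```

Restoring to increment k would start from the base checkpoint and replay the stored deltas up to k, which is what makes rolling back to any earlier checkpoint cheap.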
|
58 |
Sensitivity of Feedforward Neural Networks to Harsh Computing Environments
Arechiga, Austin Podoll, 08 August 2018
Neural networks have proven themselves very adept at solving a wide variety of problems; in particular, they excel at image processing. However, it remains unknown how well they perform under memory errors. This thesis focuses on the robustness of neural networks under memory errors, specifically single-event-upset-style errors where single bits flip in a network's trained parameters. The main goal of these experiments is to determine if different neural network architectures are more robust than others. Initial experiments show that MLPs are more robust than CNNs. Within MLPs, deeper MLPs are more robust, and for CNNs, larger kernels are more robust. Additionally, the CNNs displayed bimodal failure behavior, where memory errors would either not affect the performance of the network or would degrade its performance to be on par with random guessing. VGG16, ResNet50, and InceptionV3 were also tested for their robustness. ResNet50 and InceptionV3 were both more robust than VGG16. This could be due to their use of Batch Normalization or the fact that ResNet50 and InceptionV3 both use shortcut connections in their hidden layers. After determining which networks were most robust, estimated error rates from neutrons were calculated for space environments to determine if these architectures were robust enough to survive. It was determined that large MLPs, ResNet50, and InceptionV3 could survive in Low Earth Orbit on commercial memory technology using only software error correction. / Master of Science / Neural networks are a new kind of algorithm that is revolutionizing the field of computer vision. Neural networks can be used to detect and classify objects in pictures or videos with accuracy on par with human performance. Neural networks achieve such good performance after a long training process during which many parameters are adjusted until the network can correctly identify objects such as cats, dogs, trucks, and more. These trained parameters are then stored in a computer's memory and recalled whenever the neural network is used for a computer vision task. Some computer vision tasks are safety-critical, such as a self-driving car's pedestrian detector. An error in that detector could lead to loss of life, so neural networks must be robust against a wide variety of errors. This thesis focuses on a specific kind of error: bit flips in the parameters of a neural network stored in a computer's memory. The main goal of these bit flip experiments is to determine if certain kinds of neural networks are more robust than others. Initial experiments show that MLP (Multilayer Perceptron) style networks are more robust than CNNs (Convolutional Neural Networks). For MLP-style networks, making the network deeper with more layers increases both the accuracy and the robustness of the network. However, for the CNNs, increasing the depth only increased the accuracy, not the robustness. The robustness of the CNNs displayed an interesting trend of bimodal failure behavior, where memory errors would either not affect the performance of the network or would degrade its performance to be on par with random guessing. A second set of experiments was run to focus more on CNN robustness because CNNs are much more capable than MLPs. The second set of experiments focused on the robustness of VGG16, ResNet50, and InceptionV3. These CNNs are all very large and have very good performance on real-world datasets such as ImageNet.
Bit flip experiments showed that ResNet50 and InceptionV3 were both more robust than VGG16. This could be due to their use of Batch Normalization or the fact that ResNet50 and InceptionV3 both use shortcut connections within their network architecture. However, all three networks still displayed the bimodal failure mode seen previously. After determining which networks were most robust, estimated error rates were calculated for a real-world environment. The chosen environment was the space environment because it naturally causes a high rate of bit flips in memory, so if NASA were to use neural networks on any rovers, it would need to make sure the neural networks are robust enough to survive. It was determined that large MLPs, ResNet50, and InceptionV3 could survive in Low Earth Orbit on commercial memory technology using only software error correction. Relying only on software error correction will allow satellite makers to build more advanced satellites without paying extra money for radiation-hardened electronics.
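Single-event-upset-style experiments of this kind are typically emulated in software by flipping one bit of a stored parameter and re-measuring accuracy. The sketch below shows such an injection for a float32 weight array; the random weights and the stand-in evaluate() function are placeholders for a real trained network and test set, and this is not the author's experimental harness.

```python
import numpy as np

def flip_bit(weights, flat_index, bit):
    """Return a copy of a float32 weight array with one bit flipped,
    emulating a single-event upset in the stored parameters."""
    flipped = weights.astype(np.float32)         # astype copies by default
    bits = flipped.reshape(-1).view(np.uint32)   # reinterpret the raw 32-bit words
    bits[flat_index] ^= np.uint32(1 << bit)      # flip one of the 32 bits
    return flipped

# Hypothetical campaign: random weights and a stand-in evaluation function
# take the place of a real trained network and test set.
rng = np.random.default_rng(0)
weights = rng.standard_normal((128, 64)).astype(np.float32)

def evaluate(w):
    return float(np.mean(np.tanh(w)))            # placeholder "accuracy" metric

baseline = evaluate(weights)
for _ in range(5):
    idx = int(rng.integers(weights.size))
    bit = int(rng.integers(32))
    print(bit, evaluate(flip_bit(weights, idx, bit)) - baseline)
```

Flips in the high-order exponent bits tend to cause the large output swings behind the bimodal failure behavior described above, while flips in low-order mantissa bits are usually harmless.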
|
59 |
SIMD-Swift: Improving Performance of Swift Fault Detection
Oleksenko, Oleksii, 20 January 2016
The general tendency in modern hardware is an increase in fault rates, caused by decreased operating voltages and feature sizes. Previously, the issue of hardware faults was mainly approached only in high-availability enterprise servers and in safety-critical applications, such as the transport or aerospace domains. These fields generally have very tight requirements, but also higher budgets. However, as fault rates increase, fault tolerance solutions are starting to be required also in applications that have much smaller profit margins. This brings to the front the idea of software-implemented hardware fault tolerance, that is, the ability to detect and tolerate hardware faults using software-based techniques in commodity CPUs, which allows resilience to be obtained almost for free. Current solutions, however, are lacking in performance, even though they show quite good fault tolerance results.
This thesis explores the idea of using Single Instruction Multiple Data (SIMD) technology to execute all of a program's operations on two copies of the same data. The idea is based on the observation that SIMD is ubiquitous in modern CPUs and is usually an underutilized resource. It allows us to detect bit-flips in hardware by a simple comparison of the two copies, under the assumption that only one copy is affected by a fault.
We implemented this idea as a source-to-source compiler which performs hardening of a program at the source code level. The evaluation of our several implementations shows that the approach is beneficial for applications dominated by arithmetic or logical operations, but applications with more control-flow or memory operations actually perform better with regular instruction replication. For example, we managed to get only 15% performance overhead on the Fast Fourier Transformation benchmark, which is dominated by arithmetic instructions, but the memory-access-dominated Dijkstra algorithm showed a high overhead of 200%.
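The detection idea, keeping two copies of every value, computing on both, and comparing them before a result is used, can be illustrated without actual vector instructions. In the thesis the two copies occupy halves of a SIMD register and the comparison is a single vector instruction; the plain-Python sketch below only emulates that behavior to show where detection happens. The Dup class, the operations covered, and the error-handling policy are assumptions.

```python
class Dup:
    """Keeps every value in two copies; a mismatch between the copies
    signals that a hardware fault corrupted one of them."""

    def __init__(self, value, shadow=None):
        self.a = value
        self.b = value if shadow is None else shadow

    def __add__(self, other):
        # Both copies are computed, mirroring the duplicated SIMD lanes.
        return Dup(self.a + other.a, self.b + other.b)

    def value(self):
        if self.a != self.b:          # corresponds to the vector compare
            raise RuntimeError('bit-flip detected: copies diverged')
        return self.a

x = Dup(40) + Dup(2)
print(x.value())                      # 42

corrupted = Dup(40, shadow=40 ^ (1 << 3))   # simulate an upset in one copy
try:
    (corrupted + Dup(2)).value()
except RuntimeError as err:
    print(err)
```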
|