111

A Multi-leader Approach to Byzantine Fault Tolerance : Achieving Higher Throughput Using Concurrent Consensus

Abid, Muhammad Zeeshan January 2015 (has links)
Byzantine Fault Tolerant protocols are complicated and hard to implement. Today's software industry is reluctant to adopt these protocols because of the high overhead of message exchange in the agreement phase and the high resource consumption necessary to tolerate faults (as 3f + 1 replicas are required to tolerate f faults). Moreover, total ordering of messages is needed by most classical protocols to provide strong consistency in both the agreement and execution phases. In recent years, research has improved the throughput of the execution phase by introducing concurrency using modern multicore infrastructures; however, improving the agreement phase remains an open area. Byzantine Fault Tolerant systems use State Machine Replication to tolerate a wide range of faults. The approach uses leader-based consensus algorithms for the deterministic execution of the service on all replicas, ensuring that all correct replicas reach the same state. For this purpose, several algorithms have been proposed to provide total ordering of messages through an elected leader. A single leader is usually considered a bottleneck, as it cannot provide the desired throughput for real-time software services. Achieving higher throughput therefore requires a solution that can execute multiple consensus rounds concurrently. We present a solution that enables multiple consensus rounds in parallel by choosing multiple leaders. By enabling concurrent consensus, our approach can execute several requests in parallel. We incorporate application-specific knowledge to split the total order of events into multiple partial orders that are causally consistent, in order to ensure safety. Furthermore, a dependency check is required for every client request before it is assigned to a particular leader for agreement. This methodology relies on optimistic prediction of dependencies to provide higher throughput. We also propose a solution to correct the course of execution without rolling back if dependencies were wrongly predicted. Our evaluation shows that in normal cases this approach can achieve up to 100% higher throughput than conventional approaches for large numbers of clients. We also show that this approach has the potential to perform better in complex scenarios.
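As an illustrative aside (not code from the thesis), the sketch below shows one way client requests might be assigned to one of several leaders based on optimistically predicted dependencies, so that requests touching disjoint state are ordered by different leaders in parallel; the leader indices, key names, and conflict fallback rule are all hypothetical.

```python
# Illustrative sketch: dispatching requests to multiple consensus leaders based
# on predicted data dependencies, so independent requests run in parallel.
from collections import defaultdict

class MultiLeaderDispatcher:
    def __init__(self, num_leaders):
        self.num_leaders = num_leaders
        self.key_to_leader = {}            # keys already bound to a leader (partial order)
        self.queues = defaultdict(list)    # per-leader request queues

    def predict_keys(self, request):
        # Hypothetical application-specific dependency prediction: we simply
        # assume the request names the state keys it touches.
        return request["keys"]

    def dispatch(self, request):
        keys = self.predict_keys(request)
        leaders = {self.key_to_leader[k] for k in keys if k in self.key_to_leader}
        if len(leaders) > 1:
            # Conflicting partial orders: fall back to one designated leader
            # (a stand-in for the thesis's course correction without rollback).
            leader = min(leaders)
        elif len(leaders) == 1:
            leader = leaders.pop()
        else:
            # No prior dependency: pick the least-loaded leader.
            leader = min(range(self.num_leaders), key=lambda l: len(self.queues[l]))
        for k in keys:
            self.key_to_leader[k] = leader
        self.queues[leader].append(request)
        return leader

d = MultiLeaderDispatcher(num_leaders=3)
print(d.dispatch({"id": 1, "keys": ["acct:A"]}))   # e.g. leader 0
print(d.dispatch({"id": 2, "keys": ["acct:B"]}))   # a different leader, runs concurrently
print(d.dispatch({"id": 3, "keys": ["acct:A"]}))   # same leader as request 1 (causal order)
```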
112

Intersection Collision Avoidance For Autonomous Vehicles Using Petri Nets

Shankar Kumar, Valli Sanghami 08 1900 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / Autonomous vehicles currently dominate the automobile field because of their impact on humanity and society. Connected and Automated Vehicles (CAVs) are vehicles that use different communication technologies to communicate with other vehicles, infrastructure, the cloud, etc. Using the information received from their sensors, the vehicles analyze the situation and take the steps necessary for smooth, collision-free driving. This thesis presents a cruise control system along with an intersection collision avoidance system based on Petri net models. The system consists of two internal controllers, for velocity and distance control respectively, and three external ones for collision avoidance. Fault-tolerant redundant controllers are designed to keep these three controllers in check. The model is built using a PN toolbox and tested for various scenarios. The model is also validated, and its distinct properties are analyzed.
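For readers unfamiliar with the formalism, the following minimal Petri net sketch (not the thesis model; all place and transition names are hypothetical) illustrates the token marking and firing rule that such controller models build on.

```python
# Illustrative sketch: a minimal Petri net with places, transitions, and a
# firing rule; the example uses a single "intersection free" token to enforce
# mutual exclusion at an intersection.
class PetriNet:
    def __init__(self, marking):
        self.marking = dict(marking)          # place -> token count
        self.transitions = {}                 # name -> (input places, output places)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) >= 1 for p in inputs)

    def fire(self, name):
        if not self.enabled(name):
            raise ValueError(f"transition {name} is not enabled")
        inputs, outputs = self.transitions[name]
        for p in inputs:
            self.marking[p] -= 1
        for p in outputs:
            self.marking[p] = self.marking.get(p, 0) + 1

net = PetriNet({"approaching": 1, "intersection_free": 1})
net.add_transition("enter", inputs=["approaching", "intersection_free"], outputs=["crossing"])
net.add_transition("leave", inputs=["crossing"], outputs=["done", "intersection_free"])
net.fire("enter")
print(net.marking)   # {'approaching': 0, 'intersection_free': 0, 'crossing': 1}
```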
113

Fault tolerance and re-training analysis on neural networks

George, Abhinav Kurian 09 July 2019 (has links)
No description available.
114

Hardware Assertions for Mitigating Single-Event Upsets in FPGAs

Dumitrescu, Stefan January 2020 (has links)
The memory cells used in modern field programmable gate arrays (FPGAs) are highly susceptible to single event upsets (SEUs). The typical mitigation strategy in industry is some form of hardware redundancy, such as duplication with comparison (DWC) or triple modular redundancy (TMR). While this strategy is highly effective in masking the effect of faults, it incurs a large hardware cost. In this thesis, we explore a different approach to hardware redundancy. The core idea of our approach is to exploit the conflict-driven clause learning (CDCL) mechanism in modern Boolean satisfiability (SAT) solvers to provide invariants that can be realized as hardware checkers. Furthermore, we develop the algorithms required to select a subset of these invariants to be included in a checker circuit, which then augments the original FPGA design. We find which look-up table (LUT) memory cells are sensitive to bitflips, then automatically generate a checker circuit consisting of hardware invariants targeted towards those faults. We aim to reach 100% coverage of sensitizable faults. After extensive experimentation, we conclude that this approach is not competitive with DWC with respect to hardware area. However, we demonstrate that many bitflips will have a reduced detection latency compared to DWC. / Thesis / Master of Applied Science (MASc)
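As a hedged illustration of the selection step (not the thesis's actual algorithm), a greedy set-cover-style sketch for choosing a subset of candidate invariant checkers that covers all sensitizable faults might look like this; the checker names and fault ids are invented.

```python
# Illustrative sketch: greedily select checkers so every sensitizable fault is
# detected by at least one selected checker, trading checker area for coverage.
def select_checkers(candidates, faults):
    """candidates: dict checker_name -> set of fault ids it detects.
       faults: set of fault ids that must be covered."""
    uncovered = set(faults)
    chosen = []
    while uncovered:
        # Pick the checker that detects the most currently uncovered faults.
        best = max(candidates, key=lambda c: len(candidates[c] & uncovered))
        gain = candidates[best] & uncovered
        if not gain:
            break                      # remaining faults undetectable by any candidate
        chosen.append(best)
        uncovered -= gain
    return chosen, uncovered

checkers = {
    "inv_a": {1, 2, 3},
    "inv_b": {3, 4},
    "inv_c": {5},
}
selected, missed = select_checkers(checkers, faults={1, 2, 3, 4, 5})
print(selected, missed)   # ['inv_a', 'inv_b', 'inv_c'] set()
```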
115

Adaptive Algorithms for Fault Tolerant Re-Routing in Wireless Sensor Networks

Gregoire, Michael S 01 January 2007 (has links) (PDF)
No description available.
116

The G-Network and Its Inherent Fault Tolerant Properties

Haynes, Teresa, Dutton, Ronald D. 01 January 1990 (has links)
This paper presents the G-network, a new topological design which is a suitable architecture for point-to-point communication and interconnection networks. We show that the G-network has the following desirable characteristics: efficient routing, a small number of links, and fault tolerance. The performance of the G-network is compared to that of the Barrel Shifter and Illiac Mesh networks.
117

COPING WITH DISCREPANCIES OF THE MANUFACTURED WEIGHTS IN THRESHOLD LOGIC GATES

Goparaju, Manoj Kumar 01 December 2009 (has links)
Threshold logic is regarded as a crucial emerging alternative to CMOS implementation in the nanoelectronic era, and realizing complex functionalities with it is becoming an increasingly promising approach in the deep sub-micron design era. A gate implemented with threshold logic is called a Threshold Logic Gate (TLG). The logic output value of a TLG depends on the weighted sum of its inputs. Manufactured weights in TLGs may differ from the designed values, which significantly affects fault coverage. A novel fault model for weight defects is proposed. An Automatic Test Pattern Generation (ATPG) tool has also been implemented that uses the fault model to detect whether a circuit is malfunctioning due to such weight-related defects. A novel design methodology is presented in this work to design complex TLG networks that are tolerant to these manufacturing shortcomings; it uses a procedure to identify the optimum fault-tolerant design of any given k-input TLG. Extensive research has been done in the past on the development of synthesis methodologies, which are predominantly greedy. A fault-tolerance-aware synthesis methodology is proposed.
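As a small illustration of the TLG relation described above (the weights, threshold, and input patterns below are invented), the output is 1 exactly when the weighted input sum reaches the threshold, and a manufacturing deviation in a weight can flip the output for some input patterns, which is the defect class the fault model targets.

```python
# Illustrative sketch: a threshold logic gate fires when the weighted sum of
# its inputs meets or exceeds the threshold.
def tlg_output(inputs, weights, threshold):
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

# A 3-input majority gate as a TLG: weights (1, 1, 1), threshold 2.
print(tlg_output((1, 1, 0), (1, 1, 1), 2))    # 1
print(tlg_output((1, 0, 0), (1, 1, 1), 2))    # 0

# A manufactured weight deviating from its designed value (1 -> 0.4)
# flips the output for the first pattern.
print(tlg_output((1, 1, 0), (1, 0.4, 1), 2))  # 0 instead of 1
```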
118

Preemptive Placement and Routing for In-Field FPGA Repair

Jensen, Joshua E. 01 March 2015 (has links) (PDF)
With the growing density and shrinking feature size of modern semiconductors, it is increasingly difficult to manufacture defect-free semiconductors that maintain acceptable levels of reliability for long periods of time. These systems are increasingly susceptible to wear-out, eventually failing to meet their operational specifications. The reconfigurability of FPGAs can be used to repair post-manufacturing faults by configuring the FPGA to avoid a damaged resource. This thesis presents a method for preemptively preparing to repair FPGA devices with wear-out faults by precomputing a set of repair circuits that, collectively, can repair a fault found in any logic block of the FPGA. The approach relies on logic placement and routing to create “repair” circuits that avoid specific logic blocks; these repairs can be used when the corresponding resource has failed. New placement and routing algorithms are proposed for generating such repair circuits. The number of repairs needed to create a complete repair set depends heavily on the utilization of FPGA resources. The algorithms are tested against several benchmarks and with multiple area constraints for each benchmark. Using this work, on average 20 repair configurations were needed to repair 99% of permanent faults.
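A minimal sketch, under assumed data structures (the configuration and logic block names are hypothetical, not from the thesis), of how a precomputed repair set might be consulted once a specific logic block is diagnosed as faulty:

```python
# Illustrative sketch: look up a precomputed repair configuration that avoids
# the logic block diagnosed as faulty, and load it if one exists.
repair_set = {
    "cfg_0": {"avoids": {"CLB_12", "CLB_13"}},
    "cfg_1": {"avoids": {"CLB_40"}},
    "cfg_2": {"avoids": {"CLB_7", "CLB_40", "CLB_41"}},
}

def pick_repair(failed_block, repair_set):
    for name, cfg in repair_set.items():
        if failed_block in cfg["avoids"]:
            return name
    return None   # no precomputed repair covers this block

print(pick_repair("CLB_40", repair_set))   # 'cfg_1'
print(pick_repair("CLB_99", repair_set))   # None -> fault not covered by this repair set
```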
119

Fault-tolerance in HLA-based distributed simulations

Eklöf, Martin January 2006 (has links)
Successful integration of simulations within the Network-Based Defence (NBD), specifically the use of simulations within Command and Control (C2) environments, imposes a number of requirements. Simulations must be reliable and able to respond in a timely manner; otherwise the commander will have no confidence in using simulation as a tool. An important aspect of these requirements is the provision of fault-tolerant simulations in which failures are detected and resolved in a consistent manner. Given the distributed nature of many military simulation systems, services for fault-tolerance in distributed simulations are desirable. The main architecture for distributed simulations within the military domain, the High Level Architecture (HLA), does not provide support for the development of fault-tolerant simulations. A common approach to fault-tolerance in distributed systems is check-pointing: states of the system are persistently stored throughout its operation, and in case a failure occurs, the system is restored using a previously saved state. Given the above-mentioned shortcomings of the HLA standard, this thesis explores the development of fault-tolerance mechanisms in the context of the HLA. More specifically, the design, implementation and evaluation of fault-tolerance mechanisms based on check-pointing are described and discussed. / QC 20101111
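As a hedged, generic illustration of the check-pointing pattern described above (this is not the thesis's HLA-based implementation; the file name and state contents are invented):

```python
# Illustrative sketch: periodically persist the simulation state and restore
# the most recent checkpoint after a failure.
import os
import pickle

CHECKPOINT = "sim_state.ckpt"   # hypothetical checkpoint file

def save_checkpoint(state, path=CHECKPOINT):
    with open(path, "wb") as f:
        pickle.dump(state, f)

def restore_checkpoint(path=CHECKPOINT):
    if not os.path.exists(path):
        return None
    with open(path, "rb") as f:
        return pickle.load(f)

state = {"sim_time": 0, "entities": {"unit_1": (10, 20)}}
for step in range(1, 6):
    state["sim_time"] = step
    state["entities"]["unit_1"] = (10 + step, 20)
    if step % 2 == 0:               # checkpoint every 2 steps
        save_checkpoint(state)

# ... a failure occurs; the restarted simulation resumes from the last checkpoint.
recovered = restore_checkpoint()
print(recovered["sim_time"])        # 4, not 5 -- work since the last checkpoint is redone
```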
120

A Competitive Reconfiguration Approach To Autonomous Fault Handling Using Genetic Algorithms

Zhang, Kening 01 January 2008 (has links)
In this dissertation, a novel self-repair approach based on Consensus Based Evaluation (CBE) for autonomous repair of SRAM-based Field Programmable Gate Arrays (FPGAs) is developed, evaluated, and refined. An initial population of functionally identical (same input-output behavior), yet physically distinct (alternative design or place-and-route realization) FPGA configurations is produced at design time. During run-time, the CBE approach ranks these alternative configurations after evaluating their discrepancy relative to the consensus formed by the population. Through runtime competition, faults in the logical resources become occluded from the visibility of subsequent FPGA operations. Meanwhile, offspring formed through crossover and mutation of faulty and viable configurations are selected at a controlled re-introduction rate for evaluation and refurbishment. Refurbishments are evolved in-situ, with online real-time input-based performance evaluation, enhancing system availability and sustainability and creating an Organic Embedded System (OES). A fault tolerance model called N Modular Redundancy with Standby (NMRSB) is developed, which combines the two popular fault tolerance techniques of NMR and standby fault tolerance in order to facilitate the CBE approach. This dissertation develops two instances of the NMRSB system: Triple Modular Redundancy with Standby (TMRSB) and Duplex with Standby (DSB). A hypothetical Xilinx Virtex-II Pro FPGA model demonstrates their viability for various applications including a 3-bit x 3-bit multiplier and the MCNC91 benchmark circuits. Experiments conducted on the model evaluate the performance of three new genetic operators and demonstrate progress towards a completely self-contained single-chip implementation, so that the FPGA can refurbish itself without requiring a PC host to execute the Genetic Algorithm. This dissertation presents results from simulations of multiple applications with a CBE model implemented in the C++ programming language. Starting with initial populations of 20 and 30 viable configurations for TMRSB and DSB respectively, a single stuck-at fault is introduced in the logic resources. Fault refurbishment experiments are conducted under supervision of CBE using a fitness-state evaluation function based on competing outputs, fitness adjustment, and different threshold levels. The device remains online throughout the process, by which a complete repair is realized with Hamming Distance and Bitweight voting schemes. The results indicate that a Hamming Distance TMRSB approach can prevent the most pervasive fault impacts and realize complete refurbishment. Experimental results also show that the Autonomic Layer demonstrates 100% faulty component isolation for both Functional Elements (FEs) and Autonomous Elements (AEs) with randomly injected single and multiple faults. Using logic circuits from the MCNC-91 benchmark set, availability during repair phases averaged 75.05%, 82.21%, and 65.21% for the z4ml, cm85a, and cm138a circuits respectively under the stated conditions. In addition to simulation, the proposed OES architecture synthesized from HDL was prototyped on a Xilinx Virtex II Pro FPGA device supporting partial reconfiguration, to demonstrate the feasibility of intrinsic regeneration of the selected circuit.
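As an illustrative sketch of the consensus-based ranking idea (not the dissertation's C++ implementation; configuration names and output bits are invented), the population's bitwise majority can serve as the consensus against which each configuration's Hamming distance is measured.

```python
# Illustrative sketch: form a consensus output by bitwise majority vote over
# the population, then rank configurations by Hamming distance from it.
def consensus(outputs):
    width = len(outputs[0])
    return tuple(1 if sum(o[i] for o in outputs) * 2 > len(outputs) else 0
                 for i in range(width))

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

# Hypothetical 4-bit outputs from three alternative configurations of a circuit.
outputs = {
    "cfg_A": (1, 0, 1, 1),
    "cfg_B": (1, 0, 1, 1),
    "cfg_C": (1, 1, 1, 0),    # diverges from the consensus -> likely carrying a fault
}
ref = consensus(list(outputs.values()))
ranking = sorted(outputs, key=lambda c: hamming(outputs[c], ref))
print(ref)        # (1, 0, 1, 1)
print(ranking)    # ['cfg_A', 'cfg_B', 'cfg_C'] -- cfg_C is demoted for refurbishment
```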
