61.
Suppression and characterization of decoherence in practical quantum information processing devices. Silva, Marcus. January 2008.
This dissertation addresses the issue of noise in quantum information processing devices. It is well known that quantum states are particularly fragile to the effects of noise, and in order to perform scalable quantum computation, effective noise must be suppressed to levels that depend on the size of the computation. Various theoretical proposals have discussed how this can be achieved, under various assumptions about the properties of the noise and the availability of qubits. We discuss new approaches to the suppression of noise, and propose experimental protocols for characterizing the noise.
In the first part of the dissertation, we discuss a number of applications of teleportation to fault-tolerant quantum computation. We demonstrate how measurement-based quantum computation can be made inherently fault-tolerant by exploiting its relationship to teleportation. We also demonstrate how continuous variable quantum systems can be used as ancillas for computation with qubits, and how information can be reliably teleported between these different systems. Building on these ideas, we discuss how the necessary resource states for teleportation can be prepared by allowing quantum particles to be scattered by qubits, and investigate the feasibility of an implementation using superconducting circuits.
In the second part of the dissertation, we propose scalable experimental protocols for extracting information about the noise. We concentrate on information that has direct practical relevance to methods of noise suppression. In particular, we demonstrate how standard assumptions about properties of the noise can be tested in a scalable manner. The experimental protocols we propose rely on symmetrizing the noise by random application of unitary operations. Depending on the symmetry group used, different information about the noise can be extracted. We demonstrate, in particular, how to estimate the probability of a small number of qubits being corrupted, as well as how to test for a necessary condition for noise correlations. We conclude by demonstrating how, without relying on assumptions about the noise, the information obtained by symmetrization can also be used to construct protective encodings for quantum states.
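The symmetrization idea can be illustrated with a toy single-qubit calculation (this is a generic sketch, not the dissertation's actual protocol, and it assumes numpy): conjugating a noise channel by a uniformly random Pauli and averaging turns it into a Pauli channel, whose Pauli transfer matrix is diagonal, while the diagonal (the Pauli error probabilities) is preserved.

```python
import numpy as np

# Single-qubit Paulis
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [I2, X, Y, Z]

def ptm(U):
    """Pauli transfer matrix of the unitary channel rho -> U rho U^dag:
    R[i, j] = Tr(P_i U P_j U^dag) / 2."""
    return np.array([[(Pi @ U @ Pj @ U.conj().T).trace().real / 2
                      for Pj in paulis] for Pi in paulis])

# Toy noise model: a coherent over-rotation about the X axis
theta = 0.7
U = np.cos(theta / 2) * I2 - 1j * np.sin(theta / 2) * X
R = ptm(U)

# Symmetrize: conjugate by each Pauli with equal probability and average.
# The twirled transfer matrix is diagonal (a Pauli channel), with the
# same diagonal entries as the original channel.
R_twirled = sum(ptm(P) @ R @ ptm(P) for P in paulis) / 4
```

The off-diagonal (coherent) part of the noise is averaged away, which is what makes estimates of Pauli error probabilities scalable to extract.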
62.
Ultra low-power fault-tolerant SRAM design in 90nm CMOS technology. Wang, Kuande. 15 July 2010.
With the growth of mobile, biomedical and space applications, digital systems with low power consumption are required, and as a main component of such systems, low-power memories are especially desired. Reducing the supply voltage into the sub-threshold region is one of the most effective approaches for ultra low-power applications. However, the reduced Static Noise Margin (SNM) of Static Random Access Memory (SRAM) imposes great challenges on sub-threshold SRAM design: the conventional 6-transistor SRAM cell does not function properly at sub-threshold supply voltages because it lacks the noise margin needed for reliable operation. Previous research has demonstrated that a read-and-write-decoupled scheme is a good solution to the reduced-SNM problem for ultra low-power sub-threshold operation.

A Dual Interlocked Storage Cell (DICE) based SRAM cell is proposed to eliminate the drawback of the conventional DICE cell during read operations. This cell mitigates single-event effects, improves stability, and maintains the low-power characteristics of sub-threshold SRAM. To make the proposed SRAM cell work under power supply voltages from 0.3 V to 0.6 V, an improved replica sense scheme is applied to produce a reference control signal with which the optimal read time can be achieved.

In this thesis, a 2K × 8-bit SRAM test chip was designed, simulated and fabricated in the 90nm CMOS technology provided by STMicroelectronics. Simulation results suggest that the operating frequency at VDD = 0.3 V is up to 4.7 MHz with a power dissipation of 6.0 μW, while at VDD = 0.6 V it is 45.5 MHz, dissipating 140 μW. The area occupied by a single cell is, however, larger than that of a conventional SRAM cell because of the additional transistors used. The main contribution of this thesis is a new design that simultaneously addresses the ultra low-power and radiation-tolerance problems in large-capacity memory design.
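As a quick sanity check on the reported figures, the energy spent per clock cycle at the two operating points can be computed directly from the quoted power and frequency:

```python
# Reported operating points from the abstract: VDD (V) -> (frequency Hz, power W)
points = {0.3: (4.7e6, 6.0e-6), 0.6: (45.5e6, 140e-6)}

# Energy per clock cycle, E = P / f, in joules
energy_per_cycle = {vdd: power / freq for vdd, (freq, power) in points.items()}
```

This gives roughly 1.3 pJ per cycle at 0.3 V against roughly 3.1 pJ per cycle at 0.6 V, consistent with the sub-threshold operating point being the more energy-efficient of the two despite its much lower speed.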
63.
Secure Store: A Secure Distributed Storage Service. Lakshmanan, Subramanian. 12 August 2004.
As computers become pervasive in environments that include the home and community, new applications are emerging that will create and manipulate sensitive and private information. These applications span systems ranging from personal computers to mobile and hand-held devices. They would benefit from a data storage service that protects the integrity and confidentiality of the stored data and is highly available. Such a data repository would have to meet the needs of a variety of applications, handling data with varying security and performance requirements.

Providing high levels of both security and performance simultaneously may not be possible when many nodes in the system are under attack. The agility approach to building secure distributed services advocates the principle that the overhead of providing strong security guarantees should be incurred only by those applications that require such high levels of security, and only at times when it is necessary to defend against high threat levels. A storage service that is designed for a variety of applications must follow the principles of agility, offering applications a range of options for their security and performance requirements.
This research presents secure store, a secure and highly available distributed store that meets the performance and security needs of a variety of applications. Secure store is designed to guarantee the integrity, confidentiality and availability of stored data even in the face of a limited number of compromised servers. Secure store is designed based on the principles of agility. It integrates two well-known techniques, namely replication and secret-sharing, and exploits the tradeoffs that exist between security and performance to offer applications a range of options to suit their needs.
This thesis makes several contributions, including (1) an illustration of the principles of agility, (2) a novel gossip-style secure dissemination protocol whose performance is comparable to the best possible benign-case protocol in the absence of any malicious activity, (3) a demonstration of the performance benefits of using weaker consistency models for data access, and (4) a technique called collective endorsement that can be used in other secure distributed applications.
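The secret-sharing half of that combination can be illustrated with the simplest possible scheme, an n-of-n XOR split (an illustration only; the thesis's actual protocol is not specified in this abstract, and threshold schemes such as Shamir's are the usual generalization):

```python
import secrets

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split(secret: bytes, n: int) -> list[bytes]:
    """n-of-n XOR secret sharing: every share is required for recovery,
    and any n-1 shares together are statistically independent of the
    secret, so a server that holds fewer than all shares learns nothing."""
    shares = [secrets.token_bytes(len(secret)) for _ in range(n - 1)]
    last = secret
    for s in shares:
        last = xor(last, s)
    return shares + [last]

def recover(shares: list[bytes]) -> bytes:
    """XOR all shares back together to reconstruct the secret."""
    out = shares[0]
    for s in shares[1:]:
        out = xor(out, s)
    return out
```

The security/performance tradeoff the abstract mentions is visible even here: splitting costs n times the storage of plain replication of a single copy, which is why an agile store would let an application choose between replication, secret-sharing, or a mix.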
64.
Balanced Discharging for Serial Battery Power Modules. Yu, Li-ren. 28 August 2012.
This thesis investigates the discharging behavior of serial boost-type battery power modules (BPMs). Although the BPMs are connected in series to provide a higher output voltage, each battery in the chain can essentially be operated individually, which makes a balanced discharging control strategy possible: the battery currents are scheduled in accordance with their states of charge (SOCs). A battery power system formed by 10 boost-type BPMs is built, in which a microcontroller detects the loaded voltages, estimates the SOCs, and controls the duty ratios of the power converters. Experimental results demonstrate the balanced discharging capability of the serial BPMs. In addition, a fault-tolerance mechanism is introduced to isolate faulty or exhausted batteries and keep the system working under a reduced load.
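The balanced-discharging idea can be sketched as a simple proportional schedule (an illustrative model, not the thesis's exact control law): each module carries a share of the load current proportional to its SOC, so fuller batteries are drained faster and the SOCs converge.

```python
def schedule_currents(socs, total_current):
    """Toy balanced-discharging rule: module i supplies
    total_current * soc_i / sum(socs). Fuller batteries discharge
    faster, pulling all states of charge toward each other."""
    total_soc = sum(socs)
    return [total_current * soc / total_soc for soc in socs]
```

The real controller additionally has to translate these current targets into converter duty ratios and respect their limits; a faulty or exhausted module would simply be dropped from the list, which is the reduced-load fault-tolerance behavior the abstract describes.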
65.
The Design of Fault Tolerance of Cluster Computing Platform. Liao, Yu-tien. 29 August 2012.
When nodes fail in a distributed application service, the system not only pays an extra cost to handle the missing results, but also places additional load on the scheduler. To avoid recomputing all results when a fault occurs, only the data of the failed nodes is recalculated on backup machines. This thesis therefore experiments with and analyzes three methods: N + N nodes, N + 1 nodes, and N + 1 nodes with probability. The third method assigns each job a weight before scheduling, and converts that weight into a probability and a nice value (as defined by SLURM [1]) to influence the scheduler's ordering of jobs. When a fault occurs, the results computed on healthy nodes are returned to the control node, and the failed node's jobs are then either reassigned or not reassigned to a backup machine in order to obtain complete results. Finally, the strengths and weaknesses of these three methods are analyzed.
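The weight-to-probability-and-nice conversion in the third method might look like the following sketch (the linear mapping and the nice range here are assumptions for illustration, not the thesis's actual formula; in SLURM, a lower nice value raises a job's scheduling priority):

```python
def weights_to_nice(weights, nice_range=(-10000, 10000)):
    """Normalize job weights to probabilities, then map each probability
    linearly onto a nice value: the heavier (more probable) a job, the
    lower, i.e. more favourable, its nice value."""
    total = sum(weights)
    probs = [w / total for w in weights]
    lo, hi = nice_range
    nices = [round(hi - p * (hi - lo)) for p in probs]
    return probs, nices
```

Under this mapping a job with three times the weight of another ends up well ahead of it in the scheduler's ordering, which is the bias toward important jobs that the probabilistic N + 1 scheme relies on.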
66.
Fault-tolerance in HLA-based distributed simulations. Eklöf, Martin. January 2006.
Successful integration of simulations within the Network-Based Defence (NBD), specifically the use of simulations within Command and Control (C2) environments, imposes a number of requirements. Simulations must be reliable and respond in a timely manner; otherwise the commander will have no confidence in using simulation as a tool. An important aspect of these requirements is the provision of fault-tolerant simulations, in which failures are detected and resolved in a consistent manner. Given the distributed nature of many military simulation systems, services for fault-tolerance in distributed simulations are desirable. The main architecture for distributed simulations within the military domain, the High Level Architecture (HLA), does not provide support for the development of fault-tolerant simulations.

A common approach to fault-tolerance in distributed systems is check-pointing: states of the system are persistently stored throughout its operation, and in case a failure occurs, the system is restored from a previously saved state. Given the abovementioned shortcomings of the HLA standard, this thesis explores the development of fault-tolerance mechanisms in the context of the HLA. More specifically, the design, implementation and evaluation of check-pointing-based fault-tolerance mechanisms are described and discussed.
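The check-pointing cycle described above can be sketched in a few lines (a generic illustration, not tied to the HLA services the thesis builds; a real federate would capture its state through the RTI rather than pickle arbitrary objects):

```python
import os
import pickle
import tempfile

def save_checkpoint(state, path):
    """Persist state with write-then-rename, so a crash mid-save never
    leaves a truncated checkpoint behind: the old checkpoint stays
    intact until the new one is completely written."""
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    with os.fdopen(fd, "wb") as f:
        pickle.dump(state, f)
    os.replace(tmp, path)  # atomic replacement on POSIX

def restore_checkpoint(path):
    """Recover the most recently committed state after a failure."""
    with open(path, "rb") as f:
        return pickle.load(f)
```

On failure detection, the recovery logic simply restarts the simulation from `restore_checkpoint(...)` and replays from there, trading some recomputation for the cost of periodic saves.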
67.
Immunity-based detection, identification, and evaluation of aircraft sub-system failures. Moncayo, Hever Y. January 2009.
Thesis (Ph. D.)--West Virginia University, 2009. / Title from document title page. Document formatted into pages; contains xiv, 118 p. : ill. (some col.). Includes abstract. Includes bibliographical references (p. 109-118).
68.
A new approach to detecting failures in distributed systems. Leners, Joshua Blaise. 18 September 2015.
Fault-tolerant distributed systems often handle failures in two steps: first, detect the failure and, second, take some recovery action. A common approach to detecting failures is end-to-end timeouts, but using timeouts brings problems. First, timeouts are inaccurate: just because a process is unresponsive does not mean that the process has failed. Second, choosing a timeout is hard: short timeouts exacerbate the problem of inaccuracy, and long timeouts make the system wait unnecessarily. In fact, a good timeout value (one that balances accuracy against speed) may not even exist, owing to the variance in a system's end-to-end delays. This dissertation posits a new approach to detecting failures in distributed systems: use information about failures that is local to each component, e.g., the contents of an OS's process table. We call such information inside information, and use it as the basis of the design and implementation of three failure reporting services for data center applications, which we call Falcon, Albatross, and Pigeon. Falcon deploys a network of software modules to gather inside information in the system, and it guarantees that it never reports a working process as crashed, by sometimes terminating unresponsive components. This choice helps applications by making reports of failure reliable, meaning that applications can treat them as ground truth. Unfortunately, Falcon cannot handle network failures, because guaranteeing that a process has crashed requires network communication; we address this problem in Albatross and Pigeon. Instead of killing, Albatross blocks suspected processes from using the network, allowing applications to make progress during network partitions. Pigeon renounces interference altogether, and reports inside information to applications directly and in more detail, to help applications make better recovery decisions.
By using these services, applications can improve their recovery from failures both quantitatively and qualitatively. Quantitatively, these services reduce detection time by one to two orders of magnitude over the end-to-end timeouts commonly used by data center applications, thereby reducing the unavailability caused by failures. Qualitatively, these services provide more specific information about failures, which can reduce the logic required for recovery and can help applications better decide when recovery is not necessary.
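The contrast between the two detection styles can be made concrete with a small sketch (illustrative only, not Falcon's implementation; the inside-information probe here is just the local OS process table on a POSIX system):

```python
import errno
import os
import time

def alive_by_timeout(last_heartbeat, timeout_s=5.0):
    """End-to-end view: a peer is 'suspected' after silence longer than
    the timeout. Inherently inaccurate: a slow or partitioned process
    looks exactly like a dead one."""
    return (time.time() - last_heartbeat) < timeout_s

def alive_by_process_table(pid):
    """Inside information: ask the local OS whether the process exists.
    Signal 0 probes for existence without disturbing the target (POSIX)."""
    try:
        os.kill(pid, 0)
    except OSError as e:
        if e.errno == errno.ESRCH:   # no such process: definitely gone
            return False
        if e.errno == errno.EPERM:   # exists, but owned by another user
            return True
        raise
    return True
```

The timeout check can only ever say "probably"; the process-table check, run on the same machine as the target, gives a definite answer in microseconds, which is where the one-to-two-orders-of-magnitude improvement in detection time comes from.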
69.
Exploring Application-level Fault Tolerance for Robust Design Using FPGA. Chen, Jing. Date unknown.
No description available.
70.
Reliability and fault tolerance modelling of multiprocessor systems. Valdivia, Roberto Abraham. January 1989.
Reliability evaluation by analytic modelling constitutes an important part of designing a reliable multiprocessor system. In this thesis, a model for the reliability and fault-tolerance analysis of the interconnection network is presented, based on graph theory; reliability and fault tolerance are considered as deterministic and probabilistic measures of connectivity. Exact techniques for reliability evaluation fail for large multiprocessor systems because of the enormous computational resources required, so approximation techniques must be used. Three approaches are proposed: the first simplifies the symbolic expression of reliability; the other two apply a hierarchical decomposition to the system. All of these methods give results close to those obtained by exact techniques.
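The probabilistic-connectivity view of reliability can be illustrated with a small Monte Carlo estimate (a generic sketch for comparison, not one of the analytic approximation techniques the thesis develops): sample random link-failure patterns and count how often the surviving network still connects every processor.

```python
import random

def connected(nodes, edges):
    """Depth-first search check that the surviving edges connect all nodes."""
    adj = {n: [] for n in nodes}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    seen, stack = {nodes[0]}, [nodes[0]]
    while stack:
        for w in adj[stack.pop()]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == len(nodes)

def reliability_mc(nodes, edges, p_link, trials=20000, seed=1):
    """Estimate all-terminal reliability: the probability that the
    interconnection network stays connected when each link works
    independently with probability p_link."""
    rng = random.Random(seed)
    ok = sum(connected(nodes, [e for e in edges if rng.random() < p_link])
             for _ in range(trials))
    return ok / trials

# 4-processor ring: it stays connected unless two or more links fail,
# so the exact reliability is p^3 * (4 - 3p), about 0.948 at p = 0.9
ring = [(0, 1), (1, 2), (2, 3), (3, 0)]
est = reliability_mc([0, 1, 2, 3], ring, p_link=0.9)
```

Sampling like this scales to networks where the exact symbolic expression is intractable, which is the same pressure that motivates the thesis's simplification and hierarchical-decomposition approaches.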