51

Threshold analysis with fault-tolerant operations for nonbinary quantum error correcting codes

Kanungo, Aparna 01 November 2005 (has links)
Quantum error correcting codes have been introduced to encode data into extra redundant bits in order to accommodate errors and correct them. However, because of the delicate nature of quantum states and the possibility of faulty gate operations, errors can spread catastrophically and render the error correction techniques ineffective. Hence, this thesis concentrates on how various operations can be carried out fault-tolerantly so that errors do not propagate within the same block. We prove universal fault tolerance for nonbinary CSS codes; the thesis is focused exclusively on nonbinary quantum codes, and all of its results pertain to them. Efficient, fault-tolerant error detection and correction helps only as long as the gate error probability stays below a certain threshold. Calculating this threshold is therefore important for judging whether quantum computations are realizable at all, even with fault-tolerant operations. We derive an expression for the gate error threshold of nonbinary quantum codes and test the result on different classes of codes to identify those with the best thresholds.
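The threshold expression derived in the thesis is not reproduced in this listing, but the general idea can be sketched. The Python fragment below assumes an illustrative concatenated code correcting one error per level and a hypothetical threshold value, and shows how the logical error rate collapses only when the physical gate error probability lies below the threshold.

    # Illustrative sketch only -- not the expression derived in the thesis.
    # For a concatenated code correcting one error per level, the logical error
    # rate after k levels behaves roughly as p_L(k) ~ p_th * (p / p_th) ** (2 ** k),
    # so p_L -> 0 as k grows if and only if p < p_th.

    def logical_error_rate(p, p_th, levels):
        """Rough logical error rate after `levels` rounds of concatenation."""
        return p_th * (p / p_th) ** (2 ** levels)

    p_th = 1e-4                       # hypothetical gate error threshold, for illustration
    for p in (5e-5, 1e-4, 2e-4):      # below, at, and above the threshold
        rates = ["%.2e" % logical_error_rate(p, p_th, k) for k in range(4)]
        print("p = %.0e:" % p, rates)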
52

Limitations for detecting small-scale faults using the coherency analysis of seismic data

Barnett, David Benjamin 16 August 2006 (has links)
Coherency analysis measures the trace-to-trace amplitude similarity of recorded seismic waves. Coherency algorithms have been used to identify the structural or stratigraphic features of an area, but the limits for detecting small-scale features are not known. These limits become extremely important when interpreting coherency within poorly acquired or poorly processed data sets. To obtain a better understanding of these limits, various synthetic seismic data sets were created and the sensitivity of the coherency algorithms to variations in wave frequency, signal-to-noise ratio, and fault throw was investigated. Correlation between the coherency values of a faulted reflector and the known offset shows that coherency can detect various-scale features that may previously have been thought to be below seismic resolution or difficult to discriminate with conventional interpretation methods. Coherency values had a smaller standard deviation and were less sensitive to noise when processed with a temporal window length less than one period. A fault could be detected by coherency when the signal-to-noise ratio was >3. A fault could also be detected as long as the throw-to-wavelength ratio was >5% or the two-way traveltime-to-period ratio was >10%. This study therefore suggests that coherency can detect a fault as long as the period of the data imaging that fault is no more than one order of magnitude greater than the traveltime through the fault and the signal can easily be distinguished from noise. The coherency analysis was then applied to characterize a very deep fault and fracture system imaged by a field seismic data set. A series of reverse and strike-slip faults were detected and mapped. The magnitudes of the throws for these faults were not known, but subtle amplitude anomalies in the seismic sections confirmed the coherency analysis. The results of this study suggest that coherency can detect features that would normally be overlooked using traditional interpretation methods, with many future implications for poorly imaged seismic areas such as sub-salt.
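The abstract does not specify which coherency algorithm was used; the sketch below assumes one simple flavor, a zero-lag normalized cross-correlation between two adjacent traces in a sliding window, applied to a synthetic 30 Hz reflector offset by a small throw. The frequency, noise level, and window length are illustrative assumptions, not the study's parameters.

    import numpy as np

    def two_trace_coherency(trace_a, trace_b, window):
        """Zero-lag normalized cross-correlation of two traces in sliding windows."""
        n, half = len(trace_a), window // 2
        coh = np.zeros(n)
        for i in range(half, n - half):
            a = trace_a[i - half:i + half + 1]
            b = trace_b[i - half:i + half + 1]
            denom = np.sqrt(np.dot(a, a) * np.dot(b, b))
            coh[i] = np.dot(a, b) / denom if denom > 0 else 0.0
        return coh

    # Synthetic example: a 30 Hz Ricker reflector, shifted 6 ms across a "fault".
    dt = 0.002                                            # 2 ms sample interval
    t = np.arange(0.0, 0.5, dt)
    ricker = lambda t0: (1 - 2 * (np.pi * 30 * (t - t0))**2) * np.exp(-(np.pi * 30 * (t - t0))**2)
    rng = np.random.default_rng(0)
    trace1 = ricker(0.250) + 0.05 * rng.standard_normal(len(t))
    trace2 = ricker(0.256) + 0.05 * rng.standard_normal(len(t))   # 6 ms throw
    coh = two_trace_coherency(trace1, trace2, window=15)  # 30 ms window < one period (~33 ms)
    print(round(coh[int(0.250 / dt)], 3))                 # coherency drops below 1 at the offset event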
53

A hybrid system for fault detection and sensor fusion based on fuzzy clustering and artificial immune systems

Jaradat, Mohammad Abdel Kareem Rasheed 25 April 2007 (has links)
This study presents an efficient new hybrid approach to multi-sensor data fusion and fault detection, addressing the problem of possible multiple faults, based on conventional fuzzy soft clustering and an artificial immune system (AIS). The proposed hybrid system consists of three main phases. In the first phase, signal separation is performed using the Fuzzy C-Means (FCM) algorithm. Subsequently, a single (fused) signal is generated by the fusion engine from the information provided by the sensor signals. The information provided by the first two phases is then used for fault detection in the third phase, based on the AIS negative selection mechanism. Simulations and experiments on multiple-sensor systems have confirmed the strength of the new approach for online fusion and fault detection. The hybrid system provides fault tolerance by handling problems such as noisy sensor signals and multiple faulty sensors, which makes the new approach attractive for such fusion and fault detection tasks during real-time operation. The hybrid system is extended for early fault detection in complex mechanical systems based on a set of extracted features that characterize the collected sensor data. It is able to detect the onset of fault conditions that can lead to critical damage or failure, and this early detection of failure signs can provide more useful information for maintenance actions or corrective-procedure decisions.
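A minimal sketch of the two building blocks named above, Fuzzy C-Means clustering and AIS negative selection, is given below; the feature dimensions, detector counts, and radii are illustrative assumptions rather than parameters from the study.

    import numpy as np

    def fuzzy_c_means(X, c, m=2.0, iters=50, rng=None):
        """Minimal FCM: returns cluster centers and the fuzzy membership matrix U."""
        rng = np.random.default_rng() if rng is None else rng
        U = rng.dirichlet(np.ones(c), size=X.shape[0])            # n x c memberships
        for _ in range(iters):
            W = U ** m
            centers = W.T @ X / W.sum(axis=0)[:, None]            # weighted cluster centers
            d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
            inv = d ** (-2.0 / (m - 1.0))
            U = inv / inv.sum(axis=1, keepdims=True)              # standard FCM membership update
        return centers, U

    def negative_selection_detectors(self_data, n_detectors, radius, rng):
        """Generate random detectors that match no 'self' (healthy-behavior) sample."""
        lo, hi = self_data.min(axis=0), self_data.max(axis=0)
        detectors = []
        while len(detectors) < n_detectors:
            cand = rng.uniform(lo - 3 * radius, hi + 3 * radius)
            if np.linalg.norm(self_data - cand, axis=1).min() > radius:
                detectors.append(cand)
        return np.array(detectors)

    def is_faulty(sample, detectors, radius):
        """A sample matched by any detector lies outside 'self' and is flagged as a fault."""
        return bool(np.any(np.linalg.norm(detectors - sample, axis=1) <= radius))

    rng = np.random.default_rng(0)
    healthy = rng.normal(0.0, 0.1, size=(200, 2))                 # illustrative 2-D sensor features
    centers, U = fuzzy_c_means(healthy, c=2, rng=rng)
    detectors = negative_selection_detectors(healthy, 300, radius=0.3, rng=rng)
    print(is_faulty(np.array([0.02, 0.05]), detectors, 0.3))      # sample near the healthy cluster
    print(is_faulty(np.array([1.0, 1.0]), detectors, 0.3))        # sample far outside 'self'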
54

Fault tolerant multipliers and dividers using time shared triple modular redundancy /

Gallagher, William Lynn, January 1999 (has links)
Thesis (Ph.D.)--University of Texas at Austin, 1999. Vita. Includes bibliographical references (leaves 139-145). Also available in a digital version from Dissertation Abstracts.
55

Fiesta++ : a software implemented fault injection tool for transient fault injection

Chaudhari, Ameya Suhas 26 January 2015 (has links)
Computer systems, even when correctly designed, can suffer from temporary errors due to radiation particles striking the circuit or changes in operating conditions such as temperature or voltage. Such transient errors can cause systems to malfunction or even crash. Fault injection is a technique for simulating the effect of such errors on a system. Fault injection tools inject errors either into the software running on the processors or into the underlying computer hardware, to simulate the effect of a fault and observe the system's behavior. These tools can be used to determine the different responses of the system to such errors and to estimate the probability of errors occurring in the computations performed by the system. They can also be used to test the fault tolerance capabilities of the system under test or of any proposed technique for providing fault tolerance in circuits or software. As part of this thesis, I have developed a software-implemented fault injection tool, Fiesta++, for evaluating the fault tolerance and fault response of software applications. Software-implemented fault injection tools inject faults into the software state of the application as it runs on a processor. Because such tools conduct experiments on applications executing natively on a processor, the experiments run at almost the same speed as the application itself and on the same hardware the application uses in the field. Fiesta++ offers two modes of operation: whitebox and blackbox. The whitebox mode assumes that users have some knowledge of the structure of the software under test and allows them to specify fault injection targets in terms of application variables, and fault injection times in terms of code locations and run-time events. It can be used for precise fault injection to obtain reproducible outcomes from the fault injection experiments. The blackbox mode targets the case where the user has little or no knowledge of the application's code structure. In this mode, Fiesta++ provides the user with a view of the active process memory and an array of associated information which the user can use to inject faults.
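Fiesta++'s actual interfaces are not reproduced in this listing; the sketch below merely illustrates the core idea of software-implemented transient fault injection, flipping one random bit of a named application variable. The variable names and the in-process dictionary standing in for live process memory are hypothetical.

    import random
    import struct

    def flip_bit_in_double(value, bit):
        """Flip one bit of an IEEE-754 double to emulate a single-event upset."""
        bits = struct.unpack('<Q', struct.pack('<d', value))[0]
        return struct.unpack('<d', struct.pack('<Q', bits ^ (1 << bit)))[0]

    def inject_transient_fault(state, rng=random):
        """Whitebox-style injection: corrupt a chosen application variable at a chosen point."""
        name = rng.choice(list(state))        # fault injection target (an application variable)
        bit = rng.randrange(64)               # which bit to flip
        state[name] = flip_bit_in_double(state[name], bit)
        return name, bit

    # Toy application state; a real tool would target live process memory instead.
    app_state = {'velocity': 3.5, 'altitude': 1200.0, 'fuel': 87.2}
    random.seed(1)
    print(inject_transient_fault(app_state), app_state)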
56

Syndepositional fault control on dolomitization of a steep-walled carbonate platform margin, Yates Formation, Rattlesnake Canyon, New Mexico

Simon, Rebekah Elizabeth 02 February 2015 (has links)
Syndepositional deformation features are fundamental components of carbonate platforms, both in the subsurface and in seismic-scale field analogs. These deformation features are commonly opening-mode, solution-widened fractures that can evolve into extensional faults and reactivate frequently throughout the evolution of the platform. They also have the potential to behave as fluid-flow conduits from the earliest phases of platform growth through burial and uplift, and can be active during hydrocarbon generation. As such, diagenetic alteration in the margins of these carbonate platforms is often intense, may show a preferential spatial relationship to the deformation features rather than to the depositional fabrics of the strata, and may affect permeability development in reservoir strata near deformation features. This study focuses on a syndepositional graben known as the Cave Graben fault system in the Yates Formation of Rattlesnake Canyon in the Guadalupe Mountains, and investigates the distribution of dolomite around the faults and associated opening-mode fractures, in an effort to understand the control the Cave Graben faults exert on fluid flow through the platform margin. Two generations of dolomite are identified on the outcrop: a fabric-retentive dolomite located in the uppermost facies of the platform, and a fabric-destructive dolomite that forms white, chalky haloes around syndepositional deformation features. The first generation of dolomite is dully luminescent and has very small crystal sizes, a low trace element concentration, and an ¹⁸O-enriched stable isotopic signature compared to Permian marine carbonate ratios. This dolomite is interpreted to have formed from penecontemporaneous reflux of concentrated lagoonal brine, and shows little fault control on its distribution. The second generation of dolomite is brightly luminescent and has much larger crystal sizes, a higher trace element concentration, and a slightly ¹⁸O-depleted isotopic signature compared to the first generation, though it is still ¹⁸O-enriched compared to Permian marine carbonate. This dolomite is interpreted to have formed in a burial environment due to the transport of concentrated brines from the overlying evaporites through syndepositional deformation features. Overall, this study suggests that, once open, syndepositional deformation features may become the primary fluid conduits through otherwise impermeable strata and may control the distribution of diagenetic products over long periods of geologic time. It provides valuable insight into the interaction of syndepositional faults and fractures with fluid flow, and may improve understanding of diagenesis in analogous subsurface carbonate reservoir intervals.
57

Replicating multithreaded services

Kapritsos, Emmanouil 09 February 2015 (has links)
For the last 40 years, the systems community has invested a lot of effort in designing techniques for building fault tolerant distributed systems and services. This effort has produced a massive list of results: the literature describes how to design replication protocols that tolerate a wide range of failures (from simple crashes to malicious "Byzantine" failures) in a wide range of settings (e.g. synchronous or asynchronous communication, with or without stable storage), optimizing various metrics (e.g. number of messages, latency, throughput). These techniques have their roots in ideas, such as the abstraction of State Machine Replication and the Paxos protocol, that were conceived when computing was very different than it is today: computers had a single core; all processing was done using a single thread of control, handling requests sequentially; and a collection of 20 nodes was considered a large distributed system. In the last decade, however, computing has gone through some major paradigm shifts, with the advent of multicore architectures and large cloud infrastructures. This dissertation explains how these profound changes impact the practical usefulness of traditional fault tolerant techniques and proposes new ways to architect these solutions to fit the new paradigms.
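For reference, here is a minimal sketch of the State Machine Replication abstraction mentioned above (not code from the dissertation): each replica applies the same deterministic commands in the same agreed order and therefore reaches the same state; producing that agreed order is the job of a protocol such as Paxos.

    class KeyValueStateMachine:
        """Deterministic state machine: identical logs yield identical states on every replica."""
        def __init__(self):
            self.store = {}
        def apply(self, command):
            op, key, value = command
            if op == 'put':
                self.store[key] = value
            return self.store.get(key)

    # In a real system a replication protocol (e.g. Paxos) establishes this common log order.
    agreed_log = [('put', 'x', 1), ('put', 'y', 2), ('put', 'x', 3)]

    replicas = [KeyValueStateMachine() for _ in range(3)]
    for replica in replicas:
        for command in agreed_log:            # same commands, same order, on every replica
            replica.apply(command)
    assert all(r.store == replicas[0].store for r in replicas)
    print(replicas[0].store)                  # {'x': 3, 'y': 2}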
58

A reconfiguration-based defect-tolerant design paradigm for nanotechnologies

He, Chen 28 August 2008 (has links)
Not available
59

Byzantine fault-tolerance and beyond

Martin, Jean-Philippe Etienne 28 August 2008 (has links)
Not available
60

Reliable mobile agents for distributed computing

Wagealla, Waleed January 2003 (has links)
The emergence of platform-independent, mobile code technologies has created big opportunities for Internet-based applications. Mobile agents are being used to perform a variety of tasks, from personalized computing to business-critical transactions. Unfortunately, these advances have not been matched by corresponding research into the reliability of these new technologies. This work investigates the fault tolerance of this new paradigm. The mobility and execution autonomy of agent programs have introduced a new class of failures different from those of traditional distributed systems, so fault tolerance is one of the main problems that must be resolved to improve adoption of the agent paradigm. The investigation of mobile agent reliability in this thesis resulted in the development of REMA (REliable Mobile Agents), which guarantees the reliable execution, migration, and communication of mobile agents in the presence of faults that might affect the agents' hosts or their communication network. We introduce an algorithm for the transparent detection of faults that might affect agent execution, migration, and communication. A decentralized structure is used to divide the agents' dynamic distributed system into network-partitioning-proof spaces. Lightweight messaging is adopted as the basic error detection engine, which, together with the loosely coupled detection managers, provides an efficient, low-overhead detection mechanism for agent-based distributed processing. Checkpointing agent execution is hampered by the inaccessibility of the underlying structure of the JVM, so an alternative solution is provided by the REMA Checkpoint and Recovery (REMA-CR) package. REMA-CR gives the developer powerful classes and methods for capturing the critical data of an agent's execution. The developed recovery protocol offers a low-cost, communication-pair-independent checkpointing strategy that covers all faults that might invalidate reliable agent execution, migration, and communication, and maintains the exactly-once execution property. The results and performance of REMA confirmed our objective of providing a fault-tolerant wrapper for agents and their applications at an acceptable overhead cost.
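REMA's protocols are not reproduced in this listing; the sketch below only illustrates the kind of lightweight, heartbeat-style error detection that the loosely coupled detection managers described above could perform. The agent identifiers and timeout values are illustrative.

    import time

    class DetectionManager:
        """Suspect an agent host as failed when its heartbeats stop arriving."""
        def __init__(self, timeout):
            self.timeout = timeout
            self.last_seen = {}
        def heartbeat(self, agent_id):
            self.last_seen[agent_id] = time.monotonic()
        def suspected_failures(self):
            now = time.monotonic()
            return [a for a, t in self.last_seen.items() if now - t > self.timeout]

    manager = DetectionManager(timeout=0.1)   # illustrative timeout; real values depend on the network
    manager.heartbeat('agent-1')
    manager.heartbeat('agent-2')
    time.sleep(0.15)
    manager.heartbeat('agent-2')              # agent-2 keeps reporting; agent-1 has gone silent
    print(manager.suspected_failures())       # -> ['agent-1']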
