About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.

ADAM: A Decentralized Parallel Computer Architecture Featuring Fast Thread and Data Migration and a Uniform Hardware Abstraction

Huang, Andrew "bunnie" 01 June 2002 (has links)
The furious pace of Moore's Law is driving computer architecture into a realm where the speed of light is the dominant factor in system latencies. The number of clock cycles needed to span a chip is increasing, while the number of bits that can be accessed within a clock cycle is decreasing. Hence, it is becoming more difficult to hide latency. One alternative is to reduce latency by migrating threads and data, but the overhead of existing implementations has so far made migration impractical. I present an architecture, implementation, and mechanisms that reduce the overhead of migration to the point where migration is a viable supplement to other latency-hiding mechanisms, such as multithreading. The architecture is abstract, and presents programmers with a simple, uniform, fine-grained multithreaded parallel programming model with implicit memory management. In other words, the spatial nature and implementation details (such as the number of processors) of a parallel machine are entirely hidden from the programmer. Compiler writers are encouraged to devise programming languages for the machine that guide programmers to express their ideas in terms of objects, since objects exhibit an inherent physical locality of data and code. The machine implementation can then leverage this locality to distribute data and threads automatically across the physical machine, using a set of high-performance migration mechanisms. An implementation of this architecture can migrate a null thread in 66 cycles, over a factor of 1000 improvement over previous work. Performance also scales well; the time required to move a typical thread is only 4 to 5 times that of a null thread. Data migration performance is similar, and scales linearly with data block size.
Since the performance of the migration mechanism is on par with that of an L2 cache, the implementation simulated in my work has no data caches and relies instead on multithreading and the migration mechanism to hide and reduce access latencies.
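The migrate-or-stay trade-off described above can be sketched as a simple break-even cost model. This is my own illustration using the cycle counts quoted in the abstract, not the thesis's actual migration policy:

```python
# Hypothetical cost model (not from the thesis): migrate a thread to its data
# when the expected cost of continued remote accesses exceeds the one-time
# migration cost.

NULL_THREAD_MIGRATION = 66   # cycles, figure quoted in the abstract
TYPICAL_THREAD_FACTOR = 4.5  # a typical thread costs 4-5x a null thread

def should_migrate(expected_remote_accesses, remote_latency,
                   thread_factor=TYPICAL_THREAD_FACTOR):
    """True when migrating is cheaper than repeatedly paying remote latency."""
    migration_cost = NULL_THREAD_MIGRATION * thread_factor
    remote_cost = expected_remote_accesses * remote_latency
    return remote_cost > migration_cost

# 40 remaining accesses at 20 cycles each (800 cycles) beats 66 * 4.5 = 297:
print(should_migrate(40, 20))   # True
print(should_migrate(5, 20))    # False: 100 cycles is cheaper than migrating
```

The interesting consequence of the thesis's low migration cost is that the break-even point drops to a handful of remote accesses, which is what makes migration competitive with caching.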

Analysing Fault Tolerance for Erlang Applications

Nyström, Jan Henry January 2009 (has links)
ERLANG is a concurrent functional language, well suited for distributed, highly concurrent and fault-tolerant software. An important part of Erlang is its support for failure recovery. Fault tolerance is provided by organising the processes of an ERLANG application into tree structures, in which parent processes monitor failures of their children and are responsible for restarting them. Libraries support the creation of such structures during system initialisation.

A technique to automatically analyse the process structure of an ERLANG application from its source code is presented. The analysis exposes shortcomings in the fault-tolerance properties of the application. First, the process structure is extracted through static analysis of the initialisation code of the application. Thereafter, analysis of the process structure checks two important properties of the fault-handling mechanism: 1) that it will recover from any process failure, and 2) that it will not hide persistent errors. The technique has been implemented in a tool and applied to several OTP library applications and to a subsystem of a commercial system, the AXD 301 ATM switch.

The static analysis of the ERLANG source code is achieved through symbolic evaluation. The evaluation is performed according to an abstraction of ERLANG's actual semantics. The actual semantics is formalised for a nontrivial part of the language, and it is proven that the abstraction simulates the actual semantics.
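The two properties checked on the extracted supervision tree can be illustrated with a small model. This is a sketch in Python, not the tool's actual analysis; the class and function names are my own:

```python
# Model an extracted supervision tree and check the two properties from the
# abstract: (1) every process sits under a supervisor, so any failure can be
# recovered; (2) every supervisor bounds restarts, so persistent errors
# escalate instead of being hidden by endless restarting.

class Proc:
    def __init__(self, name, children=(), max_restarts=None):
        self.name = name
        self.children = list(children)
        self.max_restarts = max_restarts  # supervisors bound restarts; workers: None

def reachable(root):
    """Names of all processes reachable from the root supervisor."""
    seen, stack = set(), [root]
    while stack:
        p = stack.pop()
        seen.add(p.name)
        stack.extend(p.children)
    return seen

def unsupervised(all_procs, root):
    """Property 1 violation: processes whose failure nobody would recover."""
    return set(all_procs) - reachable(root)

def hides_persistent_errors(node):
    """Property 2 violation: a supervisor with no restart bound can restart a
    persistently failing child forever, hiding the error."""
    if node.children and node.max_restarts is None:
        return True
    return any(hides_persistent_errors(c) for c in node.children)

root = Proc("app_sup", [Proc("db_sup", [Proc("db_worker")], max_restarts=3),
                        Proc("logger")], max_restarts=5)
print(unsupervised(["app_sup", "db_sup", "db_worker", "logger", "orphan"], root))
print(hides_persistent_errors(root))   # False: every supervisor bounds restarts
```

In real Erlang/OTP the restart bound corresponds to a supervisor's restart intensity and period; here it is collapsed into a single `max_restarts` field for illustration.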

Ultra low-power fault-tolerant SRAM design in 90nm CMOS technology

Wang, Kuande 15 July 2010
With the growth of mobile, biomedical and space applications, digital systems with low power consumption are required. As a main component of digital systems, low-power memories are especially desired. Reducing the power supply voltage into the sub-threshold region is one of the most effective approaches for ultra low-power applications. However, the reduced Static Noise Margin (SNM) of Static Random Access Memory (SRAM) imposes great challenges on sub-threshold SRAM design. The conventional 6-transistor SRAM cell does not function properly at sub-threshold supply voltages because it lacks sufficient noise margin for reliable operation. To achieve ultra low-power sub-threshold operation, previous research has demonstrated that a decoupled read/write scheme is a good solution to the reduced-SNM problem. A Dual Interlocked Storage Cell (DICE) based SRAM cell was proposed to eliminate the drawback of the conventional DICE cell during read operation. This cell can mitigate single-event effects, improve stability, and maintain the low-power characteristics of sub-threshold SRAM. To make the proposed SRAM cell work under power supply voltages from 0.3 V to 0.6 V, an improved replica sense scheme was applied to produce a reference control signal with which the optimal read time could be achieved. In this thesis, a 2K×8-bit SRAM test chip was designed, simulated and fabricated in the 90nm CMOS technology provided by ST Microelectronics. Simulation results suggest that the operating frequency at VDD = 0.3 V is up to 4.7 MHz with a power dissipation of 6.0 µW, while it is 45.5 MHz at VDD = 0.6 V, dissipating 140 µW. However, the area occupied by a single cell is larger than that of a conventional SRAM cell, due to the additional transistors used. The main contribution of this thesis is a new design that simultaneously addresses ultra low power and radiation tolerance in large-capacity memory design.
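A quick back-of-the-envelope check puts the quoted power/frequency figures in perspective: dividing power by frequency gives energy per cycle, which is what sub-threshold operation trades speed for. This is my own arithmetic on the abstract's numbers:

```python
# Energy per cycle = power / frequency, using the figures from the abstract.

e_03 = 6.0e-6 / 4.7e6     # at 0.3 V:  ~1.28 pJ per cycle
e_06 = 140e-6 / 45.5e6    # at 0.6 V:  ~3.08 pJ per cycle

print(f"{e_03 * 1e12:.2f} pJ/cycle at 0.3 V")
print(f"{e_06 * 1e12:.2f} pJ/cycle at 0.6 V")
print(f"ratio: {e_06 / e_03:.2f}x")   # roughly 2.4x less energy per cycle
```

So the sub-threshold operating point is roughly 10x slower but about 2.4x more energy-efficient per operation, which is the relevant metric for the battery-limited applications the thesis targets.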

Suppression and characterization of decoherence in practical quantum information processing devices

Silva, Marcus January 2008 (has links)
This dissertation addresses the issue of noise in quantum information processing devices. It is common knowledge that quantum states are particularly fragile to the effects of noise. In order to perform scalable quantum computation, it is necessary to suppress effective noise to levels which depend on the size of the computation. Various theoretical proposals have discussed how this can be achieved, under various assumptions about properties of the noise and the availability of qubits. We discuss new approaches to the suppression of noise, and propose experimental protocols for characterizing the noise. In the first part of the dissertation, we discuss a number of applications of teleportation to fault-tolerant quantum computation. We demonstrate how measurement-based quantum computation can be made inherently fault-tolerant by exploiting its relationship to teleportation. We also demonstrate how continuous-variable quantum systems can be used as ancillas for computation with qubits, and how information can be reliably teleported between these different systems. Building on these ideas, we discuss how the necessary resource states for teleportation can be prepared by allowing quantum particles to be scattered by qubits, and investigate the feasibility of an implementation using superconducting circuits. In the second part of the dissertation, we propose scalable experimental protocols for extracting information about the noise. We concentrate on information which has direct practical relevance to methods of noise suppression. In particular, we demonstrate how standard assumptions about properties of the noise can be tested in a scalable manner. The experimental protocols we propose rely on symmetrizing the noise by random application of unitary operations. Depending on the symmetry group used, different information about the noise can be extracted.
We demonstrate, in particular, how to estimate the probability of a small number of qubits being corrupted, as well as how to test for a necessary condition for noise correlations. We conclude by demonstrating how, without relying on assumptions about the noise, the information obtained by symmetrization can also be used to construct protective encodings for quantum states.
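One of the tested assumptions, independence of single-qubit errors, has a simple observable consequence that can be sketched numerically. Under independent errors the number of corrupted qubits is binomially distributed, so a measured weight distribution far from binomial signals correlated noise (a necessary condition, not a sufficient one). This sketch is my own illustration, not the dissertation's protocol:

```python
# Under independent single-qubit errors with rate p, the number of corrupted
# qubits among n is binomial; compare a measured weight distribution against
# it as a (necessary-condition) test for noise correlations.
from math import comb

def binomial_weight_dist(n, p):
    """P(exactly w of n qubits corrupted) under independent errors."""
    return [comb(n, w) * p**w * (1 - p)**(n - w) for w in range(n + 1)]

def total_variation(d1, d2):
    """Distance between two distributions over error weights."""
    return 0.5 * sum(abs(a - b) for a, b in zip(d1, d2))

expected = binomial_weight_dist(4, 0.1)
measured = [0.70, 0.20, 0.06, 0.03, 0.01]   # hypothetical experimental data
print(total_variation(expected, measured))   # large distance hints at correlations
```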

Secure Store : A Secure Distributed Storage Service

Lakshmanan, Subramanian 12 August 2004 (has links)
As computers become pervasive in environments that include the home and community, new applications are emerging that will create and manipulate sensitive and private information. These applications span systems ranging from personal to mobile and hand-held devices. They would benefit from a data storage service that protects the integrity and confidentiality of the stored data and is highly available. Such a data repository would have to meet the needs of a variety of applications, handling data with varying security and performance requirements. Providing high levels of security and high levels of performance simultaneously may not be possible when many nodes in the system are under attack. The agility approach to building secure distributed services advocates the principle that the overhead of providing strong security guarantees should be incurred only by those applications that require such high levels of security, and only at times when it is necessary to defend against high threat levels. A storage service designed for a variety of applications must follow the principles of agility, offering applications a range of options to choose from for their security and performance requirements. This research presents secure store, a secure and highly available distributed store to meet the performance and security needs of a variety of applications. Secure store is designed to guarantee integrity, confidentiality and availability of stored data even in the face of a limited number of compromised servers. Secure store is designed on the principles of agility. It integrates two well-known techniques, namely replication and secret sharing, and exploits the trade-offs that exist between security and performance to offer applications a range of options to suit their needs.
This thesis makes several contributions, including (1) illustration of the principles of agility, (2) a novel gossip-style secure dissemination protocol whose performance is comparable to the best possible benign-case protocol in the absence of any malicious activity, (3) demonstration of the performance benefits of using weaker consistency models for data access, and (4) a technique called collective endorsement that can be used in other secure distributed applications.
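The secret-sharing half of the design can be illustrated with the standard Shamir scheme over a prime field, in which any k of n shares reconstruct the secret while fewer than k reveal nothing. This is a generic textbook sketch, not the thesis's implementation; all parameters are mine:

```python
# Standard (k, n) Shamir secret sharing over GF(P): the secret is the
# constant term of a random degree-(k-1) polynomial; shares are points on it.
import random

P = 2**127 - 1  # a Mersenne prime used as the field modulus

def split(secret, n, k):
    """Produce n shares, any k of which reconstruct the secret."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the constant term."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = split(123456789, n=5, k=3)
print(reconstruct(shares[:3]))   # any 3 of the 5 shares recover 123456789
```

The security/performance trade-off the abstract mentions is visible here: secret sharing tolerates compromised servers without revealing data, but reads must contact k servers and interpolate, whereas plain replication serves a read from any single replica.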

Balanced Discharging for Serial Battery Power Modules

Yu, Li-ren 28 August 2012 (has links)
This thesis investigates the discharging behavior of serial boost-type battery power modules (BPMs). Although the BPMs are connected in series to provide a higher output voltage, each battery in the BPMs can be operated individually, which makes a balanced discharging control strategy possible: the battery currents are scheduled in accordance with their states of charge (SOCs). A battery power system formed by 10 boost-type BPMs is built, in which a microcontroller is used to detect the loaded voltages, estimate the SOCs, and control the duty ratios of the power converters. Experimental results demonstrate the balanced discharging capability of the serial BPMs. In addition, a fault-tolerance mechanism is introduced to isolate faulty or exhausted batteries and keep the system working under a reduced load.
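The scheduling idea can be sketched as follows: apportion the load current across the modules in proportion to each battery's SOC, so the fullest cells work hardest and all cells drain toward equal charge. This formulation is my own simplification of the control strategy, not the thesis's controller:

```python
# SOC-proportional current scheduling (illustrative): each battery's share of
# the load is weighted by its state of charge; a weight of zero isolates a
# faulty or exhausted battery, as in the fault-tolerance mechanism.

def schedule_currents(socs, load_current):
    """Split load_current across BPMs in proportion to SOC."""
    total = sum(socs)
    return [load_current * s / total for s in socs]

socs = [0.9, 0.7, 0.5]                     # per-battery states of charge
currents = schedule_currents(socs, 6.0)
print(currents)                             # the fullest battery supplies the most
print(schedule_currents([0.9, 0.0, 0.5], 6.0))   # battery 2 isolated after a fault
```

In the actual system these current shares would be realized by adjusting each boost converter's duty ratio, with the microcontroller re-estimating SOCs as discharge proceeds.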

The Design of Fault Tolerance of Cluster Computing Platform

Liao, Yu-tien 29 August 2012 (has links)
If a node fails while running a distributed application service, handling the lost results is costly and places additional load on the scheduler. To avoid recomputing all results when a fault occurs, only the data of the failed node is recalculated on backup machines. This paper therefore examines three methods: N + N nodes, N + 1 nodes, and N + 1 nodes with probability, and analyzes their pros and cons. The third method assigns each job a weight before scheduling, and converts that weight into a probability and a nice value (defined by SLURM [1]) to influence the scheduler's ordering of jobs. When a fault occurs, results computed on the healthy nodes are returned to the control node, and the failed node's jobs are then reassigned (or not) to a backup machine so that complete results are obtained. Finally, the strengths and weaknesses of the three methods are analyzed.
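The weight-to-priority conversion in the third method can be sketched as follows. The mapping and the nice-value range here are my own illustration of the idea, not the thesis's actual formula or SLURM's configuration:

```python
# Illustrative sketch: normalize job weights into probabilities, then map a
# higher probability to a lower (more favourable) nice value so heavier jobs
# are scheduled earlier. The nice range below is arbitrary.

def weights_to_priorities(weights, nice_range=(-10000, 10000)):
    total = sum(weights)
    probs = [w / total for w in weights]
    lo, hi = nice_range
    nices = [round(hi - p * (hi - lo)) for p in probs]
    return probs, nices

probs, nices = weights_to_priorities([5, 3, 2])
print(probs)   # [0.5, 0.3, 0.2]
print(nices)   # the weight-5 job gets the most favourable nice value
```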

Fault-tolerance in HLA-based distributed simulations

Eklöf, Martin January 2006 (has links)
Successful integration of simulations within the Network-Based Defence (NBD), specifically the use of simulations within Command and Control (C2) environments, imposes a number of requirements. Simulations must be reliable and respond in a timely manner; otherwise the commander will have no confidence in using simulation as a tool. An important aspect of these requirements is the provision of fault-tolerant simulations, in which failures are detected and resolved in a consistent manner. Given the distributed nature of many military simulation systems, services for fault tolerance in distributed simulations are desirable. The main architecture for distributed simulations within the military domain, the High Level Architecture (HLA), does not provide support for the development of fault-tolerant simulations.

A common approach to fault tolerance in distributed systems is checkpointing. In this approach, states of the system are persistently stored throughout its operation; in case a failure occurs, the system is restored using a previously saved state. Given the above-mentioned shortcomings of the HLA standard, this thesis explores the development of fault-tolerance mechanisms in the context of the HLA. More specifically, the design, implementation and evaluation of fault-tolerance mechanisms based on checkpointing are described and discussed.
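The checkpointing approach described above can be sketched generically: save the simulation state at intervals, and after a failure restart from the last saved state, re-executing only the steps since then. This is a minimal language-agnostic sketch, not the HLA services or the thesis's implementation; the file name and state layout are assumptions:

```python
# Generic periodic checkpoint/restore (illustrative, not the HLA API).
import pickle

CKPT = "federate.ckpt"   # hypothetical checkpoint file

def save_checkpoint(state, path=CKPT):
    with open(path, "wb") as f:
        pickle.dump(state, f)

def restore_checkpoint(path=CKPT):
    with open(path, "rb") as f:
        return pickle.load(f)

state = {"sim_time": 0.0, "entities": {}}
for step in range(10):
    state["sim_time"] += 1.0
    state["entities"][step] = step * step
    if step % 5 == 0:                 # periodic checkpoint
        save_checkpoint(state)

# After a failure, restart from the last saved state; only the steps since
# the checkpoint (here, steps 6-9) need to be re-executed.
recovered = restore_checkpoint()
print(recovered["sim_time"])          # time of the last checkpoint, taken at step 5
```

The consistency problem the thesis addresses is what this sketch glosses over: in a distributed federation, each federate's checkpoint must belong to a mutually consistent global snapshot, or restoring would leave the federates disagreeing about the past.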
