51

Desenvolvimento e teste de um monitor de barramento I2C para proteção contra falhas transientes / Development and test of an I2C bus monitor for protection against transient faults

Carvalho, Vicente Bueno January 2016 (has links)
The communication between integrated circuits has evolved in performance and reliability over the years. Early designs used parallel buses, which require a large number of wires, consume many input and output pins on the integrated circuits, and are highly susceptible to electromagnetic interference (EMI) and electrostatic discharge (ESD). It soon became clear that the serial bus model had a large advantage over its predecessor: it uses fewer lines, simplifying the PCB layout process and improving signal integrity, which allows much higher speeds despite the reduced number of wires. This work compares the main low- and medium-speed serial protocols, highlighting the positive and negative characteristics of each and, as a result, matching each protocol to its most appropriate application segment. The objective of this work is to use the results of this comparative analysis to propose a hardware apparatus that fills a gap in the I2C serial protocol, which is widely used in industry but has limitations when an application requires high reliability. The apparatus, here called the I2C Bus Monitor, verifies data integrity, reports metrics on communication quality, detects transient faults and permanent errors on the bus, and acts on the devices connected to the bus to recover from such errors, thereby avoiding failures. A fault injection mechanism was developed to simulate faults in devices connected to the bus and thus verify the monitor's response. Results on the Cypress PSoC 5 show that the proposed solution has a low cost in terms of area and no impact on communication performance.
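The monitor's core behaviour described above — distinguishing transient faults from permanent errors and acting on the bus to recover — can be sketched in software. This is a minimal illustrative model, not the thesis's hardware design; all names and the threshold policy are assumptions made for the sketch.

```python
# Hypothetical sketch of the monitor's error-handling logic: it observes
# each I2C transfer, keeps quality metrics, treats isolated failures as
# transient faults (retry) and repeated failures as permanent errors
# (recovery action on the device). The threshold value is illustrative.

class I2CBusMonitor:
    def __init__(self, persistent_threshold=3):
        self.persistent_threshold = persistent_threshold
        self.consecutive_failures = {}   # per-device failure streaks
        self.stats = {"transfers": 0, "faults": 0}

    def observe_transfer(self, device_addr, acked, data_ok):
        """Record one transfer; return a recovery action or None."""
        self.stats["transfers"] += 1
        if acked and data_ok:
            self.consecutive_failures[device_addr] = 0
            return None
        self.stats["faults"] += 1
        streak = self.consecutive_failures.get(device_addr, 0) + 1
        self.consecutive_failures[device_addr] = streak
        if streak >= self.persistent_threshold:
            return "reset_device"   # permanent error: act on the device
        return "retry"              # transient fault: request a retransmission
```

A successful transfer resets a device's failure streak, so only consecutive failures escalate to a device reset — mirroring the distinction the abstract draws between transient faults and permanent errors.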
52

Improving the performance of railway track-switching through the introduction of fault tolerance

Bemment, Samuel D. January 2018 (has links)
In the future, the performance of the railway system must be improved to accommodate increasing passenger volumes and service quality demands. Track switches are a vital part of the rail infrastructure, enabling traffic to take different routes. All modern switch designs have evolved from a design first patented in 1832. However, switches present single points of failure, require frequent and costly maintenance interventions, and restrict network capacity. Fault tolerance is the practice of preventing subsystem faults propagating to whole-system failures. Existing switches are not considered fault tolerant. This thesis describes the development and potential performance of fault-tolerant railway track switching solutions. The work first presents a requirements definition and evaluation framework which can be used to select candidate designs from a range of novel switching solutions. A candidate design with the potential to exceed the performance of existing designs is selected. This design is then modelled to ascertain its practical feasibility alongside potential reliability, availability, maintainability and capacity performance. The design and construction of a laboratory scale demonstrator of the design is described. The modelling results show that the performance of the fault tolerant design may exceed that of traditional switches. Reliability and availability performance increases significantly, whilst capacity gains are present but more marginal without the associated relaxation of rules regarding junction control. However, the work also identifies significant areas of future work before such an approach could be adopted in practice.
53

Assembly tolerance analysis in geometric dimensioning and tolerancing

Tangkoonsombati, Choowong 25 August 1994 (has links)
Tolerance analysis is a major link between design and manufacturing. An assembly or a part should be designed based on its functions, manufacturing processes, desired product quality, and manufacturing cost. Assembly tolerance analysis performed at the design stage can reduce potential manufacturing and assembly problems. Several commonly used assembly tolerance analysis models and their limitations are reviewed in this research. A new assembly tolerance analysis model is also developed to address the limitations of the existing models. The new model elucidates the impact of the flatness symbol (one of the Geometric Dimensioning and Tolerancing (GD&T) specification symbols) and condenses the design variables into simple mathematical equations. The new model is based on a beta distribution of part dimensions. In addition, a group of manufacturing variables, including quality factor, process tolerance, and mean shift, is integrated into the new assembly tolerance analysis model. A computer-integrated system has been developed to handle four support systems for performing tolerance analysis in a single computer application. These support systems are: 1) the CAD drawing system, 2) the Geometric Dimensioning and Tolerancing (GD&T) specification system, 3) the assembly tolerance analysis model, and 4) the tolerance database, operating under the Windows environment. Dynamic Data Exchange (DDE) is applied to exchange data between two different Windows applications, improving information transfer between the support systems. In this way, the user is able to use this integrated system to select a GD&T specification, determine a critical assembly dimension and tolerance, and access the tolerance database during the design stage simultaneously. Examples are presented to illustrate the application of the integrated tolerance analysis system. / Graduation date: 1995
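For readers unfamiliar with tolerance stack-up, the arithmetic being generalized above can be shown with the two textbook models. This sketch uses the standard worst-case and root-sum-square (RSS) estimates, not the thesis's beta-distribution model; the part tolerances are made-up numbers.

```python
# Classic tolerance stack-up for an assembly dimension that is a sum of
# part dimensions: worst-case simply adds the tolerances, while the
# statistical RSS model assumes independent variation and adds in
# quadrature. (Illustrative only; the thesis refines such models with
# beta-distributed dimensions and manufacturing variables.)
import math

def worst_case(tols):
    """Guaranteed bound: every part at its tolerance limit."""
    return sum(tols)

def rss(tols):
    """Root-sum-square: statistical estimate for independent variation."""
    return math.sqrt(sum(t * t for t in tols))

part_tols = [0.10, 0.05, 0.08]            # hypothetical +/- part tolerances
print(round(worst_case(part_tols), 2))    # 0.23  (pessimistic bound)
print(round(rss(part_tols), 4))           # 0.1375 (tighter, probabilistic)
```

The gap between the two estimates is what motivates statistical models like the one in this thesis: the worst-case bound is safe but forces unnecessarily tight part tolerances.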
54

The limits of network transparency in a distributed programming language

Collet, Raphaël 19 December 2007 (has links)
This dissertation presents a study on the extent and limits of network transparency in distributed programming languages. This property states that the result of a distributed program is the same as if it were executed on a single computer, in the case when no failure occurs. The programming language may also be network aware if it allows the programmer to control how a program is distributed and how it behaves on the network. Both aim at simplifying distributed programming by making non-functional aspects of a program more modular. We show that network transparency is not only possible, but also practical: it can be efficient, and smoothly extended in the case of partial failure. We give a proof of concept with the programming language Oz and the system Mozart, of which we have reimplemented the distribution support on top of the Distribution Subsystem (DSS). We have extended the language to control which distribution algorithms are used in a program, and to reflect partial failures in the language. Both extensions make it possible to handle non-functional aspects of a program without breaking the property of network transparency.
55

Logging and Recovery in a Highly Concurrent Database

Keen, John S. 01 June 1994 (has links)
This report addresses the problem of fault tolerance to system failures for database systems that are to run on highly concurrent computers. It assumes that, in general, an application may have a wide distribution in the lifetimes of its transactions. Logging remains the method of choice for ensuring fault tolerance. Generational garbage collection techniques manage the limited disk space reserved for log information; this approach does not require periodic checkpoints and is well suited for applications with a broad range of transaction lifetimes. An arbitrarily large collection of parallel log streams provides the necessary disk bandwidth.
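The generational idea described above can be sketched in a few lines: log records of short-lived transactions die young and are reclaimed cheaply, while records of long-lived transactions are promoted rather than forcing a checkpoint. This is an illustrative model only; the data layout and method names are assumptions, not the report's design.

```python
# Toy model of generational space management for a transaction log:
# the young generation is collected frequently; records whose transaction
# is still live survive collection and are promoted to the old generation.
# No global checkpoint is ever taken.

class GenerationalLog:
    def __init__(self):
        self.young = []          # recent records, scanned frequently
        self.old = []            # promoted records of long-lived transactions
        self.finished = set()    # transactions whose records are now garbage

    def append(self, txn_id, record):
        self.young.append((txn_id, record))

    def finish(self, txn_id):
        """Mark a transaction's log records as reclaimable."""
        self.finished.add(txn_id)

    def collect_young(self):
        """Reclaim the young generation; promote records of live txns."""
        survivors = [(t, r) for (t, r) in self.young
                     if t not in self.finished]
        self.old.extend(survivors)   # long-lived records move up a generation
        self.young = []
```

Short-lived transactions never reach the old generation at all, which is why a broad spread of transaction lifetimes suits this scheme better than checkpoint-based truncation.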
56

Reliable Interconnection Networks for Parallel Computers

Dennison, Larry R. 01 October 1991 (has links)
This technical report describes a new protocol, the Unique Token Protocol, for reliable message communication. This protocol eliminates the need for end-to-end acknowledgments and minimizes the communication effort when no dynamic errors occur. Various properties of end-to-end protocols are presented, and the Unique Token Protocol is shown to solve the associated problems. It eliminates source buffering by maintaining at least two copies of a message in the network. A token is used to decide whether a message was delivered to the destination exactly once. This technical report also presents a possible implementation of the protocol in a wormhole-routed, 3-D mesh network.
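The protocol's central invariant — the network holds at least two copies of a message until a unique token settles exactly-once delivery, so the source keeps no buffer — can be modelled abstractly. This sketch is an assumption-laden illustration, not the report's implementation; class and method names are invented for the example.

```python
# Abstract model of the Unique Token Protocol's invariant: two in-network
# copies protect against a single dynamic error, and the unique token
# makes delivery exactly-once without an end-to-end acknowledgment.

class Network:
    def __init__(self):
        self.copies = {}        # msg_id -> number of in-network copies
        self.delivered = set()  # msg_ids whose token has been consumed

    def inject(self, msg_id):
        # The source hands the message over and forgets it: the network
        # immediately holds two copies (e.g. at adjacent nodes).
        self.copies[msg_id] = 2

    def drop_copy(self, msg_id):
        # A dynamic error destroys one copy; the other still survives.
        if self.copies.get(msg_id, 0) > 1:
            self.copies[msg_id] -= 1

    def deliver(self, msg_id):
        # The destination consumes the unique token exactly once;
        # any later copy of the same message is recognised and dropped.
        if msg_id in self.delivered:
            return False        # token already spent: duplicate suppressed
        self.delivered.add(msg_id)
        self.copies[msg_id] = 0  # remaining copies are discarded
        return True
```

Because the second copy lives inside the network rather than at the source, the sender needs neither a retransmission buffer nor an end-to-end acknowledgment in the error-free case.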
57

ADAM: A Decentralized Parallel Computer Architecture Featuring Fast Thread and Data Migration and a Uniform Hardware Abstraction

Huang, Andrew "bunnie" 01 June 2002 (has links)
The furious pace of Moore's Law is driving computer architecture into a realm where the speed of light is the dominant factor in system latencies. The number of clock cycles needed to span a chip is increasing, while the number of bits that can be accessed within a clock cycle is decreasing. Hence, it is becoming more difficult to hide latency. One alternative is to reduce latency by migrating threads and data, but the overhead of existing implementations has previously made migration an unserviceable solution. I present an architecture, implementation, and mechanisms that reduce the overhead of migration to the point where migration is a viable supplement to other latency-hiding mechanisms, such as multithreading. The architecture is abstract, and presents programmers with a simple, uniform, fine-grained multithreaded parallel programming model with implicit memory management. In other words, the spatial nature and implementation details (such as the number of processors) of a parallel machine are entirely hidden from the programmer. Compiler writers are encouraged to devise programming languages for the machine that guide a programmer to express their ideas in terms of objects, since objects exhibit an inherent physical locality of data and code. The machine implementation can then leverage this locality to automatically distribute data and threads across the physical machine by using a set of high-performance migration mechanisms. An implementation of this architecture could migrate a null thread in 66 cycles -- over a factor of 1000 improvement over previous work. Performance also scales well; the time required to move a typical thread is only 4 to 5 times that of a null thread. Data migration performance is similar, and scales linearly with data block size.
Since the performance of the migration mechanism is on par with that of an L2 cache, the implementation simulated in my work has no data caches and relies instead on multithreading and the migration mechanism to hide and reduce access latencies.
58

Analysing Fault Tolerance for Erlang Applications

Nyström, Jan Henry January 2009 (has links)
ERLANG is a concurrent functional language, well suited for distributed, highly concurrent and fault-tolerant software. An important part of Erlang is its support for failure recovery. Fault tolerance is provided by organising the processes of an ERLANG application into tree structures, in which parent processes monitor failures of their children and are responsible for their restart. Libraries support the creation of such structures during system initialisation. A technique to automatically analyse the process structure of an ERLANG application from its source code is presented. The analysis exposes shortcomings in the fault tolerance properties of the application. First, the process structure is extracted through static analysis of the initialisation code of the application. Thereafter, analysis of the process structure checks two important properties of the fault handling mechanism: 1) that it will recover from any process failure, and 2) that it will not hide persistent errors. The technique has been implemented in a tool and applied to several OTP library applications and to a subsystem of a commercial system, the AXD 301 ATM switch. The static analysis of the ERLANG source code is achieved through symbolic evaluation, performed according to an abstraction of ERLANG's actual semantics. The actual semantics is formalised for a nontrivial part of the language, and it is proven that the abstraction of the semantics simulates the actual semantics. / ASTEC
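The first property checked by the analysis — that the application recovers from any process failure — amounts to verifying that every process sits under some supervisor. A toy version of that check, written here in Python over an already-extracted supervision structure (the thesis performs this on real ERLANG source via symbolic evaluation; the data layout below is an assumption for the sketch):

```python
# Toy supervision-structure check: a process failure is recoverable if the
# process is restarted by some supervisor, or if it is a top-level
# supervisor (which the runtime itself restarts).

def unsupervised(processes, supervision):
    """processes: set of all process names in the application.
    supervision: dict mapping each supervisor to the children it restarts.
    Returns processes whose failure no one would recover from."""
    monitored = {child for children in supervision.values()
                 for child in children}
    top_level = set(supervision) - monitored
    return processes - monitored - top_level

# Hypothetical application: "orphan" runs outside the supervision tree,
# which is exactly the kind of shortcoming the analysis exposes.
app = {"root_sup": ["db_sup", "worker_a"],
       "db_sup": ["db_worker"]}
procs = {"root_sup", "db_sup", "worker_a", "db_worker", "orphan"}
```

Running `unsupervised(procs, app)` on this example flags only `"orphan"`; the second property from the abstract (not hiding persistent errors) would need restart-intensity information as well and is not modelled here.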
59

Ultra low-power fault-tolerant SRAM design in 90nm CMOS technology

Wang, Kuande 15 July 2010
With the growth of mobile, biomedical and space applications, digital systems with low power consumption are required. As a main component of digital systems, low-power memories are especially desired. Reducing the power supply voltage to the sub-threshold region is one of the effective approaches for ultra low-power applications. However, the reduced Static Noise Margin (SNM) of Static Random Access Memory (SRAM) imposes great challenges on sub-threshold SRAM design. The conventional 6-transistor SRAM cell does not function properly in the sub-threshold supply voltage range because it does not have enough noise margin for reliable operation. In order to achieve ultra low power at sub-threshold operation, previous research has demonstrated that a read and write decoupled scheme is a good solution to the reduced-SNM problem. A Dual Interlocked Storage Cell (DICE) based SRAM cell was proposed to eliminate the drawback of the conventional DICE cell during read operation. This cell can mitigate single-event effects, improve stability and maintain the low-power characteristic of sub-threshold SRAM. In order to make the proposed SRAM cell work under different power supply voltages from 0.3 V to 0.6 V, an improved replica sense scheme was applied to produce a reference control signal with which the optimal read time could be achieved. In this thesis, a 2K × 8-bit SRAM test chip was designed, simulated and fabricated in the 90nm CMOS technology provided by ST Microelectronics. Simulation results suggest that the operating frequency at VDD = 0.3 V is up to 4.7 MHz with a power dissipation of 6.0 µW, while it is 45.5 MHz at VDD = 0.6 V, dissipating 140 µW. However, the area occupied by a single cell is larger than that of a conventional SRAM cell due to the additional transistors used. The main contribution of this thesis project is a new design that simultaneously addresses the ultra low-power and radiation-tolerance problems in large-capacity memory design.
60

Suppression and characterization of decoherence in practical quantum information processing devices

Silva, Marcus January 2008 (has links)
This dissertation addresses the issue of noise in quantum information processing devices. It is common knowledge that quantum states are particularly fragile to the effects of noise. In order to perform scalable quantum computation, it is necessary to suppress effective noise to levels which depend on the size of the computation. Various theoretical proposals have discussed how this can be achieved, under various assumptions about properties of the noise and the availability of qubits. We discuss new approaches to the suppression of noise, and propose experimental protocols for characterizing the noise. In the first part of the dissertation, we discuss a number of applications of teleportation to fault-tolerant quantum computation. We demonstrate how measurement-based quantum computation can be made inherently fault-tolerant by exploiting its relationship to teleportation. We also demonstrate how continuous variable quantum systems can be used as ancillas for computation with qubits, and how information can be reliably teleported between these different systems. Building on these ideas, we discuss how the necessary resource states for teleportation can be prepared by allowing quantum particles to be scattered by qubits, and investigate the feasibility of an implementation using superconducting circuits. In the second part of the dissertation, we propose scalable experimental protocols for extracting information about the noise. We concentrate on information which has direct practical relevance to methods of noise suppression. In particular, we demonstrate how standard assumptions about properties of the noise can be tested in a scalable manner. The experimental protocols we propose rely on symmetrizing the noise by random application of unitary operations. Depending on the symmetry group used, different information about the noise can be extracted.
We demonstrate, in particular, how to estimate the probability of a small number of qubits being corrupted, as well as how to test for a necessary condition for noise correlations. We conclude by demonstrating how, without relying on assumptions about the noise, the information obtained by symmetrization can also be used to construct protective encodings for quantum states.
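The quantity estimated in the last paragraph — the probability that a small number of qubits is corrupted — can be illustrated with a toy simulation. The noise model below (independent flips with a known rate) is purely an assumption for the sketch; the dissertation's protocols estimate such weight distributions on real devices via randomly applied unitaries, precisely so that independence need not be assumed.

```python
# Toy Monte Carlo estimate of the "corruption weight" distribution:
# Pr[exactly k of n qubits corrupted], here under an assumed independent
# flip channel with probability p_flip per qubit.
import random

def corruption_weight_probs(n_qubits, p_flip, trials, seed=1):
    """Estimate Pr[exactly k qubits corrupted] for k = 0..n_qubits."""
    rng = random.Random(seed)
    counts = [0] * (n_qubits + 1)
    for _ in range(trials):
        weight = sum(rng.random() < p_flip for _ in range(n_qubits))
        counts[weight] += 1
    return [c / trials for c in counts]

probs = corruption_weight_probs(n_qubits=4, p_flip=0.1, trials=50000)
# Under independence the weights are binomial, e.g. Pr[0 corrupted]
# should be close to 0.9**4 = 0.6561; correlated noise would deviate,
# which is what the proposed correlation test detects.
```

Comparing a measured weight distribution against the binomial prediction is, in spirit, the necessary-condition test for noise correlations mentioned above.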
