331

On reliable and scalable management of wireless sensor networks

Bapat, Sandip Shriram, January 2006 (has links)
Thesis (Ph. D.)--Ohio State University, 2006. / Title from first page of PDF file. Includes bibliographical references (p. 164-170).
332

Meeting Data Sharing Needs of Heterogeneous Distributed Users

Zhan, Zhiyuan 16 January 2007 (has links)
The fast growth of wireless networking and mobile computing devices has enabled us to access information from anywhere at any time. However, varying user needs and system resource constraints are two major heterogeneity factors that pose a challenge to information sharing systems. For instance, when a new information item is produced, different users may have different requirements for when the new value should become visible. The resources that each device can contribute to such information sharing applications also vary. Therefore, how to enable information sharing across computing platforms with varying resources to meet different user demands is an important problem for distributed systems research. In this thesis, we address the heterogeneity challenge faced by such systems. We assume that shared information is encapsulated in distributed objects, and we use object replication to increase system scalability and robustness, which introduces the consistency problem. Many consistency models have been proposed in recent years, but they are either too strong and do not scale very well, or too weak to meet many users' requirements. We propose a Mixed Consistency (MC) model as a solution. We introduce an access-constraints-based approach to combine strong and weak consistency models. We also propose an MC protocol that combines existing implementations with minimal modifications. It is designed to tolerate crash failures and slow processes/communication links in the system. We also explore how the heterogeneity challenge can be addressed in the transport layer by developing an agile dissemination protocol. We implement our MC protocol on top of a distributed publisher-subscriber middleware, Echo, and measure the performance of our MC implementation. The results of the experiments are consistent with our expectations. Based on the functionality and performance of mixed consistency protocols, we believe that this model is effective in addressing the heterogeneity of user requirements and available resources in distributed systems.
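For illustration only, the following minimal Python sketch captures the access-constraint flavour of such a mixed-consistency design under a deliberately simplified assumption: strong accesses are ordered through a central sequencer, while weak accesses use the local, possibly stale, replica. The names and structure are hypothetical and are not the protocol from the thesis.

    class Sequencer:
        """Totally orders strong writes and pushes them synchronously to every replica."""
        def __init__(self):
            self.replicas = []

        def register(self, replica):
            self.replicas.append(replica)

        def strong_write(self, key, value):
            for r in self.replicas:
                r.store[key] = value

    class Replica:
        def __init__(self, sequencer):
            self.store = {}            # local copy of the shared objects
            self.pending = []          # weak writes not yet propagated
            self.sequencer = sequencer
            sequencer.register(self)

        def write(self, key, value, mode="weak"):
            if mode == "strong":
                self.flush()                               # access constraint: order own weak writes first
                self.sequencer.strong_write(key, value)
            else:
                self.store[key] = value                    # apply locally, propagate lazily
                self.pending.append((key, value))

        def read(self, key, mode="weak"):
            if mode == "strong":
                self.flush()
            return self.store.get(key)

        def flush(self):
            for key, value in self.pending:
                self.sequencer.strong_write(key, value)
            self.pending.clear()

    seq = Sequencer()
    a, b = Replica(seq), Replica(seq)
    a.write("x", 1, mode="weak")        # only a's local copy is guaranteed to see x == 1
    a.write("x", 2, mode="strong")      # ordered and applied on every replica
    print(b.read("x", mode="strong"))   # -> 2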
333

Hybrid CDN-P2P Architecture for Multimedia Streaming

Oztoprak, Kasim 01 August 2008 (has links) (PDF)
In this thesis, the problems caused by peer behavior in peer-to-peer (P2P) video streaming are investigated. First, peer behaviors are modeled using two-dimensional continuous-time Markov chains to investigate the reliability of P2P video streaming systems. Then a metric is proposed to evaluate the dynamic behavior and evolution of the P2P overlay network. Next, a hybrid geographical location-time and interest-based clustering algorithm is proposed to improve the success ratio and reduce the delivery time of required content. Finally, a Hybrid Fault Tolerant Video Streaming System (HFTS) over P2P networks is designed, conforming to the required Quality of Service (QoS) and fault tolerance. The results indicate that the required QoS can be achieved in streaming video applications using the proposed hybrid approach.
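As a rough illustration of this modeling approach (not the thesis model, and with invented rates), a two-dimensional continuous-time Markov chain over a peer's (online/offline, sharing/free-riding) state can be simulated with competing exponential clocks:

    import random

    RATES = {  # assumed transition rates per hour, one clock per dimension
        "go_offline": 0.5, "go_online": 1.0,
        "stop_sharing": 0.3, "start_sharing": 0.6,
    }

    def simulate(hours=10_000.0, seed=1):
        rng = random.Random(seed)
        online, sharing = True, True
        t, useful = 0.0, 0.0                      # time spent both online and sharing
        while t < hours:
            clocks = [("conn", RATES["go_offline"] if online else RATES["go_online"]),
                      ("coop", RATES["stop_sharing"] if sharing else RATES["start_sharing"])]
            total = sum(rate for _, rate in clocks)
            dwell = rng.expovariate(total)        # exponential holding time in the current state
            if online and sharing:
                useful += min(dwell, hours - t)
            t += dwell
            # choose which dimension fires, proportionally to its rate
            fired = rng.choices([name for name, _ in clocks],
                                weights=[rate for _, rate in clocks])[0]
            if fired == "conn":
                online = not online
            else:
                sharing = not sharing
        return useful / hours

    print(f"estimated fraction of time online and sharing: {simulate():.3f}")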
334

A prognostic health management based framework for fault-tolerant control

Brown, Douglas W. 15 June 2011 (has links)
The emergence of complex and autonomous systems, such as modern aircraft, unmanned aerial vehicles (UAVs) and automated industrial processes, is driving the development and implementation of new control technologies aimed at accommodating incipient failures to maintain system operation during an emergency. The motivation for this research began in the area of avionics and flight control systems, with the purpose of improving aircraft safety. A prognostics health management (PHM) based fault-tolerant control architecture can increase safety and reliability by detecting and accommodating impending failures, thereby minimizing the occurrence of unexpected, costly and possibly life-threatening mission failures; it can also reduce unnecessary maintenance actions and extend system availability and reliability. Recent developments in failure prognosis and fault-tolerant control (FTC) provide a basis for a prognosis-based reconfigurable control framework. Key work in this area considers: (1) long-term lifetime predictions as a design constraint using optimal control; (2) the use of model predictive control to retrofit existing controllers with real-time fault detection and diagnosis routines; (3) hybrid hierarchical approaches to FTC that take advantage of control reconfiguration at multiple levels, or layers, enabling set-point reconfiguration, system restructuring and path/mission re-planning. Combining these control elements in a hierarchical structure allows for the development of a comprehensive framework for prognosis-based FTC. First, the PHM-based reconfigurable controls framework presented in this thesis is given as one approach to a much larger hierarchical control scheme. This begins with a brief overview of a broader three-tier hierarchical control architecture with three layers: supervisory, intermediate, and low-level. The supervisory layer manages high-level objectives. The intermediate layer redistributes component loads among multiple sub-systems. The low-level layer reconfigures the set-points used by the local production controller, thereby trading off system performance for an increase in remaining useful life (RUL). Next, a low-level reconfigurable controller is defined as a time-varying multi-objective criterion function with appropriate constraints to determine optimal set-point reconfiguration. A set of necessary conditions is established to ensure the stability and boundedness of the composite system. In addition, the error bounds corresponding to long-term state-space prediction are examined; from these error bounds, the point estimate and corresponding uncertainty boundaries for the RUL estimate can be obtained. The computational efficiency of the controller is also examined, using the average number of floating-point operations per iteration as a standard metric of comparison. Finally, results are obtained for an avionics-grade triplex-redundant electro-mechanical actuator with a specific fault mode: insulation breakdown between winding turns in a brushless DC motor is used as the test case. A prognostic model is developed relating motor operating conditions to RUL. Standard metrics for determining the feasibility of RUL reconfiguration are defined and used to study the performance of the reconfigured system; more specifically, the effects of the prediction horizon, model uncertainty, operating conditions and load disturbance on the RUL during reconfiguration are simulated using MATLAB and Simulink.
Contributions of this work include defining a control architecture, proving stability and boundedness, deriving the control algorithm, and demonstrating feasibility with an example.
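A minimal sketch of the low-level trade-off described above, under an invented prognostic model and invented cost weights (purely illustrative, not the thesis controller): the set-point is chosen to balance tracking error against the remaining useful life it implies.

    def rul_hours(setpoint):
        """Assumed prognostic model: higher set-points stress the actuator more."""
        return 500.0 / (1.0 + 0.05 * setpoint ** 2)

    def cost(setpoint, reference, w_perf=1.0, w_rul=200.0):
        tracking_error = (reference - setpoint) ** 2
        return w_perf * tracking_error + w_rul / rul_hours(setpoint)

    def reconfigure(reference, candidates):
        """Pick the admissible set-point with the lowest combined cost."""
        return min(candidates, key=lambda sp: cost(sp, reference))

    candidates = [i / 10 for i in range(0, 101)]        # admissible set-points 0.0 .. 10.0
    print("reference 8.0 -> reconfigured set-point", reconfigure(8.0, candidates))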
335

Statistical algorithms for circuit synthesis under process variation and high defect density

Singh, Ashish Kumar, 1981- 29 August 2008 (has links)
As technology scales, there is a need to develop design and optimization algorithms under various scenarios of uncertainty. These uncertainties are introduced by process variation and impact both delay and leakage. For future technologies at the end of CMOS scaling, not only process variation but also the device defect density is projected to be very high. Thus, realizing error-tolerant implementations of Boolean functions with minimal redundancy overhead remains a challenging task. This dissertation is concerned with the challenges of low-power, low-area digital circuit design under high parametric variability and high defect density. Technology mapping provides an ideal starting point for leakage reduction because of the greater structural freedom in the choice of implementations. We first describe an algorithm for technology mapping for yield enhancement that explicitly takes parameter variability into account. We then show how leakage can be reduced by accounting for its dependence on the signal state, and develop a fast gain-based technology mapping algorithm. In some scenarios the state probabilities cannot be precise point values and are instead modeled as intervals. We extend the notion of mean leakage to the worst-case mean leakage, defined as the sum of the maximal mean leakage of the circuit gates over the feasible probability realizations. The gain-based algorithm is generalized to optimize this proxy leakage metric by casting the problem within the framework of robust dynamic programming. Testing is performed by selecting various instance probabilities for the primary inputs that deviate from the point probabilities for which the point-probability-based gain-based mapper was run; for certain test probabilities, the interval-probability-based mapper yields a leakage improvement over the point-probability-based mapper. Next, we present techniques based on coding theory for implementing Boolean functions in highly defective fabrics that allow errors to be tolerated to a certain degree. The novelty of this work is that the structure of Boolean functions is exploited to minimize the redundancy overhead. Finally, we propose an efficient analysis approach for statistical timing that correctly propagates slope in path-based statistical timing analysis. The proposed algorithm scales up to one million paths.
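As an illustration of the worst-case mean leakage metric (with made-up per-state leakage numbers, not real library data): the mean leakage of a gate is a state-probability-weighted sum, and with interval signal probabilities its maximum lies at the interval endpoints because the objective is multilinear in the probabilities.

    from itertools import product

    # per-state leakage (nA) of a hypothetical 2-input gate, indexed by inputs (a, b)
    LEAK = {(0, 0): 10.0, (0, 1): 25.0, (1, 0): 18.0, (1, 1): 37.0}

    def mean_leakage(pa, pb):
        """Expected leakage given signal probabilities pa = P(a=1), pb = P(b=1)."""
        total = 0.0
        for a, b in product((0, 1), repeat=2):
            p = (pa if a else 1 - pa) * (pb if b else 1 - pb)
            total += p * LEAK[(a, b)]
        return total

    def worst_case_mean_leakage(pa_interval, pb_interval):
        # the maximum of a multilinear function over a box is attained at a vertex
        return max(mean_leakage(pa, pb)
                   for pa in pa_interval for pb in pb_interval)

    print(worst_case_mean_leakage((0.2, 0.6), (0.4, 0.9)))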
336

Computer Fault Tolerance Study Inspired by the Immune System

Canibek, Atif Deger 01 December 2005 (has links) (PDF)
Since the advent of computers, numerous approaches have been taken to create hardware systems that provide a high degree of reliability even in the presence of errors. This study seeks to address the problem from a biological perspective, using the human immune system as a source of inspiration. The immune system uses many ingenious methods to provide reliable operation in the body and so may suggest how similar methods can be used in the design of reliable systems. This study provides a brief introduction to a relatively new discipline, artificial immune systems (AIS), and demonstrates a new application of AIS with an immunologically inspired approach to fault tolerance. It is shown that a finite state machine can be provided with a hardware immune system, yielding a novel form of fault detection that can identify faulty states during a normal operating cycle. This approach is called immunotronics.
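A toy software sketch of the immunotronics idea, using a made-up state machine: the valid ("self") transitions are stored, and any observed transition outside that set is flagged as non-self, i.e., a fault.

    class ImmuneMonitor:
        def __init__(self, valid_transitions):
            self.self_set = set(valid_transitions)   # (state, input, next_state) triples

        def check(self, state, symbol, next_state):
            """Return True for a healthy transition, False for a 'non-self' (faulty) one."""
            return (state, symbol, next_state) in self.self_set

    # valid behaviour of a toy two-state controller
    valid = {("idle", "start", "run"), ("run", "stop", "idle"), ("run", "tick", "run")}
    monitor = ImmuneMonitor(valid)

    print(monitor.check("idle", "start", "run"))   # True  - normal operation
    print(monitor.check("run", "tick", "idle"))    # False - faulty transition detected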
337

Fault-tolerant resource allocation of an airborne network

Guo, Yan. January 2007 (has links)
Thesis (M.S.)--State University of New York at Binghamton, Thomas J. Watson School of Engineering and Applied Science, Department of Electrical and Computer Engineering, 2007. / Includes bibliographical references.
338

Collusions and Privacy in Rational-Resilient Gossip

Decouchant, Jérémie 09 November 2015 (has links)
Gossip-based content dissemination protocols are a scalable and cheap alternative to centralised content sharing systems. However, it is well known that these protocols suffer from rational nodes, i.e., nodes that aim at downloading the content without contributing their fair share to the system. While the problem of rational nodes that act individually has been well addressed in the literature, colluding rational nodes are still an open issue. In addition, previous rational-resilient gossip-based solutions require nodes to log their interactions with others and to disclose the content of their logs, which may reveal sensitive information. Nowadays, a consensus exists on the necessity of reinforcing the control of users over their personal information. Nonetheless, to the best of our knowledge, no privacy-preserving rational-resilient gossip-based content dissemination system exists. The contributions of this thesis are twofold. First, we present AcTinG, a protocol that prevents rational collusions in gossip-based content dissemination protocols, while guaranteeing zero false positive accusations. AcTinG makes nodes maintain secure logs and mutually check each other's correctness through verifiable but non-predictable audits. By construction, it is a Nash equilibrium. A performance evaluation shows that AcTinG is able to deliver all messages despite the presence of colluders, and exhibits scalability properties similar to those of standard gossip-based dissemination protocols. Second, we describe PAG, the first accountable and privacy-preserving gossip protocol. PAG builds on a monitoring infrastructure and homomorphic cryptographic procedures to provide privacy to nodes while making sure that nodes forward the content they receive. The theoretical evaluation of PAG shows that breaking the privacy of interactions is difficult, even in the presence of a global and active opponent. We assess this protocol in terms of both privacy and performance using a deployment on a cluster of machines, simulations involving up to a million nodes, and theoretical proofs. The bandwidth overhead is much lower than that of existing anonymous communication protocols, while remaining practical in terms of CPU usage.
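For illustration, a minimal hash-chained log conveys the secure-log building block such protocols rely on; this is an assumed simplification, not the actual AcTinG log format or audit procedure.

    import hashlib, json

    class SecureLog:
        def __init__(self):
            self.entries = []                      # list of (record, chained digest) pairs

        def append(self, record):
            prev = self.entries[-1][1] if self.entries else "0" * 64
            payload = prev + json.dumps(record, sort_keys=True)
            self.entries.append((record, hashlib.sha256(payload.encode()).hexdigest()))

        def audit(self):
            """Recompute the chain; rewriting any entry breaks every later digest."""
            prev = "0" * 64
            for record, digest in self.entries:
                payload = prev + json.dumps(record, sort_keys=True)
                if digest != hashlib.sha256(payload.encode()).hexdigest():
                    return False
                prev = digest
            return True

    log = SecureLog()
    log.append({"sent_chunk": 42, "to": "peer-B"})
    log.append({"received_chunk": 43, "from": "peer-C"})
    print(log.audit())                                          # True
    log.entries[0] = ({"sent_chunk": 0, "to": "peer-B"}, log.entries[0][1])   # tamper
    print(log.audit())                                          # False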
339

Design, Optimization, and Formal Verification of Circuit Fault-Tolerance Techniques

Burlyaev, Dmitry 26 November 2015 (has links)
Technology shrinking and voltage scaling increase the risk of fault occurrences in digital circuits. To address this challenge, engineers use fault-tolerance techniques to mask or, at least, detect faults. These techniques are especially needed in safety-critical domains (e.g., aerospace, medical, nuclear), where ensuring circuit functionality and fault tolerance is crucial. However, the verification of functional and fault-tolerance properties is a complex problem that cannot be solved with simulation-based methodologies due to the need to check a huge number of executions and fault-occurrence scenarios. The optimization of the overheads imposed by fault-tolerance techniques also requires proof that the circuit keeps its fault-tolerance properties after the optimization. In this work, we propose a verification-based optimization of existing fault-tolerance techniques, as well as the design of new techniques and their formal verification using theorem proving. We first investigate how some majority voters can be removed from Triple-Modular Redundant (TMR) circuits without violating their fault-tolerance properties. The developed methodology clarifies how to take into account circuit-native error-masking capabilities that may exist due to the structure of the combinational part or due to the way the circuit is used and communicates with the surrounding device. Second, we propose a family of time-redundant fault-tolerance techniques as automatic circuit transformations. They require fewer hardware resources than TMR alternatives and can be easily integrated into EDA tools. The transformations are based on the novel idea of dynamic time redundancy, which allows the redundancy level to be changed "on the fly" without interrupting the computation. Therefore, time redundancy can be used only in critical situations (e.g., above the Earth's poles where the radiation level is increased), during the processing of crucial data (e.g., the encryption of selected data), or during critical processes (e.g., a satellite computer reboot). Third, merging dynamic time redundancy with a micro-checkpointing mechanism, we have created a double-time-redundancy transformation capable of masking transient faults. Our technique makes the recovery procedure transparent, and the circuit input/output behavior remains unchanged even under faults. Due to the complexity of that method and the need to provide full assurance of its fault-tolerance capabilities, we have formally certified the technique using the Coq proof assistant. The developed proof methodology can be applied to certify other fault-tolerance techniques implemented through circuit transformations at the netlist level.
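A highly simplified software analogue of the dynamic time-redundancy idea (with an invented fault model, not the circuit-level transformation itself): the same fault-prone computation is repeated k times and majority-voted, and k can be raised on the fly for critical phases.

    import random
    from collections import Counter

    def faulty_step(x, fault_rate, rng):
        """Toy combinational function with an injected transient bit-flip."""
        y = (x ^ (x >> 1)) & 0xFF
        if rng.random() < fault_rate:
            y ^= 1 << rng.randrange(8)             # single-event upset on one output bit
        return y

    def time_redundant(x, k, fault_rate=0.1, seed=0):
        rng = random.Random(seed)
        votes = Counter(faulty_step(x, fault_rate, rng) for _ in range(k))
        return votes.most_common(1)[0][0]          # majority-voted result

    print(time_redundant(0b10110101, k=1))         # low-criticality phase: no redundancy
    print(time_redundant(0b10110101, k=3))         # critical phase: triple time redundancy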
340

Reliable broadcast protocols experimentation and evaluation environment (ADC)

Barcelos, Patricia Pitthan de Araujo January 1996 (has links)
A recent trend in computing systems is to distribute the computation among several physical processors. This leads to two types of systems: tightly coupled systems and loosely coupled systems. This work focuses on computing systems classified as loosely coupled, or distributed systems, as they are commonly known. According to [BAB 86], a distributed system can be defined as a set of autonomous processors that do not share memory, have no global clocks, and communicate only by message exchange. The inherent requirements of distributed systems include reliability and availability. These requirements have led to growing interest in fault-tolerance techniques, whose goal is to keep the distributed system consistent despite failures. A fault-tolerance technique largely used in distributed systems is reliable broadcast. Reliable broadcast is a software redundancy technique in which a processor disseminates a value to the other processors of a distributed system in which failures can occur [BAB 85]. Because it is a basic communication technique, several fault-tolerance procedures are based on reliable broadcast. This work describes the implementation of a support environment for distributed systems called the Reliable Broadcast Protocols Experimentation and Evaluation Environment (ADC). In this environment, reliable broadcast resources are used to obtain agreement among all failure-free members of the system. This agreement, known as consensus, is obtained through consensus algorithms, which aim to provide the degree of reliability required by distributed systems. The ADC was developed on Sun workstations (SunOS) using the heterogeneous network operating system HetNOS [BAA 93], developed at UFRGS. The environment was implemented based on a study of reliable broadcast protocols [BAR 94]. Through the ADC it is possible to simulate the execution of reliable broadcast protocols by applying proposed models to them. Results are extracted from these executions and analyzed, mainly with respect to performance, reliability, and complexity. Some of the reliable broadcast protocols available in the literature were used to support both the implementation of the ADC and the analysis of the proposed models. The main goal of this environment is experimentation, that is, verifying how the theory of distributed systems relates to practice when a software redundancy technique, reliable broadcast, is used. Through this environment it becomes possible to determine parameters such as the number of broadcast messages exchanged between processes, the number of retransmission messages sent, and the total number of messages emitted during the execution of a model. These parameters yield a consistent analysis of reliable broadcast protocols.
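As an illustration of the kind of experiment such an environment enables (using an assumed textbook protocol, not one of the protocols the thesis evaluates), a toy eager reliable broadcast, in which every process relays a value the first time it delivers it, can be simulated to count the messages exchanged:

    def eager_reliable_broadcast(n, crashed=frozenset()):
        delivered = {0}                          # the broadcaster delivers locally ...
        pending = [(0, q) for q in range(1, n)]  # ... and sends to every other process
        sent = n - 1
        while pending:
            _, receiver = pending.pop()
            if receiver in crashed or receiver in delivered:
                continue
            delivered.add(receiver)              # first delivery: relay to all other processes
            for q in range(n):
                if q != receiver:
                    pending.append((receiver, q))
                    sent += 1
        return len(delivered), sent

    procs, msgs = eager_reliable_broadcast(n=8, crashed={3})
    print(f"{procs} processes delivered the value using {msgs} point-to-point messages")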
