141

Robust verification of quantum computation

Gheorghiu, Alexandru January 2018 (has links)
Quantum computers promise to offer a considerable speed-up in solving certain problems, compared to the best classical algorithms. In many instances, the gap between quantum and classical running times is conjectured to be exponential. While this is great news for those applications where quantum computers would provide such an advantage, it also raises a significant challenge: how can classical computers verify the correctness of quantum computations? In attempting to answer this question, a number of protocols have been developed in which a classical client (referred to as verifier) can interact with one or more quantum servers (referred to as provers) in order to certify the correctness of a quantum computation performed by the server(s). These protocols are of one of two types: either there are multiple non-communicating provers, sharing entanglement, and the verifier is completely classical; or there is a single prover and the classical verifier has a device for preparing or measuring quantum states. The latter type of protocol is, arguably, more relevant to near-term quantum computers, since having multiple quantum computers that share a large amount of entanglement is, from a technological standpoint, extremely challenging. Before the realisation of practical single-prover protocols, a number of challenges need to be addressed: how robust are these protocols to noise on the verifier's device? Can the protocols be made fault-tolerant without significantly increasing the requirements of the verifier? How do we know that the verifier's device is operating correctly? Could this device be eliminated completely, thus having a protocol with a fully classical verifier and a single quantum prover? Our work attempts to provide answers to these questions. First, we consider a single-prover verification protocol developed by Fitzsimons and Kashefi and show that this protocol is indeed robust with respect to deviations on the quantum state prepared by the verifier. We show that this is true even if those deviations are the result of a correlation with the prover's system. We then use this result to give a verification protocol which is device-independent. The protocol consists of a verifier with a measurement device and a single prover. Device-independence means that the verifier need not trust the measurement device (nor the prover), which can be assumed to be fully malicious (though not communicating with the prover). A key element in realising this protocol is a robust technique of Reichardt, Unger and Vazirani for testing, using non-local correlations, that two untrusted devices share a large number of entangled states. This technique is referred to as rigidity of non-local correlations. Our second result is to prove a rigidity result for a type of quantum correlations known as steering correlations. To do this, we first show that steering correlations can be used in order to certify maximally entangled states, in a setting in which each test is independent of the previous one. We also show that the fidelity with which we characterise the state, in this specific test, is optimal. We then improve the previous result by removing the independence assumption. This then leads to our desired rigidity result. We make use of it, in a similar fashion to the device-independent case, in order to give a verification protocol that is one-sided device-independent. The importance of this application is to show how different trust assumptions affect the efficiency of the protocol.
Next, we describe a protocol for fault-tolerantly verifying quantum computations, with minimal "quantum requirements" for the verifier. Specifically, the verifier only requires a device for measuring single-qubit states. Both this device and the prover's operations are assumed to be prone to errors. We show that, under standard assumptions about the error model, it is possible to achieve verification of quantum computation using fault-tolerant principles. As a proof of principle, and to better illustrate the inner workings of the protocol, we describe a toy implementation of the protocol in a quantum simulator and present the results we obtained when running it for a small computation. Finally, we explore the possibility of having a verification protocol with a classical verifier and a single prover, such that the prover is blind with respect to the verifier's computation. We give evidence that this is not possible. In fact, our result is only concerned with blind quantum computation with a classical client, and uses complexity-theoretic results to argue why it is improbable for such a protocol to exist. We then use these complexity-theoretic techniques to show that a client with the ability to prepare and send quantum states to a quantum server would not be able to delegate arbitrary NP problems to that server. In other words, even a client with quantum capabilities cannot exploit those capabilities to delegate the computation of NP problems while keeping the input to that computation private. This is again true provided certain complexity-theoretic conjectures hold.
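A small illustration of the kind of non-local correlation test underlying the rigidity results mentioned above: the Python sketch below (not taken from the thesis) estimates a CHSH score from tallies of ±1 outcomes collected for each pair of verifier questions. A score above the classical bound of 2 and close to the Tsirelson bound of 2√2 is the statistical evidence a classical verifier can use to conclude that two untrusted, non-communicating devices share entanglement; the tallies here are hypothetical.

```python
import math

def correlator(counts):
    """E(x, y) from outcome tallies {(a, b): n} with a, b in {+1, -1}."""
    total = sum(counts.values())
    return sum(a * b * n for (a, b), n in counts.items()) / total

def chsh_score(stats):
    """stats maps a verifier question pair (x, y) to outcome tallies.
    S = E(0,0) + E(0,1) + E(1,0) - E(1,1)."""
    return (correlator(stats[(0, 0)]) + correlator(stats[(0, 1)])
            + correlator(stats[(1, 0)]) - correlator(stats[(1, 1)]))

# Hypothetical tallies close to the quantum optimum.
stats = {
    (0, 0): {(+1, +1): 427, (-1, -1): 426, (+1, -1): 74, (-1, +1): 73},
    (0, 1): {(+1, +1): 426, (-1, -1): 427, (+1, -1): 73, (-1, +1): 74},
    (1, 0): {(+1, +1): 427, (-1, -1): 426, (+1, -1): 74, (-1, +1): 73},
    (1, 1): {(+1, +1): 74,  (-1, -1): 73,  (+1, -1): 427, (-1, +1): 426},
}
S = chsh_score(stats)
print(f"S = {S:.3f} (classical bound 2, Tsirelson bound {2 * math.sqrt(2):.3f})")
```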
142

Cost-effective dynamic repair for FPGAs in real-time systems / Reparo dinâmico de baixo custo para FPGAs em sistemas tempo-real

Santos, Leonardo Pereira January 2016 (has links)
Field-Programmable Gate Arrays (FPGAs) are widely used in digital systems due to characteristics such as flexibility, low cost and high density. These characteristics stem from the use of SRAM cells in the configuration memory, which makes these devices susceptible to radiation-induced errors such as SEUs. TMR is the most widely used mitigation technique, but it has an elevated cost both in area and in energy, restricting its use in low-cost and low-power applications. As an alternative to TMR, we propose the use of DMR associated with a repair mechanism for the FPGA configuration memory called scrubbing. The repair of FPGAs in real-time systems presents a specific set of challenges. Besides guaranteeing the correct computation of data, this computation must be carried out entirely within the available time slot, finishing before a time limit (deadline). The difference between the computation time and the deadline is called the slack and is the time available to repair the system. This work uses dynamic shifted scrubbing, which aims to maximize the probability of repairing the FPGA configuration memory within the available slack, based on an error diagnosis.
Shifted scrubbing has already been used with fine-grained diagnostic techniques (NAZAR, 2015). This work proposes the use of coarse-grained diagnostic techniques for shifted scrubbing, avoiding the performance penalties and area costs associated with fine-grained techniques. Circuits from the MCNC suite were protected with the proposed techniques and subjected to error-injection campaigns (NAZAR; CARRO, 2012a). The obtained data were analyzed and the best scrubbing starting positions were calculated for each circuit. Failure-in-Time (FIT) rates were calculated to compare the different proposed diagnostic techniques. The results confirmed the initial hypothesis of this work: reducing the number of sensitive bits while keeping clock-cycle degradation low allows the FIT rate to be reduced when compared with fine-grained techniques. Finally, the three proposed techniques are compared in terms of the performance and area costs associated with each one.
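As a rough illustration of the timing argument (the frame counts, scrub times, and per-frame error likelihoods below are made up, and this is not the dissertation's implementation), the sketch computes how many configuration frames fit into the slack left by a task's worst-case computation time and picks the starting frame of a shifted scrub so that the frames a coarse-grained diagnosis flags as most likely faulty are rewritten first.

```python
def frames_within_slack(deadline_us, wcet_us, frame_scrub_us):
    """Number of configuration frames that can be scrubbed within the slack."""
    slack_us = deadline_us - wcet_us
    return max(0, int(slack_us // frame_scrub_us))

def best_start_frame(error_likelihood, budget):
    """Start position of a circular scrub maximizing the probability that the
    faulty frame is rewritten within 'budget' frames, given a coarse-grained
    diagnosis expressed as per-frame likelihoods."""
    n = len(error_likelihood)
    best, best_mass = 0, -1.0
    for start in range(n):
        mass = sum(error_likelihood[(start + k) % n] for k in range(min(budget, n)))
        if mass > best_mass:
            best, best_mass = start, mass
    return best, best_mass

# Hypothetical per-frame likelihoods produced by a coarse-grained diagnosis.
likelihood = [0.01, 0.02, 0.05, 0.40, 0.30, 0.15, 0.04, 0.03]
budget = frames_within_slack(deadline_us=1000, wcet_us=820, frame_scrub_us=45)
start, covered = best_start_frame(likelihood, budget)
print(f"scrub {budget} frames starting at frame {start}, covering {covered:.0%} of error mass")
```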
143

The coordinated control of autonomous agents

Abel, Ryan Orlin 01 December 2010 (has links)
This thesis considers the coordinated control of autonomous agents. The agents are modeled as double integrators, one for each Cartesian dimension. The goal is to force the agents to converge to a formation specified by their desired relative positions. To this end, a pair of one-step-ahead optimization-based control laws are developed. The control algorithms produce a communication topology that mirrors the geometric formation topology due to the careful choice of the minimized cost functions. Through this equivalence a natural understanding of the relationship between the geometric formation topology and the communication infrastructure is gained. It is shown that the control laws are stable and guarantee convergence for all viable formation topologies. Additionally, velocity constraints can be added to allow the formation to follow fixed or arbitrary time-dependent velocities. Both control algorithms only require local information exchange. As additional agents attach to the formation, only those agents that share position constraints with the joining agents need to adjust their control laws. When redundancy is incorporated into the formation topology, it is possible for the system to survive the loss of agents or communication channels. In the event that an agent drops out of the formation, only the agents with position interdependence on the lost agent need to adjust their control laws. Finally, if a communication channel is lost, only the agents that share that communication channel must adjust their control laws. The first control law falls into the category of distributed control, since it requires either global information exchange to compute the formation size or a priori knowledge of the largest possible formation. The algorithm uses the network size to penalize the control input for each formation. When using a priori knowledge, it is shown that additional redundancy not only adds robustness to the loss of agents or communication channels, but it also decreases the settling times to the desired formation. Conversely, the overall control strategy suffers from a sluggish response when the network is small with respect to the largest possible network. If global information exchange is used, scalability suffers. The second control law was developed to address the negative aspects of the first. It is a fully decentralized controller, as it does not require global information exchange or any a priori knowledge.
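For intuition, the sketch below simulates double-integrator agents converging to desired relative positions using only locally exchanged information. The feedback law used here is a generic relative-position-plus-damping rule, not the one-step-ahead optimization-based law developed in the thesis; the formation offsets, gains, and ring communication topology are illustrative choices.

```python
import numpy as np

# Each agent i is a double integrator per Cartesian dimension (p_i'' = u_i),
# simulated with forward Euler. Generic formation feedback (illustrative only):
#   u_i = -kp * sum_j (p_i - p_j - d_ij) - kv * v_i,
# where d_ij is the desired offset of agent i relative to neighbour j.

def simulate(offsets, neighbours, steps=4000, dt=0.01, kp=1.0, kv=2.0):
    n = len(offsets)
    rng = np.random.default_rng(0)
    p = rng.uniform(-5, 5, size=(n, 2))   # positions
    v = np.zeros((n, 2))                  # velocities
    for _ in range(steps):
        u = np.zeros((n, 2))
        for i in range(n):
            for j in neighbours[i]:       # only local information is exchanged
                u[i] -= kp * (p[i] - p[j] - (offsets[i] - offsets[j]))
            u[i] -= kv * v[i]
        v += dt * u
        p += dt * v
    return p

# Desired formation: a unit square, described by offsets from which the
# relative constraints d_ij = offsets[i] - offsets[j] are derived.
offsets = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
neighbours = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}   # ring topology
final = simulate(offsets, neighbours)
print(np.round(final - final[0], 2))   # relative positions approach offsets - offsets[0]
```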
144

Assessing Apache Spark Streaming with Scientific Data

Dahal, Janak 06 August 2018 (has links)
Processing real-world data requires the ability to analyze it in real time. Data processing engines like Hadoop fall short when results are needed on the fly. Apache Spark's streaming library is an increasingly popular choice, as it can stream and analyze a significant amount of data. To showcase and assess Spark's capabilities, various metrics were designed and evaluated using data collected from the USGODAE data catalog. The latency of streaming in Apache Spark was measured and analyzed for varying numbers of nodes in the cluster. Scalability was monitored by adding and removing nodes in the middle of a streaming job. Fault tolerance was verified by stopping nodes in the middle of a job and making sure that the job was rescheduled and completed on other nodes. A full-stack application was designed to automate data collection, data processing, and visualization of the results. The Google Maps API was used to visualize results by color-coding the world map with values from various analytics.
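A minimal PySpark Streaming job in the spirit of the setup described, with placeholder host, port, and batch interval (the thesis streamed USGODAE catalog data; here lines are read from a socket to keep the sketch self-contained). Comparing the time spent processing each micro-batch against the batch interval is one simple way to reason about streaming latency.

```python
import time
from pyspark import SparkContext
from pyspark.streaming import StreamingContext

# Hypothetical parameters for this sketch only.
BATCH_INTERVAL_S = 5
HOST, PORT = "localhost", 9999

sc = SparkContext(appName="streaming-latency-sketch")
ssc = StreamingContext(sc, BATCH_INTERVAL_S)

lines = ssc.socketTextStream(HOST, PORT)

def log_batch(batch_time, rdd):
    # Rough per-batch latency proxy: wall-clock time spent counting the batch.
    start = time.time()
    n = rdd.count()
    print(f"{batch_time}: {n} records, processed in {time.time() - start:.2f}s")

lines.foreachRDD(log_batch)

ssc.start()
ssc.awaitTermination()
```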
145

Timing Predictability in Future Multi-Core Avionics Systems

Löfwenmark, Andreas January 2017 (has links)
With more functionality added to safety-critical avionics systems, new platforms are required to offer the computational capacity needed. Multi-core platforms offer a potential that is now being explored, but they pose significant challenges with respect to predictability due to shared resources (such as memory) being accessed from several cores in parallel. Multi-core processors also suffer from higher sensitivity to permanent and transient faults due to shrinking transistor sizes. This thesis addresses several of these challenges. First, we review major contributions that assess the impact of fault tolerance on the worst-case execution time of processes running on a multi-core platform, in particular works that evaluate the timing effects using fault injection methods. We conclude that there are few works that address the intricate timing effects that appear when inter-core interference due to simultaneous accesses of shared resources is combined with fault tolerance techniques. We assess the applicability of the methods to COTS multi-core processors used in avionics. We identify dark spots on the research map of the joint problem of hardware reliability and timing predictability for multi-core avionics systems. Next, we argue that the memory requests issued by the real-time operating system (RTOS) must be considered in resource-monitoring systems to ensure proper execution on all cores. We also adapt and extend an existing method for worst-case response time analysis to fulfill the specific requirements of avionics systems. We relax the requirement of private memory banks to also allow cores to share memory banks.
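To make the notion of worst-case response time analysis concrete, the sketch below runs the classic fixed-point iteration for fixed-priority tasks, extended with a deliberately naive term that charges every memory access of the task and of each interfering job a fixed contention delay on a shared memory bank. The task set, the delay value, and the interference model are placeholders and do not reproduce the analysis developed in the thesis.

```python
import math

def response_time(task, higher_priority, mem_delay_ms=0.0002):
    """Classic fixed-point iteration R = C + I(R), with an extra (naive) term
    for contention on a shared memory bank. Illustrative only."""
    C, T, accesses = task
    R = C
    while True:
        interference = 0.0
        mem = accesses * mem_delay_ms
        for Cj, Tj, acc_j in higher_priority:
            jobs = math.ceil(R / Tj)
            interference += jobs * Cj
            mem += jobs * acc_j * mem_delay_ms
        R_next = C + interference + mem
        if R_next > T:
            return None               # response time exceeds the deadline (= period)
        if abs(R_next - R) < 1e-9:
            return R_next
        R = R_next

# Hypothetical task set: (WCET in ms, period in ms, memory accesses per job).
higher = [(1.0, 5.0, 2000), (2.0, 10.0, 4000)]
print(response_time((3.0, 20.0, 6000), higher))   # ~9.8 ms for these numbers
```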
146

Safety and hazard analysis in concurrent systems

Rao, Shrisha 01 January 2005 (has links)
Safety is a well-known and important class of property of software programs, and of systems in general. The basic notion that informs this work is that the time to think about safety is when it still exists but could be lost. The aim is not just to analyse safety as existing or not in a given system state, but also in the sense that a system is presently safe but becoming less so. Safety as considered here is not restricted to one type of property, and indeed for much of the discussion it does not matter what types of measures are used to assess safety. The purpose of this work is to lay a theoretical and mathematical foundation for allowing static analyses of systems to further safety. This is done using tools from lattice theory applied to the poset of system states partially ordered by reachability. Such analyses are common with respect to other kinds of systems (e.g., with abstract interpretations of software), but there does not seem to exist a formalism that permits them specifically for safety. Using the basic analytical tools developed, a study is made of the problem of composing systems from components. Three types of composition (direct sum, direct product, and exponentiation) are noted, and the first two are treated in some depth. It is shown that the set of all systems formed with the direct sum and direct product operators can be specified by a BNF grammar. A three-valued "safety logic" is specified, with which the safety and fault tolerance of composed systems can be computed given the system composition. It is also shown that the set of all systems forms separate monoids (in the sense familiar to mathematicians), and that other monoids can be derived based on equivalence classes of systems. The example of a train approaching a railroad crossing, where a gate must be closed prior to the train's arrival and opened after its exit, is considered and analysed.
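The abstract does not give the concrete grammar, truth tables, or operator semantics, so the sketch below is only a guess at the flavour of such a calculus: system expressions are built from components with the two composition operators, and a three-valued safety value is propagated through the composition. The particular readings of direct sum and direct product used here are assumptions made for illustration.

```python
from enum import Enum

# Expressions follow a BNF of the kind the abstract mentions (hypothetical form):
#   system ::= component | system "+" system | system "*" system

class Safety(Enum):
    UNSAFE = 0
    UNKNOWN = 1
    SAFE = 2

class Component:
    def __init__(self, safety):
        self.safety = safety
    def eval(self):
        return self.safety

class Sum:
    """Direct sum, read here as redundant alternatives: as safe as the safest part."""
    def __init__(self, a, b):
        self.a, self.b = a, b
    def eval(self):
        return Safety(max(self.a.eval().value, self.b.eval().value))

class Product:
    """Direct product, read here as joint operation: as safe as the weakest part."""
    def __init__(self, a, b):
        self.a, self.b = a, b
    def eval(self):
        return Safety(min(self.a.eval().value, self.b.eval().value))

# Railroad-crossing flavoured example: a gate composed with a redundant sensor pair.
gate = Component(Safety.SAFE)
sensor = Component(Safety.UNKNOWN)
backup_sensor = Component(Safety.SAFE)
crossing = Product(gate, Sum(sensor, backup_sensor))
print(crossing.eval())   # Safety.SAFE under these readings
```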
147

A tool for automatic formal analysis of fault tolerance

Nilsson, Markus January 2005 (has links)
The use of computer-based systems is rapidly increasing, and such systems can now be found in a wide range of applications, including safety-critical ones such as cars and aircraft. To make the development of such systems more efficient, there is a need for tools for automatic safety analysis, such as analysis of fault tolerance.
In this thesis, a tool for automatic formal analysis of fault tolerance was developed. The tool is built on top of the existing development environment for the synchronous language Esterel, and provides an output that can be visualised in the Item toolkit for fault tree analysis (FTA). The development of the tool demonstrates how fault tolerance analysis based on formal verification can be automated. The generated output from the fault tolerance analysis can be represented as a fault tree that is familiar to engineers from traditional FTA. The work also demonstrates that interesting attributes of the relationship between a critical fault combination and the input signals can be generated automatically.
Two case studies were used to test and demonstrate the functionality of the developed tool. A fault tolerance analysis was performed on a hydraulic leakage detection system, which is a real industrial system, and also on a synthetic system modeled for this purpose.
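To illustrate what the generated fault-tree output represents (the actual tool emits input for the Item toolkit and works on Esterel models, not Python), the sketch below evaluates a toy fault tree and enumerates its minimal cut sets, i.e. the critical fault combinations; the leakage-detection tree itself is hypothetical.

```python
from itertools import chain, combinations

def OR(*kids):
    return ("or",) + kids

def AND(*kids):
    return ("and",) + kids

def fails(node, faults):
    """True if the (sub)tree's event occurs given the set of active basic faults."""
    if isinstance(node, str):
        return node in faults
    op, *kids = node
    results = [fails(k, faults) for k in kids]
    return any(results) if op == "or" else all(results)

def minimal_cut_sets(tree, basic_events):
    """Brute-force the minimal combinations of basic faults triggering the top event."""
    cuts = []
    subsets = chain.from_iterable(combinations(basic_events, r)
                                  for r in range(1, len(basic_events) + 1))
    for s in subsets:
        s = set(s)
        if fails(tree, s) and not any(c <= s for c in cuts):
            cuts.append(s)
    return cuts

# Hypothetical leakage-detection tree: detection is lost if both sensors fail,
# or if the single controller fails.
tree = OR("controller_fault", AND("sensor_A_fault", "sensor_B_fault"))
print(minimal_cut_sets(tree, ["controller_fault", "sensor_A_fault", "sensor_B_fault"]))
```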
148

Performance and availability trade-offs in fault-tolerant middleware

Szentiványi, Diana January 2002 (has links)
Distributing the functionality of an application is common practice. Systems that are built with this feature in mind also have to provide high levels of dependability. One way of assuring availability of services is to tolerate faults in the system, thereby avoiding failures.
Building distributed applications is not an easy task. To provide fault tolerance is even harder. Using middleware as a mediator between hardware and operating systems on the one hand and high-level applications on the other is a solution to these difficult problems. It can help application writers by providing automatic generation of code supporting, e.g., fault tolerance mechanisms, and by offering interoperability and language independence.
For over twenty years, the research community has been producing results in the area of fault tolerance. However, experimental studies of different platforms are performed mostly using simple made-up applications. Also, especially in the case of CORBA, there is no fault-tolerant middleware that totally conforms to the standard and is well studied in terms of trade-offs.
This thesis presents a fault-tolerant CORBA middleware built and evaluated using a realistic application running on top of it. It also contains results obtained from experiments with an alternative infrastructure implementing a robust fault-tolerant algorithm using basic CORBA. In the first infrastructure, a problem is the existence of single points of failure; on the other hand, overheads and recovery times fall within acceptable ranges. When using the robust algorithm, the problem of single points of failure disappears, but memory usage becomes an issue, and overhead values as well as recovery times can become quite long.
Report code: LiU-TEK-LIC-2002:55.
149

Restoring Consistency after Network Partitions

Asplund, Mikael January 2007 (has links)
The software industry is facing a great challenge. While systems get more complex and distributed across the world, users are becoming more dependent on their availability. As systems increase in size and complexity, so does the risk that some part will fail. Unfortunately, it has proven hard to tackle faults in distributed systems without a rigorous approach. Therefore, it is crucial that the scientific community can provide answers to how distributed computer systems can continue functioning despite faults.
Our contribution in this thesis concerns a special class of faults which occurs when network links fail in such a way that parts of the network become isolated; such faults are termed network partitions. We consider the problem of how systems that have integrity constraints on data can continue operating in the presence of a network partition. Such a system must act optimistically while the network is split and then perform some kind of reconciliation to restore consistency afterwards.
We have formally described four reconciliation algorithms and proven them correct. The novelty of these algorithms lies in the fact that they can restore consistency after network partitions in a system with integrity constraints, and that one of the protocols allows the system to provide service during the reconciliation. We have implemented and evaluated the algorithms using simulation and as part of a partition-tolerant CORBA middleware. The results indicate that it pays off to act optimistically and that it is worthwhile to provide service during reconciliation.
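A toy version of the reconciliation idea, not one of the four formally specified and proven algorithms: operations accepted optimistically on each side of the partition are replayed in timestamp order against the merged state, and any operation that would violate an integrity constraint is rejected. The bank-account example and the constraint are hypothetical.

```python
def reconcile(pre_partition_state, logs, constraints):
    """Replay optimistically executed operations from each partition in
    timestamp order, dropping any operation that breaks an integrity constraint."""
    state = dict(pre_partition_state)
    accepted, rejected = [], []
    merged = sorted((entry for log in logs for entry in log), key=lambda e: e[0])
    for ts, op in merged:
        candidate = op(dict(state))
        if all(check(candidate) for check in constraints):
            state = candidate
            accepted.append((ts, op))
        else:
            rejected.append((ts, op))
    return state, accepted, rejected

# Hypothetical bank-account example with the integrity constraint balance >= 0.
withdraw = lambda amount: (lambda s: {**s, "balance": s["balance"] - amount})
deposit = lambda amount: (lambda s: {**s, "balance": s["balance"] + amount})

log_a = [(1, withdraw(80)), (4, deposit(30))]   # accepted optimistically in partition A
log_b = [(2, withdraw(50))]                     # accepted optimistically in partition B
state, ok, dropped = reconcile({"balance": 100}, [log_a, log_b],
                               [lambda s: s["balance"] >= 0])
print(state, f"({len(ok)} accepted, {len(dropped)} rejected)")
```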
150

CORBA in the aspect of replicated distributed real-time databases

Milton, Robert January 2002 (has links)
A distributed real-time database (DRTDB) is a database distributed over a network on several nodes, where transactions are associated with deadlines. The issues of concern in this kind of database are data consistency and the ability to meet deadlines. In addition, the nodes on which the database is distributed may be heterogeneous, meaning that they may be built on different platforms and written in different languages. This makes the integration of these nodes difficult, since data types may be represented differently on different nodes. The Common Object Request Broker Architecture (CORBA), defined by the Object Management Group (OMG), is a distributed object computing (DOC) middleware created to overcome problems with heterogeneous sites. The project described in this paper aims to investigate the suitability of CORBA as a middleware in a DRTDB. Two extensions to CORBA, Fault-Tolerant CORBA (FT-CORBA) and Real-Time CORBA (RT-CORBA), are of particular interest, since the combination of these extensions provides features for object replication and end-to-end predictability, respectively. The project focuses on the ability of RT-CORBA to meet hard deadlines and of FT-CORBA to maintain replica consistency by using replication with eventual consistency. The investigation of the combination of RT-CORBA and FT-CORBA results in two proposed architectures that meet real-time requirements and provide replica consistency with CORBA as the middleware in a DRTDB.
