261

T-COUNT OPTIMIZATION OF QUANTUM CARRY LOOK-AHEAD ADDER

Khalus, Vladislav Ivanovich, 01 January 2019
With the emergence of quantum physics and computer science in the 20th century, a new era was born in which very difficult problems can be solved at a much faster rate, including problems that classical computing simply cannot solve. In the 21st century, quantum computing must be applied to tough problems in engineering, business, medicine, and other fields that need results not today but yesterday. To make this dream come true, engineers in the semiconductor industry need to make quantum circuits a reality. For quantum circuits to be realized and made scalable, they need to be fault tolerant, so Clifford+T gates need to be implemented in those circuits. The main issue is that in the Clifford+T gate set, T gates are expensive to implement. Carry look-ahead addition circuits have caught the interest of researchers because the number of gate layers encountered by a given qubit in the circuit (the circuit's depth) is logarithmic in the input size n. This thesis therefore focuses on optimizing previous designs of out-of-place and in-place carry look-ahead adders to decrease the T-count, the total number of T and T-dagger (Hermitian transpose) gates in a quantum circuit.
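As a rough illustration of the metric involved, here is a minimal Python sketch that converts Toffoli counts into T-counts, assuming the standard 7-T-gate Clifford+T decomposition of a Toffoli gate; the per-adder Toffoli-count formulas are hypothetical placeholders, not the thesis's optimized designs.

```python
# Back-of-the-envelope T-count comparison. Assumes the standard
# 7-T-gate Clifford+T decomposition of a Toffoli; the Toffoli counts
# per adder below are illustrative placeholders.
import math

T_PER_TOFFOLI = 7  # T/T-dagger gates per Toffoli in the standard decomposition

def t_count(num_toffoli: int) -> int:
    """Total T-count if every Toffoli is expanded into Clifford+T."""
    return num_toffoli * T_PER_TOFFOLI

def ripple_carry_toffolis(n: int) -> int:
    return 2 * n  # hypothetical Toffoli count for an n-bit ripple adder

def look_ahead_toffolis(n: int) -> int:
    return 4 * n  # hypothetical: extra Toffolis buy O(log n) depth

for n in (8, 32, 128):
    print(f"n={n:4d}  ripple T-count={t_count(ripple_carry_toffolis(n)):5d}  "
          f"CLA T-count={t_count(look_ahead_toffolis(n)):5d}  "
          f"CLA depth ~ log2(n)={math.ceil(math.log2(n))}")
```

The point of the comparison is the trade-off the abstract describes: the look-ahead adder spends more Toffolis (hence a higher T-count) to reduce depth from linear to logarithmic, which is exactly why reducing its T-count is worthwhile.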
262

Fault Tolerant Control Applied to Proton Exchange Membrane Fuel Cell Systems (PEMFC)

Dijoux, Étienne, 12 April 2019
Fuel cells appear to be an efficient system for producing "green" electricity from hydrogen, provided the hydrogen is itself produced from renewable energy sources. The advantages and maturity of polymer-membrane technology make PEMFCs promising candidates. However, several scientific and technological barriers still limit their large-scale use, in particular their cost, reliability and lifetime. Improving these characteristics requires supervision, fault-detection and control tools for fuel cell (FC) systems. This research is the fruit of a collaboration between the FC LAB at the Université de Bourgogne Franche-Comté and the LE2P at the Université de La Réunion. The thesis continues work carried out at FC LAB, in particular on the diagnosis and prognosis of FC systems, and at LE2P on the online testing of PEMFC control algorithms. Among the methods developed to bring dependability to a physical system are fault-tolerance techniques, designed to maintain system stability and acceptable performance even in the presence of faults. These techniques generally break down into three phases: detecting errors or failures, identifying the faults at the origin of the problems, and mitigation. The literature reports a large number of diagnostic tools and control algorithms, but the combination of diagnosis and control remains marginal. The objective of this thesis is therefore the online testing of different fault-tolerant control strategies that maintain system stability and acceptable performance even in the presence of faults.

Fuel cells (FC) are powerful systems for electricity production: they have good efficiency and do not generate greenhouse gases. The technology involves many scientific fields, which leads to strongly interdependent parameters. This makes the system particularly hard to control and increases the frequency of faults. These two issues underline the necessity of maintaining the expected system performance even in faulty conditions, so-called fault-tolerant control (FTC). The present work describes the state of the art of FTC applied to the proton exchange membrane fuel cell (PEMFC). The FTC approach is composed of two parts. First, a diagnostic part allows the identification and isolation of a fault; it requires good a priori knowledge of all the possible faults in the system. Then, a control part applies an optimal control strategy to find the best operating point or to recover from the fault.
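The detect/isolate/mitigate decomposition described above can be sketched in a few lines; the fault signatures (flooding raising the cathode pressure drop) and the recovery actions on air stoichiometry are simplified textbook illustrations, not the strategies tested in the thesis.

```python
# Minimal sketch of a three-stage FTC loop (detect, isolate, mitigate).
# Thresholds, fault signatures and set-point changes are hypothetical.
from dataclasses import dataclass

@dataclass
class Measurement:
    stack_voltage: float   # V
    pressure_drop: float   # mbar, across the cathode channel

def detect(m: Measurement, v_nominal: float) -> bool:
    # A sustained voltage drop flags a fault (threshold is illustrative).
    return m.stack_voltage < 0.9 * v_nominal

def isolate(m: Measurement) -> str:
    # Toy signature table: flooding raises the pressure drop,
    # membrane drying lowers it.
    return "flooding" if m.pressure_drop > 30.0 else "drying"

def mitigate(fault: str, air_stoich: float) -> float:
    # Recover by moving the operating point: more air to purge water
    # when flooding, less air (plus humidification) when drying.
    return air_stoich + 0.5 if fault == "flooding" else air_stoich - 0.3

air = 2.0
m = Measurement(stack_voltage=55.0, pressure_drop=42.0)
if detect(m, v_nominal=65.0):
    air = mitigate(isolate(m), air)
print(air)  # -> 2.5: air stoichiometry raised to clear the flooding
```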
263

Analysis and Development of Error-Job Mapping and Scheduling for Network-on-Chips with Homogeneous Processors

Karlsson, Erik, January 2010
Due to the increased complexity of today's computer systems, which are manufactured in recent semiconductor technologies, and the fact that recent semiconductor technologies are more liable to soft errors (non-permanent errors), it is inherently difficult to ensure that these systems are and will remain error-free. Depending on the application, a soft error can have serious consequences for the system. It is therefore important to detect the presence of soft errors as early as possible, recover from the erroneous state, and maintain correct operation. An entire research area, known as fault tolerance, is devoted to proposing, implementing and analyzing techniques that can detect and recover from these errors. The drawback of using fault tolerance is that it usually introduces some overhead. This overhead may be, for instance, redundant hardware, which increases the cost of the system, or a time overhead that negatively impacts system performance. Thus, a main concern when applying fault tolerance is to minimize the imposed overhead while the system is still able to deliver correct, error-free operation. In this thesis we analyze one well-known fault-tolerant technique, Rollback-Recovery with Checkpointing (RRC). This technique detects and recovers from errors by taking and storing checkpoints during the execution of a job. The job can therefore be thought of as divided into a number of execution segments, with a checkpoint taken after each execution segment. The technique requires the job to be executed concurrently on two processors. At each checkpoint, both processors exchange data containing enough information to capture the job's state, and the exchanged data are compared. If the data differ, an error has been detected in the previous execution segment, which is therefore re-executed. If the exchanged data are the same, no errors have been detected and the data are stored as a safe point from which the job can be restarted later. Exchanging data between processors therefore introduces a time overhead, which increases the average execution time of a job, i.e. the average time required for a given job to complete. The overhead depends on the number of links that have to be traversed (due to data exchange) after each execution segment and on the number of execution segments needed for the given job. The number of links traversed after each execution segment is twice the distance between the processors executing the same job concurrently. However, this holds only if all links are fully functional; a link failure can result in a longer route for communication between the processors. Even when all links are fully functional, the number of execution segments still depends on the error-free probabilities of the processors, and these probabilities can vary between processors. This implies that the choice of processors affects the total number of links the communication has to traverse. Choosing two processors with higher error-free probability that are further away from each other increases the distance but decreases the number of execution segments, which can result in a lower overhead. By carefully determining the mapping for a given job, one can decrease the overhead and hence the average execution time.
Since it is very common to have more jobs than available resources, it is important not only to find a good mapping that decreases the average execution time for the whole system, but also a good order of execution for a given set of jobs (scheduling). In this thesis we propose several mapping and scheduling algorithms that aim to reduce the average execution time in a fault-tolerant multiprocessor System-on-Chip, which uses a Network-on-Chip as the underlying interconnect architecture, so that the fault-tolerant technique (RRC) can perform efficiently.
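The mapping trade-off described above (closer processor pairs pay less per checkpoint, more reliable pairs re-execute fewer segments) can be captured in a toy cost model. The sketch below follows the abstract's own reasoning, with illustrative numbers rather than the thesis's exact analysis.

```python
# Toy RRC cost model: farther but more reliable processor pairs pay
# more per checkpoint exchange yet re-execute fewer segments.
# All parameter values are illustrative.

def avg_execution_time(t_seg: float, n_seg: int, p_pair: float,
                       distance: int, t_link: float) -> float:
    """Expected job completion time under RRC.

    t_seg    -- error-free execution time of one segment
    n_seg    -- number of execution segments (checkpoints)
    p_pair   -- probability that BOTH processors finish a segment error-free
    distance -- hops between the two processors on the NoC
    t_link   -- time to traverse one link
    """
    exchange = 2 * distance * t_link   # checkpoint data crosses 2*distance links
    tries = 1.0 / p_pair               # expected executions per segment (geometric)
    return n_seg * (t_seg + exchange) * tries

# Close-but-flaky pair vs. distant-but-reliable pair:
print(avg_execution_time(10.0, 8, 0.90, distance=1, t_link=0.5))  # ~97.8
print(avg_execution_time(10.0, 8, 0.99, distance=4, t_link=0.5))  # ~113.1
```

Depending on the relative magnitudes of segment time, link latency and error-free probability, either pair can win, which is precisely why the mapping decision is non-trivial.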
264

On reliable and scalable management of wireless sensor networks

Bapat, Sandip Shriram, January 2006
Thesis (Ph.D.), Ohio State University, 2006. Includes bibliographical references (p. 164-170).
265

Meeting Data Sharing Needs of Heterogeneous Distributed Users

Zhan, Zhiyuan, 16 January 2007
The fast growth of wireless networking and mobile computing devices has enabled us to access information from anywhere at any time. However, varying user needs and system resource constraints are two major heterogeneity factors that pose a challenge to information-sharing systems. For instance, when a new information item is produced, different users may have different requirements for when the new value should become visible. The resources that each device can contribute to such information-sharing applications also vary. Therefore, how to enable information sharing across computing platforms with varying resources to meet different user demands is an important problem for distributed systems research. In this thesis, we address the heterogeneity challenge faced by such systems. We assume that shared information is encapsulated in distributed objects, and we use object replication to increase system scalability and robustness, which introduces the consistency problem. Many consistency models have been proposed in recent years, but they are either too strong and do not scale well, or too weak to meet many users' requirements. We propose a Mixed Consistency (MC) model as a solution. We introduce an access-constraints-based approach to combine strong and weak consistency models. We also propose an MC protocol that combines existing implementations with minimum modifications. It is designed to tolerate crash failures and slow processes/communication links in the system. We also explore how the heterogeneity challenge can be addressed in the transport layer by developing an agile dissemination protocol. We implement our MC protocol on top of a distributed publish-subscribe middleware, Echo. Finally, we measure the performance of our MC implementation. The results of the experiments are consistent with our expectations. Based on the functionality and performance of mixed consistency protocols, we believe that this model is effective in addressing the heterogeneity of user requirements and available resources in distributed systems.
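A toy sketch of the access-constraints idea: each read declares whether it needs strong or weak consistency and is routed accordingly. The quorum logic and API below are hypothetical simplifications for illustration, not the MC protocol itself.

```python
# Toy replicated object with per-access consistency: strong accesses go
# through a majority quorum, weak accesses hit any single (possibly
# stale) replica. Quorum sizes and the API are illustrative.
import random

class ReplicatedObject:
    def __init__(self, n_replicas: int = 5):
        # Each replica stores (version, value); all start identical.
        self.replicas = [(0, None) for _ in range(n_replicas)]

    def write(self, value) -> None:
        # Install at a majority of replicas with a fresh version number.
        version = max(v for v, _ in self.replicas) + 1
        majority = len(self.replicas) // 2 + 1
        for i in random.sample(range(len(self.replicas)), majority):
            self.replicas[i] = (version, value)

    def read(self, strong: bool):
        if strong:
            # Strong read: consult a majority; overlapping quorums
            # guarantee the newest write is seen.
            majority = len(self.replicas) // 2 + 1
            quorum = random.sample(self.replicas, majority)
            return max(quorum, key=lambda vv: vv[0])[1]
        # Weak read: any single replica, possibly stale.
        return random.choice(self.replicas)[1]

obj = ReplicatedObject()
obj.write("v1")
print(obj.read(strong=True))   # always "v1"
print(obj.read(strong=False))  # "v1", or a stale None
```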
266

A prognostic health management based framework for fault-tolerant control

Brown, Douglas W., 15 June 2011
The emergence of complex and autonomous systems, such as modern aircraft, unmanned aerial vehicles (UAVs) and automated industrial processes, is driving the development of new control technologies that accommodate incipient failures to maintain system operation during an emergency. The motivation for this research began in the area of avionics and flight control systems, with the purpose of improving aircraft safety. A prognostics and health management (PHM) based fault-tolerant control architecture can increase safety and reliability by detecting and accommodating impending failures, thereby minimizing the occurrence of unexpected, costly and possibly life-threatening mission failures; it can also reduce unnecessary maintenance actions and extend system availability and reliability. Recent developments in failure prognosis and fault-tolerant control (FTC) provide a basis for a prognosis-based reconfigurable control framework. Key work in this area considers: (1) long-term lifetime predictions as a design constraint using optimal control; (2) the use of model predictive control to retrofit existing controllers with real-time fault detection and diagnosis routines; (3) hybrid hierarchical approaches to FTC that take advantage of control reconfiguration at multiple levels, or layers, enabling set-point reconfiguration, system restructuring and path/mission re-planning. Combining these control elements in a hierarchical structure allows the development of a comprehensive framework for prognosis-based FTC. First, the PHM-based reconfigurable controls framework presented in this thesis is given as one approach to a much larger hierarchical control scheme. This begins with a brief overview of a broader three-tier hierarchical control architecture with three layers: supervisory, intermediate, and low-level. The supervisory layer manages high-level objectives. The intermediate layer redistributes component loads among multiple sub-systems. The low-level layer reconfigures the set-points used by the local production controller, thereby trading off system performance for an increase in remaining useful life (RUL). Next, a low-level reconfigurable controller is defined using a time-varying multi-objective criterion function and appropriate constraints to determine optimal set-point reconfiguration. A set of necessary conditions is established to ensure the stability and boundedness of the composite system. In addition, the error bounds corresponding to long-term state-space prediction are examined. From these error bounds, the point estimate and corresponding uncertainty boundaries for the RUL estimate can be obtained. The computational efficiency of the controller is also examined, using the average number of floating-point operations per iteration as a standard metric of comparison. Finally, results are obtained for an avionics-grade triplex-redundant electro-mechanical actuator with a specific fault mode: insulation breakdown between winding turns in a brushless DC motor is used as the test case. A prognostic model is developed relating motor operating conditions to RUL. Standard metrics for determining the feasibility of RUL reconfiguration are defined and used to study the performance of the reconfigured system; more specifically, the effects of the prediction horizon, model uncertainty, operating conditions and load disturbance on the RUL during reconfiguration are simulated using MATLAB and Simulink.
Contributions of this work include defining a control architecture, proving stability and boundedness, deriving the control algorithm, and demonstrating feasibility with an example.
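The low-level performance-versus-RUL trade-off can be illustrated with a scalar toy problem: choose the set-point that minimizes a weighted sum of tracking error and an RUL-shortfall penalty. The degradation model, weights and numbers below are hypothetical placeholders, a sketch of the idea rather than the thesis's criterion function.

```python
# Scalar sketch of set-point reconfiguration: de-rate the demand so
# that predicted remaining useful life (RUL) stays above a target.
# Degradation model and weights are hypothetical.
import numpy as np

def rul(load: float) -> float:
    # Toy prognostic model: higher load -> faster insulation wear.
    return 1000.0 / (1.0 + 5.0 * load**2)

def cost(setpoint: float, demand: float, w_perf: float, w_rul: float,
         rul_min: float) -> float:
    perf = (setpoint - demand) ** 2            # tracking-error term
    life = max(rul_min - rul(setpoint), 0.0)   # penalty if RUL < target
    return w_perf * perf + w_rul * life

demand = 1.0
candidates = np.linspace(0.0, demand, 101)     # candidate de-rated set-points
best = min(candidates, key=lambda s: cost(s, demand, 1.0, 0.05, 400.0))
print(best)  # lands below full demand: performance traded for RUL
```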
267

Controlling over-actuated road vehicles during failure conditions

Wanner, Daniel, January 2015
The aim of electrification of chassis and driveline systems in road vehicles is to reduce global emissions and their impact on the environment. The electrification of such systems enables a whole new set of functionalities that improve safety, handling and comfort for the user. This trend is leading to an increased number of elements in road vehicles, such as additional sensors, actuators and software. As a result, the complexity of vehicle components and subsystems is rising and has to be handled during operation. Hence, the probability of faults that can lead to component or subsystem failures, deteriorating the dynamic behaviour of the vehicle, is becoming higher. Mechanical, electric, electronic or software faults can cause these failures independently or by mutually influencing each other, leading to potentially critical traffic situations or even accidents. There is a need to analyse faults regarding their influence on the dynamic behaviour of road vehicles, to investigate their effect on driver-vehicle interaction, and to find new control strategies for fault handling. A structured method for classifying faults according to their influence on the longitudinal, lateral and yaw motion of a road vehicle is proposed. To evaluate this method, a broad failure mode and effect analysis was performed to identify and model relevant faults that affect the vehicle's dynamic behaviour. The fault classification method identifies the level of controllability, i.e. how easy or difficult it is for the driver and the vehicle control system to correct the disturbance caused by the fault. Fault-tolerant control strategies are suggested that can handle faults with a critical controllability level in order to maintain the directional stability of the vehicle. Based on the principle of control allocation, three fault-tolerant control strategies are proposed and evaluated in an electric vehicle with typical faults. It is shown that the control allocation strategies give a less critical trajectory deviation than an uncontrolled vehicle or a regular electronic stability control algorithm. An experimental validation confirmed the potential of this type of fault handling using one of the proposed control allocation strategies. Driver-vehicle interaction was analysed experimentally under various failure conditions with typical faults of an electric driveline, at both urban and motorway speeds. Driver reactions to the failure conditions were analysed, and the extent to which drivers could handle a fault was investigated. The drivers proved to be capable controllers, compensating in time for the occurring failures when they were prepared for the eventuality of a failure. Based on the experimental data, a failure-sensitive driver model was developed and evaluated for different failure conditions. The suggested fault classification method was further verified by the experimental studies. The interaction between drivers and a fault-tolerant control system in the presence of a fault affecting the vehicle's dynamic stability was investigated further. The control allocation strategy has a positive influence on maintaining the intended path and vehicle stability, and supports the driver by reducing the necessary corrective steering effort. This fault-tolerant control strategy has shown promising results and potential for improving traffic safety.
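A minimal numpy sketch of the control-allocation principle used above: a virtual demand (total drive force and yaw moment) is redistributed over the healthy wheel actuators by least squares, so a failed motor is simply dropped from the allocation. The effectiveness matrix and geometry are illustrative assumptions, not the thesis's vehicle model.

```python
# Least-squares control allocation over four wheel motors, with
# fault handling by zeroing the failed actuator's column.
# Effectiveness matrix and lever arm are illustrative.
import numpy as np

half_track = 0.8  # m, lever arm of each wheel about the yaw axis (assumed)
# Rows: total drive force Fx, yaw moment Mz; columns: FL, FR, RL, RR.
B = np.array([[1.0,         1.0,        1.0,         1.0],
              [-half_track, half_track, -half_track, half_track]])

def allocate(v: np.ndarray, healthy: np.ndarray) -> np.ndarray:
    """Minimum-norm solution of B u = v using only healthy actuators."""
    B_eff = B * healthy                       # zero out failed columns
    u, *_ = np.linalg.lstsq(B_eff, v, rcond=None)
    return u * healthy

v = np.array([2000.0, 0.0])  # demand: 2 kN drive force, zero yaw moment
print(allocate(v, np.ones(4)))                      # nominal: 500 N per wheel
print(allocate(v, np.array([1.0, 1.0, 1.0, 0.0])))  # RR failed: [500, 1000, 500, 0]
```

With the rear-right motor out, the allocator automatically shifts force to the front-right wheel so the yaw moment stays zero, which is the "less critical trajectory deviation" behaviour described above in miniature.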
268

A HIGHLY RELIABLE NON-VOLATILE FILE SYSTEM FOR SMALL SATELLITES

Nimmagadda, Rama Krishna, 01 January 2008
Recent advancements in solid-state memories have resulted in packing several gigabytes (GB) of memory into tiny, postage-stamp-sized memory cards. Of late, Secure Digital (SD) cards have become a de facto standard for portable handheld devices. They have a growing presence in almost all embedded applications where huge volumes of data need to be handled and stored, and for the same reason SD cards are widely used in space applications as well. Using SD cards in space applications requires robust, radiation-hardened SD cards and a highly reliable, fault-tolerant file system to handle them. The present work focuses on developing a highly reliable, fault-tolerant, SD-card-based FAT16 file system for space applications.
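One common tactic behind such designs, shown here as a hedged sketch rather than the thesis's actual layout, is to store several CRC-protected copies of each critical sector (for example, the FAT) and fall back to the first valid copy on read:

```python
# Sketch of N-copy redundancy with CRC validation for critical sectors.
# The in-memory "store" stands in for raw SD-card sectors; the layout
# is hypothetical.
import zlib

N_COPIES = 3
store: dict[tuple[int, int], bytes] = {}  # (sector, copy) -> payload + CRC

def write_sector(sector: int, data: bytes) -> None:
    record = data + zlib.crc32(data).to_bytes(4, "big")
    for copy in range(N_COPIES):          # replicate every critical sector
        store[(sector, copy)] = record

def read_sector(sector: int) -> bytes:
    for copy in range(N_COPIES):          # first copy with a valid CRC wins
        record = store.get((sector, copy), b"")
        data, crc = record[:-4], record[-4:]
        if len(record) > 4 and zlib.crc32(data).to_bytes(4, "big") == crc:
            return data
    raise IOError(f"sector {sector}: all {N_COPIES} copies corrupt")

write_sector(0, b"FAT entry block")
store[(0, 0)] = b"\xff" * 19              # simulate radiation-induced bit rot
print(read_sector(0))                     # CRC rejects copy 0, copy 1 serves
```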
269

Toward Cyber-Secure and Resilient Networked Control Systems

Teixeira, André, January 2014
Resilience is the ability to maintain acceptable levels of operation in the presence of abnormal conditions. It is an essential property in industrial control systems, which are the backbone of several critical infrastructures. The trend towards using pervasive information technology systems, such as the Internet, results in control systems becoming increasingly vulnerable to cyber threats. Traditional cyber security does not consider the interdependencies between the physical components and the cyber systems. On the other hand, control-theoretic approaches typically deal with independent disturbances and faults, so they are not tailored to handle cyber threats. Theory and tools to analyze and build control system resilience are therefore lacking and need to be developed. This thesis contributes towards a framework for analyzing and building resilient control systems. First, a conceptual model for networked control systems with malicious adversaries is introduced. In this model, the adversary aims at disrupting the system behavior while remaining undetected by an anomaly detector. The adversary is constrained in terms of the available model knowledge, disclosure resources, and disruption capabilities; these resources may correspond to the anomaly detector's algorithm, sniffers of private data, and spoofers of control commands, respectively. Second, we address security and resilience from the perspective of risk management, where the notion of risk is defined in terms of a threat's scenario, impact, and likelihood. Quantitative tools to analyze risk are proposed, taking into account both the likelihood and the impact of threats. Attack scenarios with high impact are identified using the proposed tools; for example, zero-dynamics attacks are analyzed in detail. The problem of revealing attacks is also addressed: their stealthiness is characterized, and it is described how they can be detected by modifying the system's structure. As our third contribution, we propose distributed fault detection and isolation schemes to detect physical and cyber threats on interconnected second-order linear systems. A distributed scheme based on unknown input observers is designed to jointly detect and isolate threats that may occur on the network edges or nodes. Additionally, we propose a distributed scheme based on local models and measurements that is resilient to changes outside the local subsystem. The complexity of the proposed methods is decreased by reducing the number of monitoring nodes and by characterizing the minimum amount of model information and measurements needed to achieve fault detection and isolation. Finally, we tackle the problem of distributed reconfiguration under sensor and actuator faults. In particular, we consider a control system with redundant sensors and actuators cooperating to recover from the removal of individual nodes. The proposed scheme minimizes a quadratic cost while satisfying a model-matching condition, which maintains the nominal closed-loop behavior after faults. Stability of the closed-loop system under the proposed scheme is analyzed.

A resilient system has the ability to recover from severe and unexpected disturbances. Resilience is an important property of industrial control systems, which are a key component of many critical infrastructures such as the process industry and power grids. The trend towards using large-scale IT systems, such as the Internet, within control systems results in increased vulnerability to cyber threats. Traditional IT security does not take into account the special coupling between physical components and IT systems that exists within control systems. On the other hand, traditional control engineering typically focuses on handling natural faults rather than cyber vulnerabilities. Theory and tools for resilient and cyber-secure control systems are therefore lacking and need to be developed. This thesis contributes to a framework for analyzing and designing such control systems. First, a representative abstract model for networked control systems is developed, consisting of four components: the physical process with sensors and actuators, the communication network, the digital control system, and an anomaly detector. A conceptual model of attacks against the networked control system is then introduced, describing attacks that try to avoid raising alarms in the anomaly detector while still disturbing the physical process, and assuming that the attacker has limited resources in terms of model knowledge and communication channels. This framework is then used to study resilience against the attacks through risk analysis, where risk is defined in terms of a threat's scenario, consequences, and likelihood. Quantitative methods for estimating the consequences and likelihood of attacks are developed, showing in particular how high-risk threats can be identified and mitigated. The results of the thesis are illustrated with numerous numerical and practical examples.
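The anomaly detector assumed by this attack model is typically residual-based. The sketch below uses a scalar plant, a Luenberger-style observer, and a hypothetical sensor-bias ramp to show the stealthiness trade-off discussed above: a slowly growing attack stays under the alarm threshold for several steps before being caught. All gains and thresholds are illustrative.

```python
# Residual-based anomaly detection: an observer predicts the output
# and an alarm fires when |residual| exceeds a threshold.
# Plant, gains, noise level and the ramp attack are illustrative.
import numpy as np

A, B, C = np.array([[0.9]]), np.array([[1.0]]), np.array([[1.0]])
L = np.array([[0.5]])     # observer gain (assumed stabilizing)
THRESHOLD = 0.2           # alarm threshold on the residual

rng = np.random.default_rng(0)
x, x_hat = np.array([0.0]), np.array([0.0])
for k in range(50):
    u = np.array([0.1])                           # nominal control input
    x = A @ x + B @ u + rng.normal(0.0, 0.01, 1)  # true plant with noise
    y = C @ x
    if k >= 30:
        y = y + 0.1 * (k - 30)                    # sensor bias ramp attack
    r = y - C @ x_hat                             # residual
    x_hat = A @ x_hat + B @ u + L @ r             # observer update
    if abs(r[0]) > THRESHOLD:
        print(f"alarm at step {k}, residual {r[0]:.3f}")
        break
```

Note that the observer partially absorbs the slowly growing bias, so the alarm fires only a few steps into the ramp; a gentler slope would remain undetected even longer, which is the essence of a stealthy attack.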
270

Lower bound for scalable Byzantine agreement

Holtby, Dan, 12 January 2010
We consider the problem of computing Byzantine Agreement in a synchronous network with n processors, each with a private random string, where each pair of processors is connected by a private communication line. The adversary is malicious and non-adaptive, i.e., it must choose the processors to corrupt at the start of the algorithm. Byzantine Agreement is known to be computable in this model in an expected constant number of rounds. We consider a scalable model where in each round each uncorrupt processor can send to any set of log n other processors and listen to any set of log n processors. We define the loss of a computation to be the number of uncorrupt processors whose output does not agree with the output of the majority of uncorrupt processors. We show that if there are t corrupt processors, then any randomised protocol which has probability at least 1/2 + 1/log n of loss less than t/3 requires at least t^(2/3)/(16 n^(1/3) log^(5/3) n) rounds.
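The loss measure defined above transcribes directly into code; the processor outputs and corrupt set below are illustrative inputs, not data from the thesis.

```python
# Loss of a computation: the number of uncorrupt processors whose
# output disagrees with the majority output of the uncorrupt processors.
from collections import Counter

def loss(outputs: list[int], corrupt: set[int]) -> int:
    good = [v for i, v in enumerate(outputs) if i not in corrupt]
    majority, _ = Counter(good).most_common(1)[0]
    return sum(1 for v in good if v != majority)

# 8 processors; processors 1 and 5 are corrupt; two honest dissenters.
print(loss([1, 0, 1, 1, 0, 0, 1, 0], corrupt={1, 5}))  # -> 2
```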
