111

MULTI-LEVEL CELL FLASH MEMORY FAULT TESTING AND DIAGNOSIS

MARTIN, ROBERT ROHAN 27 September 2005 (has links)
No description available.
112

FAULT DIAGNOSIS OF VEHICULAR ELECTRIC POWER GENERATION AND STORAGE

Uliyar, Hithesh Sanjiva 28 October 2010 (has links)
No description available.
113

Fault Diagnosis and Fault-Tolerant Control of Quadrotor UAVs

Avram, Remus C. 31 May 2016 (has links)
No description available.
114

A Bayesian approach to fault isolation with application to diesel engine diagnosis

Pernestål, Anna January 2007 (has links)
Users of heavy trucks, as well as legislators, put increasing demands on these vehicles: they should be more comfortable, reliable and safe, and at the same time consume less fuel and be more environmentally friendly. This means, for example, that faults that cause emissions to increase must be detected early. To meet these requirements on comfort and performance, advanced sensor-based computer control systems are used. However, the increased complexity makes the vehicles more difficult for the workshop mechanic to maintain and repair. A diagnosis system that detects and localizes faults is thus needed, both as an aid in the repair process and for detecting and isolating (localizing) faults on-board, to guarantee that safety and environmental goals are satisfied.

Reliable fault isolation is often a challenging task. Noise, disturbances and model errors can cause problems, and two different faults may lead to the same observed behavior of the system under diagnosis, meaning that several faults could possibly explain the observed behavior of the vehicle.

In this thesis, a Bayesian approach to fault isolation is proposed. The idea is to compute the probabilities, given "all information at hand", that certain faults are present in the system under diagnosis. By "all information at hand" we mean qualitative and quantitative information about how probable different faults are, and possibly also data collected during test drives with the vehicle when faults are present. The information may also include knowledge about which observed behavior is to be expected when certain faults are present. The advantage of the Bayesian approach is the possibility to combine information of different characteristics, and also to facilitate isolation of previously unknown faults as well as faults for which only vague information is available. Furthermore, Bayesian probability theory combined with decision theory provides methods for determining the best action to perform to reduce the effects of faults.

Using the Bayesian approach to diagnose large and complex systems may lead to computational and complexity problems. In this thesis, these problems are solved in three different ways. First, equivalence classes are introduced for different faults with equal probability distributions. Second, by using the structure of the computations, efficient storage methods can be used. Finally, if the previous two simplifications are not sufficient, it is shown how the problem can be approximated by partitioning it into a set of subproblems, each of which can be solved efficiently using the presented methods.

The Bayesian approach is applied to the diagnosis of the gas flow of an automotive diesel engine. Data collected from real driving situations with implemented faults are used in the evaluation of the methods, and the influence of important design parameters is investigated. The experiments show that the proposed Bayesian approach has promising potential for vehicle diagnosis and performs well on this real problem. Compared with more classical methods, e.g. structured residuals, the Bayesian approach gives a higher probability of detection and isolation of the true underlying fault.
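As a rough illustration of the core idea, and not the thesis's actual method, the sketch below applies Bayes' rule to a handful of fault hypotheses; the fault names, priors, and likelihoods are all hypothetical.

```python
import numpy as np

# Hypothetical fault hypotheses for a diesel-engine gas-flow system,
# including the no-fault case; the priors encode how probable each fault is.
faults = ["no_fault", "intake_leak", "maf_sensor_bias", "clogged_egr"]
prior = np.array([0.90, 0.04, 0.04, 0.02])

# p(observation | fault): how probable the observed residual pattern is under
# each hypothesis. In practice these would come from models or from data
# collected during test drives with faults present; here they are made up.
likelihood = np.array([0.01, 0.60, 0.30, 0.05])

# Bayes' rule: the posterior is proportional to likelihood times prior.
posterior = likelihood * prior
posterior /= posterior.sum()

for f, p in zip(faults, posterior):
    print(f"P({f} | observation) = {p:.3f}")
```

Combined with a loss function over repair actions, such posteriors also support the decision-theoretic step the abstract mentions: choosing the action that minimizes expected cost.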
115

MULTIRESOLUTION-MULTIVARIATE ANALYSIS OF VIBRATION SIGNALS; APPLICATION IN FAULT DIAGNOSIS OF INTERNAL COMBUSTION ENGINES

Haqshenas, Seyyed Reza 04 1900 (has links)
Condition monitoring and fault diagnosis of mechanical systems are two important issues that have received considerable attention from both academia and industry, and several techniques have been developed to address them. One category of techniques that has been successfully applied in many industrial plants is based on multiresolution multivariate analysis algorithms, and more specifically on multi-scale principal component analysis (MSPCA). The present research aims to develop a multiresolution multivariate analysis technique that can be effectively used for fault diagnosis of an internal combustion engine. Because an engine is a cyclic system in which the events of each cycle are correlated with a particular position of the crankshaft, crank angle domain (CAD) analysis is the most intuitive strategy for monitoring internal combustion engines. Therefore, MSPCA and CAD analysis were combined into a new technique, named CAD-MSPCA. In addition to this contribution, two indices were defined based on estimation of the covariance matrices of score and fault matrices; these indices were then employed for both fault localization and isolation. A further discovery made through this research was the use of the statistical indices calculated by MSPCA for fault identification: it is shown mathematically that when these indices detect a fault in the system, the spectral characteristics of the fault can be determined by performing spectrum analysis of the indices. This analysis demonstrated MSPCA to be an attractive and reliable alternative for bearing fault diagnosis. These contributions were validated through simulation examples as well as real measurement data. / Master of Applied Science (MASc)
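As a minimal sketch of the MSPCA idea, assuming PyWavelets and scikit-learn and not reproducing the thesis's CAD-MSPCA method, the code below wavelet-decomposes each sensor channel, runs PCA at each scale, and computes a Hotelling T² monitoring statistic per scale.

```python
import numpy as np
import pywt                      # PyWavelets
from sklearn.decomposition import PCA

def mspca_t2(X, wavelet="db4", level=2, n_components=2):
    """Toy multiscale PCA: decompose each sensor signal with a wavelet,
    run PCA on the coefficients at each scale, and return Hotelling T^2
    statistics per scale. X has shape (n_samples, n_sensors)."""
    # Wavelet-decompose each sensor column; entry k of each list holds
    # that sensor's coefficients at scale k.
    per_sensor = [pywt.wavedec(X[:, j], wavelet, level=level)
                  for j in range(X.shape[1])]
    t2_per_scale = []
    for k in range(level + 1):
        scale_matrix = np.column_stack([c[k] for c in per_sensor])
        pca = PCA(n_components=n_components).fit(scale_matrix)
        scores = pca.transform(scale_matrix)
        # Hotelling T^2: squared scores weighted by inverse component variance.
        t2 = np.sum(scores**2 / pca.explained_variance_, axis=1)
        t2_per_scale.append(t2)
    return t2_per_scale

# Hypothetical vibration data: 512 samples from 4 accelerometer channels.
rng = np.random.default_rng(0)
X = rng.normal(size=(512, 4))
for k, t2 in enumerate(mspca_t2(X)):
    print(f"scale {k}: max T^2 = {t2.max():.2f}")
```

In a monitoring setting, the T² values would be compared against control limits fitted on fault-free data, with an exceedance at a given scale flagging a fault concentrated in that frequency band.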
116

Design of an Adaptive Cruise Control Model for Hybrid Systems Fault Diagnosis

Breimer, Benjamin 04 1900 (has links)
Driver assistance systems like Adaptive Cruise Control (ACC) can help prevent accidents by reducing the workload on the driver. However, this can only be accomplished if the driver can rely on the system to perform safely even in the presence of faults.

In this thesis we develop an Adaptive Cruise Control model that is used to investigate hybrid systems fault diagnosis techniques. System identification is performed on an electric motor to obtain its transfer function. This electric motor belongs to a 1/10th-scale RC car that is used as part of a test bench for the Adaptive Cruise Control system. The identified model is then used to design a hybrid controller that switches between a set of LQR controllers to create an example adaptive cruise controller. The controller model is then used to generate fixed-point code for implementation on the testbed and validation against the model controller. Finally, a detailed hazard analysis of the resulting system is performed using Leveson's STPA. / Master of Applied Science (MASc)
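The switching LQR design can be illustrated with a small sketch: under an invented linearized car-following model (two states, gap error and relative speed), SciPy's Riccati solver yields one of the gains such a hybrid controller would switch between. All numbers here are hypothetical, not the thesis's identified model.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Toy longitudinal model for one ACC operating mode.
# State x = [gap error (m), relative speed (m/s)]; input u = ego acceleration.
# gap_error' = relative_speed; relative_speed' = -u (lead car at constant speed).
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [-1.0]])

Q = np.diag([1.0, 0.5])   # penalize gap error and relative speed
R = np.array([[0.1]])     # penalize control effort

# Solve the continuous-time algebraic Riccati equation and form the LQR gain.
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)

x = np.array([2.0, -1.0])      # 2 m too far back, closing at 1 m/s
u = -(K @ x).item()            # state-feedback law u = -Kx
print("LQR gain:", K, " control:", u)
```

A hybrid controller would hold one such gain per operating region (e.g., speed range or following mode) and switch among them as the guard conditions fire.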
117

Data Mining Algorithms for Decentralized Fault Detection and Diagnostic in Industrial Systems

Grbovic, Mihajlo January 2012 (has links)
Timely fault detection and diagnosis in complex manufacturing systems is critical to ensure safe and effective operation of plant equipment. A process fault is defined as a deviation from normal process behavior, within the limits of safe production. The quantifiable objectives of fault detection include achieving low detection delay, a low false positive rate, and a high detection rate. Once a fault has been detected, pinpointing the type of fault is needed for fault mitigation and for returning to normal process operation; this is known as fault diagnosis.

Data-driven fault detection and diagnosis methods have emerged as an attractive alternative to traditional mathematical model-based methods, especially for complex systems, due to the difficulty of describing the underlying process. A distinct feature of data-driven methods is that no a priori information about the process is necessary. Instead, it is assumed that historical data, containing process features measured at regular time intervals (e.g., power plant sensor measurements), are available for developing the fault detection/diagnosis model through generalization of data. The goal of my research was to address the shortcomings of existing data-driven methods and contribute to solving open problems, such as: 1) decentralized fault detection and diagnosis; 2) fault detection in the cold-start setting; 3) optimizing the detection delay and dealing with noisy data annotations; and 4) developing models that can adapt to concept changes in power plant dynamics.

For small-scale sensor networks, it is reasonable to assume that all measurements are available at a central location (sink) where fault predictions are made; this is known as the centralized fault detection approach. For large-scale networks, a decentralized approach is often used, where the network is decomposed into potentially overlapping blocks and each block provides local decisions that are fused at the sink. The appealing properties of the decentralized approach include fault tolerance, scalability, and reusability. When one or more blocks go offline due to maintenance of their sensors, predictions can still be made using the remaining blocks. In addition, when the physical facility is reconfigured, either by changing its components or its sensors, it can be easier to modify the part of the decentralized system impacted by the changes than to overhaul the whole centralized system. The scalability comes from reduced costs of system setup, update, communication, and decision making.

The main challenges in decentralized monitoring are process decomposition and decision fusion. We proposed a decentralized model in which the sensors are partitioned into small, potentially overlapping blocks based on the Sparse Principal Component Analysis (PCA) algorithm, which preserves strong correlations among sensors, followed by training local models at each block and fusing decisions with the proposed Maximum Entropy algorithm. Moreover, we introduced a novel framework for adding constraints to the Sparse PCA problem; the constraints limit the set of possible solutions by imposing additional goals to be reached through optimization along with the existing Sparse PCA goals. Experimental results on benchmark fault detection data show that Sparse PCA can utilize prior knowledge, which is not directly available in the data, to produce desirable network partitions with a pre-defined limit on communication cost and/or robustness. / Computer and Information Science
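As a hedged sketch of only the block-partition step, using scikit-learn's plain SparsePCA rather than the dissertation's constrained variant or its Maximum Entropy fusion, the code below recovers sensor blocks from the sparse component supports on synthetic correlated data.

```python
import numpy as np
from sklearn.decomposition import SparsePCA

# Hypothetical plant data: 1000 time steps from 12 sensors, where groups of
# sensors share latent factors, mimicking physically correlated sensors.
rng = np.random.default_rng(1)
latent = rng.normal(size=(1000, 3))
mixing = np.zeros((3, 12))
mixing[0, 0:4] = 1.0
mixing[1, 4:8] = 1.0
mixing[2, 8:12] = 1.0
X = latent @ mixing + 0.1 * rng.normal(size=(1000, 12))

# Sparse PCA yields components with few nonzero loadings; each component's
# support defines one (potentially overlapping) sensor block. How cleanly the
# blocks separate depends on the sparsity penalty alpha.
spca = SparsePCA(n_components=3, alpha=1.0, random_state=0).fit(X)
blocks = [np.flatnonzero(np.abs(comp) > 1e-6) for comp in spca.components_]
for i, b in enumerate(blocks):
    print(f"block {i}: sensors {b.tolist()}")
```

Each block would then get its own local detector, with block decisions fused at the sink; a block going offline simply drops out of the fusion.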
118

SANDRA fault analysis and simulation

Ali, Muhammad, Cheng, Yongqiang, Li, Jian-Ping, Hu, Yim Fun, Pillai, Prashant, Pillai, Anju, Xu, Kai J. January 2013 (has links)
Fault management is one of the important management functions of a telecommunication network and deals mainly with fault monitoring and diagnosis. This paper applies reliability theories and methodologies to the fault management of an aeronautical communication system developed within the EU FP7 SANDRA project. The failure of the SANDRA terminal demonstrator is an undesirable event; the corresponding fault tree was built upon a reliability function analysis and was used to quickly monitor failures in the system. Using Monte Carlo simulations, the SANDRA demonstrator's reliability can be predicted, and important components that contribute most to system failures can be identified. The results can be used to improve system reliability by adding parallel components at weak and critical points.
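A minimal Monte Carlo fault-tree sketch, with an invented component structure and made-up failure probabilities rather than the SANDRA terminal's actual tree, illustrates the kind of simulation described:

```python
import random

# Hypothetical annual failure probabilities for terminal components
# (made-up values; the SANDRA analysis derives these from reliability data).
p_fail = {"antenna": 0.02, "modem_a": 0.05, "modem_b": 0.05, "router": 0.01}

def top_event(state):
    """Toy fault tree: the terminal fails if the antenna or the router fails
    (OR gates), or if both redundant modems fail (an AND gate modeling
    parallel components)."""
    return state["antenna"] or state["router"] or (state["modem_a"] and state["modem_b"])

def simulate(n_trials=100_000, seed=42):
    rng = random.Random(seed)
    failures = 0
    for _ in range(n_trials):
        # Sample each component's failure state independently, then evaluate
        # the tree's top event.
        state = {c: rng.random() < p for c, p in p_fail.items()}
        failures += top_event(state)
    return failures / n_trials

print(f"Estimated system failure probability: {simulate():.4f}")
```

Re-running the simulation with one component's probability perturbed gives a crude importance measure, pointing to where adding a parallel component pays off most.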
119

Acceleration of Hardware Testing and Validation Algorithms using Graphics Processing Units

Li, Min 16 November 2012 (has links)
With the advances of very large scale integration (VLSI) technology, the feature size has been shrinking steadily together with the increase in the design complexity of logic circuits. As a result, the effort required for designing, testing, and debugging digital systems has increased tremendously. Although electronic design automation (EDA) algorithms have been studied extensively to accelerate such processes, some computationally intensive applications still take long execution times, especially in testing and validation. In order to meet time-to-market constraints and to produce a bug-free design or product, the work presented in this dissertation studies the acceleration of EDA algorithms on Graphics Processing Units (GPUs). The dissertation concentrates on a subset of EDA algorithms related to testing and validation: within testing, fault simulation, diagnostic simulation, and reliability analysis are explored; we also investigate approaches to parallelize state justification on GPUs, one of the most difficult problems in validation.

First, we present an efficient parallel fault simulator, FSimGP2, which exploits the high degree of parallelism supported by a state-of-the-art GPU with the NVIDIA Compute Unified Device Architecture (CUDA). A novel three-dimensional parallel fault simulation technique is proposed to achieve extremely high computational efficiency on the GPU. The experimental results demonstrate a speedup of up to 4× compared to another GPU-based fault simulator.

Then, another GPU-based simulator is used to tackle an even more computation-intensive task, diagnostic fault simulation. The simulator is based on a two-stage framework which exploits high computational efficiency on the GPU. We introduce a fault-pair based approach to alleviate the limited memory capacity on GPUs, together with multi-fault-signature and dynamic load balancing techniques for the best usage of on-board computing resources.

With continued feature-size scaling and the advent of innovative nano-scale devices, reliability analysis of digital systems has become increasingly important. However, the computational cost of accurately analyzing a large digital system is very high. We propose a high-performance reliability analysis tool on GPUs. To achieve high memory bandwidth on GPUs, two algorithms for simulation scheduling and memory arrangement are proposed. Experimental results demonstrate that the parallel analysis tool is efficient, reliable, and scalable.

In the area of design validation, we investigate state justification. By employing swarm intelligence and the power of parallelism on GPUs, we are able to efficiently find a trace that helps reach corner cases during the validation of a digital system.

In summary, the work presented in this dissertation demonstrates that several applications in the area of digital design testing and validation can be successfully rearchitected to achieve maximal performance on GPUs and obtain significant speedups. The proposed algorithms based on GPU parallelism collectively aim to improve the performance of EDA tools in the computer-aided design (CAD) community on GPUs and other many-core platforms. / Ph. D.
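The word-level parallelism that such fault simulators exploit can be sketched in plain Python rather than CUDA: packing 64 test patterns into one integer per signal lets each gate evaluation simulate all patterns at once, the same principle a GPU applies across thousands of threads. The circuit and fault list below are invented.

```python
import random

MASK = (1 << 64) - 1  # 64 packed test patterns per machine word

def simulate(a, b, c, stuck=None):
    """Evaluate y = (a AND b) OR (NOT c) bit-parallel over packed patterns.
    `stuck` optionally forces the internal net n1 to a constant, which is how
    a stuck-at fault is injected."""
    n1 = a & b
    if stuck == ("n1", 0):
        n1 = 0
    if stuck == ("n1", 1):
        n1 = MASK
    return (n1 | (~c & MASK)) & MASK

rng = random.Random(7)
a, b, c = (rng.getrandbits(64) for _ in range(3))

good = simulate(a, b, c)
for fault in [("n1", 0), ("n1", 1)]:
    # Bits where faulty and good outputs differ mark the detecting patterns.
    detected = good ^ simulate(a, b, c, stuck=fault)
    print(f"fault n1 stuck-at-{fault[1]}: "
          f"detected by {bin(detected).count('1')} of 64 patterns")
```

Diagnostic simulation extends the same comparison to pairs of faults, asking which patterns distinguish one fault's output signature from another's.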
120

Search State Extensibility based Learning Framework for Model Checking and Test Generation

Chandrasekar, Maheshwar 20 September 2010 (has links)
The increasing design complexity and shrinking feature size of hardware designs have created resource-intensive design verification and manufacturing test phases in the product life-cycle of a digital system. At the same time, time-to-market constraints require faster verification and test phases; otherwise the result may be a buggy design or a defective product. This trend in the semiconductor industry has considerably increased the complexity and importance of the design verification, manufacturing test, and silicon diagnosis phases of a digital system's production life-cycle. In this dissertation, we present a generalized learning framework which can be customized to the common solving technique for problems in these three phases.

During design verification, the conformance of the final design to its specifications is verified. Simulation-based and formal verification are the two widely known techniques for design verification. Although the former can increase confidence in the design, only the latter can ensure the correctness of a design with respect to a given specification. Originally, design verification techniques were based on Binary Decision Diagrams (BDDs), but such techniques are now based on branch-and-bound procedures to avoid space explosion. However, branch-and-bound procedures may explode in time; thus efficient heuristics and intelligent learning techniques are essential. In this dissertation, we propose a novel extensibility relation between search states and a learning framework that aids in identifying non-trivial redundant search states during the branch-and-bound search procedure, together with a probability-based heuristic to guide the learning technique. We first utilize this framework in a branch-and-bound based preimage computation engine; next, we show that it can be used to perform an upper-approximation based state space traversal, which is essential for handling industrial-scale hardware designs; finally, we propose a simple but elegant image extraction technique that utilizes our learning framework to compute an over-approximate image space. This image computation is later leveraged to create an abstraction-refinement based model checking framework.

During manufacturing test, test patterns are applied to the fabricated system, in a test environment, to check for fabrication defects. Such patterns are usually generated by Automatic Test Pattern Generation (ATPG) techniques, which assume certain fault types to model arbitrary defects. The sizes of the fault list and the test set have a major impact on the economics of manufacturing test. Towards this end, we propose a fault collapsing approach to compact the size of the target fault list for ATPG techniques. Further, from the very beginning, ATPG techniques have been based on branch-and-bound procedures that model the problem in the Boolean domain. However, ATPG is a problem in the multi-valued domain; thus we propose a multi-valued ATPG framework to exploit this underlying nature, and employ our learning technique for branch-and-bound procedures in this multi-valued framework.

To improve yield in high-volume manufacturing, silicon diagnosis identifies a set of candidate defect locations in a faulty chip. Subsequently, physical failure analysis, an extremely time-consuming step, uses these candidates as an aid to locate the defects. To reduce the number of candidates returned to the physical failure analysis step, efficient diagnostic patterns are essential. Towards this objective, we propose an incremental framework that utilizes our learning technique in a branch-and-bound procedure; it learns from the ATPG phase, where detection patterns are generated, and utilizes this information during diagnostic-pattern generation. Finally, we present a probability-based heuristic for X-filling of detection patterns with the objective of enhancing their diagnostic resolution, and we unify these techniques into a framework for test pattern generation with good detection and diagnostic ability. Overall, we propose a learning framework that can speed up the design verification, test, and diagnosis steps in the life cycle of a hardware system. / Ph. D.
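Fault collapsing, one of the techniques named above, can be sketched with a toy union-find over a single structural equivalence rule (for an AND gate, any input stuck-at-0 is equivalent to the output stuck-at-0); the netlist is invented, and a real flow applies many more equivalence and dominance rules.

```python
# Toy structural fault collapsing with union-find over stuck-at faults.
parent = {}

def find(x):
    """Find the representative of x's equivalence class, with path compression."""
    parent.setdefault(x, x)
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def union(x, y):
    parent[find(x)] = find(y)

# Hypothetical netlist: (gate_type, inputs, output).
gates = [("AND", ["a", "b"], "n1"), ("AND", ["n1", "c"], "y")]

# Full fault list: stuck-at-0 and stuck-at-1 on every net.
faults = {(net, v) for g in gates for net in g[1] + [g[2]] for v in (0, 1)}

# Equivalence rule: for an AND gate, input s-a-0 == output s-a-0.
for gtype, ins, out in gates:
    if gtype == "AND":
        for i in ins:
            union((i, 0), (out, 0))

classes = {}
for f in faults:
    classes.setdefault(find(f), []).append(f)
print(f"{len(faults)} faults collapse to {len(classes)} classes")
```

Only one representative per class needs a test pattern, which is how collapsing shrinks the target fault list the ATPG engine must cover.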
