  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
101

Simultaneous fault diagnosis of automotive engine ignition systems using pairwise coupled relevance vector machine, extracted pattern features and decision threshold optimization

Zhang, Zai Yong January 2011 (has links)
University of Macau / Faculty of Science and Technology / Department of Electromechanical Engineering
102

Magnetic force microscopy imaging of current paths in integrated circuits with overlayers

Pu, Anle 14 September 2007 (has links)
Imaging of current in internal conductors through magnetic field detection by magnetic force microscopy (MFM) is of growing interest in the analysis of integrated circuits (ICs). This thesis presents a systematic study of MFM-based mapping of current in model circuits using force and force-gradient techniques. In comparing these two techniques, force was found to have a much higher signal-to-noise ratio (from ~150 to ~580 times) than force gradient at large tip-sample distances, considering the presence of thick overlayers in ICs. As a result, force has better sensitivity and can therefore be used to detect much smaller minimum currents. We have achieved a sensitivity of ~0.64 µA per square-root hertz in air and ~0.095 µA per square-root hertz in vacuum for force with a pinning field, at a probe-circuit separation of 1.0 µm. We conclude that the force technique is superior for the application of MFM current imaging of buried conductors, albeit with reduced spatial resolution. Numerical modeling of the MFM images has shown that the simple point-probe approximation is insufficient to model MFM images. An extended model, which considers realistic MFM probe geometries and the forces acting on the whole probe, has been shown to be necessary. Qualitative and quantitative comparisons of the experimental and simulation results with this model agree to within experimental uncertainty. The comparisons suggested that the CoCr film thickness is not uniform on the probe, which was verified by scanning electron microscope cross-section images of probes cut by a focused ion beam. Most notably, the CoCr film was 1.5 times thicker on the cantilever than on the tip. Based on the simulation and experimental results, we have devised a method to accurately locate the current path from MFM images with submicrometer uncertainty. The method was tested for different patterns of model conducting lines. 
It was shown to be a useful technique for fault location in IC failure analysis when current flows through the devices buried under overlayers and no topographic features are on the surface to provide clues about the positions of the devices. / October 2007
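The field scales this abstract deals with can be sketched with the textbook result for a long straight conductor, B = μ0·I/(2πr). A minimal Python check (the current and separation values are taken from the abstract above; the infinite-wire geometry is a simplifying assumption for illustration, not the thesis's probe model):

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability (T*m/A)

def field_above_wire(current_a, distance_m):
    """Magnetic field magnitude (tesla) at distance r from a long straight
    conductor: B = mu0 * I / (2 * pi * r)."""
    return MU0 * current_a / (2 * math.pi * distance_m)

# Field 1.0 um above a line carrying the quoted ~0.64 uA minimum detectable current
b = field_above_wire(0.64e-6, 1.0e-6)
print(f"{b:.3e} T")  # 1.280e-07 T
```

At the quoted detection limit the field is only on the order of 10^-7 T, which conveys why the higher-sensitivity force technique matters for buried conductors.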
103

Optimum Sensor Localization/Selection In A Diagnostic/Prognostic Architecture

Zhang, Guangfan 17 February 2005 (has links)
107 pages. Directed by Dr. George J. Vachtsevanos. This research addresses the problem of sensor localization/selection for fault diagnostic purposes in Prognostics and Health Management (PHM)/Condition-Based Maintenance (CBM) systems. The performance of PHM/CBM systems relies not only on the diagnostic/prognostic algorithms used, but also on the types, locations, and number of sensors selected. Most of the research reported in the area of sensor localization/selection for fault diagnosis focuses on qualitative analysis and lacks a uniform figure of merit. Moreover, sensor localization/selection is mainly studied as an open-loop problem, without considering performance feedback from the on-line diagnostic/prognostic system. In this research, a novel approach for sensor localization/selection is proposed within an integrated diagnostic/prognostic architecture to achieve maximum diagnostic performance. First, a fault detectability metric is defined quantitatively. A novel graph-based approach, the Quantified-Directed Model, is called upon to model fault propagation in complex systems, and an appropriate figure of merit is defined to maximize fault detectability and minimize the required number of sensors while achieving optimum performance. Second, the proposed sensor localization/selection strategy is integrated into a diagnostic/prognostic system architecture while exhibiting attributes of flexibility and scalability. Moreover, the performance is validated and verified in the integrated diagnostic/prognostic architecture, and the performance of that architecture acts as useful feedback for further optimizing the sensors considered. The approach is tested and validated on a five-tank simulation system. 
This research has led to the following major contributions: a generalized methodology for sensor localization/selection for fault diagnostic purposes; a quantitative definition of the fault detection ability of a sensor, a novel Quantified-Directed Model (QDM) method for fault propagation modeling, and a generalized figure of merit to maximize fault detectability and minimize the required number of sensors while achieving optimum diagnostic performance at the system level; a novel, integrated architecture for a diagnostic/prognostic system; and validation of the proposed sensor localization/selection approach in the integrated diagnostic/prognostic architecture.
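The "minimize the number of sensors while maximizing fault detectability" objective can be illustrated with a greedy set-cover heuristic over a fault-by-sensor detectability matrix. This is a hypothetical sketch, not the thesis's Quantified-Directed Model; the matrix values and the 0.5 threshold are invented for illustration:

```python
def greedy_sensor_selection(detectability, threshold=0.5):
    """Greedy set-cover heuristic: pick the fewest sensors such that every
    fault has at least one sensor whose detectability exceeds `threshold`.
    `detectability[f][s]` is the (hypothetical) detectability of fault f
    by sensor s."""
    n_faults = len(detectability)
    n_sensors = len(detectability[0])
    uncovered = set(range(n_faults))
    chosen = []
    while uncovered:
        # pick the sensor covering the most still-uncovered faults
        best = max(range(n_sensors),
                   key=lambda s: sum(detectability[f][s] > threshold for f in uncovered))
        newly = {f for f in uncovered if detectability[f][best] > threshold}
        if not newly:
            break  # remaining faults are undetectable by any sensor
        chosen.append(best)
        uncovered -= newly
    return chosen

D = [[0.9, 0.1, 0.0],   # fault 0: seen well only by sensor 0
     [0.8, 0.7, 0.0],   # fault 1: seen by sensors 0 and 1
     [0.0, 0.2, 0.6]]   # fault 2: seen only by sensor 2
print(greedy_sensor_selection(D))  # [0, 2]
```

Greedy set cover is a standard heuristic for this kind of coverage objective; the thesis's graph-based formulation additionally models how faults propagate before reaching a sensor.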
104

A Particle Filtering-based Framework for On-line Fault Diagnosis and Failure Prognosis

Orchard, Marcos Eduardo 08 November 2007 (has links)
This thesis presents an on-line particle-filtering-based framework for fault diagnosis and failure prognosis in nonlinear, non-Gaussian systems. The methodology assumes the definition of a set of fault indicators, which are appropriate for monitoring purposes, the availability of real-time process measurements, and the existence of empirical knowledge (or historical data) to characterize both nominal and abnormal operating conditions. The incorporation of particle-filtering (PF) techniques in the proposed scheme not only allows for the implementation of real-time algorithms, but also provides a solid theoretical framework to handle the problems of fault detection and isolation (FDI), fault identification, and failure prognosis. Founded on the concept of sequential importance sampling (SIS) and Bayesian theory, PF approximates the conditional state probability distribution by a swarm of points called particles and a set of weights representing discrete probability masses. Particles can be easily generated and recursively updated in real time, given a nonlinear process dynamic model and a measurement model that relates the states of the system with the observed fault indicators. Two autonomous modules have been considered in this research. On one hand, the fault diagnosis module uses a hybrid state-space model of the plant and a particle-filtering algorithm to (1) calculate the probability of any given fault condition in real time, (2) estimate the probability density function (pdf) of the continuous-valued states in the monitored system, and (3) provide information about type I and type II detection errors, as well as other critical statistics. Among the advantages offered by this diagnosis approach is the fact that the pdf state estimate may be used as the initial condition in prognostic modules after a particular fault mode is isolated, hence allowing swift transitions between FDI and prognostic routines. 
The failure prognosis module, on the other hand, computes (in real time) the pdf of the remaining useful life (RUL) of the faulty subsystem using a particle-filtering-based algorithm. This algorithm consecutively updates the current state estimate for a nonlinear state-space model (with unknown time-varying parameters) and predicts the evolution in time of the fault indicator pdf. The outcome of the prognosis module provides information about the precision and accuracy of long-term predictions, RUL expectations, 95% confidence intervals, and other hypothesis tests for the failure condition under study. Finally, inner and outer correction loops (learning schemes) are used to periodically improve the parameters that characterize the performance of FDI and/or prognosis algorithms. Illustrative theoretical examples and data from a seeded fault test for a UH-60 planetary carrier plate are used to validate all proposed approaches. Contributions of this research include: (1) the establishment of a general methodology for real time FDI and failure prognosis in nonlinear processes with unknown model parameters, (2) the definition of appropriate procedures to generate dependable statistics about fault conditions, and (3) a description of specific ways to utilize information from real time measurements to improve the precision and accuracy of the predictions for the state probability density function (pdf).
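The SIS-with-resampling core that this framework builds on can be sketched in a few lines. This is a generic bootstrap particle filter for a one-dimensional random-walk state observed in Gaussian noise, not the hybrid state-space model of the thesis; all parameter values are illustrative:

```python
import math
import random

def particle_filter(observations, n_particles=500,
                    process_std=0.1, obs_std=0.5, seed=0):
    """Minimal bootstrap particle filter (SIS + multinomial resampling).
    Returns the posterior-mean state estimate after each observation."""
    rng = random.Random(seed)
    particles = [0.0] * n_particles
    estimates = []
    for z in observations:
        # propagate each particle through the process model (random walk)
        particles = [x + rng.gauss(0, process_std) for x in particles]
        # weight particles by the Gaussian measurement likelihood
        weights = [math.exp(-0.5 * ((z - x) / obs_std) ** 2) for x in particles]
        total = sum(weights)
        weights = [w / total for w in weights]
        estimates.append(sum(w * x for w, x in zip(weights, particles)))
        # resample: draw a new particle set proportionally to the weights
        particles = rng.choices(particles, weights=weights, k=n_particles)
    return estimates

# A slowly drifting fault indicator: the estimates should track the drift
gen = random.Random(1)
obs = [0.1 * t + gen.gauss(0, 0.5) for t in range(30)]
est = particle_filter(obs)
print(round(est[-1], 2))
```

A full prognosis module, as described above, would additionally propagate the particles forward in time with no further measurements to obtain the RUL distribution.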
105

Adaptable, scalable, probabilistic fault detection and diagnostic methods for the HVAC secondary system

Li, Zhengwei 30 March 2012 (has links)
As the popularity of building automation systems (BAS) increases, there is an increasing need to understand and analyze HVAC system behavior from the monitoring data. However, several constraints currently prevent FDD technology from being widely accepted: 1) the diagnostic results are difficult to understand; 2) FDD methods have strong system dependency and low adaptability; 3) the performance of FDD methods is still not satisfactory; 4) lack of information. This thesis aims at removing these constraints, with a specific focus on the air handling unit (AHU), one of the most common HVAC components in commercial buildings. To achieve this target, the following work has been done in the thesis. On understanding the diagnostic results, a standard information structure including probability, criticality, and risk is proposed. On improving methods' adaptability, a low-system-dependency FDD method, the rule-augmented CUSUM method, is developed and tested, and another highly adaptable method, principal component analysis (PCA), is implemented and tested. On improving overall FDD performance (detection sensitivity and diagnostic accuracy), the hypothesis that an integrated approach combining different FDD methods could improve FDD performance is proposed, and both deterministic and probabilistic integration approaches are implemented to verify it. On understanding the value of information, the FDD results for a testing system under different information-availability scenarios are compared. The results show that the rule-augmented CUSUM method is able to detect abrupt faults and most incipient faults, and is therefore a reliable method to use. The results also show that overall improvement of an FDD method is possible with the Bayesian integration approach, given accurate parameters (sensitivity and specificity), but is not guaranteed with the deterministic integration approach, although the latter is simpler to use. 
The study of information availability reveals that most of the faults can be detected in the low and medium information-availability scenarios; moving further to the high information-availability scenario only slightly improves the diagnostic performance. The key message from this thesis to the community is that using a Bayesian approach to integrate highly adaptable FDD methods, and delivering the results in a probability context, is an optimal way to remove the current constraints and advance FDD technology.
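The detection core of a CUSUM-style method can be sketched as follows. This is a generic one-sided CUSUM on a residual stream, without the rule augmentation the thesis adds; the drift and threshold values are illustrative:

```python
def cusum_detect(residuals, drift=0.5, threshold=5.0):
    """One-sided CUSUM: accumulate positive deviations beyond `drift` and
    flag a fault once the cumulative sum crosses `threshold`. Returns the
    index at which the fault is declared, or None if none is detected."""
    s = 0.0
    for i, r in enumerate(residuals):
        s = max(0.0, s + r - drift)
        if s > threshold:
            return i
    return None

healthy = [0.1, -0.2, 0.0, 0.3, -0.1] * 4   # residuals of a healthy AHU (synthetic)
faulty = healthy + [1.5] * 8                 # an incipient bias appears at index 20
print(cusum_detect(faulty))  # 25
```

The accumulation is what makes CUSUM sensitive to incipient faults: a small persistent bias that never crosses a fixed alarm limit still drives the cumulative sum over the threshold after a few samples.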
106

Design-for-testability techniques for deep submicron technology /

Das, Debaleena. January 2000 (has links)
Thesis (Ph. D.)--University of Texas at Austin, 2000. / Vita. Includes bibliographical references (leaves 81-85). Available also in a digital version from Dissertation Abstracts.
107

Multivariate statistical monitoring and diagnosis with applications in semiconductor processes /

Yue, Hongyu, January 2000 (has links)
Thesis (Ph. D.)--University of Texas at Austin, 2000. / Vita. Includes bibliographical references (leaves 187-201). Available also in a digital version from Dissertation Abstracts.
108

Application of communication theory to health assessment, degradation quantification, and root cause diagnosis

Costuros, Theodossios Vlasios 15 October 2013 (has links)
A review of diagnostic methods shows that new techniques are required to quantify system degradation from the measured response. Information theory, developed by Claude E. Shannon, involves the quantification of information, defining limits in signal processing for reliable data communication. One such technique takes these fundamentals and forms an analogy between a machine and a communication channel, modifying Shannon's channel-capacity concept and applying it to measured machine-system response. The technique considers the residual signal (the difference between a measured signal induced by faults and a baseline signal) to quantify degradation, perform system health assessment, and diagnose faults. Just as noise hampers data transmission, mechanical faults hinder power transmission through the system. This residual signal can be viewed as noise within the context of information theory, permitting application of information theory to machines to construct a health measure for assessment of machine health. The goal of this dissertation is to create and study metrics for the assessment of machine health. This dissertation explores channel capacity, which is grounded in and supported by proven theorems of information theory, studies different ways to apply and calculate channel capacity in practical industry settings, and creates methods to assess and pinpoint degradation by applying the channel-capacity-based measures to signals. Channel capacity is the maximum rate of information that can be sent and received over a channel having a known level of noise. A measured signal from a machine consists of a baseline signal exemplary of health, intrinsic noise that contaminates all measurements, and signals generated by the faults. Noise, the difference between the measured signal and the baseline signal, consists of intrinsic noise and "fault noise". 
Separating fault noise from intrinsic noise (embedded in the measurement) shows that the channel-capacity calculations for the machine require minimal computational effort, and the calculations are consistent in the presence of intrinsic white noise. Considering the response average, or DC component, of a signal in the channel-capacity calculations adds robustness to the diagnostic results. The method successfully predicted robot failures. Important to system health assessment is having a good baseline response as a reference. The technique is favorable for industry because it applies directly to measurement data and the calculations are done in the time domain. The technique can be used in the semiconductor industry as a tool for monitoring system performance and lowering fab operating cost by extending component use and scheduling maintenance as needed. With a running-window average of channel capacity, the technique is able to locate the fault in time. / text
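The machine-as-channel analogy can be sketched by evaluating Shannon's capacity formula, C = B·log2(1 + S/N), with the residual (measured minus baseline) playing the role of noise. This is an illustrative reading of the analogy, not the dissertation's exact metric; the signals below are synthetic:

```python
import math

def channel_capacity(measured, baseline, bandwidth_hz=1.0):
    """Shannon capacity C = B * log2(1 + S/N), treating the baseline as the
    signal and the residual (measured - baseline) as the noise."""
    n = len(measured)
    s_power = sum(b * b for b in baseline) / n
    residual = [m - b for m, b in zip(measured, baseline)]
    n_power = sum(r * r for r in residual) / n
    return bandwidth_hz * math.log2(1 + s_power / n_power)

baseline = [math.sin(0.1 * t) for t in range(200)]   # response exemplary of health
healthy = [b + 0.01 for b in baseline]               # small intrinsic offset only
degraded = [b + 0.3 for b in baseline]               # fault-induced deviation

# Degradation lowers the effective capacity of the "machine channel"
print(channel_capacity(healthy, baseline) > channel_capacity(degraded, baseline))  # True
```

A falling capacity value thus serves as a scalar health measure, and computing it over a running window, as the abstract describes, localizes the fault in time.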
109

Methods for improving the reliability of semiconductor fault detection and diagnosis with principal component analysis

Cherry, Gregory Allan 28 August 2008 (has links)
Not available / text
110

Data-driven approach for control performance monitoring and fault diagnosis

Yu, Jie 28 August 2008 (has links)
Not available / text
