1

Observability and Economic aspects of Fault Detection and Diagnosis Using CUSUM based Multivariate Statistics

Bin Shams, Mohamed, January 2010
This project focuses on the fault observability problem and its impact on plant performance and profitability. The study has been conducted along two main directions. First, a technique has been developed to detect and diagnose faulty situations that could not be observed by previously reported methods. The technique is demonstrated on a subset of faults typically considered for the Tennessee Eastman Process (TEP), which have been found unobservable in all previous studies. The proposed strategy combines the cumulative sum (CUSUM) of the process measurements with Principal Component Analysis (PCA). The CUSUM is used to enhance faults under conditions of a small fault-to-noise ratio, while PCA filters noise in the presence of highly correlated data. Multivariate indices, namely T2 and Q statistics based on the cumulative sums of all available measurements, were used for observing these faults. The ARLo.c was proposed as a statistical metric to quantify fault observability.

Following fault detection, the problem of fault isolation is treated. It is shown that, for the particular faults considered in the TEP problem, contribution plots are not able to properly isolate the faults under consideration. This motivates the use of the CUSUM-based PCA technique, previously used for detection, to unambiguously diagnose the faults. The diagnosis scheme constructs a family of CUSUM-based PCA models, one for each fault, and then tests whether the statistical thresholds of a particular faulty model are exceeded, thereby indicating the occurrence or absence of the corresponding fault.

Although the CUSUM-based techniques were found successful in detecting abnormal situations as well as isolating the faults, long time intervals were required for both detection and diagnosis. The potential economic impact of the resulting delays motivates the second main objective of this project: a methodology to quantify the potential economic loss due to unobserved faults when standard statistical monitoring charts are used. Since most chemical and petrochemical plants operate under closed-loop control, the interaction with the control system is also explicitly considered. An optimization problem is formulated to search for the optimal trade-off between fault observability and closed-loop performance. It is solved in the frequency domain using approximate closed-loop transfer function models, and in the time domain using a simulation-based approach. The time-domain optimization is applied to the TEP to solve for the optimal tuning parameters of the controllers that minimize an economic cost of the process.
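The detection scheme summarized above combines a CUSUM transform of the measurements with a PCA model monitored through T2 and Q statistics. The sketch below illustrates that general idea; the class and method names, the standardization step, and the empirical-quantile control limits are assumptions made for illustration and do not reproduce the thesis' exact formulation (in particular, its fault-specific models and the ARLo.c observability metric are not shown).

```python
import numpy as np

class CusumPcaMonitor:
    """Minimal sketch of a CUSUM-based PCA monitor with T2 and Q statistics.
    Names and the empirical-quantile limits are illustrative assumptions."""

    def fit(self, X_normal, n_components=3, alpha=0.99):
        # Standardize with in-control (normal operation) statistics.
        self.mean_ = X_normal.mean(axis=0)
        self.std_ = X_normal.std(axis=0) + 1e-12

        # CUSUM of the standardized measurements: small persistent faults
        # accumulate over time and become easier to detect.
        Z = np.cumsum((X_normal - self.mean_) / self.std_, axis=0)
        self.z_mean_ = Z.mean(axis=0)

        # PCA on the CUSUM signals via SVD.
        _, S, Vt = np.linalg.svd(Z - self.z_mean_, full_matrices=False)
        self.P_ = Vt[:n_components].T                      # loadings
        self.lam_ = S[:n_components] ** 2 / (len(Z) - 1)   # score variances

        # Empirical control limits from the in-control statistics.
        t2, q = self._statistics(X_normal)
        self.t2_lim_ = np.quantile(t2, alpha)
        self.q_lim_ = np.quantile(q, alpha)
        return self

    def _statistics(self, X):
        # The CUSUM is restarted at the beginning of each batch in this sketch.
        Z = np.cumsum((X - self.mean_) / self.std_, axis=0) - self.z_mean_
        T = Z @ self.P_                                  # scores
        t2 = np.sum(T ** 2 / self.lam_, axis=1)          # Hotelling's T2
        resid = Z - T @ self.P_.T
        q = np.sum(resid ** 2, axis=1)                   # Q (squared prediction error)
        return t2, q

    def monitor(self, X_new):
        """Flag samples whose T2 or Q statistic exceeds its control limit."""
        t2, q = self._statistics(X_new)
        return (t2 > self.t2_lim_) | (q > self.q_lim_)
```

In use, fit would be called on an in-control batch and monitor on new data; the isolation scheme described in the abstract would repeat this construction with one CUSUM-based PCA model per candidate fault and check which model's limits are exceeded.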
2

Statistical Fault Detection with Applications to IMU Disturbances

Törnqvist, David, January 2006
This thesis deals with the problem of detecting faults in an environment where the measurements are affected by additive noise. To do this, a residual sensitive to faults is derived and statistical methods are used to distinguish faults from noise. Standard methods for fault detection compare a batch of data with a model of the system using the generalized likelihood ratio. Careful treatment of the initial state of the model is quite important, in particular for short batch sizes. One method to handle this is the parity-space method, which removes the influence of the initial state using a projection.

In this thesis, the case where prior knowledge about the initial state is available is treated. Such knowledge can be obtained, for example, from a Kalman filter. Combining the prior estimate with a minimum-variance estimate from the data batch results in a smoothed estimate, and the influence of this estimated initial state is then removed. It is also shown that removing the influence of the initial state using an estimate from the data batch alone yields the parity-space method. To model slowly changing faults, an efficient parameterization using Chebyshev polynomials is given.

The methods described above have been applied to an Inertial Measurement Unit (IMU). The IMU usually consists of accelerometers and gyroscopes, but has in this work been extended with a magnetometer. Traditionally, the IMU has been used to estimate the position and orientation of airplanes, missiles, etc. Recently, decreasing size and cost have made it possible to use IMUs for applications such as augmented reality and body motion analysis. Since a magnetometer is very sensitive to disturbances from metal, such disturbances have to be detected; detection makes compensation possible. Another topic covered is the fundamental question of observability for fault inputs: conditions for observability are given for constant and linearly growing faults.

The measurements from the IMU show that the sensor noise distribution can be well approximated by white Gaussian noise, which gives good correspondence between practical and theoretical results when the sensor is kept at rest. The disturbances to the IMU can be approximated by functions that are smooth with respect to time, so low-rank parameterizations can be used to describe them. The results show that using smoothing to obtain the initial-state estimate, together with the parameterization of the disturbances, improves detection performance drastically.
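The batch-wise generalized likelihood ratio (GLR) test with a low-rank fault parameterization described above can be illustrated with a small sketch. The example tests a batch of scalar residuals for a slowly varying additive fault expressed in a Chebyshev basis; the function name, the scalar-residual simplification, and the chi-square threshold are assumptions for illustration, not the thesis' exact derivation (which also accounts for the smoothed initial-state estimate).

```python
import numpy as np
from numpy.polynomial.chebyshev import chebvander
from scipy.stats import chi2

def glr_chebyshev_test(residuals, noise_std, degree=2, alpha=0.01):
    """GLR test for a slowly varying additive fault in a batch of scalar
    residuals with white Gaussian noise.  The fault is parameterized by
    Chebyshev polynomials up to the given degree (a low-rank basis)."""
    N = len(residuals)
    t = np.linspace(-1.0, 1.0, N)            # batch time mapped to [-1, 1]
    H = chebvander(t, degree)                # N x (degree + 1) fault basis

    # Maximum-likelihood fault estimate under H1; the GLR statistic is the
    # energy of the residual projected onto the fault subspace, scaled by
    # the noise variance.  Under H0 it is chi-square with (degree + 1) dof.
    f_hat, *_ = np.linalg.lstsq(H, residuals, rcond=None)
    glr = (H @ f_hat) @ residuals / noise_std**2
    threshold = chi2.ppf(1.0 - alpha, df=degree + 1)
    return glr > threshold, glr, threshold

# Example: a slow ramp fault buried in unit-variance noise.
rng = np.random.default_rng(0)
r = 0.02 * np.arange(200) + rng.normal(0.0, 1.0, 200)
alarm, glr, thr = glr_chebyshev_test(r, noise_std=1.0)
print(alarm, round(glr, 1), round(thr, 1))
```

In the example, a ramp whose per-sample slope is far below the noise standard deviation is still flagged, because the whole batch is projected onto the low-rank fault subspace before the likelihood ratio is evaluated.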
