61
A prognostic health management based framework for fault-tolerant control
Brown, Douglas W., 15 June 2011
The emergence of complex and autonomous systems, such as modern aircraft, unmanned aerial vehicles (UAVs) and automated industrial processes, is driving the development and implementation of new control technologies aimed at accommodating incipient failures to maintain system operation during an emergency. The motivation for this research began in the area of avionics and flight control systems, with the purpose of improving aircraft safety. A prognostic health management (PHM) based fault-tolerant control architecture can increase safety and reliability by detecting and accommodating impending failures, thereby minimizing the occurrence of unexpected, costly and possibly life-threatening mission failures; reducing unnecessary maintenance actions; and extending system availability and reliability.
Recent developments in failure prognosis and fault tolerant control (FTC) provide a basis for a prognosis based reconfigurable control framework. Key work in this area considers: (1) long-term lifetime predictions as a design constraint using optimal control; (2) the use of model predictive control to retrofit existing controllers with real-time fault detection and diagnosis routines; (3) hybrid hierarchical approaches to FTC taking advantage of control reconfiguration at multiple levels, or layers, enabling the possibility of set-point reconfiguration, system restructuring and path / mission re-planning. Combining these control elements in a hierarchical structure allows for the development of a comprehensive framework for prognosis based FTC.
First, the PHM-based reconfigurable controls framework presented in this thesis is given as one approach within a much larger hierarchical control scheme. It begins with a brief overview of a broader three-tier hierarchical control architecture comprising supervisory, intermediate, and low-level layers. The supervisory layer manages high-level objectives. The intermediate layer redistributes component loads among multiple sub-systems. The low-level layer reconfigures the set-points used by the local production controller, thereby trading off system performance for an increase in remaining useful life (RUL).
Next, a low-level reconfigurable controller is defined as a time-varying multi-objective criterion function with appropriate constraints to determine optimal set-point reconfiguration. A set of necessary conditions is established to ensure the stability and boundedness of the composite system. In addition, the error bounds corresponding to long-term state-space prediction are examined. From these error bounds, the point estimate and corresponding uncertainty boundaries for the RUL estimate can be obtained. Also, the computational efficiency of the controller is examined using the average number of floating-point operations per iteration as a standard metric of comparison.
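As a rough illustration of the set-point reconfiguration idea, the sketch below solves a one-dimensional version of such a trade-off. The quadratic performance penalty, the exponential RUL model and the weights are hypothetical stand-ins, not the criterion function derived in the thesis.

```python
# Illustrative sketch: choose a set-point that trades tracking performance
# against remaining useful life (RUL). The quadratic performance term and the
# exponential RUL model are hypothetical placeholders.
import numpy as np
from scipy.optimize import minimize_scalar

r_nominal = 1.0           # nominal set-point demanded by the supervisory layer
w_perf, w_rul = 1.0, 5.0  # relative weights of the two objectives (assumed)

def rul_hours(r):
    """Assumed degradation model: higher set-points deplete life faster."""
    return 500.0 * np.exp(-2.0 * r)

def cost(r):
    perf_loss = (r - r_nominal) ** 2           # penalty for deviating from nominal
    rul_loss = 1.0 / max(rul_hours(r), 1e-6)   # penalty for short remaining life
    return w_perf * perf_loss + w_rul * rul_loss

res = minimize_scalar(cost, bounds=(0.5, 1.0), method="bounded")
print(f"reconfigured set-point: {res.x:.3f}, predicted RUL: {rul_hours(res.x):.1f} h")
```

With these assumed weights the optimizer backs the set-point off from 1.0 to roughly 0.93, buying a modest increase in predicted remaining life, which is the essence of the performance-versus-RUL trade described above.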
Finally, results are obtained for an avionics-grade triplex-redundant electro-mechanical actuator with a specific fault mode: insulation breakdown between winding turns in a brushless DC motor is used as the test case. A prognostic model is developed relating motor operating conditions to RUL. Standard metrics for determining the feasibility of RUL reconfiguration are defined and used to study the performance of the reconfigured system; more specifically, the effects of the prediction horizon, model uncertainty, operating conditions and load disturbance on the RUL during reconfiguration are simulated using MATLAB and Simulink. Contributions of this work include defining a control architecture, proving stability and boundedness, deriving the control algorithm and demonstrating feasibility with an example.
62
An investigation into the quality of supply voltage dip-proofing
Lange, Lyle George, January 1998
A dissertation submitted in partial fulfillment of the Master's Diploma in Technology: Electrical Engineering (Heavy Current), M.L. Sultan Technikon, 1998. / With the ever-increasing electrical demand on an electrical system, the quality of the supply will be tested more and more. It is with this deteriorating quality that the topic of voltage dips and depressions has become a contentious issue among the industrial sector and supply authorities, hence the development in recent years of means to combat it. / M
63
A Model Based Framework for Fault Diagnosis and Prognosis of Dynamical Systems with an Application to Helicopter Transmissions
Patrick-Aldaco, Romano, 06 July 2007
The thesis presents a framework for integrating models, simulation, and experimental data to diagnose incipient failure modes and prognosticate the remaining useful life of critical components, with an application to the main transmission of a helicopter. Although the helicopter example is used to illustrate the methodology, the architecture can be applied to a variety of similar engineering systems by appropriately adapting its modules. Models of the kind referenced are commonly referred to in the literature as physical or physics-based models. Such models utilize a mathematical description of some of the natural laws that govern system behaviors.
The methodology presented considers separately the aspects of diagnosis and prognosis of engineering systems, but a similar generic framework is proposed for both. The methodology is tested and validated through comparison of results to data from experiments carried out on helicopters in operation and a test cell employing a prototypical helicopter gearbox. Two kinds of experiments have been used. The first one retrieved vibration data from several healthy and faulted aircraft transmissions in operation. The second is a seeded-fault damage-progression test providing gearbox vibration data and ground truth data of increasing crack lengths. For both kinds of experiments, vibration data were collected through a number of accelerometers mounted on the frame of the transmission gearbox. The applied architecture consists of modules with such key elements as the modeling of vibration signatures, extraction of descriptive vibratory features, finite element analysis of a gearbox component, and characterization of fracture progression.
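To give a flavor of the vibratory-feature extraction step mentioned above, the sketch below computes a standard FM4-style condition indicator (kurtosis of a residual signal) from a synthetic gearbox vibration record. The signal, frequencies and the choice of indicator are illustrative assumptions, not the features used in the thesis.

```python
# Illustrative sketch: compute a simple gear-fault condition indicator from a
# synthetic vibration record. The FM4-style residual kurtosis shown here is a
# standard textbook indicator, not necessarily the feature set used in the thesis.
import numpy as np

fs = 20_000                      # sampling rate [Hz] (assumed)
t = np.arange(0, 1.0, 1 / fs)
mesh_hz = 1_200                  # gear-mesh frequency [Hz] (assumed)

# Synthetic signal: regular mesh tone plus a short impulsive burst from a damaged tooth.
signal = np.sin(2 * np.pi * mesh_hz * t) + 0.05 * np.random.randn(t.size)
signal[5_000:5_040] += 2.0 * np.hanning(40)     # localized fault impulse

# Residual signal: notch out the regular mesh component, keep what is left.
spectrum = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(t.size, 1 / fs)
spectrum[np.abs(freqs - mesh_hz) < 5] = 0
residual = np.fft.irfft(spectrum, n=t.size)

# FM4-style indicator: kurtosis of the residual; healthy gears stay near 3.
fm4 = np.mean((residual - residual.mean()) ** 4) / np.var(residual) ** 2
print(f"residual kurtosis (FM4-like): {fm4:.2f}")
```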
Contributions of the thesis include: (1) generic model-based fault diagnosis and failure prognosis methodologies, readily applicable to a dynamic large-scale mechanical system; (2) the characterization of the vibration signals of a class of complex rotary systems through model-based techniques; (3) a reverse engineering approach for fault identification using simulated vibration data; (4) the utilization of models of a faulted planetary gear transmission to classify descriptive system parameters either as fault-sensitive or fault-insensitive; and (5) guidelines for the integration of the model-based diagnosis and prognosis architectures into prognostic algorithms aimed at determining the remaining useful life of failing components.
64
A Bayesian least squares support vector machines based framework for fault diagnosis and failure prognosis
Khawaja, Taimoor Saleem, 21 July 2010
A high-belief low-overhead Prognostics and Health Management (PHM) system is desired for online real-time monitoring of complex non-linear systems operating in a complex (possibly non-Gaussian) noise environment. This thesis presents a Bayesian Least Squares Support Vector Machine (LS-SVM) based framework for fault diagnosis and failure prognosis in nonlinear, non-Gaussian systems. The methodology assumes the availability of real-time process measurements, definition of a set of fault indicators, and the existence of empirical knowledge (or historical data) to characterize both nominal and abnormal operating conditions.
An efficient yet powerful Least Squares Support Vector Machine (LS-SVM) algorithm, set within a Bayesian Inference framework, not only allows for the development of real-time algorithms for diagnosis and prognosis but also provides a solid theoretical framework to address key concepts related to classification for diagnosis and regression modeling for prognosis. SVMs are founded on the principle of Structural Risk Minimization (SRM), which tends to find a good trade-off between low empirical risk and small capacity. The key features of SVMs are the use of non-linear kernels, the absence of local minima, the sparseness of the solution and the capacity control obtained by optimizing the margin. The Bayesian Inference framework linked with LS-SVMs allows a probabilistic interpretation of the results for diagnosis and prognosis. Additional levels of inference provide the much coveted features of adaptability and tunability of the modeling parameters.
The two main modules considered in this research are fault diagnosis and failure prognosis. With the goal of designing an efficient and reliable fault diagnosis scheme, a novel Anomaly Detector is suggested based on LS-SVMs. The proposed scheme uses only baseline data to construct a one-class LS-SVM which, when presented with online data, is able to distinguish between normal behavior and any abnormal or novel data during real-time operation. The results of the scheme are interpreted as a posterior probability of health (1 - probability of fault). As shown through two case studies in Chapter 3, the scheme is well suited for diagnosing imminent faults in dynamical non-linear systems.
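A minimal sketch of such a baseline-only anomaly detector is given below, using scikit-learn's OneClassSVM as a stand-in for the one-class LS-SVM and a simple logistic squashing in place of the Bayesian posterior of health; all data and parameters are synthetic.

```python
# Illustrative sketch: train a one-class detector on baseline (healthy) data only,
# then score new data as a pseudo-probability of health. scikit-learn's OneClassSVM
# stands in for the thesis's one-class LS-SVM; the logistic squashing of the decision
# score is an illustrative substitute for the Bayesian posterior used in the thesis.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, size=(500, 2))           # healthy operating data
online = np.vstack([rng.normal(0.0, 1.0, size=(5, 2)),   # nominal samples
                    rng.normal(4.0, 1.0, size=(5, 2))])  # anomalous samples

detector = OneClassSVM(kernel="rbf", gamma=0.5, nu=0.05).fit(baseline)

scores = detector.decision_function(online)     # > 0 inside the healthy region
p_health = 1.0 / (1.0 + np.exp(-5.0 * scores))  # map score to (0, 1)
for s, p in zip(scores, p_health):
    print(f"score={s:+.3f}  P(health)~{p:.2f}  P(fault)~{1 - p:.2f}")
```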
Finally, the failure prognosis scheme is based on an incremental weighted Bayesian LS-SVR machine. It is particularly suited for online deployment given the incremental nature of the algorithm and the quick optimization problem solved in the LS-SVR algorithm. By way of kernelization and a Gaussian Mixture Modeling (GMM) scheme, the algorithm can estimate (possibly) non-Gaussian posterior distributions for complex non-linear systems. An efficient regression scheme associated with the more rigorous core algorithm allows for long-term predictions, fault growth estimation with confidence bounds and remaining useful life (RUL) estimation after a fault is detected.
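The core LS-SVR regression step can be sketched by solving the standard LS-SVM linear system with an RBF kernel, as below. The incremental weighting, Bayesian hyper-parameter tuning and GMM-based uncertainty of the thesis are not reproduced here, and the degradation data are synthetic.

```python
# Illustrative sketch: a minimal least-squares SVM regressor (LS-SVR) with an RBF
# kernel, fitted to a synthetic fault-growth trend and extrapolated a few steps ahead.
import numpy as np

def rbf(a, b, sigma=2.0):
    d2 = (a[:, None] - b[None, :]) ** 2
    return np.exp(-d2 / (2 * sigma ** 2))

rng = np.random.default_rng(0)
t_train = np.arange(0, 20, 1.0)                     # time of degradation measurements
y_train = 0.02 * t_train ** 2 + 0.05 * rng.standard_normal(t_train.size)

gamma = 100.0                                       # regularization constant (assumed)
n = t_train.size
A = np.zeros((n + 1, n + 1))
A[0, 1:] = 1.0
A[1:, 0] = 1.0
A[1:, 1:] = rbf(t_train, t_train) + np.eye(n) / gamma
rhs = np.concatenate(([0.0], y_train))
sol = np.linalg.solve(A, rhs)                       # LS-SVR dual solution
b, alpha = sol[0], sol[1:]

# Short-horizon prediction; a local RBF kernel reverts toward the bias term far
# outside the training window, so only a few steps ahead are shown.
t_future = np.arange(20, 23, 1.0)
y_hat = rbf(t_future, t_train) @ alpha + b
print(np.round(y_hat, 3))
```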
The leading contributions of this thesis are (a) the development of a novel Bayesian Anomaly Detector for efficient and reliable Fault Detection and Identification (FDI) based on Least Squares Support Vector Machines; (b) the development of a data-driven real-time architecture for long-term Failure Prognosis using Least Squares Support Vector Machines; (c) uncertainty representation and management using Bayesian Inference for posterior distribution estimation and hyper-parameter tuning; and finally (d) the statistical characterization of the performance of the diagnosis and prognosis algorithms in order to assess the efficiency and reliability of the proposed schemes.
65
Failure mechanisms of complex systems
Siddique, Shahnewaz, 22 May 2014
Understanding the behavior of complex, large-scale, interconnected systems in a rigorous and structured manner is one of the most pressing scientific and technological challenges of current times. These systems include, among many others, transportation and communications systems, smart grids and power grids, and financial markets. Failures of these systems have potentially enormous social, environmental and financial costs. In this work, we investigate the failure mechanisms of load-sharing complex systems. The systems are composed of multiple nodes or components whose failures are determined based on the interaction of their respective strengths and loads (or capacity and demand, respectively) as well as the ability of a component to share its load with its neighbors when needed. Each component possesses a specific strength (capacity) and can be in one of three states: failed, damaged or functioning normally. The states are determined based on the load (demand) on the component.
We focus on two distinct mechanisms to model the interaction between component strengths and loads: the first is a Loss of Strength (LOS) model and the second a Customer Service (CS) model. We implement both models on lattice and scale-free graph network topologies. The failure mechanisms of these two models demonstrate temporal scaling phenomena, phase transitions and multiple distinct failure modes excited by extremal dynamics. We find that the resiliency of these models is sensitive to the underlying network topology. For critical ranges of parameters the models demonstrate power law and exponential failure patterns. We find that the failure mechanisms of these models have parallels to failure mechanisms of critical infrastructure systems such as congestion in transportation networks, cascading failure in electrical power grids, creep-rupture in composite structures, and draw-downs in financial markets. Based on the different variants of failure, strategies for mitigating and postponing failure in these critical infrastructure systems can be formulated.
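A generic load-sharing cascade of the kind studied here can be sketched as follows; the graph, capacities and redistribution rule are illustrative assumptions and do not reproduce the LOS or CS models themselves.

```python
# Illustrative sketch: a simple load-sharing cascade on a scale-free graph. Each
# node carries a load and a capacity; when a node fails, its load is shed onto its
# surviving neighbors, which may overload and fail in turn.
import networkx as nx
import numpy as np

rng = np.random.default_rng(1)
G = nx.barabasi_albert_graph(200, 2, seed=1)     # scale-free topology
load = {n: rng.uniform(0.5, 1.0) for n in G}
capacity = {n: 1.2 * load[n] for n in G}         # 20% design margin (assumed)
failed = set()

def fail(node):
    """Fail a node and redistribute its load to surviving neighbors."""
    queue = [node]
    while queue:
        n = queue.pop()
        if n in failed:
            continue
        failed.add(n)
        neighbors = [m for m in G[n] if m not in failed]
        for m in neighbors:
            load[m] += load[n] / max(len(neighbors), 1)
            if load[m] > capacity[m]:
                queue.append(m)

fail(0)  # seed failure at node 0
print(f"cascade size: {len(failed)} of {G.number_of_nodes()} nodes failed")
```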
66
Load allocation for optimal risk management in systems with incipient failure modes
Bole, Brian McCaslyn, 13 January 2014
The development and implementation challenges associated with a proposed load allocation paradigm for fault risk assessment and system health management based on uncertain fault diagnostic and failure prognostic information are investigated. Health management actions are formulated in terms of a value associated with improving system reliability, and a cost associated with inducing deviations from a system's nominal performance. Three simulated case study systems are considered to highlight some of the fundamental challenges of formulating and solving an optimization on the space of available supervisory control actions in the described health management architecture. Repeated simulation studies on the three case-study systems are used to illustrate an empirical approach for tuning the conservatism of health management policies by way of adjusting risk assessment metrics in the proposed health management paradigm. The implementation and testing of a real-world prognostic system is presented to illustrate model development challenges not directly addressed in the analysis of the simulated case study systems. Real-time battery charge depletion prediction for a small unmanned aerial vehicle is considered in the real-world case study. An architecture for offline testing of prognostics and decision making algorithms is explained to facilitate empirical tuning of risk assessment metrics and health management policies, as was demonstrated for the three simulated case study systems.
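As a rough sketch of the decision logic in the battery case study, the example below propagates an uncertain discharge rate to an end-of-discharge time distribution and checks a shortfall risk against a threshold. The linear discharge model and all numbers are hypothetical, not the model-based prognoser used in the work.

```python
# Illustrative sketch: Monte-Carlo prediction of battery end-of-discharge time for a
# small UAV, with a simple risk check against the remaining mission time.
import numpy as np

rng = np.random.default_rng(2)
soc_now = 0.60                     # current state of charge (fraction)
soc_cutoff = 0.20                  # do not discharge below this (assumed policy)
mission_time_left = 18.0           # minutes remaining in the planned mission

# Uncertain average discharge rate [fraction of capacity per minute] (assumed).
rate_samples = np.clip(rng.normal(loc=0.020, scale=0.003, size=10_000), 1e-4, None)

time_to_cutoff = (soc_now - soc_cutoff) / rate_samples   # minutes, one per sample
p_shortfall = np.mean(time_to_cutoff < mission_time_left)

print(f"median time to cutoff: {np.median(time_to_cutoff):.1f} min")
print(f"probability of charge shortfall before mission end: {p_shortfall:.2%}")
if p_shortfall > 0.05:             # risk threshold (assumed)
    print("health management action: re-plan mission or reduce load")
```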
67
Methodology for the conceptual design of a robust and opportunistic system-of-systems
Talley, Diana Noonan, 18 November 2008
Systems are becoming more complicated, complex, and interrelated. Designers have recognized the need to develop systems from a holistic perspective and design them as Systems-of-Systems (SoS). The design of the SoS, especially in the conceptual design phase, is generally characterized by significant uncertainty. As a result, it is possible for all three types of uncertainty (aleatory, epistemic, and error) and the associated factors of uncertainty (randomness, sampling, confusion, conflict, inaccuracy, ambiguity, vagueness, coarseness, and simplification) to affect the design process. While there are a number of existing SoS design methods, several gaps have been identified: the ability to model all of the factors of uncertainty at varying levels of knowledge; the ability to consider both the pernicious and propitious aspects of uncertainty; and the ability to determine the value of reducing the uncertainty in the design process.
While there are numerous uncertainty modeling theories, no one theory can effectively model every kind of uncertainty. This research presents a Hybrid Uncertainty Modeling Method (HUMM) that integrates techniques from the following theories: Probability Theory, Evidence Theory, Fuzzy Set Theory, and Info-Gap theory. The HUMM is capable of modeling all of the different factors of uncertainty and can model the uncertainty for multiple levels of knowledge.
In the design process, there are both pernicious and propitious characteristics associated with the uncertainty. Existing design methods typically focus on developing robust designs that are insensitive to the associated uncertainty. These methods do not capitalize on the possibility of maximizing the potential benefit associated with the uncertainty. This research demonstrates how these deficiencies can be overcome by identifying the most robust and opportunistic design.
In a design process it is possible that the most robust and opportunistic design will not be selected from the set of potential design alternatives due to the related uncertainty. This research presents a process called the Value of Reducing Uncertainty Method (VRUM) that can determine the value associated with reducing the uncertainty in the design problem before a final decision is made, by utilizing two concepts: the Expected Value of Reducing Uncertainty (EVRU) and the Expected Cost of Reducing Uncertainty (ECRU).
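Although the thesis defines EVRU and ECRU in its own terms, the flavor of valuing uncertainty reduction can be sketched with the classical expected value of perfect information computed by Monte Carlo, as below; the design alternatives, payoffs and study cost are hypothetical.

```python
# Illustrative sketch: valuing uncertainty reduction in a design choice via the
# classical expected value of perfect information (EVPI). This only conveys the
# general idea that resolving uncertainty has a quantifiable expected value to
# weigh against the cost of obtaining it; it is not the thesis's EVRU/ECRU method.
import numpy as np

rng = np.random.default_rng(3)
theta = rng.normal(0.0, 1.0, size=100_000)   # uncertain design-driving parameter

# Hypothetical payoffs of two design alternatives as functions of theta.
payoff_A = 1.0 + 0.2 * theta
payoff_B = 0.8 + 1.0 * theta

# Decide now, under uncertainty: pick the alternative with the best expected payoff.
value_without_info = max(payoff_A.mean(), payoff_B.mean())

# Decide after the uncertainty is resolved: pick the best alternative per outcome.
value_with_info = np.maximum(payoff_A, payoff_B).mean()

evpi = value_with_info - value_without_info
cost_of_study = 0.15                         # assumed cost to reduce the uncertainty
print(f"EVPI ~ {evpi:.3f}; worth pursuing: {evpi > cost_of_study}")
```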
68
A framework for conducting mechanistic based reliability assessments of components operating in complex systems
Wallace, Jon Michael, 02 December 2003
Reliability prediction of components operating in complex systems has historically been conducted in a statistically isolated manner. Current physics-based, i.e. mechanistic, component reliability approaches focus more on component-specific attributes and mathematical algorithms and not enough on the influence of the system. The result is that significant error can be introduced into the component reliability assessment process.
The objective of this study is the development of a framework that infuses the influence of the system into the process of conducting mechanistic-based component reliability assessments. The formulated framework consists of six primary steps. The first three steps, identification, decomposition, and synthesis, are qualitative in nature and employ system reliability and safety engineering principles to establish an appropriate starting point for the component reliability assessment.
The most distinctive steps of the framework are those used to quantify the system-driven local parameter space and a subsequent step that uses this information to guide the reduction of the component parameter space. The local statistical space quantification step is accomplished using two newly developed multivariate probability tools: Multi-Response First Order Second Moment and Taylor-Based Inverse Transformation. Where existing joint probability models require preliminary statistical information of the responses, these models combine statistical information of the input parameters with an efficient sampling of the response analyses to produce the multi-response joint probability distribution.
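The first-order second-moment idea behind the multi-response quantification can be sketched as below: input means and covariances are pushed through a linearized multi-response model to obtain an approximate joint response mean and covariance. The two-response function and the input statistics are hypothetical, and the sketch omits the efficient response sampling that distinguishes the Multi-Response First Order Second Moment tool developed here.

```python
# Illustrative sketch: first-order second-moment (FOSM) propagation of input
# statistics through a multi-response model, giving an approximate joint mean and
# covariance of the responses.
import numpy as np

def responses(x):
    """Two hypothetical component responses (e.g., stress and temperature)."""
    return np.array([x[0] ** 2 + 0.5 * x[1],
                     np.exp(0.1 * x[0]) * x[1]])

mu_x = np.array([2.0, 5.0])                  # input parameter means (assumed)
cov_x = np.array([[0.04, 0.01],
                  [0.01, 0.25]])             # input covariance (assumed)

# Finite-difference Jacobian of the responses at the input mean.
eps = 1e-6
J = np.column_stack([
    (responses(mu_x + eps * np.eye(2)[i]) - responses(mu_x)) / eps
    for i in range(2)
])

mu_y = responses(mu_x)                       # first-order response mean
cov_y = J @ cov_x @ J.T                      # first-order response covariance
print("response means:", np.round(mu_y, 3))
print("response covariance:\n", np.round(cov_y, 4))
```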
Parameter space reduction is accomplished using Approximate Canonical Correlation Analysis (ACCA) employed as a multi-response screening technique. The novelty of this approach is that each individual local parameter and even subsets of parameters representing entire contributing analyses can now be rank ordered with respect to their contribution to not just one response, but the entire vector of component responses simultaneously.
The final step of the framework is the actual probabilistic assessment of the component. Variations of this final step are given to allow for the utilization of existing probabilistic methods such as response surface Monte Carlo and Fast Probability Integration.
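A minimal sketch of the response-surface Monte Carlo option is shown below: a quadratic surrogate is fitted to a small set of analyses (here a cheap stand-in function) and then sampled heavily to estimate a failure probability; the response model and limit value are assumptions, not the airfoil analyses of this study.

```python
# Illustrative sketch: response-surface Monte Carlo. Fit a quadratic surrogate to a
# handful of "expensive" analyses, then sample the cheap surrogate to estimate the
# probability of exceeding an assumed limit.
import numpy as np

rng = np.random.default_rng(4)

def expensive_analysis(x):
    """Stand-in for a finite-element response (e.g., a peak stress)."""
    return 400.0 + 30.0 * x[0] + 20.0 * x[1] + 5.0 * x[0] * x[1]

def features(X):
    """Quadratic response-surface basis terms."""
    return np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1],
                            X[:, 0] ** 2, X[:, 1] ** 2, X[:, 0] * X[:, 1]])

# Small design of experiments on the expensive model.
X_doe = rng.normal(0.0, 1.0, size=(40, 2))
y_doe = np.array([expensive_analysis(x) for x in X_doe])
coef, *_ = np.linalg.lstsq(features(X_doe), y_doe, rcond=None)

# Heavy Monte Carlo on the cheap surrogate.
X_mc = rng.normal(0.0, 1.0, size=(200_000, 2))
y_mc = features(X_mc) @ coef
limit = 480.0                                # assumed allowable response
print(f"estimated failure probability: {np.mean(y_mc > limit):.4f}")
```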
The framework developed in this study is implemented to conduct the finite-element based reliability prediction of a gas turbine airfoil involving several failure responses. The framework, as implemented, resulted in a considerable improvement in the accuracy of the part reliability assessment and an increased statistical understanding of the component failure behavior.
69
Knowledge-Based Architecture for Integrated Condition Based Maintenance of Engineering Systems
Saxena, Abhinav, 06 July 2007
A paradigm shift is emerging in system reliability and maintainability. The military and industrial sectors are moving away from traditional breakdown and scheduled maintenance to adopt concepts referred to as Condition Based Maintenance (CBM) and Prognostic Health Management (PHM). In addition to signal processing and subsequent diagnostic and prognostic algorithms, these new technologies involve storage of large volumes of both quantitative and qualitative information to carry out maintenance tasks effectively. This not only requires research and development in advanced technologies but also the means to store, organize and access this knowledge in a timely and efficient fashion. Knowledge-based expert systems have been shown to possess capabilities to manage vast amounts of knowledge, but an intelligent systems approach calls for attributes like learning and adaptation in building autonomous decision support systems.
This research presents an integrated knowledge-based approach to diagnostic reasoning for CBM of engineering systems. A two-level diagnosis scheme has been conceptualized in which a fault is first hypothesized using observational symptoms from the system, and a more specific diagnostic test is then carried out using only the relevant sensor measurements to confirm the hypothesis. Utilizing the qualitative (textual) information obtained from these systems in combination with quantitative (sensory) information reduces the computational burden by enabling more informed testing. An Industrial Language Processing (ILP) technique has been developed for processing textual information from industrial systems. Compared to other automated methods that are computationally expensive, this technique manipulates standardized language messages in a tractable manner by taking advantage of their semi-structured nature and domain-limited vocabulary.
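The sketch below conveys the basic idea of exploiting semi-structured, limited-vocabulary maintenance messages to hypothesize faults; the message format, vocabulary and fault map are hypothetical and far simpler than the ILP technique developed here.

```python
# Illustrative sketch: extract symptoms from standardized maintenance messages and
# map them to fault hypotheses using a small, domain-limited vocabulary.
import re

FAULT_MAP = {                       # symptom phrase -> fault hypothesis (assumed)
    "high vibration": "bearing or gear damage",
    "oil temp high": "lubrication system degradation",
    "chip detector": "metallic debris / gear spalling",
}

messages = [
    "2007-03-12 ACFT 104 TAIL ROTOR GEARBOX: HIGH VIBRATION NOTED ON CLIMB",
    "2007-03-13 ACFT 104 MAIN XMSN CHIP DETECTOR LIGHT ON DURING HOVER",
]

for msg in messages:
    text = msg.lower()
    symptoms = [s for s in FAULT_MAP if re.search(s, text)]
    hypotheses = sorted({FAULT_MAP[s] for s in symptoms})
    print(f"{msg}\n  -> symptoms: {symptoms}; hypotheses: {hypotheses}")
```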
A Dynamic Case-Based Reasoning (DCBR) framework provides a hybrid platform for diagnostic reasoning and an integration mechanism for the operational infrastructure of an autonomous Decision Support System (DSS) for CBM. This integration involves data gathering, information extraction procedures, and real-time reasoning frameworks to facilitate the strategies and maintenance of critical systems. As a step further towards autonomy, DCBR builds on a self-evolving knowledge base that learns from its performance feedback and reorganizes itself to deal with non-stationary environments. A unique Human-in-the-Loop Learning (HITLL) approach has been adopted to incorporate human feedback into the traditional Reinforcement Learning (RL) algorithm.
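A toy version of case retrieval with feedback-adapted feature weights is sketched below to illustrate the human-in-the-loop learning idea; the cases, similarity measure and update rule are illustrative assumptions, not the DCBR or HITLL algorithms of the thesis.

```python
# Illustrative sketch: nearest-case retrieval with feature weights that adapt from
# human (expert) feedback on whether the retrieved diagnosis was correct.
import numpy as np

# Case base: (symptom feature vector, diagnosed fault)
cases = [
    (np.array([0.9, 0.1, 0.0]), "gear tooth crack"),
    (np.array([0.1, 0.8, 0.1]), "bearing spall"),
    (np.array([0.0, 0.2, 0.9]), "shaft imbalance"),
]
weights = np.ones(3)                              # per-feature relevance weights

def retrieve(query):
    """Return the index of the most similar case under the current weights."""
    scores = [np.dot(weights, -np.abs(query - f)) for f, _ in cases]
    return int(np.argmax(scores))

def feedback(query, idx, correct, lr=0.2):
    """Reinforce or penalize the features that drove the retrieval."""
    global weights
    sign = 1.0 if correct else -1.0
    weights = weights + sign * lr * (1.0 - np.abs(query - cases[idx][0]))
    weights = np.clip(weights, 0.1, None)

q = np.array([0.8, 0.2, 0.1])
idx = retrieve(q)
print("retrieved:", cases[idx][1])
feedback(q, idx, correct=True)                    # expert confirms the diagnosis
print("updated weights:", np.round(weights, 2))
```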
70
Explorative study for stochastic failure analysis of a roughened bi-material interface: implementation of the size sensitivity based perturbation method
Fukasaku, Kotaro, 24 May 2011
In an age in which the use of electronic devices is expanding all over the world, their reliability and miniaturization have become crucial. The thesis is based on the study of one of the most frequent failure mechanisms in semiconductor packages, the delamination of an interface, i.e., the separation of two bonded materials, in order to improve adhesion and, a fortiori, the reliability of microelectronic devices. It focuses on metal(-oxide)/polymer interfaces because they cover 95% of all existing interfaces.
For several years, research at the mesoscopic scale (1-10 µm) has shown that the more roughened the interface surface, i.e., the sharper its asperities, the better the adhesion between the two materials. Because roughness exhibits extremely complex shapes, it is difficult to find a description that can be used for reliability analysis of interfaces. In order to investigate quantitatively the effect of roughness variation on adhesion properties, studies have been carried out involving analytical fracture mechanics; numerical studies were then conducted with Finite Element Analysis. Both were done in a deterministic way by assuming an ideal profile that is repeated periodically.
With the development of statistical and stochastic roughness representations on the one hand, and the emergence of probabilistic fracture mechanics on the other, the present work adds a stochastic framework to the previous studies. One of the Stochastic Finite Element Methods, the Perturbation method, is chosen for implementation because it can investigate the effect of geometric variations on the mechanical response, such as the displacement field. In addition, it can obtain in a single analysis what traditional Finite Element Analysis requires numerous simulations, each with changed geometric parameters, to produce.
The method is developed analytically, then numerically, by implementing a module in the Finite Element package MSC Marc/Mentat. To gain familiarity with the method and to validate the implementation, the Perturbation method is applied analytically and numerically to a three-point bending test on a beam, because the input of the Perturbation method in terms of roughness parameters is still being studied. The capabilities and limitations of the implementation are outlined.
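The essence of the first-order perturbation method can be sketched on a toy structural model, as below: the displacement is expanded to first order in a random stiffness parameter and its variance compared against brute-force Monte Carlo. The 2-DOF spring system stands in for the three-point-bending beam, and all matrices and statistics are hypothetical.

```python
# Illustrative sketch: first-order perturbation method on a small spring system,
# propagating a random stiffness parameter to the displacement field and comparing
# with brute-force Monte Carlo.
import numpy as np

k = 1000.0                                   # nominal stiffness [N/m]
f = np.array([0.0, 10.0])                    # load vector [N]

K0 = np.array([[2 * k, -k], [-k, k]])        # nominal stiffness matrix
K1 = np.array([[2.0, -1.0], [-1.0, 1.0]])    # dK/d(eps): sensitivity to the random parameter
sigma_eps = 50.0                             # std. dev. of the stiffness perturbation (assumed)

# First-order perturbation: u(eps) ~ u0 + eps * u1
u0 = np.linalg.solve(K0, f)
u1 = np.linalg.solve(K0, -K1 @ u0)
var_u_pert = (u1 ** 2) * sigma_eps ** 2

# Brute-force Monte Carlo for comparison.
rng = np.random.default_rng(5)
eps = rng.normal(0.0, sigma_eps, size=20_000)
u_mc = np.array([np.linalg.solve(K0 + e * K1, f) for e in eps])
print("perturbation std of u:", np.sqrt(var_u_pert))
print("Monte Carlo std of u: ", u_mc.std(axis=0))
```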
Finally, recommendations for using the implementation and for future work on roughness representation are discussed.