11 |
Utility optimal decision making when responding to No Fault Found events. Archana Ravindran (9029510), 26 June 2020 (has links)
<p>No Fault Found (NFF) events are an expensive problem faced by the airline industry. The underlying causes of NFFs are a major focus of research in the field, but the dearth of consistent data is a roadblock faced by many decision makers. One important risk factor identified is the occurrence rate of NFFs.</p><p>This research aims to help decision makers in airline Maintenance, Repair and Overhaul teams who are faced with recurring NFFs to make a choice based on the value derived from the system and the risk preference of the decision maker under uncertainty. The value of the aircraft fleet is laid out using Net Present Value (NPV) at every decision point along the system life cycle while accounting for the uncertainty in the failure rate information. Two extreme decisions are considered: rebooting the system every time a failure occurs and results in an NFF, which allows the failure to recur while reducing uncertainty about the failure rate; or eliminating the failure mode, which assumes the failure does not recur and therefore completely removes the uncertainty. Both decisions have associated uncertain costs that affect the calculated NPV. We use a Monte Carlo approach to estimate the expected profit from deciding to eliminate the failure mode, and we use Expected Utility Theory to account for the risk preference of a decision maker under uncertainty and to build an Expected Utility Maximizing decision framework.</p>To conclude, we give some guidance on interpreting the results and understanding what factors influence the optimal decision. We conclude that failing to account for uncertainty in the estimated future failure rate, along with uncertainty in NFF costs, can lead to an undesirable decision. If the decision maker waits too long to gather more information and reduce uncertainty, then rebooting the system for the remaining life could be more worthwhile than spending a large amount of money to eliminate the failure mode.
Finally, we conclude that, despite uncertainty in the occurrence rates and costs of NFFs, an Expected Utility maximizing decision between the two options considered, Reboot and Eliminate, is possible given the available information.
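The Reboot-versus-Eliminate framework described in this abstract can be sketched end to end in a few lines. Every number below (life span, costs, posterior parameters, risk aversion) is hypothetical and stands in for the fleet-derived inputs of the thesis; the sketch only illustrates the mechanics of sampling an uncertain failure rate, discounting costs into an NPV, and comparing expected utilities.

```python
import math
import random

random.seed(7)

# --- All parameters below are hypothetical, for illustration only ---
YEARS = 10                 # remaining system life
R = 0.08                   # annual discount rate
C_NFF = 50_000.0           # cost per NFF event (reboot, troubleshooting, delay)
C_ELIM_MEAN = 400_000.0    # mean up-front cost of eliminating the failure mode
GAMMA = 2e-6               # risk-aversion coefficient of the exponential utility
TRIALS = 20_000

def utility(npv):
    # Exponential (constant absolute risk aversion) utility of an NPV
    return 1.0 - math.exp(-GAMMA * npv)

def poisson(lam):
    # Knuth's method: adequate for the small annual rates used here
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

def sample_rate():
    # Posterior for the failure rate: e.g. 3 NFFs observed in 2 years -> Gamma(3, scale=1/2)
    return random.gammavariate(3.0, 0.5)

u_reboot, u_elim = 0.0, 0.0
for _ in range(TRIALS):
    lam = sample_rate()
    # Reboot: the failure keeps recurring at the sampled rate for the remaining life
    npv_reboot = -sum(poisson(lam) * C_NFF / (1 + R) ** (y + 1) for y in range(YEARS))
    # Eliminate: one uncertain up-front cost, no recurrence afterwards
    npv_elim = -C_ELIM_MEAN * random.lognormvariate(0.0, 0.25)
    u_reboot += utility(npv_reboot) / TRIALS
    u_elim += utility(npv_elim) / TRIALS

best = "Reboot" if u_reboot > u_elim else "Eliminate"
print(f"E[U](Reboot) = {u_reboot:.4f}, E[U](Eliminate) = {u_elim:.4f} -> choose {best}")
```

Because both options only incur costs, both expected utilities are negative; the framework picks the less painful one given the decision maker's risk aversion.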
|
12 |
Processos de burn-in e de garantia em sistemas coerentes sob o modelo de tempo de vida geral / Burn-in and warranty processes in coherent systems under the general lifetime model. Gonzalez Alvarez, Nelfi Gertrudis, 09 October 2009 (has links)
Neste trabalho consideramos três tópicos principais. Nos dois primeiros generalizamos alguns dos resultados clássicos da Teoria da Confiabilidade na otimização dos procedimentos de burn-in e de políticas de garantia, respectivamente, sob o modelo de tempo de vida geral, quando um sistema coerente é observado ao nível de seus componentes, e estendemos os conceitos de intensidade de falha na forma de banheira e do modelo de falha geral através da definição de processos progressivamente mensuráveis sob a pré-t-história completa dos componentes do sistema. Uma regra de parada monótona é usada na metodologia de otimização proposta. No terceiro tópico modelamos os custos de garantia descontados por reparo mínimo de um sistema coerente ao nível de seus componentes, propomos o estimador martingal do custo esperado para um período de garantia fixado e provamos as suas propriedades assintóticas mediante o Teorema do Limite Central para Martingais. / In this work we consider three main topics. In the first two, we generalize some classical results of Reliability Theory on the optimization of burn-in procedures and warranty policies, respectively, using the general lifetime model of a coherent system observed at the component level and extending the definitions of the bathtub-shaped failure rate and the general failure model to progressively measurable processes under the complete pre-t-history of the system components. A monotone stopping rule is applied within the proposed methodology. In the third topic, we define the discounted warranty cost process for a coherent system minimally repaired at the component level, propose a martingale estimator of the expected warranty cost for a fixed period, and prove its asymptotic properties by means of the Martingale Central Limit Theorem.
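The discounted warranty cost under minimal repair can be illustrated numerically. After a minimal repair the system's failure intensity is unchanged, so failures form a non-homogeneous Poisson process with the system's intensity; a minimal sketch below uses a simple Weibull ("power law") intensity and entirely made-up parameters in place of the thesis's general component-level model, simulating the process by thinning and discounting each repair cost:

```python
import math
import random

random.seed(0)

# Hypothetical parameters: Weibull failure intensity for a minimally
# repaired system, a fixed repair cost, and a continuous discount rate.
BETA, ETA = 1.5, 2.0       # Weibull shape and scale (years)
COST, DELTA = 100.0, 0.05  # cost per minimal repair; discount rate
WARRANTY = 2.0             # warranty period (years)

def intensity(t):
    # Failure intensity of the minimally repaired system (increasing for BETA > 1)
    return (BETA / ETA) * (t / ETA) ** (BETA - 1)

def discounted_warranty_cost():
    """One realization: simulate the NHPP by thinning, discount each repair."""
    lam_max = intensity(WARRANTY)  # upper bound on the intensity over [0, WARRANTY]
    t, total = 0.0, 0.0
    while True:
        t += random.expovariate(lam_max)      # candidate event
        if t > WARRANTY:
            return total
        if random.random() < intensity(t) / lam_max:  # accept as a real failure
            total += COST * math.exp(-DELTA * t)

n = 50_000
est = sum(discounted_warranty_cost() for _ in range(n)) / n
print(f"Monte Carlo estimate of expected discounted warranty cost: {est:.2f}")
```

For this intensity the expected number of failures in the warranty period is (WARRANTY/ETA)**BETA = 1, so the estimate should land a little below COST once discounting is applied.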
|
14 |
Model-based Evaluation: from Dependability Theory to Security. Alaboodi, Saad Saleh, 21 June 2013 (has links)
How to quantify security is a classic question in the security community that to date has had no plausible answer. Unfortunately, current security evaluation models are often either quantitative but too specific (i.e., their applicability is limited), or comprehensive (i.e., system-level) but qualitative. The importance of quantifying security cannot be overstated, but doing so is difficult and complex, for many reasons: the “physics” of the amount of security is ambiguous; the operational state is defined by two confronting parties; protecting and breaking systems is a cross-disciplinary endeavour; security is achieved through comparable security strength but broken at the weakest link; and the human factor is unavoidable, among others. Thus, security engineers face great challenges in defending the principles of information security and privacy. This thesis addresses model-based system-level security quantification and argues that properly addressing the quantification problem of security first requires a paradigm shift in security modeling, addressing the problem at the abstraction level of what defines a computing system and its failure model, before any system-level analysis can be established. Consequently, we present a candidate computing-systems abstraction and failure model, then propose two failure-centric model-based quantification approaches, each including a bounding system model, performance measures, and evaluation techniques. The first approach addresses the problem considering the set of controls. To bound and build the logical network of a security system, we extend our original work on the Information Security Maturity Model (ISMM) with Reliability Block Diagrams (RBDs), state vectors, and structure functions from reliability engineering. We then present two different groups of evaluation methods.
The first mainly addresses binary systems, by extending minimal path sets, minimal cut sets, and reliability analysis based on both random events and random variables. The second group addresses multi-state security systems with multiple performance measures, by extending Multi-state Systems (MSSs) representation and the Universal Generating Function (UGF) method. The second approach addresses the quantification problem when the two sets of a computing system, i.e., assets and controls, are considered. We adopt a graph-theoretic approach using Bayesian Networks (BNs) to build an asset-control graph as the candidate bounding system model, then demonstrate its application in a novel risk assessment method with various diagnosis and prediction inferences. This work, however, is multidisciplinary, involving foundations from many fields, including security engineering; maturity models; dependability theory, particularly reliability engineering; graph theory, particularly BNs; and probability and stochastic models.
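The minimal-cut-set machinery borrowed from reliability engineering can be shown on a toy example. The four "controls", their reliabilities, and the cut sets below are invented purely for illustration (they are not the ISMM's actual structure): the system is compromised exactly when every control in some minimal cut set has failed, and for a small system the exact reliability can be obtained by enumerating component states.

```python
import math
from itertools import product

# A toy 4-control "security system" (illustrative values only)
reliability = {"fw": 0.95, "ids": 0.90, "auth": 0.98, "patch": 0.85}
# The system is breached iff every control in some minimal cut set fails
min_cut_sets = [{"fw", "ids"}, {"auth"}, {"fw", "patch"}]

def system_reliability(rel, cut_sets):
    """Exact reliability by enumerating all component states (feasible for small n)."""
    comps = sorted(rel)
    total = 0.0
    for states in product([0, 1], repeat=len(comps)):
        up = {c for c, s in zip(comps, states) if s}
        prob = math.prod(rel[c] if s else 1 - rel[c] for c, s in zip(comps, states))
        if all(not cs.isdisjoint(up) for cs in cut_sets):  # no cut set fully failed
            total += prob
    return total

exact_q = 1 - system_reliability(reliability, min_cut_sets)
# Rare-event upper bound: sum the failure probabilities of the individual cut sets
bound_q = sum(math.prod(1 - reliability[c] for c in cs) for cs in min_cut_sets)
print(f"exact unreliability = {exact_q:.6f}, cut-set bound = {bound_q:.6f}")
```

The cut-set bound over-counts states where several cut sets fail at once, so it always sits at or above the exact unreliability; for reliable components the two are close, which is why the bound is useful for larger systems where enumeration is infeasible.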
|
16 |
Reliability Modelling Of Whole RAID Storage Subsystems. Karmakar, Prasenjit, 04 1900 (has links) (PDF)
Reliability modelling of RAID storage systems, with their various components such as RAID controllers, enclosures, expanders, interconnects and disks, is important from a storage system designer's point of view. A model that can express all the failure characteristics of the whole RAID storage system can be used to evaluate design choices, perform cost-reliability trade-offs and conduct sensitivity analyses.
We present a reliability model for RAID storage systems where we try to model all the components as accurately as possible. We use several state-space reduction techniques, such as aggregating all in-series components and hierarchical decomposition, to reduce the size of our model. To automate computation of reliability, we use the PRISM model checker as a CTMC solver where appropriate.
Initially, we assume a simple 3-state disk reliability model with independent disk failures. Later, we assume a Weibull model for the disks; we also consider a correlated disk failure model to check correspondence with the available field data. For all other components in the system, we assume an exponential failure distribution. To use the CTMC solver, we approximate the Weibull distribution for a disk using a sum of exponentials, and we first confirm that this model gives results in reasonably good agreement with those from sequential Monte Carlo simulation methods for RAID disk subsystems.
Next, our model for whole RAID storage systems (including, for example, disks, expanders and enclosures) uses Weibull distributions and, where appropriate, correlated failure modes for disks, and exponential distributions with independent failure modes for all other components. Since the CTMC solver cannot handle the size of the resulting models, we solve such models using a hierarchical decomposition technique. We are able to model fairly large configurations with up to 600 disks using this model.
We can use such reasonably complete models to conduct several "what-if" analyses for many RAID storage systems of interest. Our results show that, depending on the configuration, spanning a RAID group across enclosures may increase or decrease reliability. Another key finding from our model results is that redundancy mechanisms such as multipathing are beneficial only if a single failure of some other component does not cause data inaccessibility of a whole RAID group.
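The Monte Carlo baseline mentioned above can be sketched for a single RAID-5 group. The parameters below are illustrative placeholders, not the thesis's measured disk data, and the model is deliberately simplified: after any single failure the group is vulnerable for a fixed rebuild window, a second failure inside that window loses data, and sector errors and correlated failures (which the thesis models separately) are ignored.

```python
import random

random.seed(1)

# Illustrative parameters only (the thesis uses measured field data)
N_DISKS = 8
SHAPE, SCALE_H = 1.2, 1.0e6   # Weibull shape (>1: wear-out) and scale in hours
MISSION_H = 5 * 8760.0        # 5-year mission time
REBUILD_H = 24.0              # rebuild window after a single disk failure
TRIALS = 20_000

def lifetime():
    # stdlib argument order is (scale, shape)
    return random.weibullvariate(SCALE_H, SHAPE)

def raid5_data_loss():
    """One trial: does a RAID-5 group lose data before the mission ends?"""
    fail_at = [lifetime() for _ in range(N_DISKS)]  # absolute failure times
    while True:
        i = min(range(N_DISKS), key=lambda j: fail_at[j])
        t = fail_at[i]
        if t > MISSION_H:
            return False  # no further failure within the mission
        # vulnerable window: any other disk failing before the rebuild completes
        if any(fail_at[j] <= t + REBUILD_H for j in range(N_DISKS) if j != i):
            return True
        fail_at[i] = t + REBUILD_H + lifetime()  # replacement disk installed

p_loss = sum(raid5_data_loss() for _ in range(TRIALS)) / TRIALS
print(f"Estimated P(data loss within mission) ~ {p_loss:.5f}")
```

With long-lived disks the loss probability is tiny, which is exactly why the thesis prefers analytic CTMC solutions (with the Weibull approximated by a sum of exponentials) over brute-force simulation for whole-system models.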
|
17 |
Improvement of the efficiency of vehicle inspection and maintenance programs through incorporation of vehicle remote sensing data and vehicle characteristics. Samoylov, Alexander V., 13 January 2014 (has links)
Emissions from light-duty passenger vehicles represent a significant portion of total criteria pollutant emissions in the United States. Since the 1970s, emissions testing of these vehicles has been required in many major metropolitan areas, including Atlanta, GA, that were designated to be in non-attainment for one or more of the National Ambient Air Quality Standards. While emissions inspections have successfully reduced emissions by identifying and repairing high-emitting vehicles, they have become increasingly inefficient as emissions control systems have become more durable and fewer vehicles are in need of repair. Currently, only about 9% of Atlanta-area vehicles fail emissions inspection, but every vehicle is inspected annually. This research explores ways to create a more efficient emissions testing program while continuing to use existing testing infrastructure. To achieve this objective, on-road vehicle emissions data were collected as part of the Continuous Atlanta Fleet Evaluation program sponsored by the Georgia Department of Natural Resources. These remote sensing data were combined with in-program vehicle inspection data from the Atlanta Vehicle Inspection and Maintenance (I/M) program to establish the degree to which on-road vehicle remote sensing could be used to enhance program efficiency. Based on this analysis, a multi-parameter model was developed to predict the probability of a particular vehicle failing an emissions inspection. The parameters found to influence the probability of failure include vehicle characteristics, ownership history, vehicle usage, previous emission test results, and remote sensing emissions readings. This model was the foundation for a proposed emissions testing program with variable retest timing, in which high and low failure probability vehicles would be tested more and less frequently, respectively, than the current annual cycle.
Implementation of this program is estimated to reduce fleet emissions in Atlanta by 17% for carbon monoxide, 11% for hydrocarbons, and 5% for nitrogen oxides. These reductions would be achieved very cost-effectively, at an estimated marginal cost of $149, $7,576 and $2,436 per ton per year for carbon monoxide, hydrocarbon, and nitrogen oxide emissions reductions, respectively.
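A multi-parameter failure-probability model of the kind described above is commonly a logistic regression over vehicle attributes; the sketch below uses entirely made-up coefficients and feature names (the thesis fits its own model from I/M and remote-sensing data) just to show how a predicted probability would map onto a variable retest interval.

```python
import math

# Hypothetical coefficients and features, for illustration only
COEF = {
    "intercept": -4.0,
    "vehicle_age_yr": 0.20,        # older vehicles fail more often
    "odometer_100k_mi": 0.50,      # mileage in units of 100,000 miles
    "failed_last_test": 1.30,      # 1 if the previous inspection was failed
    "rsd_co_pct": 0.80,            # roadside remote-sensing CO reading (%)
}

def p_fail(vehicle):
    """Logistic model: P(fail) = sigmoid(intercept + sum of coef * feature)."""
    z = COEF["intercept"] + sum(
        COEF[k] * vehicle.get(k, 0.0) for k in COEF if k != "intercept"
    )
    return 1.0 / (1.0 + math.exp(-z))

def retest_interval_months(p, low=0.03, high=0.15):
    """Variable-timing rule: low-risk vehicles wait longer between tests."""
    if p < low:
        return 24
    if p > high:
        return 6
    return 12

car = {"vehicle_age_yr": 12, "odometer_100k_mi": 1.4,
       "failed_last_test": 1, "rsd_co_pct": 0.6}
p = p_fail(car)
print(f"P(fail) = {p:.2f}, retest in {retest_interval_months(p)} months")
```

The thresholds (3% and 15%) are arbitrary here; in a real program they would be tuned so that the expected emissions missed between tests stays below the current annual-cycle baseline.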
|
18 |
Vliv rychlosti rázového zatěžování na napjatost, deformaci a spolehlivost komponenty palivového systému automobilu / Effect of Velocity of Impact Loading to Stress, Deformation and Durability of Component of Fuel Car System. Dobeš, Martin, January 2018 (has links)
Passive safety is a well-known term. It can be further categorized into different topics of car passive safety: restraint systems, safety assistants (ABS, ESP, ASR, etc.), and so on. One of these topics is the passive safety of the fuel system. Safety and tightness of the fuel system must be guaranteed even under non-standard conditions, for example a collision with a fixed obstacle. This issue is not often mentioned in the field of car safety; it is considered a standard. Passive safety of the fuel system is often ensured using various interesting technical solutions and devices, usually patented ones. The development of these solutions is supported by numerical simulations in different stages of the development process. The doctoral thesis deals with impact loading of the plastic components of the fuel system, in particular the Fuel Supply Module (FSM), which is mounted inside the fuel tank. The flange is the most important part of the fuel supply module from the car safety point of view; it closes the FSM on the external side of the fuel tank. The thesis focuses on finite element analysis of the complete or partial FSM, and of the flange itself, during impact loading. The main objective of this thesis is the development of numerical material models that take into account important aspects of the mechanical behavior of polymer materials during impact loading. A number of ad hoc and standardized experiments are described in this thesis; they are used for the estimation of material parameters and for the comparison of numerical analyses against real conditions and tests. The LS-DYNA solver was mainly used for the numerical simulations. The final results of this thesis bring new quantified knowledge about the behavior of the Typical Semi-Crystalline Polymer (TSCP), not only for impact loading. The practical part of this thesis defines a new methodology for the numerical simulation of impact loading for the FSM. This methodology is directly usable for new product development.
A number of numerical material models were developed and tested. The best results were achieved using the numerical material model *MAT_24 in combination with a *MAT_ADD_EROSION card. The limits and parameters for this material model were estimated empirically from the conducted experiments. The numerical material model SAMP-1 was partly addressed in this doctoral thesis, but a more detailed study will be given in future work.
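One of the strain-rate options in LS-DYNA's *MAT_24 (piecewise-linear plasticity) is Cowper-Symonds scaling of the static yield stress. A minimal sketch of that scaling is shown below; the yield stress and the C and P constants are illustrative placeholders, not the calibrated values the thesis derives for its polymer.

```python
# Cowper-Symonds dynamic yield-stress scaling, as optionally applied by
# LS-DYNA's *MAT_024 (piecewise-linear plasticity).
SIGMA_Y_STATIC = 60.0e6   # static yield stress of a generic polymer, Pa (made up)
C, P = 200.0, 4.0         # Cowper-Symonds rate parameters (1/s, -) (made up)

def dynamic_yield(strain_rate):
    """sigma_y_dyn = sigma_y_static * (1 + (eps_dot / C)**(1/P))"""
    return SIGMA_Y_STATIC * (1.0 + (strain_rate / C) ** (1.0 / P))

for rate in (0.001, 1.0, 100.0, 1000.0):  # quasi-static up to impact rates
    print(f"strain rate {rate:8.3f} 1/s -> yield {dynamic_yield(rate)/1e6:6.1f} MPa")
```

The factor grows slowly until the strain rate approaches C and then steepens, which is why impact simulations are so sensitive to how the rate parameters are estimated from experiments.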
|
19 |
Crash recovery with partial amnesia failure model issues. De Juan Marín, Rubén, 30 September 2008 (has links)
Replicated systems are a kind of distributed system whose main goal is to ensure that computer systems are highly available, fault tolerant and provide high performance. One of the latest trends in replication techniques managed by replication protocols is to make use of Group Communication Systems, and more specifically of the atomic broadcast communication primitive, for developing more efficient replication protocols.
An important aspect of these systems is how they manage the disconnection of nodes (which degrades their service) and the connection/reconnection of nodes for maintaining their original support. In replicated systems this task is delegated to recovery protocols. How they work depends especially on the failure model adopted. A model commonly used for systems managing a large state is crash-recovery with partial amnesia, because it implies short recovery periods. But assuming it raises several problems. Most of them have already been solved in the literature: view management, abort of local transactions started in crashed nodes (when referring to transactional environments) or, for example, the reinclusion of new nodes into the replicated system. However, one problem related to the assumption of this second failure model has not been completely considered: the amnesia phenomenon, which can lead to inconsistencies if it is not correctly managed.
This work presents this inconsistency problem due to the amnesia and formalizes it, defining the properties that must be fulfilled to avoid it and defining possible solutions. Besides, it also presents and formalizes an inconsistency problem (due to the amnesia) which appears under a specific sequence of events allowed by the majority partition progress condition and which implies stopping the system, proposing the properties for overcoming it and proposing different solutions. As a consequence, it proposes a new majority partition progress condition. In the sequel there is de / De Juan Marín, R. (2008). Crash recovery with partial amnesia failure model issues [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/3302
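The amnesia inconsistency can be shown with a deliberately minimal toy (this is not the thesis's protocol): a replica acknowledges committed updates, crashes, and recovers only its last stable checkpoint — partial amnesia — so if it rejoins without reconciling the forgotten suffix, the replicas diverge.

```python
class Replica:
    def __init__(self, name):
        self.name = name
        self.log = []          # committed updates, in delivery order
        self.checkpoint = []   # last state flushed to stable storage

    def deliver(self, update):
        self.log.append(update)  # acknowledged as committed once delivered

    def flush(self):
        self.checkpoint = list(self.log)  # persist the current state

    def crash_and_recover(self):
        # Partial amnesia: volatile state is lost, only the checkpoint survives
        self.log = list(self.checkpoint)

r1, r2 = Replica("r1"), Replica("r2")
for i, u in enumerate(["u1", "u2", "u3", "u4"]):
    r1.deliver(u)
    r2.deliver(u)
    if i == 1:
        r2.flush()   # r2's last stable checkpoint holds only u1, u2

r2.crash_and_recover()   # u3, u4 were acknowledged but are now forgotten
r1.deliver("u5")
r2.deliver("u5")         # rejoins and keeps working without a state transfer

print("r1:", r1.log)     # ['u1', 'u2', 'u3', 'u4', 'u5']
print("r2:", r2.log)     # ['u1', 'u2', 'u5']  -- diverged: the amnesia problem
```

A correct recovery protocol must detect the forgotten suffix (u3, u4) and transfer it before r2 processes new updates; the properties formalized in this work characterize exactly when that is guaranteed.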
|
20 |
Fire performance of cold-formed steel sections. Cheng, Shanshan, January 2015 (has links)
Thin-walled cold-formed steel (CFS) has exhibited inherent structural and architectural advantages over other constructional materials, for example high strength-to-weight ratio, ease of fabrication, economy in transportation and flexibility of sectional profiles, which make CFS ideal for modern residential and industrial buildings. CFS members have been increasingly used as purlins, the intermediate members in a roof system, or as load-bearing components in low- and mid-rise buildings. However, using CFS members in building structures has faced challenges due to the lack of knowledge of the fire performance of CFS at elevated temperatures and the lack of fire design guidelines. Among all available design specifications for CFS, EN 1993-1-2 is the only one that provides design guidelines for CFS at elevated temperatures; it is, however, based on the same theory and material properties as hot-rolled steel. Since the material properties of CFS are found to be considerably different from those of hot-rolled steel, the applicability of hot-rolled steel design guidelines to CFS needs to be verified. Besides, the effect of non-uniform temperature distribution on the failure of CFS members is not properly addressed in the literature and has not been specified in the existing design guidelines. Therefore, a better understanding of the fire performance of CFS members is of great significance for further exploring the potential application of CFS. Since CFS members have thin walls (normally from 0.9 to 8 mm), open cross-sections, and great flexural rigidity about one axis at the expense of low flexural rigidity about the perpendicular axis, they are usually susceptible to various buckling modes, which often govern the ultimate failure of CFS members.
When CFS members are exposed to a fire, not only do the reduced mechanical properties influence the buckling capacity of the members, but the thermal strains can also lead to additional stresses in loaded members. The buckling behaviour of a member can be analysed based on uniformly reduced material properties when the member is unprotected, or uniformly protected and surrounded by a fire, so that the temperature distribution within the member is uniform. However, if the temperature distribution in a member is not uniform, which usually happens in walls and/or roof panels when CFS members are protected by plasterboards and exposed to fire on one side, the analysis of the member becomes very complicated, since the mechanical properties such as Young's modulus and yield strength, as well as the thermal strains, vary within the member. This project aims to provide a better understanding of the buckling performance of CFS channel members under non-uniform temperatures. The primary objective is to investigate the fire performance of plasterboard-protected CFS members exposed to fire on one side, in the aspects of pre-buckling stress distribution, elastic buckling behaviour and nonlinear failure models. Heat transfer analyses of one-side-protected CFS members have been conducted first to investigate the temperature distributions within the cross-section, which have been applied to the analytical study for the prediction of flexural buckling loads of CFS columns at elevated temperatures. A simplified numerical method based on second-order elastic-plastic analysis has also been proposed for the calculation of the flexural buckling load of CFS columns under non-uniform temperature distributions. The effects of temperature distributions and stress-strain relationships on the flexural buckling of CFS columns are discussed.
Afterwards, a modified finite strip method combined with the classical Fourier series solutions has been presented to investigate the elastic buckling behaviour of CFS members at elevated temperatures, in which the effects of temperature on both strain and mechanical properties have been considered. The variations of the elastic buckling loads/moments, buckling modes and slenderness of CFS columns/beams with increasing temperature have been examined. The finite element method is also used to carry out the failure analysis of one-side-protected beams at elevated temperatures. The effects of geometric imperfection, stress-strain relationships and temperature distributions on the ultimate moment capacities of CFS beams under uniform and non-uniform temperature distributions are examined. At the end, direct-strength-method-based design approaches are discussed and corresponding recommendations for the design of CFS beams at elevated temperatures are presented. This thesis has contributed to improving the knowledge of the buckling and failure behaviour of CFS members at elevated temperatures, and the essential data provided in the numerical studies lay the foundation for further design-oriented studies.
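The uniform-temperature baseline case can be sketched with the Euler flexural buckling load and a temperature-reduced Young's modulus. The reduction factors below are the EN 1993-1-2 Table 3.1 values for hot-rolled carbon steel, used here only as placeholders for the shape of the effect, since the thesis's point is precisely that CFS material data differ; the section properties are likewise illustrative.

```python
import math

# Young's-modulus reduction factors k_E = E(T)/E(20 C) from EN 1993-1-2
# Table 3.1 (hot-rolled carbon steel) -- placeholders, not CFS data.
K_E = {20: 1.00, 100: 1.00, 200: 0.90, 300: 0.80, 400: 0.70,
       500: 0.60, 600: 0.31, 700: 0.13}

def k_e(temp_c):
    """Linear interpolation of the modulus reduction factor."""
    pts = sorted(K_E)
    if temp_c <= pts[0]:
        return K_E[pts[0]]
    for lo, hi in zip(pts, pts[1:]):
        if temp_c <= hi:
            f = (temp_c - lo) / (hi - lo)
            return K_E[lo] + f * (K_E[hi] - K_E[lo])
    raise ValueError("temperature above tabulated range")

def euler_buckling_load(e20_pa, i_m4, l_m, k=1.0, temp_c=20.0):
    """Elastic flexural buckling load P_cr = pi^2 E(T) I / (K L)^2."""
    return math.pi ** 2 * (k_e(temp_c) * e20_pa) * i_m4 / (k * l_m) ** 2

# Example: a 3 m pin-ended column with illustrative section properties
E20, I, L = 210e9, 0.8e-6, 3.0
for T in (20, 300, 500, 600):
    p_kn = euler_buckling_load(E20, I, L, temp_c=T) / 1e3
    print(f"{T:>4} C: P_cr = {p_kn:.1f} kN")
```

Under a uniform temperature the buckling load simply scales with k_E; the non-uniform case treated in the thesis is harder because E, the yield strength and the thermal strains all vary across the section, shifting the effective centroid as well.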
|