11.
3D study of non-metallic inclusions by EE method and use of statistics for the estimation of largest size inclusions in tool steel. Safa, Meer January 2010 (has links)
The control of non-metallic inclusions is very important for improving the performance of tool steel in service. The present study was performed to examine the effect of changing selected process parameters during vacuum degassing of the melt and how these parameters affect the characteristics of inclusions in tool steel. The main parameters varied during vacuum degassing were the induction stirring, the argon flow rate through plugs 1 and 2, and the ladle age, which differed between heats. The electrolytic extraction method was used to observe the morphology and characteristics of inclusions in tool steel in three dimensions. Four lollipop samples from four different heats were used for the experiments, and all samples were taken after vacuum (AV) degassing. Four different types of inclusions were found in this study, classified as types 1, 2, 3 and 4. Of these, type 1 was the dominant inclusion type and was mostly spherical in shape. The study shows that, among the three parameters, induction stirring has the largest effect on the total number of inclusions per unit volume in the sample. Heat 4A showed the lowest number of inclusions per volume compared with the other heats; the main reason is likely that its induction stirring was the lowest of all the heats, combined with a moderate argon flow and a ladle age of 12. Extreme value analysis was used in this study to predict the probability of encountering the largest inclusions in a given volume of metal. For the prediction of the largest inclusion size, both the electrolytic extraction (3D) and the cross-sectional (2D) methods were used.
A comparison was then made to determine the accuracy of the two methods. It is concluded that, for type 1 inclusions, the electrolytic extraction method shows almost the same trend as the cross-sectional method, while offering better accuracy for the prediction of the largest inclusion sizes. The electrolytic extraction method is also applicable to the prediction of the largest inclusion sizes when multiple types of inclusions are present.
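The extreme value step described above can be sketched as a Gumbel (largest extreme value) fit to per-field maximum inclusion sizes, in the spirit of statistics-of-extremes inclusion ratings for steels; the sizes and the volume ratio below are hypothetical, and the thesis' own procedure may differ in detail:

```python
import math, random

def gumbel_fit_moments(maxima):
    # Method-of-moments Gumbel fit: scale beta = sqrt(6)*s/pi,
    # location mu = mean - (Euler-Mascheroni constant) * beta.
    n = len(maxima)
    mean = sum(maxima) / n
    var = sum((x - mean) ** 2 for x in maxima) / (n - 1)
    beta = math.sqrt(6.0 * var) / math.pi
    return mean - 0.5772156649 * beta, beta

def largest_inclusion(mu, beta, volume_ratio):
    # Characteristic largest inclusion expected once in a volume
    # `volume_ratio` times larger than the inspected one: the Gumbel
    # quantile at probability 1 - 1/T with return period T = volume_ratio.
    y = -math.log(-math.log(1.0 - 1.0 / volume_ratio))
    return mu + beta * y

random.seed(1)
# hypothetical maximum inclusion sizes (micrometres) from 24 inspected fields
maxima = [random.gauss(18.0, 3.0) for _ in range(24)]
mu, beta = gumbel_fit_moments(maxima)
print(round(largest_inclusion(mu, beta, 1000.0), 1))
```

Note that the predicted largest size grows only logarithmically with the reference volume, which is why the quality of the tail fit matters more than the number of average-sized inclusions measured.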
12.
Extreme value analysis of non-stationary time series: Quantifying climate change using observational data throughout Germany. Müller, Philipp 11 March 2019 (has links)
The overall subject of this thesis is the massively parallel application of extreme value analysis (EVA) to climatological time series. In this branch of statistics one strives to learn about the tails of a distribution and its upper quantiles, like the so-called 50 year return level: an event realized on average only once during its return period of 50 years. Since most studies focus solely on average statistics, while it is the extreme events that have the biggest impact on our lives, such an analysis is key to a proper understanding of climate change. In this approach a time series is separated into blocks, whose maxima can be described using the generalized extreme value (GEV) distribution for sufficiently large block sizes.
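As a minimal illustration of the return-level notion used throughout, the T-year return level is simply the (1 - 1/T) quantile of the fitted GEV distribution; the parameter values below are hypothetical:

```python
import math

def gev_return_level(mu, sigma, xi, T):
    # T-year return level: the (1 - 1/T) quantile of the GEV distribution
    # with location mu, scale sigma, and shape xi.
    p = 1.0 - 1.0 / T
    y = -math.log(p)
    if abs(xi) < 1e-12:                      # Gumbel limit as xi -> 0
        return mu - sigma * math.log(y)
    return mu + (sigma / xi) * (y ** (-xi) - 1.0)

# hypothetical GEV parameters for annual maximum temperature (degrees C)
print(round(gev_return_level(33.0, 1.5, -0.1, 50), 2))   # 50 year return level
```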
Unfortunately, the estimation of its parameters has not been possible on a massively parallel scale with any available software package, since they are all affected by conceptual problems in the maximum likelihood fit. Both the logarithms in the negative log-likelihood of the GEV distribution and the theoretical limitations on one of its parameters give rise to regions in the parameter space that are inaccessible to the optimization routines, causing them to produce numerical artifacts. I resolved this issue by incorporating all constraints into the optimization using the augmented Lagrangian method. With my implementation in the open source package **climex** it is now possible to analyze large climatological data sets. In this thesis I used temperature and precipitation data from measurement stations provided by the German weather service (DWD) and the ERA-Interim reanalysis data set, and analyzed them using both a qualitative method based on time windows and a more quantitative one relying on the class of vector generalized linear models (VGLM).
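The conceptual problem described above can be made concrete: the GEV negative log-likelihood is undefined wherever the support constraint 1 + xi*(x - mu)/sigma > 0 is violated. The sketch below is not the **climex** implementation, just an illustration of the infeasible regions a naive optimizer runs into; the data are made up:

```python
import math

def gev_nll(params, data):
    # Negative log-likelihood of the GEV distribution. The support
    # constraint 1 + xi*(x - mu)/sigma > 0 (and sigma > 0) makes parts of
    # the parameter space infeasible. Returning +inf, as here, is what an
    # unconstrained optimizer stumbles over; handling the constraints
    # explicitly (e.g. via an augmented Lagrangian) avoids the artifacts.
    mu, sigma, xi = params
    if sigma <= 0.0:
        return math.inf
    nll = 0.0
    for x in data:
        z = 1.0 + xi * (x - mu) / sigma
        if z <= 0.0:
            return math.inf                  # infeasible parameter point
        if abs(xi) < 1e-9:                   # Gumbel limit
            nll += math.log(sigma) + (x - mu) / sigma + math.exp(-(x - mu) / sigma)
        else:
            nll += math.log(sigma) + (1.0 + 1.0 / xi) * math.log(z) + z ** (-1.0 / xi)
    return nll

data = [31.2, 34.0, 32.5, 35.1, 33.3, 36.0, 30.8, 34.4]
print(gev_nll((33.0, 1.8, 0.1), data))    # finite: feasible parameters
print(gev_nll((33.0, 1.8, -2.0), data))   # inf: support constraint violated
```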
Due to climate change, a general shift of the temperature distribution towards higher values, and thus more hot and fewer cold extremes, would be expected. Indeed, I found the location parameters of the GEV distributions, which can be thought of as the mean event size at a return period of approximately the block size of one year, to increase for both the daily maximum and minimum temperatures. But the overall changes are far more complex and depend on the geographical location as well as the considered return period, which is quite unexpected. For example, for the 100 year return levels of the daily maximum temperatures a decrease was found in the east and the center of Germany for both the raw series and their anomalies, as well as a quite strong reduction for the raw series in the very south of Germany.
The VGLM-based non-stationary EVA resulted in significant trends in the GEV parameters for the daily maximum temperatures of almost all stations, and for about half of them in the case of the daily minima. So there is statistically sound evidence for a change in the extreme temperatures and, surprisingly, it is not exclusively towards higher values: the analysis yielded several significant trends featuring a negative slope in the 10 year return levels.
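A non-stationary model of the kind fitted by the VGLM approach can be sketched by letting the location parameter drift linearly in time; the parameter values below are hypothetical and use the Gumbel (xi = 0) case for brevity:

```python
import math

def gumbel_return_level(mu, sigma, T):
    # T-year return level of a Gumbel distribution
    return mu - sigma * math.log(-math.log(1.0 - 1.0 / T))

def nonstationary_return_level(mu0, mu1, sigma, t, T):
    # linear trend in the location parameter, mu(t) = mu0 + mu1 * t,
    # the simplest covariate model of the VGLM type
    return gumbel_return_level(mu0 + mu1 * t, sigma, T)

# hypothetical fit: location drifting by -0.02 degC per year,
# so the 10 year return level declines over a 60 year record
print(round(nonstationary_return_level(32.0, -0.02, 1.5, 0, 10), 2),
      round(nonstationary_return_level(32.0, -0.02, 1.5, 60, 10), 2))
```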
The analysis of the temperature data of the ERA-Interim reanalysis data set yielded quite surprising results too. While in some parts of the globe, especially on land, the 10 year return levels were found to increase, they in general decrease in most parts of the earth and almost entirely over the sea. But since we found a huge discrepancy between the results of the analysis using the station data within Germany and the results obtained for the corresponding grid points of the reanalysis data set, we cannot be sure whether the patterns in the return levels of the ERA-Interim data are trustworthy.
13.
Prognostisering av dimensionerande grundvattennivå: En fallstudie av Chalmersmodellen och hur referensrör med olika hydrogeologiska egenskaper påverkar modellens tillförlitlighet / Predicting extreme groundwater levels: A case study of the Chalmers model and how reference wells with different hydrogeological characteristics impact the precision of the model. Cedergren, Andrea January 2022 (has links)
Groundwater can potentially have a great impact on both constructions and the surrounding areas, and high groundwater levels can involve risks such as uplift forces and landslides. Due to these risks it is important to predict and estimate the probability of extreme groundwater levels. However, when the necessary long-term measurements are not available, alternative methods are needed, like the Chalmers model. The Chalmers model is used for calculating extreme groundwater levels by combining a short measurement series from an observation well with the data from a reference well. According to the method, the reference well should be chosen such that conditions at the two sites are similar, the wells lie within 50 km of each other, and the reference well has been measured for more than 20 years. For the results to be as accurate as possible, the two wells must share similar characteristics. The aim of this study is to investigate the Chalmers model when predicting the groundwater level for a specific return period. The focus is on how the choice of different reference wells, with different characteristics, influences the accuracy of the model. A case study is conducted on two station sites (Sockenplan and Station Sofia) for the extension of the metro in the southern part of Stockholm, Sweden, upon which the Chalmers model is implemented. The different characteristics of the observation and reference wells are tested to evaluate the accuracy of the model.
The accuracy is evaluated by using extreme value analysis as an alternative calculation model, assumed to be more precise, and comparing the resulting extreme groundwater levels. The measurements used as reference data in the Chalmers model are public groundwater-level data from the Geological Survey of Sweden (SGU). Data processing and calculations are performed in Python. This study highlights the difficulties in determining the accuracy of the Chalmers model when predicting extreme groundwater levels, and no specific expected accuracy could be determined. Generally, the model appears to underestimate extreme groundwater levels. Furthermore, if the observation well and reference well are located by a confined aquifer and between inflow and outflow areas, a higher precision can be expected; the uncertainty of the model increases with an unconfined aquifer. The results also imply that if the reference well and the observation well are selected based on similar hydrogeological characteristics, a covariation of groundwater levels over time and between highest and lowest levels, a higher accuracy can be expected.
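The core idea of the Chalmers model, relating a short observation series to a long reference series, can be sketched as a simple regression transfer. The actual model prescribes its own procedure and corrections, and all numbers below are made up:

```python
def linear_fit(x, y):
    # ordinary least squares slope k and intercept m
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    k = sum((a - mx) * (b - my) for a, b in zip(x, y)) / sum((a - mx) ** 2 for a in x)
    return k, my - k * mx

# hypothetical overlapping measurements (metres above sea level)
ref_short = [10.2, 10.5, 10.1, 10.8, 10.4, 10.6]   # reference well
obs_short = [22.1, 22.6, 22.0, 23.0, 22.4, 22.7]   # observation well
k, m = linear_fit(ref_short, obs_short)

# transfer the reference well's long-record extreme to the observation site
ref_extreme = 11.4   # highest level in the reference well's long record
print(round(k * ref_extreme + m, 2))
```

The transfer is only as good as the covariation between the two wells, which is exactly why the study finds that reference wells with dissimilar hydrogeological settings degrade the prediction.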
14.
Development Of Methods For Structural Reliability Analysis Using Design And Analysis Of Computer Experiments And Data Based Extreme Value Analysis. Panda, Satya Swaroop 06 1900 (has links)
The work reported in this thesis is in the area of computational modeling of the reliability of engineering structures. The emphasis of the study is on developing methods that are suitable for the analysis of large-scale structures such as aircraft structural components. This class of problems continues to offer challenges to an analyst, the most difficult aspects of the analysis being the treatment of nonlinearity in the structural behavior, the non-Gaussian nature of uncertainties, and the quantification of low levels of probability of failure (of the order of 10⁻⁵ or less), which requires significant computational effort. The present study covers static/dynamic behavior, Gaussian/non-Gaussian models of uncertainties, and linear/nonlinear structures. The novel elements in the study consist of two components:
• application of modeling tools that already exist in the area of design and analysis of computer experiments, and
• application of data based extreme value analysis procedures that are available in the statistics literature.
The first component of the work provides an opportunity to combine space filling sampling strategies (which have promise for reducing the variance of estimation) with kriging based modeling in reliability studies, an opportunity that has not been explored in the existing literature. The second component of the work exploits the virtues of the limiting behavior of extremes of sequences of random variables in conjunction with Monte Carlo simulations of structural response, a strategy for reliability modeling that likewise has not been explored in the existing literature. The hope here is that failure events with probabilities of the order of 10⁻⁵ or less can be investigated with a relatively small number of Monte Carlo runs. The study also brings out the issues related to combining the above sources of existing knowledge with finite element modeling of engineering structures, thereby leading to newer tools for structural reliability analysis.
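The second component can be sketched as follows, assuming a Gumbel limit for the response maxima and using probability weighted moments for the fit; the response model below is a stand-in, not one of the thesis' structures:

```python
import math, random

def gumbel_exceedance(maxima, threshold):
    # Fit a Gumbel model to simulated response maxima via probability
    # weighted moments (L-moments) and extrapolate the probability that
    # `threshold` is exceeded, far into the tail of the data.
    xs = sorted(maxima)
    n = len(xs)
    b0 = sum(xs) / n
    b1 = sum(j * xs[j] for j in range(n)) / (n * (n - 1))
    beta = (2.0 * b1 - b0) / math.log(2.0)   # scale from the 2nd L-moment
    mu = b0 - 0.5772156649 * beta            # location
    return 1.0 - math.exp(-math.exp(-(threshold - mu) / beta))

random.seed(7)
# 200 Monte Carlo runs: peak absolute response of a 1000-step Gaussian series
maxima = [max(abs(random.gauss(0.0, 1.0)) for _ in range(1000)) for _ in range(200)]
print(gumbel_exceedance(maxima, 5.5))   # a probability well below 1/200
```

The point of the extrapolation is that a failure probability far smaller than 1/(number of runs) becomes estimable, which direct counting of failures cannot provide.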
The thesis is organized into four chapters. The first chapter provides a review of the literature covering methods of reliability analysis, along with background material on design and analysis of computer experiments and extreme value analysis.
The problem of reliability analysis of randomly parametered, linear (or) nonlinear structures subjected to static and (or) dynamic loads is considered in Chapter 2. A deterministic finite element model for the structure, used to analyze sample realizations of the structure, is assumed to be available. The reliability analysis is carried out within the framework of response surface methods, which involves the construction of surrogate models for the performance functions to be employed in the reliability calculations. These surrogate models serve as models of models, and are hence termed meta-models, for the structural behavior in the neighborhood of the design point. This construction, in the present study, involves combining space filling optimal Latin hypercube sampling and kriging models. Illustrative examples on the numerical prediction of the reliability of a ten-bay truss and a W-seal in an aircraft structure are presented. Limited Monte Carlo simulations are used to validate the approximate procedures developed.
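The combination described above can be sketched with a space-filling Latin hypercube design, a zero-mean kriging interpolator, and a made-up linear performance function standing in for the structural analysis:

```python
import numpy as np

def latin_hypercube(n, d, rng):
    # Space-filling Latin hypercube sample on [0, 1]^d: one point per
    # stratum in every coordinate, with independent column permutations.
    pts = (np.arange(n)[:, None] + rng.random((n, d))) / n
    for j in range(d):
        pts[:, j] = pts[rng.permutation(n), j]
    return pts

def kriging_fit(X, y, ell=0.5, nugget=1e-12):
    # Simple (zero-mean) kriging: solve K alpha = y with a squared
    # exponential covariance of length scale `ell`.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2.0 * ell ** 2)) + nugget * np.eye(len(X))
    return np.linalg.solve(K, y)

def kriging_predict(Xq, X, alpha, ell=0.5):
    d2 = ((Xq[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * ell ** 2)) @ alpha

rng = np.random.default_rng(3)
g = lambda x: 1.6 - x[:, 0] - x[:, 1]   # hypothetical performance function, failure when g < 0

X = latin_hypercube(30, 2, rng)         # space-filling design
alpha = kriging_fit(X, g(X))            # meta-model of g
U = rng.random((100_000, 2))            # cheap Monte Carlo on the surrogate
pf = np.mean(kriging_predict(U, X, alpha) < 0.0)
print(pf)   # exact failure probability for this g is 0.5 * 0.4**2 = 0.08
```

In the thesis the expensive finite element model plays the role of `g`, and the surrogate is what makes the Monte Carlo loop affordable.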
The reliability of nonlinear vibrating systems under stochastic excitations is investigated in Chapter 3 using a two-stage Monte Carlo simulation strategy. Systems subjected to Gaussian random excitation are considered for the study. It is assumed that the probability distribution of the maximum response in the steady state belongs to the basin of attraction of one of the classical asymptotic extreme value distributions. The first stage of the solution strategy consists of an objective selection of the form of the extreme value distribution based on hypothesis tests, and the second involves the estimation of the parameters of the relevant extreme value distribution. Both these steps are implemented using data from limited Monte Carlo simulations of the system response. The proposed procedure is illustrated with examples of linear/nonlinear single-degree and multi-degree of freedom systems driven by random excitations. The predictions from the proposed method are compared with results from large-scale Monte Carlo simulations and also with classical analytical results, when available, from the theory of out-crossing statistics. The method is further extended to cover the reliability analysis of nonlinear dynamical systems with randomly varying system parameters. Here the meta-modeling methods developed in Chapter 2 are extended to develop response surface models for the parameters of the underlying extreme value distributions. The numerical examples presented cover a host of low-dimensional dynamical systems and also the analysis of a wind turbine structure subjected to turbulent wind loads and undergoing large amplitude oscillations.
A summary of contributions made along with a few suggestions for further research is presented in Chapter 4.
15.
Monte Carlo Simulation Based Response Estimation and Model Updating in Nonlinear Random Vibrations. Radhika, Bayya January 2012 (has links) (PDF)
The study of randomly excited nonlinear dynamical systems forms the focus of this thesis. We discuss two classes of problems: first, the characterization of the nonlinear random response of a system before it comes into existence and, second, the assimilation of measured responses into the mathematical model of the system after it comes into existence. The first class of problems constitutes forward problems while the latter belongs to the class of inverse problems. An outstanding feature of these problems is that they are almost always not amenable to exact solutions. In the present study we tackle these two classes of problems using Monte Carlo simulation tools in conjunction with Markov process theory, Bayesian model updating strategies, and particle filtering based dynamic state estimation methods.
It is well recognized in the literature that any successful application of Monte Carlo simulation methods to practical problems requires the simulation methods to be reinforced with effective means of controlling the sampling variance. This can be achieved by incorporating any problem specific qualitative and (or) quantitative information that one might have about the system behavior into the formulation of estimators for the response quantities of interest. In the present thesis we outline two such approaches for variance reduction. The first of these approaches employs a substructuring scheme, which partitions the system states into two sets such that the probability distribution of the states in one of the sets, conditioned on the other set, becomes amenable to an exact analytical solution. In the second approach, results from data based asymptotic extreme value analysis are employed to tackle problems of time variant reliability analysis and the updating of this reliability. We exemplify in this thesis the proposed approaches for response estimation and model updating by considering wide ranging problems of interest in structural engineering, namely, nonlinear response and reliability analyses under stationary and (or) nonstationary random excitations, response sensitivity model updating, force identification, residual displacement analysis in instrumented inelastic structures under transient excitations, problems of dynamic state estimation in systems with local nonlinearities, and time variant reliability analysis and reliability model updating. We have organized the thesis into eight chapters and three appendices. A resume of the contents of these chapters and appendices follows.
In the first chapter we aim to provide an overview of the mathematical tools which form the basis for the investigations reported in the thesis. The starting point of the study is taken to be a set of coupled stochastic differential equations, which are obtained after discretizing the spatial variables, typically based on the application of finite element methods. Accordingly, we provide a summary of the following topics: (a) the Markov vector approach for characterizing the time evolution of transition probability density functions, which includes the forward and backward Kolmogorov equations, (b) the equations governing the time evolution of response moments and first passage times, (c) the numerical discretization of the governing stochastic differential equations using the Ito-Taylor expansion, (d) the partial differential equation governing the time evolution of transition probability density functions conditioned on measurements for the study of existing instrumented structures, (e) the time evolution of response moments conditioned on measurements based on the governing equations in (d), and (f) functional recursions for the evolution of the multidimensional posterior probability density function and the posterior filtering density function, when the time variable is also discretized. The objective of the description here is to provide an outline of the theoretical formulations within which the problems of response estimation and model updating are formulated in the subsequent chapters of the thesis. We briefly state the class of problems that are amenable to exact solutions. We also list in this chapter major text books, research monographs, and review papers relevant to the topics of nonlinear random vibration analysis and dynamic state estimation.
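Topic (c) can be illustrated with the simplest member of the Ito-Taylor family, the Euler-Maruyama scheme, applied to a Duffing oscillator under white noise; the thesis itself employs the order 1.5 strong Taylor scheme, and the parameter values here are illustrative:

```python
import math, random

def duffing_em(T=50.0, dt=1e-3, alpha=1.0, beta=1.0, eta=0.05, D=0.2, seed=0):
    # Euler-Maruyama discretization (the lowest order Ito-Taylor scheme)
    # of a Duffing oscillator driven by white noise:
    #   dX = V dt
    #   dV = -(2*eta*V + alpha*X + beta*X**3) dt + sqrt(2*D) dW
    rng = random.Random(seed)
    x, v = 0.0, 0.0
    xs = []
    for _ in range(int(T / dt)):
        dw = rng.gauss(0.0, math.sqrt(dt))   # Brownian increment
        x, v = (x + v * dt,
                v - (2.0 * eta * v + alpha * x + beta * x ** 3) * dt
                + math.sqrt(2.0 * D) * dw)
        xs.append(x)
    return xs

# displacement samples that a later moment or extreme value analysis can use
xs = duffing_em()
print(round(max(xs), 3), round(min(xs), 3))
```

Higher-order Ito-Taylor schemes add correction terms built from iterated stochastic integrals, buying strong convergence of order 1.5 instead of 0.5 at the cost of extra derivative evaluations.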
In Chapter 2 we provide a review of the literature on solutions of problems of response analysis and model updating in nonlinear dynamical systems. The main focus of the review is on Monte Carlo simulation based methods for tackling these problems. The review accordingly covers numerical methods for approximate solutions of the Kolmogorov equations and the associated moment equations, variance reduction in simulation based analysis of Markovian systems, dynamic state estimation methods based on the Kalman filter and its variants, particle filtering, and variance reduction based on Rao-Blackwellization. In this review we chiefly cover papers that have contributed to the growth of the methodology. We also cover briefly the efforts made in applying these ideas to structural engineering problems. Based on this review, we identify the problems of variance reduction using substructuring schemes and data based extreme value analysis, and their incorporation into response estimation and model updating strategies, as problems requiring further research attention. We also identify a range of problems where these tools could be applied.
We consider the development of a sequential Monte Carlo scheme, which incorporates a substructuring strategy, for the analysis of nonlinear dynamical systems under random excitations in Chapter 3. The proposed substructuring ensures that a part of the system states, conditioned on the remaining states, becomes Gaussian distributed and is amenable to an exact analytical solution. The use of Monte Carlo simulations is subsequently limited to the analysis of the remaining system states. This clearly results in a reduction in sampling variance, since a part of the problem is tackled analytically in an exact manner. The successful performance of the proposed approach is illustrated by considering the response analysis of a single degree of freedom nonlinear oscillator under random excitations. Arguments based on the variance decomposition result and the Rao-Blackwell theorem are presented to demonstrate that the proposed variance reduction is indeed effective.
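The variance-reduction argument can be made concrete on a toy model in which one state is conditionally Gaussian given the other, so its contribution can be integrated out exactly; this is the Rao-Blackwell idea, and the model below is illustrative:

```python
import random, statistics

def plain_estimate(n, rng):
    # brute force: sample both X and Y | X, then form X + Y per sample
    vals = []
    for _ in range(n):
        x = rng.gauss(0.0, 1.0)
        y = rng.gauss(2.0 * x, 3.0)   # Y | X is Gaussian, hence tractable
        vals.append(x + y)
    return vals

def rao_blackwell_estimate(n, rng):
    # substructured: Y | X is handled exactly via its conditional mean
    # E[Y | X] = 2*X, and Monte Carlo sampling is used only for X
    return [3.0 * rng.gauss(0.0, 1.0) for _ in range(n)]

rng = random.Random(5)
v_plain = statistics.variance(plain_estimate(20000, rng))
v_rb = statistics.variance(rao_blackwell_estimate(20000, rng))
print(round(v_plain, 2), round(v_rb, 2))  # conditioning removes the Var(Y|X) term
```

Both estimators target E[X + Y] = 0, but the per-sample variance drops from Var(3X) + Var(Y|X) = 9 + 9 to Var(3X) = 9, exactly the reduction the Rao-Blackwell theorem guarantees.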
In Chapter 4, we modify the sequential Monte Carlo simulation strategy outlined in the preceding chapter to incorporate questions of dynamic state estimation when data on measured responses become available. Here too, the system states are partitioned into two groups such that the states in one group become Gaussian distributed when conditioned on the states in the other group. The conditioned Gaussian states are subsequently analyzed exactly using the Kalman filter, and this is interfaced with the analysis of the remaining states using a sequential importance sampling based filtering strategy. The development of this combined Kalman and sequential importance sampling filtering method constitutes one of the novel elements of this study. The proposed strategy is validated by considering the problem of dynamic state estimation in linear single and multi-degree of freedom systems for which exact analytical solutions exist.
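The sequential importance sampling side of the combined filter can be sketched as a minimal bootstrap particle filter for a scalar nonlinear model; the dynamics and noise levels below are made up, and the combined filter additionally replaces part of the state with exact Kalman updates:

```python
import math, random

def simulate(T, q, r, rng):
    # hypothetical scalar nonlinear state-space model:
    #   x_k = 0.5*x_{k-1} + 2*sin(x_{k-1}) + process noise (sd q)
    #   y_k = x_k + measurement noise (sd r)
    xs, ys = [], []
    x = 0.0
    for _ in range(T):
        x = 0.5 * x + 2.0 * math.sin(x) + rng.gauss(0.0, q)
        xs.append(x)
        ys.append(x + rng.gauss(0.0, r))
    return xs, ys

def bootstrap_pf(ys, n, q, r, rng):
    # bootstrap filter: propagate particles through the dynamics, weight
    # them by the Gaussian measurement likelihood, then resample.
    parts = [rng.gauss(0.0, 1.0) for _ in range(n)]
    est = []
    for y in ys:
        parts = [0.5 * p + 2.0 * math.sin(p) + rng.gauss(0.0, q) for p in parts]
        ws = [math.exp(-0.5 * ((y - p) / r) ** 2) for p in parts]
        tot = sum(ws) or 1.0
        ws = [w / tot for w in ws]
        est.append(sum(w * p for w, p in zip(ws, parts)))   # posterior mean
        parts = rng.choices(parts, weights=ws, k=n)         # resampling
    return est

rng = random.Random(2)
xs, ys = simulate(200, 0.3, 1.0, rng)
est = bootstrap_pf(ys, 500, 0.3, 1.0, rng)
rmse_pf = math.sqrt(sum((a - b) ** 2 for a, b in zip(est, xs)) / len(xs))
rmse_obs = math.sqrt(sum((a - b) ** 2 for a, b in zip(ys, xs)) / len(xs))
print(round(rmse_pf, 3), round(rmse_obs, 3))  # filter beats the raw observations
```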
In Chapter 5, we consider the application of the tools developed in Chapter 4 to a class of wide ranging problems in the nonlinear random vibrations of existing systems. The nonlinear systems considered include single and multi-degree of freedom systems, systems with memoryless and hereditary nonlinearities, and stationary and nonstationary random excitations. The specific applications considered include nonlinear dynamic state estimation in systems with local nonlinearities, estimation of residual displacement in instrumented inelastic dynamical systems under transient random excitations, response sensitivity model updating, and identification of transient seismic base motions based on measured responses in inelastic systems. Comparisons of solutions from the proposed substructuring scheme with corresponding results from the direct application of particle filtering are made and a satisfactory mutual agreement is demonstrated.
We next consider questions of time variant reliability analysis and the corresponding model updating in Chapters 6 and 7, respectively. The research effort in these studies is focused on exploring the application of data based asymptotic extreme value analysis to the problems on hand. Accordingly, we investigate the reliability of nonlinear vibrating systems under stochastic excitations in Chapter 6 using a two-stage Monte Carlo simulation strategy. For systems with white noise excitation, the governing equations of motion are interpreted as a set of Ito stochastic differential equations. It is assumed that the probability distribution of the maximum over a specified time duration in the steady state response belongs to the basin of attraction of one of the classical asymptotic extreme value distributions. The first stage of the solution strategy consists of the selection of the form of the extreme value distribution based on hypothesis testing, and the next stage involves the estimation of the parameters of the relevant extreme value distribution. Both these stages are implemented using data from limited Monte Carlo simulations of the system response. The proposed procedure is illustrated with examples of linear/nonlinear systems with single/multiple degrees of freedom driven by random excitations. The predictions from the proposed method are compared with the results from large scale Monte Carlo simulations, and also with the classical analytical results, when available, from the theory of out-crossing statistics. Applications of the proposed method to vibration data obtained under laboratory conditions are also discussed.
In Chapter 7 we consider the problem of time variant reliability analysis of existing structures subjected to stationary random dynamic excitations. Here we assume that samples of the dynamic response of the structure, under the action of external excitations, have been measured at a set of sparse points on the structure. The utilization of these measurements in updating reliability models, postulated prior to making any measurements, is considered. This is achieved by using dynamic state estimation methods which combine results from Markov process theory and Bayes' theorem. The uncertainties present in the measurements as well as in the postulated model for the structural behaviour are accounted for. The samples of external excitations are taken to emanate from known stochastic models, and allowance is made for the ability (or lack of it) to measure the applied excitations. The future reliability of the structure is modeled using the expected structural response conditioned on all the measurements made. This expected response is shown to have a time varying mean and a random component that can be treated as being weakly stationary. For linear systems, an approximate analytical solution for the problem of reliability model updating is obtained by combining the theories of the discrete Kalman filter and level crossing statistics. For the case of nonlinear systems, the problem is tackled by combining particle filtering strategies with data based extreme value analysis. The possibility of using conditional simulation strategies, when the applied external actions are measured, is also considered. The proposed procedures are exemplified by considering the reliability analysis of a few low dimensional dynamical systems based on synthetically generated measurement data. The performance of the procedures developed is also assessed based on a limited amount of pertinent Monte Carlo simulations.
A summary of the contributions made and a few suggestions for future work are presented in Chapter 8.
The thesis also contains three appendices. Appendix A provides details of the order 1.5 strong Taylor scheme that is extensively employed at several places in the thesis. The formulary pertaining to the bootstrap and sequential importance sampling particle filters is provided in Appendix B. Some of the results on characterizing conditional probability density functions that have been used in the development of the combined Kalman and sequential importance sampling filter in Chapter 4 are elaborated in Appendix C.