231

Timing Synchronization and Node Localization in Wireless Sensor Networks: Efficient Estimation Approaches and Performance Bounds

Ahmad, Aitzaz 1984- 14 March 2013 (has links)
Wireless sensor networks (WSNs) consist of a large number of sensor nodes, capable of on-board sensing and data processing, that are employed to observe some phenomenon of interest. With their desirable properties of flexible deployment, resistance to harsh environments and lower implementation cost, WSNs are envisioned for a plethora of applications in diverse areas such as industrial process control, battlefield surveillance, health monitoring, and target localization and tracking. Much of the sensing and communication paradigm in WSNs involves ensuring power-efficient transmission and finding scalable algorithms that can deliver the desired performance objectives while minimizing overall energy utilization. Since power is primarily consumed in radio transmissions delivering timing information, clock synchronization represents an indispensable requirement to boost network lifetime. This dissertation focuses on deriving efficient estimators and performance bounds for the clock parameters in a classical frequentist inference approach as well as in a Bayesian estimation framework. A unified approach to the maximum likelihood (ML) estimation of clock offset is presented for different network delay distributions. This constitutes an analytical alternative to prior works which rely on a graphical maximization of the likelihood function. In order to capture the imperfections in node oscillators, which may render a time-varying nature to the clock offset, a novel Bayesian approach to clock offset estimation is proposed using factor graphs. Message passing using the max-product algorithm yields an exact expression for the Bayesian inference problem. This extends the current literature to cases where the clock offset is not deterministic, but is in fact a random process. A natural extension of pairwise synchronization is to develop algorithms for the more challenging case of network-wide synchronization. Assuming exponentially distributed random delays, a network-wide clock synchronization algorithm is proposed using a factor graph representation of the network. Message passing using the max-product algorithm is adopted to derive the update rules for the proposed iterative procedure. A closed-form solution is obtained for each node's belief about its clock offset at each iteration. Identifying the close connections between the problems of node localization and clock synchronization, this dissertation also addresses the problem of jointly estimating an unknown node's location and clock parameters by incorporating the effect of imperfections in node oscillators. In order to alleviate the computational complexity associated with the optimal maximum a posteriori estimator, two iterative approaches are proposed as simpler alternatives. The first approach utilizes an Expectation-Maximization (EM) based algorithm which iteratively estimates the clock parameters and the location of the unknown node. The EM algorithm is further simplified by a non-linear processing of the data to obtain a closed-form solution of the location estimation problem using the least squares (LS) approach. The performance of the estimation algorithms is benchmarked by deriving the hybrid Cramer-Rao lower bound (HCRB) on the mean square error (MSE) of the estimators. We also derive theoretical lower bounds on the MSE of an estimator in a classical frequentist inference approach as well as in a Bayesian estimation framework when the likelihood function is an arbitrary member of the exponential family.
The lower bounds not only serve to compare various estimators in our work, but can also be useful in their own right in parameter estimation theory.
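To make the pairwise setting concrete, the sketch below illustrates the classical maximum likelihood clock-offset estimator for a two-way timestamp exchange with exponentially distributed random delays, where the likelihood is maximized by the minimum observed delay in each direction. It is a minimal illustration of that well-known special case, not the dissertation's unified derivation; the offset, delay parameters and number of exchanges are assumed values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-way timestamp exchange between two nodes (all values assumed).
theta_true = 4.2e-3     # clock offset of node B relative to node A (s)
d = 1.0e-3              # fixed (deterministic) portion of the link delay (s)
lam = 2.0e-3            # mean of the exponential random delay (s)
N = 50                  # number of message exchanges

X = rng.exponential(lam, N)      # random delays on the uplink
Y = rng.exponential(lam, N)      # random delays on the downlink
U = d + theta_true + X           # uplink timestamp differences  (T2 - T1)
V = d - theta_true + Y           # downlink timestamp differences (T4 - T3)

# Under exponential delays the likelihood is maximized at the order statistics,
# giving the classical closed-form offset estimate.
theta_ml = (U.min() - V.min()) / 2.0
print(f"true offset {theta_true * 1e3:.3f} ms, ML estimate {theta_ml * 1e3:.3f} ms")
```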
232

The effects of soil heterogeneity on the performance of horizontal ground loop heat exchangers

Simms, Richard Blake January 2013 (has links)
Horizontal ground loop heat exchangers (GLHE) are widely used in many countries around the world as a heat source/sink for building conditioning systems. In Canada, these systems are most common in residential buildings that do not have access to the natural gas grid or in commercial structures where the heating and cooling loads are well balanced. These horizontal systems are often preferred over vertical systems because of the expense of drilling boreholes for the vertical systems. Current practice when sizing GLHEs is to add a considerable margin of safety, which is required because of our poor understanding of in situ GLHE performance. One aspect of this uncertainty is how these systems interact with heterogeneous soils. To investigate the impact of soil thermal property heterogeneity on GLHE performance, a specialized finite element model was created. This code avoided some of the common, non-physical assumptions made by many horizontal GLHE models by including a representation of the complete geometry of the soil continuum and pipe network. The model was evaluated against a 400 day observation period at a field site in Elora, Ontario, and its estimates were found to be in reasonable agreement with observations. Simulations were performed on various heterogeneous conductivity fields created with GSLIB to evaluate the impact of structural heterogeneity. Through a rigorous set of experiments, heterogeneity was found to have little effect on the overall performance of horizontal ground loops over a wide range of soil types and system configurations. Other variables, such as uncertainty in the mean soil thermal conductivity, were shown to have much more impact on the uncertainty of performance than heterogeneity. The negative impact of heterogeneity was shown to be further minimized by: maintaining a 50 cm spacing between pipes in trenches; favouring multiple trenches over a single, extremely long trench; and/or using trenches greater than 1 m deep to avoid surface effects.
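A minimal sketch of why along-pipe averaging damps the influence of heterogeneity: a correlated log-normal conductivity field (a crude stand-in for a GSLIB realization) is sampled along a trench, and the spread of the line-averaged conductivity across realizations is much smaller than the point-wise spread once the pipe crosses many correlation lengths. The field statistics, correlation length and trench length below are illustrative assumptions, not values from the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed soil statistics (illustrative only).
k_mean, k_cv = 1.5, 0.3          # mean conductivity (W/m.K) and coefficient of variation
corr_len = 2.0                   # correlation length along the trench (m)
pipe_len = 100.0                 # trench length (m)
dx = 0.25
x = np.arange(0.0, pipe_len, dx)

def lognormal_field(n_real):
    """Correlated log-normal conductivity realizations (simple stand-in for GSLIB)."""
    sigma_ln = np.sqrt(np.log(1.0 + k_cv ** 2))
    mu_ln = np.log(k_mean) - 0.5 * sigma_ln ** 2
    white = rng.standard_normal((n_real, x.size))
    # Moving-average smoothing imposes a spatial correlation of roughly corr_len.
    win = int(corr_len / dx)
    kernel = np.ones(win) / np.sqrt(win)
    corr = np.apply_along_axis(lambda w: np.convolve(w, kernel, mode="same"), 1, white)
    return np.exp(mu_ln + sigma_ln * corr)

k = lognormal_field(500)
line_avg = k.mean(axis=1)        # conductivity "seen" by a pipe spanning the trench
print("point-wise std of k      :", round(float(k.std()), 3))
print("std of along-pipe average:", round(float(line_avg.std()), 3))
```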
233

Individual-based modelling of bacterial cultures in the study of the lag phase

Prats Soler, Clara 13 June 2008 (has links)
Predictive food microbiology has become an important specific field in microbiology. Bacterial growth of a batch culture may show up to four phases: lag, exponential, stationary and death. The bacterial lag phase, which is of specific interest in the framework of predictive food microbiology, has generally been tackled with two generic approaches: at a cellular and intracellular level, which we call the microscopic scale, and at a population level, which we call the macroscopic scale. Studies at the microscopic level tackle the processes that take place inside the bacterium during its adaptation to the new conditions, such as the changes in genetic expression and in metabolism. Studies at the macroscopic scale deal with the description of a population growth cycle by means of continuous mathematical modelling and experimental measurements of the variables related to cell density evolution. In this work we aimed to improve the understanding of the lag phase in bacterial cultures and the intrinsic phenomena behind it. This has been carried out from the perspective of Individual-based Modelling (IbM) with the simulator INDISIM (INDividual DIScrete SIMulation), which has been specifically improved for this purpose. IbM introduces a mechanistic approach by modelling the cell as an individual unit. IbM simulations deal with 1 to 10^6 cells, and allow specific study of the phenomena that emerge from the interaction among cells. These phenomena belong to the mesoscopic level. Mesoscopic approaches are essential if we are to understand the effects of cellular adaptations at an individual level on the evolution of a population. Thus, they are a bridge between individuals and population or, to put it another way, between models at a microscopic scale and models at a macroscopic scale. First, we studied separately two of the several mechanisms that may cause a lag phase: the lag caused by the initial low mean mass of the inoculum, and the lag caused by a change in the nutrient source. The relationships between lag duration and several variables such as temperature and inoculum size were also checked. This analysis identified the biomass distribution as a very important variable for following the evolution of the culture during the growth cycle. A mathematical tool, the distance functions, was defined in order to assess its evolution during the different phases of growth. A theoretical approach to the culture lag phase through the dynamics of the growth rate allowed us to split this phase into two stages: initial and transition. A continuous mathematical model was built in order to shape the transition stage, and it was checked against INDISIM simulations. It was seen that the lag phase must be regarded as a dynamic process rather than as a simple period of time. The distance functions were also used to discuss the balanced growth conditions. Some of the reported INDISIM simulation results were subjected to experimental corroboration by means of flow cytometry, which allows the assessment of size distributions of a culture through time.
The dynamics of biomass distribution given by INDISIM simulations were checked, as well as the evolution of the distance functions during the different phases of growth. The coincidence between simulations and experiments is not trivial: the system under study is complex; therefore, the coincidence in the dynamics of the different modelled parameters is a validation of both the model and the simulation methodology. Finally, we have made progress in IbM parameter estimation methods, which is essential for improving the quantitative use of INDISIM simulations. Classic grid search, NMTA and NEWUOA methods were adapted and tested, the latter providing the best results in terms of computation time while maintaining satisfactory precision in the parameter estimates. Above all, the validity of INDISIM as a useful tool for tackling transient processes such as the bacterial lag phase has been amply demonstrated.
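A toy individual-based sketch (far simpler than INDISIM) of how a population-level lag can emerge from individual properties: each cell carries its own biomass, grows it every time step and divides only above a threshold, so an inoculum of small cells shows a delay in cell counts before exponential growth sets in. The growth rate, division mass and inoculum distribution are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative parameters (not INDISIM's calibrated values).
growth_rate = 0.02                  # biomass added per unit mass per time step
div_mass = 1.0                      # mass at which a cell divides
n0 = 200                            # inoculum size
m = rng.uniform(0.2, 0.4, n0)       # small initial masses -> a lag is expected

counts = []
for t in range(600):
    m = m * (1.0 + growth_rate)                 # individual biomass growth
    dividing = m >= div_mass
    daughters = m[dividing] / 2.0               # binary fission, equal split
    m = np.concatenate([m[~dividing], daughters, daughters])
    counts.append(m.size)

counts = np.array(counts)
# Crude population-level lag estimate: first time the cell count has doubled.
lag_steps = int(np.argmax(counts >= 2 * n0))
print("population-level lag (time steps):", lag_steps)
```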
234

A Bayesian Approach for Inverse Problems in Synthetic Aperture Radar Imaging

Zhu, Sha 23 October 2012 (has links) (PDF)
Synthetic Aperture Radar (SAR) imaging is a well-known technique in the domains of remote sensing, aerospace surveillance, geography and mapping. To obtain high-resolution images in the presence of noise, it becomes very important to take into account the characteristics of the targets in the observed scene, the various measurement uncertainties and the modeling errors. Conventional imaging methods are based on i) over-simplified scene models, ii) a simplified linear forward model (mathematical relations between the transmitted signals, the received signals and the targets) and iii) a very simple Inverse Fast Fourier Transform (IFFT) to perform the inversion, resulting in low-resolution and noisy images with unsuppressed speckle and high side-lobe artifacts. In this thesis, we propose a Bayesian approach to SAR imaging, which overcomes many drawbacks of the classical methods and yields higher resolution, more stable images and more accurate parameter estimates for target recognition. The proposed unifying approach is applied to inverse problems in mono-, bi- and multi-static SAR imaging, as well as to micromotion target imaging. Appropriate priors for modeling different target scenes, in terms of target feature enhancement during imaging, are proposed. Fast and effective estimation methods with simple and hierarchical priors are developed. The problem of hyperparameter estimation is also handled within this Bayesian framework. Results on synthetic, experimental and real data demonstrate the effectiveness of the proposed approach.
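As a toy illustration of the inverse-problem viewpoint: with a linear forward model, a naive inverse-transform reconstruction amplifies noise wherever the system response is weak, while a Bayesian (MAP) estimate under a simple Gaussian prior remains stable. The 1D band-limited forward model, the scene and the prior below are assumptions for illustration only; the thesis develops richer hierarchical priors, hyperparameter estimation and mono-/bi-/multi-static geometries.

```python
import numpy as np

rng = np.random.default_rng(3)

n = 256
x = np.zeros(n)
x[[40, 90, 140, 200]] = [3.0, 3.0, 5.0, 2.0]   # sparse scene of point scatterers (assumed)

# Linear forward model: a band-limited system response (a crude stand-in for the
# SAR acquisition), applied in the Fourier domain, plus measurement noise.
f = np.fft.fftfreq(n)
H = np.exp(-(f / 0.08) ** 2)                   # smooth low-pass transfer function (assumed)
y = np.real(np.fft.ifft(H * np.fft.fft(x))) + 0.02 * rng.standard_normal(n)

# Naive inversion (divide by the transfer function): amplifies noise where H is small.
x_ifft = np.real(np.fft.ifft(np.fft.fft(y) / np.maximum(H, 1e-6)))

# MAP estimate with a zero-mean Gaussian prior on the scene (a Wiener-type filter).
lam = 1e-2
x_map = np.real(np.fft.ifft(np.conj(H) * np.fft.fft(y) / (np.abs(H) ** 2 + lam)))

print("naive inverse rmse:", np.sqrt(np.mean((x_ifft - x) ** 2)))
print("MAP estimate  rmse:", np.sqrt(np.mean((x_map - x) ** 2)))
```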
235

An Inverse Finite Element Approach for Identifying Forces in Biological Tissues

Cranston, Graham January 2009 (has links)
For centuries physicians, scientists, engineers, mathematicians, and many others have been asking: 'what are the forces that drive tissues in an embryo to their final geometric forms?' At the tissue and whole embryo level, a multitude of very different morphogenetic processes, such as gastrulation and neurulation, are involved. However, at the cellular level, virtually all of these processes are evidently driven by a relatively small number of internal structures, all of whose forces can be resolved into equivalent interfacial tensions γ. Measuring the cell-level forces that drive specific morphogenetic events remains one of the great unsolved problems of biomechanics. Here I present a novel approach that allows these forces to be estimated from time-lapse images. In this approach, the motions of all visible triple junctions formed between trios of cells adjacent to each other in epithelia (2D cell sheets) are tracked in time-lapse images. An existing cell-based Finite Element (FE) model is then used to calculate the viscous forces needed to deform each cell in the observed way. A recursive least squares technique with variable forgetting factors is then used to estimate the interfacial tensions that would have to be present along each cell-cell interface to provide those forces, along with the attendant pressures in each cell. The algorithm is tested extensively using synthetic data from an FE model. Emphasis is placed on features likely to be encountered in data from live tissues during morphogenesis and wound healing. Those features include algorithm stability and tracking despite input noise, interfacial tensions that could change slowly or suddenly, and complications from imaging small regions of a larger epithelial tissue (the frayed boundary problem). Although the basic algorithm is highly sensitive to input noise due to the ill-conditioned nature of the system of equations that must be solved to obtain the interfacial tensions, methods are introduced to improve the resulting force and pressure estimates. The final algorithm returns very good estimates for interfacial tensions and intracellular pressures when used with synthetic data, and it holds great promise for calculating the forces that remodel live tissue.
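The estimation core described above is a recursive least squares update; a generic sketch with exponential forgetting is shown below. The thesis uses variable forgetting factors and cell-based FE regressors, whereas here the regressors, the "tension" vector, the noise level and the sudden change are placeholders for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

n_params = 6                          # e.g. number of interfacial tensions being tracked (assumed)
theta_true = rng.uniform(0.5, 2.0, n_params)

# RLS state: parameter estimate and covariance.
theta = np.zeros(n_params)
P = 1e3 * np.eye(n_params)
lam = 0.98                            # forgetting factor (<1 lets slowly varying tensions be tracked)

for t in range(500):
    if t == 250:                      # a sudden change in one "tension", as discussed above
        theta_true[0] *= 1.5
    phi = rng.standard_normal(n_params)           # regressor (placeholder for FE force coefficients)
    y = phi @ theta_true + 0.05 * rng.standard_normal()

    # Standard RLS update with exponential forgetting.
    k = P @ phi / (lam + phi @ P @ phi)
    theta = theta + k * (y - phi @ theta)
    P = (P - np.outer(k, phi) @ P) / lam

print("true :", np.round(theta_true, 3))
print("est  :", np.round(theta, 3))
```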
236

Performance comparison of the Extended Kalman Filter and the Recursive Prediction Error Method / Jämförelse mellan Extended Kalmanfiltret och den Rekursiva Prediktionsfelsmetoden

Wiklander, Jonas January 2003 (has links)
In several projects within ABB there is a need for state and parameter estimation in nonlinear dynamic systems. One example is a project investigating optimisation of gas turbine operation. In a gas turbine there are several parameters and states which are not measured but are crucial for the performance, such as polytropic efficiencies in compressor and turbine stages, cooling mass flows, friction coefficients and temperatures. Different methods are being tested to solve this problem of system identification or parameter estimation. This thesis describes the implementation of such a method and compares it with previously implemented identification methods. The comparison is carried out in the context of parameter estimation in gas turbine models, a dynamic load model used in power systems, as well as models of other dynamic systems. Both simulated and real plant measurements are used in the study.
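A minimal sketch of an extended Kalman filter doing joint state and parameter estimation on a toy scalar nonlinear system, with the unknown coefficient appended to the state vector. The model, noise levels and excitation are assumptions for illustration and are unrelated to an actual gas turbine or load model.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy nonlinear system with an unknown parameter a (illustrative only):
#   x[k+1] = a * sin(x[k]) + u[k] + w[k],   y[k] = x[k] + v[k]
a_true, q, r = 0.8, 1e-3, 1e-2
T = 400
u = 0.5 * np.sin(0.05 * np.arange(T))            # known excitation

# Augmented state z = [x, a]; the parameter is modelled as a (nearly) constant state.
z = np.array([0.0, 0.3])                         # initial guess, with a deliberately wrong
P = np.diag([1.0, 1.0])
Q = np.diag([q, 1e-6])
R = np.array([[r]])
H = np.array([[1.0, 0.0]])

x_true = 0.0
for k in range(T):
    x_true = a_true * np.sin(x_true) + u[k] + np.sqrt(q) * rng.standard_normal()
    y = x_true + np.sqrt(r) * rng.standard_normal()

    # EKF prediction using the Jacobian of the augmented dynamics.
    x_pred = z[1] * np.sin(z[0]) + u[k]
    F = np.array([[z[1] * np.cos(z[0]), np.sin(z[0])],
                  [0.0, 1.0]])
    z = np.array([x_pred, z[1]])
    P = F @ P @ F.T + Q

    # Measurement update.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    z = z + (K @ (np.array([[y]]) - H @ z.reshape(2, 1))).ravel()
    P = (np.eye(2) - K @ H) @ P

print(f"true a = {a_true}, EKF estimate of a = {z[1]:.3f}")
```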
237

Fysikalisk modellering av klimat i entreprenadmaskin / Physical Modeling of Climate in Construction Vehicles

Nilsson, Sebastian January 2005 (has links)
This master's thesis concerns a modeling project performed at Volvo Technology in Gothenburg, Sweden. The main purpose of the project has been to develop a physical model of the climate in construction vehicles that can later be used in the development of an electronic climate controller. The focus of the work has been on one type of wheel loader and one type of excavator. The temperature inside the compartment is taken as the working definition of climate. Based on physical theories of air flow and heat transfer, relations between the components of the climate unit and the compartment have been derived. Parameters with unknown values have been estimated. The relations have then been implemented in the modeling tool Simulink. The model has been validated by comparing measured data with modeled values, using the root mean square error and the correlation. A sensitivity analysis has been performed by varying the estimated parameters and observing the change in the output signal, i.e. the temperature of the compartment. The validation has shown that the factor with the greatest influence on the temperature in the vehicle is the airflow through the climate unit and the outlets; minor changes in airflow result in major changes in temperature. The validation principally shows that the model gives a good estimate of the temperature in the compartment. The static values of the model differ from the measured data but are regarded as being within an acceptable margin of error. The weakness of the model is mainly its prediction of the dynamics, which does not agree satisfactorily with the data.
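A lumped-parameter sketch of the kind of relation described above: the compartment is treated as a single thermal mass exchanging heat with the climate-unit airflow and, through the walls, with the outdoor air; increasing the airflow pulls the steady-state temperature toward the supply-air temperature, which is consistent with the reported sensitivity to airflow. All numerical values are assumptions, not data from the project.

```python
import numpy as np

# Lumped compartment model: C * dT/dt = m_dot*cp*(T_supply - T) + UA*(T_ambient - T)
cp = 1005.0                        # specific heat of air (J/kg.K)
C = 5.0e4                          # thermal capacitance of the compartment (J/K), assumed
UA = 25.0                          # wall heat-loss coefficient (W/K), assumed
T_supply, T_ambient = 40.0, -5.0   # supply air and outdoor temperatures (deg C), assumed

def simulate(m_dot, T0=-5.0, dt=1.0, t_end=3600.0):
    """Explicit Euler integration of the compartment temperature for a given airflow (kg/s)."""
    T = T0
    for _ in range(int(t_end / dt)):
        dTdt = (m_dot * cp * (T_supply - T) + UA * (T_ambient - T)) / C
        T += dt * dTdt
    return T

for m_dot in (0.02, 0.05, 0.10):
    print(f"airflow {m_dot:.2f} kg/s -> compartment temperature after 1 h: {simulate(m_dot):.1f} C")
```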
239

Enhancement of Modeling Phased Anaerobic Digestion Systems through Investigation of Their Microbial Ecology and Biological Activity

Zamanzadeh, Mirzaman January 2012 (has links)
Anaerobic digestion (AD) is widely used in wastewater treatment plants for stabilisation of primary and waste activated sludges. Increasing energy prices as well as stringent environmental and public health regulations ensure the ongoing popularity of anaerobic digestion. Reduction of volatile solids, methane production and pathogen reduction are the major objectives of anaerobic digestion. Phased anaerobic digestion is a promising technology that may allow improved volatile solids destruction and methane gas production. In AD models, microbially-mediated processes are described by functionally-grouped microorganisms. Ignoring the presence of functionally-different species in the separate phases may influence the output of AD modeling. The objective of this research was to thoroughly investigate the kinetics of hydrolysis, acetogenesis (i.e., propionate oxidation) and methanogenesis (i.e., acetoclastic) in phased anaerobic digestion systems. Using a denaturing gradient gel electrophoresis (DGGE) technique, bacterial and archaeal communities were compared to complement the kinetics studies. Four phased digester systems, Mesophilic-Mesophilic, Thermophilic-Mesophilic, Thermophilic-Thermophilic and Mesophilic-Thermophilic, were employed to investigate the influence of phase separation and temperature on the microbial activity of the digestion systems. Two more digesters were used as controls, one at mesophilic (35 °C, C1) and one at thermophilic (55 °C, C2) temperature. The HRTs in the first-phase, second-phase and single-phase digesters were approximately 3.5, 14, and 17 days, respectively. All the digesters were fed a mixture of primary and secondary sludges. Following achievement of steady state in the digesters, a series of batch experiments was conducted off-line to study the impact of the digester conditions on the kinetics of the above-mentioned processes. A Monod-type equation was used to study the kinetics of the acetoclastic methanogens and the propionate-oxidizing bacteria (POB) in the digesters, while a first-order model was used for the investigation of hydrolysis kinetics. Application of an elevated temperature (55 °C) in the first phase was found to be effective in enhancing solubilisation of particulate organics. This improvement was more significant for nitrogen-containing material (28%) as compared to the PCOD removal (5%) when the M1 and T1 digesters were compared. Among all the configurations, the highest PCOD removal was achieved in the T1T2 system (p-value < 0.05). In contrast to the solubilisation efficiencies, the mesophilic digesters (C1, M1M2 and T1M3) outperformed the thermophilic digesters (C2, T1T2 and M1T3) in COD removal. The highest COD removal was obtained in the T1M3 digestion system, with a COD removal efficiency of 50.7±2.1%. The DGGE fingerprints from the digesters demonstrated that the digester parameters (i.e., phase separation and temperature) influenced the structure of the bacterial and archaeal communities. This resulted in distinct clustering of the DGGE profiles from the 1st-phase digesters as compared to the 2nd-phase digesters, and from the mesophilic digesters as compared to the thermophilic ones. Based on the bio-kinetic parameters estimated for the various digesters and analysis of the confidence regions of the kinetic sets (kmax and Ks), the batch experiments revealed that the kinetic characteristics of the acetoclastic methanogens and POB developed in the heavily loaded digesters (M1 and T1) were different from those of the species developed in the remaining mesophilic digesters (M2, M3 and C1).
As with the mesophilic digesters, a similar observation was made for the thermophilic digesters. The species of acetoclastic methanogens and POB within the T1 digester had greater kmax and Ks values in comparison to those of the T3 and C2 digesters. However, the bio-kinetic parameters of the T2 digester showed a confidence region that overlapped with both the T1 and T3 digesters. The acetate and propionate concentrations in the digesters supported these results. The acetate and propionate concentrations in the M1 digester were, respectively, 338±48 and 219±17 mgCOD/L, while those of the M2, M3 and C1 digesters were less than 60 mg/L as COD. The acetate and propionate concentrations were, respectively, 872±38 and 1220±66 mg/L as COD in the T1 digester, whereas their concentrations ranged from 140-184 and 209-309 mg/L as COD in the T2, T3 and C2 digesters. In addition, the DGGE results provided further evidence of the differing microbial communities in the 1st- and 2nd-phase digesters. Two first-order hydrolysis models (single- and dual-pathway) were employed to study the hydrolysis process in the phased and single-stage digesters. The results demonstrated that the dual-pathway hydrolysis model fit the particulate COD solubilisation data better than the single-pathway model. The slowly (F0,s) and rapidly (F0,r) hydrolysable fractions of the raw sludge were 36% and 25%, respectively. A comparison of the estimated coefficients for the mesophilic digesters revealed that the hydrolysis coefficients (both Khyd,s and Khyd,r) of the M1 digester were greater than those of the M2 and M3 digesters. In the thermophilic digesters it was observed that the Khyd,r value of the T1 digester differed from those of the T2, T3 and C2 digesters, whereas the hydrolysis rate of slowly hydrolysable matter (i.e., Khyd,s) did not differ significantly among these digesters. The influence of facultative bacteria originating from the WAS fraction of the raw sludge, and/or the presence of hydrolytic biomass with different enzymatic systems, may have contributed to the hydrolysis rates in the M1 and T1 digesters differing from those of the corresponding mesophilic (i.e., M2 and M3) and thermophilic (i.e., T2 and T3) 2nd-phase digesters.
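For reference, the dual-pathway hydrolysis model mentioned above treats the degradable particulate COD as a rapidly and a slowly hydrolysable pool, each disappearing by first-order kinetics. The sketch below simulates such data and recovers the coefficients by nonlinear least squares; only the fractions (25% rapidly and 36% slowly hydrolysable) are taken from the text, while the rate constants, sampling times and noise are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(6)

def dual_pathway(t, F0_r, F0_s, k_r, k_s):
    """Cumulative fraction of particulate COD solubilised at time t (two first-order pools)."""
    return F0_r * (1.0 - np.exp(-k_r * t)) + F0_s * (1.0 - np.exp(-k_s * t))

# Fractions from the study; rate constants and noise are illustrative assumptions.
F0_r_true, F0_s_true = 0.25, 0.36
k_r_true, k_s_true = 0.8, 0.08            # 1/day
t = np.linspace(0.0, 30.0, 25)            # batch sampling times (days)
obs = dual_pathway(t, F0_r_true, F0_s_true, k_r_true, k_s_true) + 0.01 * rng.standard_normal(t.size)

popt, _ = curve_fit(dual_pathway, t, obs, p0=[0.2, 0.3, 0.5, 0.05],
                    bounds=(0.0, [1.0, 1.0, 5.0, 1.0]))
print("fitted F0_r, F0_s, k_r, k_s:", np.round(popt, 3))
```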
240

Estimation of Stochastic Degradation Models Using Uncertain Inspection Data

Lu, Dongliang January 2012 (has links)
Degradation of components and structures is a major threat to the safety and reliability of large engineering systems, such as railway networks or nuclear power plants. Periodic inspection and maintenance are thus required to ensure that the system is in good condition for continued service. A key element of optimal inspection and maintenance is to accurately model and forecast the degradation progress, such that inspection and preventive maintenance can be scheduled accordingly. In recent years, probabilistic models based on stochastic processes have become increasingly popular in degradation modelling, due to their flexibility in modelling both the temporal and sample uncertainties of the degradation. However, because of the often complex structure of stochastic degradation models, accurate estimation of the model parameters can be quite difficult, especially when the inspection data are noisy or incomplete. Not considering the effect of uncertain inspection data is likely to result in biased parameter estimates and therefore erroneous predictions of future degradation. The main objective of the thesis is to develop formal methods for the parameter estimation of stochastic degradation models using uncertain inspection data. Three typical stochastic models are considered: the random rate model, the gamma process model and the Poisson process model, among which the random rate model and the gamma process model are used to model flaw growth, and the Poisson process model is used to model flaw generation. Likelihood functions of the three stochastic models given noisy or incomplete inspection data are derived, from which maximum likelihood estimates can be obtained. The thesis also investigates Bayesian inference of the stochastic degradation models. The most notable advantage of Bayesian inference over classical point estimates is its ability to incorporate background information in the estimation process, which is especially useful when inspection data are scarce. A major obstacle to accurate parameter inference of stochastic models from uncertain inspection data is the computational difficulty of the likelihood evaluation, as it often involves calculation of high-dimensional integrals or a large number of convolutions. To overcome these computational difficulties, a number of numerical methods are developed in the thesis. For example, for the gamma process model subject to sizing error, an efficient maximum likelihood method is developed using the Genz's transform and quasi-Monte Carlo simulation. A Markov Chain Monte Carlo simulation with sizing errors as auxiliary variables is developed for the Poisson flaw generation model. A sequential Bayesian updating scheme using approximate Bayesian computation and weighted samples is also developed for Bayesian inference of the gamma process subject to sizing error. Examples on the degradation of nuclear power plant components are presented to illustrate the use of the stochastic degradation models with practical uncertain inspection data. It is shown from the examples that the proposed methods are very effective in terms of accuracy and computational efficiency.
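A sketch of the stationary gamma process degradation model referred to above: degradation increments over an interval dt are independent gamma random variables with shape proportional to dt, and the parameters can be fitted by maximum likelihood from observed increments. The sketch assumes noise-free inspections and illustrative parameter values; accounting for sizing error, as the thesis does, requires the considerably more involved likelihood.

```python
import numpy as np
from scipy.stats import gamma
from scipy.optimize import minimize

rng = np.random.default_rng(7)

# Stationary gamma process: increments over dt ~ Gamma(shape=alpha*dt, scale=beta).
alpha_true, beta_true = 0.5, 0.2        # assumed degradation rate parameters
n_comp, n_insp, dt = 40, 10, 2.0        # components, inspections per component, interval (years)

increments = rng.gamma(alpha_true * dt, beta_true, size=(n_comp, n_insp))

def neg_log_lik(params):
    alpha, beta = np.exp(params)        # optimise on the log scale to keep parameters positive
    return -np.sum(gamma.logpdf(increments, a=alpha * dt, scale=beta))

res = minimize(neg_log_lik, x0=np.log([1.0, 1.0]), method="Nelder-Mead")
alpha_hat, beta_hat = np.exp(res.x)
print(f"true (alpha, beta) = ({alpha_true}, {beta_true}), "
      f"MLE = ({alpha_hat:.3f}, {beta_hat:.3f})")
```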
