91 |
Parameter Estimation and Uncertainty Analysis of Contaminant First Arrival Times at Household Drinking Water Wells
Kang, Mary, January 2007 (has links)
Exposure assessment, which is an investigation of the extent of human exposure to a specific contaminant, must include estimates of the duration and frequency of exposure. For a groundwater system, the duration of exposure is controlled largely by the arrival time of the contaminant of concern at a drinking water well. This arrival time, which is normally estimated by using groundwater flow and transport models, can have a range of possible values due to the uncertainties that are typically present in real problems. Earlier arrival times generally represent low likelihood events, but play a crucial role in the decision-making process that must be conservative and precautionary, especially when evaluating the potential for adverse health impacts. Therefore, an emphasis must be placed on the accuracy of the leading tail region in the likelihood distribution of possible arrival times.
To demonstrate an approach to quantify the uncertainty of arrival times, a real contaminant transport problem which involves TCE contamination due to releases from the Lockformer Company Facility in Lisle, Illinois is used. The approach used in this research consists of two major components: inverse modelling or parameter estimation, and uncertainty analysis.
The parameter estimation process for this case study was designed around insufficiencies in the model and observational data arising from errors, biases, and limitations. Its purpose, which is to aid in characterising uncertainty, was also considered by including many possible variations in an attempt to minimize assumptions. A preliminary investigation was conducted using a well-accepted parameter estimation method, PEST, and the corresponding findings were used to define the characteristics of the parameter estimation process applied to this case study. Numerous objective functions, including the well-known L2-estimator, robust estimators (L1-estimators and M-estimators), penalty functions, and deadzones, were incorporated in the parameter estimation process to treat specific insufficiencies. The concept of equifinality was adopted, and multiple maximum likelihood parameter sets were accepted if pre-defined physical criteria were met. For each objective function, three procedures were implemented as part of the parameter estimation approach for the given case study: a multistart procedure, a stochastic search using the Dynamically-Dimensioned Search (DDS), and a test for acceptance based on the predefined physical criteria. The best performance, in terms of the ability of parameter sets to satisfy the physical criteria, was achieved using a Cauchy M-estimator that was modified for this study and designated the LRS1 M-estimator. Due to uncertainties, multiple parameter sets obtained with the LRS1 M-estimator, the L1-estimator, and the L2-estimator are recommended for use in uncertainty analysis. Penalty functions had to be incorporated into the objective function definitions to generate a sufficient number of acceptable parameter sets; in contrast, deadzones produced negligible benefits.
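As a rough illustration of the estimator families compared above, the sketch below contrasts the L2- and L1-estimators with a standard Cauchy M-estimator on a residual vector containing one gross outlier. The LRS1 M-estimator is a modification of the Cauchy form specific to the thesis and is not reproduced here; the tuning constant and residuals are illustrative assumptions.

```python
import numpy as np

def l2_objective(residuals):
    # Classical least-squares objective: the outlier dominates the sum.
    return np.sum(residuals ** 2)

def l1_objective(residuals):
    # L1-estimator: linear growth limits the influence of large residuals.
    return np.sum(np.abs(residuals))

def cauchy_objective(residuals, c=2.385):
    # Standard Cauchy M-estimator (c = 2.385 is a common tuning constant);
    # the LRS1 M-estimator in the thesis is a modified form of this idea.
    return np.sum(np.log(1.0 + (residuals / c) ** 2))

residuals = np.array([0.1, -0.3, 0.2, 5.0])  # one gross outlier
print(l2_objective(residuals), l1_objective(residuals), cauchy_objective(residuals))
```

The logarithmic growth of the Cauchy objective is what makes outlying observations far less able to distort the fitted parameters than under the L2-estimator, which is why robust estimators suit data with errors and biases.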
The characteristics of the parameter sets were examined in terms of frequency histograms and plots of parameter value versus objective function value to infer the nature of the likelihood distributions of the parameters. The correlation structure was estimated using Pearson's product-moment correlation coefficient. In the parameter space that results after the multistart procedure, the parameters are generally distributed uniformly or appear random, with few correlations. The execution of the search procedure introduces many correlations and produces parameter distributions that appear lognormal, normal, or uniform. The application of the physical criteria refines the parameter characteristics in the parameter space resulting from the search procedure by reducing anomalies. The combined effect of optimization and the application of the physical criteria acts as a behavioural threshold by removing parameter sets with high objective function values.
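The correlation analysis described above can be sketched on a hypothetical ensemble of accepted parameter sets; the array shape and the induced dependence are illustrative only, not the case study's values.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical ensemble: 500 accepted parameter sets, 3 parameters each.
sets = rng.uniform(size=(500, 3))
# Induce a dependence between parameters 0 and 2, as a search procedure might.
sets[:, 2] = 0.8 * sets[:, 0] + 0.2 * rng.uniform(size=500)

# Pearson's product-moment correlation matrix (parameters along columns).
corr = np.corrcoef(sets, rowvar=False)
```

Off-diagonal entries near zero indicate the roughly independent structure seen after the multistart procedure; entries near one reflect the correlations introduced by the search.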
Uncertainty analysis is performed with parameter sets obtained through two different sampling methodologies: the Monte Carlo sampling methodology, which randomly and independently samples from user-defined distributions, and the physically-based DDS-AU (P-DDS-AU) sampling methodology, which is developed from the multiple parameter sets acquired during the parameter estimation process. Monte Carlo samples are found to be inadequate for uncertainty analysis of this case study due to their inability to find parameter sets that meet the predefined physical criteria. Successful results are achieved using the P-DDS-AU sampling methodology, which inherently accounts for parameter correlations and does not require assumptions regarding parameter distributions. For the P-DDS-AU samples, uncertainty representation is performed using four definitions based on pseudo-likelihoods: two based on the Nash-Sutcliffe efficiency criterion, and two based on inverse error or residual variance. The definitions contain shaping factors that strongly affect the resulting likelihood distribution, and some definitions are also affected by the objective function definition. Therefore, all variations are considered in the development of likelihood distribution envelopes, which are designed to maximize the amount of information available to decision-makers. The considerations that are important to the creation of an uncertainty envelope are outlined in this thesis. In general, greater uncertainty appears to be present at the tails of the distribution. For a refinement of the uncertainty envelopes, the application of additional physical criteria is recommended.
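A minimal sketch of one of the pseudo-likelihood families named above: a GLUE-style weighting built on the Nash-Sutcliffe efficiency with a shaping factor. The clipping of non-behavioural sets and the specific shaping value are assumptions of this sketch, not the thesis's exact definitions.

```python
import numpy as np

def nse(obs, sim):
    # Nash-Sutcliffe efficiency: 1 for a perfect fit; <= 0 means the model
    # predicts no better than the mean of the observations.
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def pseudo_likelihood(obs, sims, shaping_factor=1.0):
    # Raise each parameter set's efficiency to a shaping factor and normalize
    # over the ensemble; larger factors concentrate weight on the best fits.
    eff = np.array([max(nse(obs, s), 0.0) for s in sims])  # clip non-behavioural sets
    weights = eff ** shaping_factor
    return weights / weights.sum()

obs = np.array([1.0, 2.0, 3.0, 4.0])
sims = [np.array([1.1, 2.0, 2.9, 4.1]),   # close fit
        np.array([2.0, 2.0, 2.0, 2.0])]   # mean-only prediction, NSE below 0
w = pseudo_likelihood(obs, sims, shaping_factor=2.0)
```

Because the shaping factor reshapes the whole weight distribution, envelopes built over several factor values convey more information to decision-makers than any single choice.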
The selection of likelihood and objective function definitions and their properties is made based on the needs of the problem; therefore, preliminary investigations should always be conducted to provide a basis for selecting appropriate methods and definitions. It is imperative to remember that communicating the assumptions and definitions used in both parameter estimation and uncertainty analysis is crucial in decision-making scenarios.
|
92 |
Constrained expectation-maximization (EM), dynamic analysis, linear quadratic tracking, and nonlinear constrained expectation-maximization for the analysis of genetic regulatory networks and signal transduction networks
Xiong, Hao, 15 May 2009 (has links)
Despite the immense progress made by molecular biology in cataloging and characterizing molecular elements of life, and the success in genome sequencing, there have not been comparable advances in the functional study of complex phenotypes. This is because isolated study of one molecule, or one gene, at a time is not enough by itself to characterize the complex interactions in an organism and to explain the functions that arise out of these interactions. Mathematical modeling of biological systems is one way to meet the challenge. My research formulates the modeling of gene regulation as a control problem and applies systems and control theory to the identification, analysis, and optimal control of genetic regulatory networks. The major contributions of my work include biologically constrained estimation, dynamical analysis, and optimal control of genetic networks. In addition, parameter estimation of nonlinear models of biological networks is also studied, as a parameter estimation problem of a general nonlinear dynamical system. Results demonstrate the superior predictive power of biologically constrained state-space models, and that genetic networks can have differential dynamic properties when subjected to different environmental perturbations. Application of optimal control demonstrates the feasibility of regulating gene expression levels. For the difficult problem of parameter estimation, a generalized EM algorithm is deployed, and a set of explicit formulas based on the extended Kalman filter is derived. Application of the method to synthetic and real-world data shows promising results.
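The Kalman-filter route to parameter estimation mentioned above can be sketched on a toy scalar system: the state is augmented with the unknown coefficient a and both are filtered jointly with an extended Kalman filter. The model x_{k+1} = a*x_k + u + noise and every number below are illustrative assumptions, not the thesis's networks or formulas.

```python
import numpy as np

def ekf_param_step(z, state, P, q, r, u=1.0):
    # One predict/update cycle of an extended Kalman filter on the augmented
    # state [x, a], where x_{k+1} = a * x_k + u + noise and z_k = x_k + noise.
    x, a = state
    F = np.array([[a, x],            # Jacobian of f([x, a]) = [a*x + u, a]
                  [0.0, 1.0]])
    state_pred = np.array([a * x + u, a])
    P_pred = F @ P @ F.T + np.diag([q, 1e-6])  # tiny noise keeps 'a' adaptable
    H = np.array([[1.0, 0.0]])                 # only x is observed
    S = H @ P_pred @ H.T + r
    K = (P_pred @ H.T) / S
    state_new = state_pred + (K * (z - state_pred[0])).ravel()
    P_new = (np.eye(2) - K @ H) @ P_pred
    return state_new, P_new

# Simulate from the true coefficient a = 0.8 and filter with a wrong prior.
rng = np.random.default_rng(0)
true_a, x = 0.8, 1.0
state, P = np.array([1.0, 0.5]), np.eye(2)   # initial guess a = 0.5
for _ in range(200):
    x = true_a * x + 1.0 + rng.normal(scale=0.05)
    z = x + rng.normal(scale=0.05)
    state, P = ekf_param_step(z, state, P, q=0.05 ** 2, r=0.05 ** 2)
```

In an EM setting, filtered and smoothed moments like these feed the E-step, while the M-step re-estimates the model coefficients; the sketch shows only the filtering ingredient.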
|
93 |
Parameter Estimation of Dynamic Air-conditioning Component Models Using Limited Sensor Data
Hariharan, Natarajkumar, May 2010 (has links)
This thesis presents an approach for identifying critical model parameters in dynamic air-conditioning systems using limited sensor information. The expansion valve model and the compressor model parameters play a crucial role in the system model's accuracy. In the past, these parameters have been estimated using a mass flow meter; however, this is an expensive device and, at times, impractical. In response to these constraints, a novel method to estimate the unknown parameters of the expansion valve model and the compressor model is developed. A gray-box model obtained by augmenting the expansion valve model, the evaporator model, and the compressor model is used. Two numerical search algorithms, nonlinear least squares and Simplex search, are used to estimate the parameters of the expansion valve model and the compressor model. This parameter estimation is done by minimizing the error between the model output and the experimental system's output. Results demonstrate that the nonlinear least squares algorithm was more robust for this estimation problem than the Simplex search algorithm.
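The two numerical searches compared above can be sketched on a hypothetical two-parameter valve-style model, mdot = C * sqrt(dP) + b, which stands in for the thesis's gray-box model (not reproduced here); the data are synthetic and noise-free so the example stays deterministic.

```python
import numpy as np
from scipy.optimize import least_squares, minimize

def model(params, dP):
    # Hypothetical orifice-style mass-flow model: mdot = C * sqrt(dP) + b.
    C, b = params
    return C * np.sqrt(dP) + b

dP = np.linspace(1.0, 10.0, 20)
true = np.array([0.7, 0.2])
data = model(true, dP)           # synthetic "measurements"

def residuals(params):
    return model(params, dP) - data

# Nonlinear least squares: exploits the residual structure and Jacobians.
fit_ls = least_squares(residuals, x0=[1.0, 0.0])

# Simplex (Nelder-Mead): derivative-free search on the scalar sum of squares.
fit_nm = minimize(lambda p: np.sum(residuals(p) ** 2), x0=[1.0, 0.0],
                  method="Nelder-Mead")
```

Both searches minimize the same model-versus-measurement error; their differing use of residual structure is one reason robustness can differ on harder, noisier problems like the one studied here.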
In this thesis, two types of expansion valves, the Electronic Expansion Valve and the Thermostatic Expansion Valve, are considered. The Electronic Expansion Valve model is static because its dynamics are much faster than the system's dynamics; the Thermostatic Expansion Valve model, however, is dynamic. The parameter estimation algorithm developed is validated on two different experimental systems to confirm the practicality of the approach. Knowing the model parameters accurately can lead to a better model for control and fault detection applications. In addition to parameter estimation, this thesis also provides and validates a simple, usable mathematical model for the Thermostatic Expansion Valve.
|
94 |
Estimation of critical forest structure metrics through the spatial analysis of airborne laser scanner data
Andersen, Hans-Erik, January 2003 (has links)
Thesis (Ph. D.)--University of Washington, 2003. / Vita. Includes bibliographical references (p. 152-162).
|
95 |
Semiparametric analysis of interval censored survival data
Long, Yongxian (龙泳先), January 2010 (has links)
published_or_final_version / Statistics and Actuarial Science / Master / Master of Philosophy
|
96 |
Multiple comparison and selection of location parameters of exponential populations
Ng, Cheuk-key, Allen (吳焯基), January 1990 (has links)
published_or_final_version / Statistics / Doctoral / Doctor of Philosophy
|
97 |
Generic Wind Turbine Generator Model Comparison Based on Optimal Parameter Fitting
Dai, Zhen, 18 March 2014 (has links)
Parameter fitting will facilitate model validation of the generic dynamic model for type-3 WTGs. In this thesis, a test system including a single 1.5 MW DFIG has been built and tested in the PSCAD/EMTDC environment for dynamic responses. The data generated during these tests have been used as measurements for the parameter fitting, which is carried out using the unscented Kalman filter. Two variations of the generic type-3 WTG model (the detailed model and the simplified model) have been compared and used for parameter estimation. The detailed model is able to capture the dynamics caused by the converter and thus has been used for parameter fitting when inputs are from a fault scenario. On the other hand, the simplified model works well for parameter fitting when a wind speed disturbance is of interest. Given measurements from PSCAD, the estimated parameters using both models are indeed improvements over the original belief of the parameters in terms of prediction error.
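The unscented-Kalman-filter parameter fitting described above can be sketched in scalar form: the unknown parameter is treated as a random-walk state observed through a nonlinear response model. The exponential response and all numbers below are illustrative stand-ins, not the WTG model.

```python
import numpy as np

def ukf_parameter_update(theta, P, y, h, R, Q, kappa=2.0):
    # One UKF step treating the scalar parameter theta as a random-walk
    # state observed through the nonlinear measurement model h(theta).
    P = P + Q                                         # random-walk prediction
    s = np.sqrt((1.0 + kappa) * P)
    sigma = np.array([theta, theta + s, theta - s])   # sigma points (n = 1)
    w = np.array([kappa, 0.5, 0.5]) / (1.0 + kappa)   # sigma-point weights
    Y = np.array([h(x) for x in sigma])               # propagate through h
    y_mean = w @ Y
    Pyy = w @ (Y - y_mean) ** 2 + R                   # innovation variance
    Pxy = w @ ((sigma - theta) * (Y - y_mean))        # cross covariance
    K = Pxy / Pyy                                     # Kalman gain
    return theta + K * (y - y_mean), P - K * Pyy * K

# Toy stand-in for a measured dynamic response: y(t) = exp(-theta * t).
true_theta = 0.5
theta, P = 1.0, 1.0                                   # wrong initial belief
for t in np.linspace(0.5, 5.0, 10):
    y = np.exp(-true_theta * t)                       # noise-free measurement
    theta, P = ukf_parameter_update(theta, P, y,
                                    h=lambda x, t=t: np.exp(-x * t),
                                    R=1e-6, Q=1e-4)
```

Because the sigma points are propagated through the model itself, no Jacobian of the WTG dynamics is needed, which is one reason the UKF suits fitting simulated fault and wind-disturbance responses.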
|
98 |
Parameters related to fractional domination in graphs.
Erwin, D. J., January 1995 (has links)
The use of characteristic functions to represent well-known sets in graph theory, such as dominating, irredundant, independent, covering, and packing sets, leads naturally to fractional versions of these sets and corresponding fractional parameters. Let S be a dominating set of a graph G and f : V(G) → {0,1} the characteristic function of that set. By first translating the restrictions which define a dominating set from a set-based to a function-based form, and then allowing the function f to map the vertex set to the closed unit interval, we obtain the fractional generalisation of the dominating set S. In chapter 1, known domination-related parameters and their fractional generalisations are introduced, relations between them are investigated, and Gallai-type results are derived. Particular attention is given to graphs with symmetry and to products of graphs. If, instead of replacing the function f : V(G) → {0,1} with a function which maps the vertex set to the closed unit interval, we introduce a function f' which maps the vertex set to {0, 1, ..., k} (where k is some fixed, non-negative integer), with a corresponding change in the restrictions on the dominating set, we obtain a k-dominating function. In chapter 2 the corresponding k-parameters are considered and are related to the classical and fractional parameters. The calculations of some well-known fractional parameters are expressed as optimization problems involving the k-parameters. An e = 1 function is a function f : V(G) → [0,1] which obeys the restrictions that (i) every non-isolated vertex u is adjacent to some vertex v such that f(u) + f(v) = 1, and (ii) every isolated vertex w has f(w) = 1. In chapter 3 a theory of e = 1 functions and parameters is developed. Relationships are traced between e = 1 parameters and those previously introduced, some Gallai-type results are derived for the e = 1 parameters, and e = 1 parameters are determined for several classes of graphs. The e = 1 theory is applied to derive new results about classical and fractional domination parameters. / Thesis (M.Sc.)-University of Natal, 1995.
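The fractional relaxation described above is a linear program: minimize the total vertex weight subject to every closed neighbourhood receiving weight at least 1. A minimal sketch, using the 4-cycle, where placing weight 1/3 on every vertex is optimal:

```python
import numpy as np
from scipy.optimize import linprog

def fractional_domination_number(adj):
    # gamma_f(G): minimize sum f(v) subject to f(N[v]) >= 1 for every vertex v
    # and 0 <= f(v) <= 1 -- the LP relaxation of the 0/1 dominating-set problem.
    n = len(adj)
    closed = adj + np.eye(n)                        # closed-neighbourhood matrix
    res = linprog(c=np.ones(n),                     # minimize total weight
                  A_ub=-closed, b_ub=-np.ones(n),   # coverage constraints
                  bounds=[(0.0, 1.0)] * n, method="highs")
    return res.fun

# The 4-cycle C4: f(v) = 1/3 on every vertex gives gamma_f(C4) = 4/3,
# strictly below the integer domination number of 2.
c4 = np.array([[0, 1, 0, 1],
               [1, 0, 1, 0],
               [0, 1, 0, 1],
               [1, 0, 1, 0]], dtype=float)
```

Restricting the variables to multiples of 1/k instead of the full interval recovers the k-domination viewpoint of chapter 2 as an integer-scaled variant of the same program.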
|
100 |
The Application of Markov Chain Monte Carlo Techniques in Non-Linear Parameter Estimation for Chemical Engineering Models
Mathew, Manoj, January 2013 (has links)
Modeling of chemical engineering systems often necessitates using non-linear models. These models can range in complexity from a simple analytical equation to a system of differential equations. Regardless of what type of model is being utilized, determining parameter estimates is essential in everyday chemical engineering practice. One promising approach to non-linear regression is a technique called Markov Chain Monte Carlo (MCMC). This method produces reliable parameter estimates and generates joint confidence regions (JCRs) with correct shape and correct probability content. Despite these advantages, its application in the chemical engineering literature has been limited. Therefore, in this project, MCMC methods were applied to a variety of chemical engineering models. The objectives of this research are to (1) illustrate how to implement MCMC methods in complex non-linear models, (2) show the advantages of using MCMC techniques over classical regression approaches, and (3) provide practical guidelines on how to reduce the computational time.
MCMC methods were first applied to the biological oxygen demand (BOD) problem. In this case study, an implementation procedure was outlined using specific examples from the BOD problem. The results from the study illustrated the importance of estimating the pure error variance as a parameter rather than fixing its value based on the mean square error. In addition, a comparison was carried out between the MCMC results and the results obtained from using classical regression approaches. The findings show that although similar point estimates are obtained, JCRs generated from approximation methods cannot model the parameter uncertainty adequately.
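A minimal random-walk Metropolis sketch in the spirit of the BOD case study, using the classic two-parameter model y = θ1(1 − exp(−θ2 t)) and sampling the error variance (via log σ) rather than fixing it at the mean square error; the synthetic data, flat priors, and proposal scales are illustrative assumptions, not the thesis's settings.

```python
import numpy as np

rng = np.random.default_rng(1)

def bod_model(theta, t):
    # Classic BOD model: y = theta1 * (1 - exp(-theta2 * t)).
    return theta[0] * (1.0 - np.exp(-theta[1] * t))

t = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 7.0])
y = bod_model([20.0, 0.24], t) + rng.normal(scale=0.5, size=t.size)  # synthetic data

def log_post(params):
    # Log posterior with flat priors on plausible ranges; the pure error
    # variance enters as a sampled parameter through log_sigma.
    theta1, theta2, log_sigma = params
    if not (0.0 < theta1 < 100.0 and 0.0 < theta2 < 5.0):
        return -np.inf
    sigma = np.exp(log_sigma)
    resid = y - bod_model([theta1, theta2], t)
    return -t.size * log_sigma - 0.5 * np.sum(resid ** 2) / sigma ** 2

# Random-walk Metropolis.
chain = np.empty((20000, 3))
current = np.array([10.0, 0.5, 0.0])     # deliberately poor starting point
lp = log_post(current)
for i in range(chain.shape[0]):
    proposal = current + rng.normal(scale=[0.5, 0.02, 0.1])
    lp_prop = log_post(proposal)
    if np.log(rng.uniform()) < lp_prop - lp:
        current, lp = proposal, lp_prop
    chain[i] = current
posterior = chain[5000:]                 # discard burn-in
```

Scatter plots of the retained (θ1, θ2) samples then give JCRs with the correct shape directly, rather than the ellipsoidal approximation of classical regression.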
Markov Chain Monte Carlo techniques were then applied to estimating reactivity ratios in the Mayo-Lewis model, the Meyer-Lowry model, the direct numerical integration model, and the triad fraction multiresponse model. The implementation steps for each of these models were discussed in detail, and the results were again compared to previously used approximation methods. The conclusion drawn from this work was, once more, that MCMC methods must be employed in order to obtain JCRs with the correct shape and correct probability content.
MCMC methods were also applied to estimating the kinetic parameters used in the solid oxide fuel cell study. More specifically, the kinetics of the water-gas shift reaction, which is used in generating hydrogen for the fuel cell, was studied. The results from this case study showed how the MCMC output can be analyzed to diagnose parameter observability and correlation. A significant portion of the model needed to be reduced due to these issues of observability and correlation. Point estimates and JCRs were then generated using the reduced model, and diagnostic checks were carried out to ensure the model was able to capture the data adequately.
A few select parameters in the Waterloo Polymer Simulator were estimated using the MCMC algorithm. Previous studies have shown that accurate parameter estimates and JCRs could not be obtained using classical regression approaches. However, when MCMC techniques were applied to the same problem, reliable parameter estimates and confidence regions with the correct shape and correct probability content were obtained. This case study offers a strong argument for replacing classical regression approaches with MCMC techniques.
Finally, a very brief overview of the computational times for each non-linear model used in this research was provided. In addition, a serial farming approach was proposed and a significant decrease in computational time was observed when this procedure was implemented.
|