Marais, Christiaan Antonie
15 September 2010
There exists a need to estimate the potential financial, epidemiological and societal impact that diseases, and the treatment thereof, can have on society. Markov processes are often used to model diseases to estimate these quantities of interest, and have an advantage over standard survival analysis techniques in that multiple events can be studied simultaneously. The theory of Markov processes is well established for processes whose parameters are known, but less of the literature has focussed on the estimation of these transition parameters. This dissertation investigates and implements maximum likelihood estimators for Markov processes based on longitudinal data. The methods are described for processes observed such that all transitions are recorded exactly; processes whose state is recorded at equidistant time points; processes whose state is recorded at irregular time points; and processes for which each subject is observed at a possibly different set of irregular time points. Methods for handling right censoring and for estimating the effect of covariates on parameters are described. The estimation methods are implemented by simulating Markov processes and estimating the parameters from the simulated data, so that the accuracy of the estimators can be investigated. We show that the estimators can provide accurate estimates of state prevalence if the process is stationary, even with relatively small sample sizes. Furthermore, we indicate that the estimators lack good accuracy in estimating the effect of covariates on parameters unless state transitions are recorded exactly. The methods are discussed with reference to the msm package for R, which is freely available and a popular tool for implementing and estimating Markov processes in disease modelling.
Methods are mentioned for the treatment of aggregate data, diseases where the state of a patient is not known with complete certainty at every observation, and diseases where patient interaction plays a role. / Dissertation (MSc)--University of Pretoria, 2010. / Statistics / unrestricted
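For the equidistant-observation case described above, the maximum likelihood estimator of a discrete-time transition matrix has a simple closed form: observed transition counts normalised by row totals. The Python sketch below illustrates the idea on simulated chains; it is a minimal discrete-time illustration, not the msm package's continuous-time machinery, and all names are invented.

```python
import numpy as np

def estimate_transition_matrix(chains, n_states):
    """MLE of a discrete-time transition matrix from equidistant panel data:
    P_hat[i, j] = n_ij / n_i, transition counts normalised by row totals."""
    counts = np.zeros((n_states, n_states))
    for chain in chains:
        for a, b in zip(chain[:-1], chain[1:]):
            counts[a, b] += 1
    row = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, row, out=np.zeros_like(counts), where=row > 0)

# Simulate many short chains from a known 2-state matrix, then re-estimate it.
rng = np.random.default_rng(0)
P = np.array([[0.9, 0.1],
              [0.3, 0.7]])
chains = []
for _ in range(500):
    s, chain = 0, [0]
    for _ in range(20):
        s = rng.choice(2, p=P[s])
        chain.append(s)
    chains.append(chain)

P_hat = estimate_transition_matrix(chains, 2)
```

With 500 chains of 20 transitions each, the estimate lands close to the true matrix, consistent with the abstract's observation that accuracy is attainable at modest sample sizes when the process is stationary.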
Hayslett, Homer T. (Homer Thornton)
10 June 2012
In this thesis, confidence bounds on simple and more complex parameters are stated along with detailed computational procedures for finding these confidence bounds from the given data. Confidence bounds on the more familiar parameters, i.e., μ, σ², μ₁ − μ₂, and σ₁²/σ₂², are briefly presented for the sake of completeness. The confidence statements for the less familiar parameters and combinations of parameters are treated in more detail. In the cases of the non-centrality parameters of the non-central t², F and χ² distributions, a variance-stabilizing transformation is used, a normal approximation is utilized, and confidence bounds are put on the parameter. In the non-central t² and non-central F distributions, iterative procedures are used to obtain confidence bounds on the non-centrality parameter, i.e., a first guess is made and then improved until the desired accuracy is obtained. This procedure is unnecessary in the non-central χ² distribution, since the expressions for the upper and lower limits can be reduced to closed form. Computational procedures and completely worked examples are included. / Master of Science
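The iterative idea described above can be sketched as inverting the CDF in the non-centrality parameter: find the λ for which the observed value sits at the appropriate tail probability. The Python fragment below does this for the non-central χ² using SciPy root finding; it is a generic numerical illustration, not the thesis's variance-stabilized closed-form limits, and the function name is invented.

```python
from scipy.optimize import brentq
from scipy.stats import ncx2

def ncx2_lambda_bounds(x, df, conf=0.95):
    """Confidence bounds on the non-centrality parameter lambda of a
    non-central chi-square, given one observation x with df degrees of
    freedom.  The bounds invert the CDF: the upper bound is the lambda
    with P(X <= x) = alpha/2, the lower bound the lambda with
    P(X <= x) = 1 - alpha/2 (or 0 if no such lambda exists)."""
    alpha = 1.0 - conf
    # ncx2.cdf(x, df, lam) is decreasing in lam, so each equation has
    # at most one root inside the bracket.
    upper_eq = lambda lam: ncx2.cdf(x, df, lam) - alpha / 2.0
    lower_eq = lambda lam: ncx2.cdf(x, df, lam) - (1.0 - alpha / 2.0)
    hi = brentq(upper_eq, 1e-9, 10.0 * x + 100.0)
    lo = brentq(lower_eq, 1e-9, 10.0 * x + 100.0) if lower_eq(1e-9) > 0 else 0.0
    return lo, hi
```

The same bracketing-and-refining loop carries over to the non-central t² and F cases, which is the "first guess ... improved until the desired accuracy is obtained" procedure the abstract describes.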
A non-clinical method to simultaneously estimate thermal conductivity, volumetric specific heat, and perfusion of in-vivo tissue
Madden, Marie Catherine
02 September 2004
Many medical therapies, such as thermal tumor detection and hyperthermia cancer treatments, rely on the heat transfer mechanisms of the body. The focus of this work is the development and experimental validation of a method to simultaneously estimate the thermal conductivity, volumetric specific heat, and perfusion of in-vivo tissue. The heat transfer through the tissue was modeled using a modified Pennes' equation. Using a least-squares parameter estimation method with regularization, the thermal properties could be estimated from the temperature response to a known applied heat flux. The method was tested experimentally using a new agar-water tissue phantom designed for this purpose; a total of 40 tests were performed. The results of the experiments show that conductivity can be successfully estimated for perfused tissue phantoms. The values returned for volumetric specific heat are lower than expected, while the estimated values of perfusion are far greater than expected. It is believed that the mathematical model does not correctly apportion the heat loss between these two terms: both were treated as heat sinks, so it is conceivable that the estimation cannot discriminate between them. Although the method can estimate all three parameters simultaneously, the mathematical model does not appear to describe the system accurately. In the future, improvements to the model could be made to allow the method to function accurately. / Master of Science
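The estimation idea, regularised least squares fitted to a measured temperature response, can be sketched with a toy surrogate model. The fragment below fits an invented lumped-parameter response rather than the modified Pennes' equation; the model form, the parameter names a and b, and every constant are illustrative assumptions, not the thesis's quantities.

```python
import numpy as np
from scipy.optimize import least_squares

def model(theta, t, q=1.0):
    """Toy lumped response to a step heat flux q:
    T(t) = (q/a) * (1 - exp(-a*t/b)), with a an effective loss
    coefficient and b an effective heat capacity (both illustrative)."""
    a, b = theta
    return (q / a) * (1.0 - np.exp(-a * t / b))

def fit(t, T_obs, theta0, prior, mu=1e-3):
    """Regularised least squares: data residuals stacked with a Tikhonov
    penalty sqrt(mu) * (theta - prior) pulling estimates toward a prior."""
    prior = np.asarray(prior, float)
    def residuals(theta):
        return np.concatenate([model(theta, t) - T_obs,
                               np.sqrt(mu) * (np.asarray(theta) - prior)])
    return least_squares(residuals, theta0).x

# Synthetic "experiment": simulate the response, add noise, re-estimate.
rng = np.random.default_rng(1)
t = np.linspace(0.1, 10.0, 50)
true = np.array([0.8, 2.0])
T_obs = model(true, t) + 0.001 * rng.standard_normal(t.size)
theta_hat = fit(t, T_obs, theta0=[0.5, 1.0], prior=[0.8, 2.0])
```

Stacking the penalty into the residual vector is a standard way to express Tikhonov regularization inside an ordinary nonlinear least-squares solver.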
Hanuska, Alexander Robert Jr.
24 April 1998
Predicting the performance of complex structures exposed to harsh thermal environments is a crucial issue in many of today's aerospace and space designs. To predict the thermal stresses a structure might be exposed to, the thermal properties of the independent materials used in the design of the structure need to be known. Therefore, a noninvasive estimation procedure involving Genetic Algorithms was developed to determine the various thermal properties needed to adequately model the Outer Wing Subcomponent (OWS), a structure located at the trailing edge of the High Speed Civil Transport's (HSCT) wing tip. Due to the nature of the nonlinear least-squares estimation method used in this study, both theoretical and experimental temperature histories were required. Several one-dimensional and two-dimensional finite element models of the OWS were developed to compute the transient theoretical temperature histories. The experimental data were obtained from optimized experiments run at various surrounding temperature settings to investigate the temperature dependence of the estimated properties. An experimental optimization was performed to provide the most accurate estimates and to reduce the confidence intervals. The simultaneous estimation of eight thermal properties, including the volumetric heat capacities and out-of-plane thermal conductivities of the facesheets, the honeycomb, the skins, and the torque tubes, was successfully completed with the one-dimensional model, and the results were used to evaluate the remaining in-plane thermal conductivities of the same components with the two-dimensional model. Although experimental optimization did not eliminate all correlation between the parameters, the minimization procedure based on the Genetic Algorithm performed extremely well, despite the high degree of correlation and the low sensitivity of many of the parameters. / Master of Science
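The minimisation idea above, a genetic algorithm driving a least-squares misfit, can be sketched generically. The Python GA below (binary tournament selection, blend crossover, Gaussian mutation, elitism) is an invented minimal implementation, not the thesis's procedure, and it recovers two parameters from a toy quadratic misfit surface.

```python
import numpy as np

def genetic_minimise(objective, bounds, pop_size=60, generations=120, seed=0):
    """Minimal real-coded genetic algorithm.  GAs only need objective
    values, not gradients, which is what makes them robust to the
    correlated, low-sensitivity parameters the abstract mentions."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, float).T
    pop = rng.uniform(lo, hi, size=(pop_size, lo.size))
    fit_vals = np.apply_along_axis(objective, 1, pop)
    for _ in range(generations):
        children = [pop[np.argmin(fit_vals)].copy()]   # elitism: keep the best
        while len(children) < pop_size:
            idx = rng.integers(pop_size, size=4)       # two binary tournaments
            p1 = pop[idx[0]] if fit_vals[idx[0]] < fit_vals[idx[1]] else pop[idx[1]]
            p2 = pop[idx[2]] if fit_vals[idx[2]] < fit_vals[idx[3]] else pop[idx[3]]
            w = rng.uniform(size=lo.size)              # blend crossover
            child = w * p1 + (1.0 - w) * p2
            child += rng.normal(0.0, 0.02 * (hi - lo), lo.size)  # mutation
            children.append(np.clip(child, lo, hi))
        pop = np.array(children)
        fit_vals = np.apply_along_axis(objective, 1, pop)
    return pop[np.argmin(fit_vals)]

# Toy misfit: recover two "thermal properties" from a quadratic surface.
target = np.array([2.5, 0.7])
best = genetic_minimise(lambda x: np.sum((x - target) ** 2),
                        bounds=[(0.0, 5.0), (0.0, 5.0)])
```

In the thesis's setting the objective would be the sum of squared differences between finite-element and measured temperature histories rather than this toy quadratic.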
An improved algorithm for identification of time varying parameters using recursive digital techniques
Maloney, Bernard Christopher Patrick
January 1986
Identification is the process of determining values for the characteristic quantities, called parameters, of a system. Examples of such quantities are mass, inductance, resistance, spring coefficient, gain, et cetera. The decreasing cost of digital processors and the versatility of digital programming make digital methods an attractive means of accomplishing identification. It is important, however, that an identifier be able to track any change in a parameter if its output is to be used in any predictive capacity, such as in an adaptive controller. Most studies of digital identification have avoided the topic of time variations by using batch processing methods that implicitly assume constant parameters; this thesis does not. This thesis first investigates the parameter-tracking capabilities of a popular, real-time digital identification algorithm, the recursive weighted least squares method. This method is claimed to be able to track only slowly time-varying parameters. Based on the results of this study, a method of improving the accuracy of estimates of time-varying parameters is developed. This method, called conditioning, is a post-processor to the recursive weighted least squares algorithm. The results of tests of this method using three different plant simulations are presented, demonstrating the improved accuracy achieved by conditioning estimates of time-varying parameters. / M.S.
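The recursive weighted least squares algorithm that the thesis takes as its starting point can be sketched in its textbook form: an exponential forgetting factor discounts old data, which is what lets the estimator follow a time-varying parameter at all. The Python fragment below is a generic formulation with an invented simulated plant; the thesis's conditioning post-processor is not shown.

```python
import numpy as np

def rls(phi, y, n, lam=0.95, delta=100.0):
    """Recursive weighted least squares with forgetting factor lam
    (smaller lam forgets faster).  phi holds regressor vectors, y the
    measurements; returns the estimate history."""
    theta = np.zeros(n)
    P = delta * np.eye(n)                       # large initial covariance
    history = []
    for x, yk in zip(phi, y):
        x = np.asarray(x, float)
        k = P @ x / (lam + x @ P @ x)           # gain vector
        theta = theta + k * (yk - x @ theta)    # innovation update
        P = (P - np.outer(k, x @ P)) / lam      # discounted covariance
        history.append(theta.copy())
    return np.array(history)

# Invented plant: a scalar gain that jumps from 1 to 3 halfway through.
rng = np.random.default_rng(2)
N = 400
u = rng.standard_normal(N)
a = np.where(np.arange(N) < N // 2, 1.0, 3.0)
y = a * u + 0.01 * rng.standard_normal(N)
est = rls(u[:, None], y, n=1)
```

After the jump the estimate re-converges within a few dozen samples, illustrating the slow tracking that motivates the thesis's conditioning improvement.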
Several biased estimators have been proposed as alternatives to the least squares estimator when multicollinearity is present in the multiple linear regression model. Though the ridge estimator and the principal components estimator have been widely used for such problems, it should be noted that their performance in terms of mean square error depends on the orientation of the unknown parameter vector and the magnitude of σ². By defining the fractional principal components regression model as y = Zα + ε = ZF⁻α_F + ε, where α_F = Fα and F⁻ is a generalized inverse of the diagonal matrix F, the resulting estimators of α_F, based on various forms of F, are shown to define the class of fractional principal components estimators. In the fractional principal components framework, several new estimation techniques are developed. The performances of the new estimators are evaluated and compared with other commonly used biased estimators, both theoretically and by simulation studies. / Ph. D.
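The fractional principal components idea can be illustrated by shrinking the component-wise least squares coefficients with a diagonal factor and rotating back. The sketch below uses the SVD form; the notation and names are mine, not the dissertation's. Setting f to all ones recovers ordinary least squares, and setting trailing entries to zero recovers ordinary principal components regression, with fractional values in between giving the wider family.

```python
import numpy as np

def fractional_pc_estimator(X, y, f):
    """Shrink each principal-component regression coefficient by the
    diagonal factor f_j, then rotate back to the original coordinates:
    beta_F = V diag(f) S^{-1} U' y, where X = U S V' is the thin SVD."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    alpha = (U.T @ y) / s                   # component-wise least squares
    return Vt.T @ (np.asarray(f, float) * alpha)

rng = np.random.default_rng(3)
X = rng.standard_normal((200, 3))
beta = np.array([1.0, -2.0, 0.5])
y = X @ beta + 0.01 * rng.standard_normal(200)
beta_ols = fractional_pc_estimator(X, y, [1.0, 1.0, 1.0])  # all components: OLS
beta_pcr = fractional_pc_estimator(X, y, [1.0, 1.0, 0.0])  # drop last component
```

Choosing the fractions adaptively is where the mean-square-error trade-off studied in the dissertation enters: deleting or shrinking a component adds bias but removes the variance inflation caused by a small singular value.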
Underwood, Samuel J.
17 May 2006
No description available.
John, Yakubu M., Patel, Rajnikant, Mujtaba, Iqbal M.
15 May 2018
The FCC riser cracks gas oil into useful fuels such as gasoline and diesel, and some lighter products such as ethylene and propylene, which are major building blocks for polyethylene and polypropylene production. The production objective of the riser is usually the maximization of gasoline and diesel, but it can also be to maximize propylene. The optimization and parameter estimation of a six-lumped catalytic cracking reaction of gas oil in the FCC riser are carried out to maximize the yield of propylene, using an optimisation framework developed in gPROMS software 5.0 to optimize the mass flow rates and temperatures of the catalyst and gas oil. Optimal values of 290.8 kg/s for the catalyst mass flow rate and 53.4 kg/s for the gas oil mass flow rate were obtained, maximizing the propylene yield at 8.95 wt%. Compared with the base-case simulation value of 4.59 wt%, the maximized propylene yield represents a 95% increase.
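The shape of the optimisation, maximising yield over bounded flow rates, can be illustrated with a toy surrogate. Everything in the sketch below (the quadratic yield expression, its coefficients, the bounds) is hypothetical; the paper's actual model is a six-lump riser simulation in gPROMS, not this expression.

```python
from scipy.optimize import minimize

def propylene_yield(x):
    """Invented smooth surrogate for propylene yield (wt%) as a function
    of catalyst and gas-oil mass flow rates (kg/s).  Illustration only."""
    cat, oil = x
    return 9.0 - 0.0002 * (cat - 290.0) ** 2 - 0.01 * (oil - 53.0) ** 2

# Maximise yield by minimising its negative over bounded flow rates.
res = minimize(lambda x: -propylene_yield(x), x0=[250.0, 60.0],
               bounds=[(200.0, 350.0), (40.0, 80.0)])
best_flows, best_yield = res.x, -res.fun
```

In the real study the objective evaluation is a full riser simulation, so each "function call" is itself a model solve; the bounded-decision-variable structure is the same.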
No description available.
07 May 2005
The STREAMC model is based on the same algorithm as implemented by the Steady Riverine Environmental Assessment Model (STREAM), a mathematical model for the dissolved oxygen (DO) distribution in freshwater streams used by the Mississippi Department of Environmental Quality (MDEQ). Water quality models are typically calibrated manually. In cases where an objective criterion can be identified to quantify a successful calibration, auto calibration may be preferable to the manual approach, and it is particularly applicable to relatively simple analytical models such as the steady-state STREAMC model. Various parameter estimation techniques were identified and/or developed for the model, and the model was subjected to each of them. The parameter estimates obtained by the different techniques were tabulated and compared, and a final recommendation was made regarding a preferred parameter estimation technique leading to auto calibration of the STREAMC model.
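Steady-state riverine DO models of this kind are classically built around the Streeter-Phelps oxygen-sag equation, so the auto-calibration idea can be illustrated as nonlinear least squares on observed deficits: the sum of squared residuals is the objective criterion that replaces manual tuning. The sketch below uses invented data and the textbook Streeter-Phelps deficit, not the STREAMC or STREAM formulation.

```python
import numpy as np
from scipy.optimize import least_squares

def do_deficit(t, kd, ka, L0=10.0, D0=1.0):
    """Streeter-Phelps dissolved-oxygen deficit (mg/L) at travel time t
    (days), with deoxygenation rate kd, reaeration rate ka (1/day),
    initial BOD L0 and initial deficit D0."""
    return (kd * L0 / (ka - kd)) * (np.exp(-kd * t) - np.exp(-ka * t)) \
        + D0 * np.exp(-ka * t)

def auto_calibrate(t, D_obs, guess=(0.2, 0.6)):
    """Auto calibration as bounded nonlinear least squares on the
    observed deficits, returning the fitted (kd, ka)."""
    res = least_squares(lambda k: do_deficit(t, k[0], k[1]) - D_obs, guess,
                        bounds=([0.01, 0.01], [5.0, 5.0]))
    return res.x

# Invented "field data": simulate a sag curve, add noise, re-estimate.
t = np.linspace(0.1, 5.0, 25)
true_kd, true_ka = 0.35, 0.9
rng = np.random.default_rng(4)
D_obs = do_deficit(t, true_kd, true_ka) + 0.005 * rng.standard_normal(t.size)
kd_hat, ka_hat = auto_calibrate(t, D_obs)
```

Because the model is an inexpensive analytical expression, each candidate parameter set is evaluated instantly, which is exactly why the abstract notes that auto calibration suits simple steady-state models.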