311
Parameter Estimation Techniques for Nonlinear Dynamic Models with Limited Data, Process Disturbances and Modeling Errors
Karimi, Hadiseh, 23 December 2013
In this thesis, appropriate statistical methods are studied for overcoming two types of problems that occur during parameter estimation in chemical engineering systems. The first problem is having too many parameters to estimate from the limited available data, assuming that the model structure is correct, while the second problem involves estimating unmeasured disturbances, assuming that enough data are available for parameter estimation. In the first part of this thesis, a model is developed to predict rates of undesirable reactions during the finishing stage of nylon 66 production. This model has too many parameters to estimate (56 unknown parameters) given the limited data, so not all of them can be estimated reliably. Statistical techniques are used to determine that 43 of the 56 parameters should be estimated. The proposed model matches the data well.

In the second part of this thesis, techniques are proposed for estimating parameters in Stochastic Differential Equations (SDEs). SDEs are fundamental dynamic models that take into account process disturbances and model mismatch. Three new approximate maximum likelihood methods are developed for estimating parameters in SDE models. First, an Approximate Expectation Maximization (AEM) algorithm is developed for estimating model parameters and process disturbance intensities when the measurement noise variance is known. Then, a Fully-Laplace Approximation Expectation Maximization (FLAEM) algorithm is proposed for simultaneous estimation of model parameters, process disturbance intensities and measurement noise variances in nonlinear SDEs. Finally, a Laplace Approximation Maximum Likelihood Estimation (LAMLE) algorithm is developed for estimating measurement noise variances along with model parameters and disturbance intensities in nonlinear SDEs. The effectiveness of the proposed algorithms is compared with a maximum-likelihood based method. For the CSTR examples studied, the proposed algorithms provide more accurate parameter estimates. Additionally, it is shown that the performance of LAMLE is superior to the performance of FLAEM. SDE models and the associated parameter estimates obtained using the proposed techniques will help engineers who implement on-line state estimation and process monitoring schemes. / Thesis (Ph.D., Chemical Engineering) -- Queen's University, 2013
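The AEM, FLAEM, and LAMLE algorithms themselves are not reproduced here, but the flavor of maximum likelihood estimation for SDE models can be illustrated in the linear-Gaussian special case, where the likelihood is computed exactly by a Kalman filter rather than by a Laplace approximation. A minimal sketch for a scalar Ornstein-Uhlenbeck process with known measurement noise variance; all parameter values are hypothetical:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Simulate dx = -theta*x dt + sqrt(Q) dW, observed as y = x + noise with
# known measurement noise variance R (hypothetical values throughout).
theta_true, Q_true, R, dt, n = 1.5, 0.4, 0.05, 0.1, 400
a = np.exp(-theta_true * dt)
q = Q_true * (1 - a**2) / (2 * theta_true)   # exact discrete-time state noise
x = np.zeros(n)
for k in range(1, n):
    x[k] = a * x[k - 1] + rng.normal(0, np.sqrt(q))
y = x + rng.normal(0, np.sqrt(R), n)

def neg_log_lik(params):
    """Kalman-filter likelihood of the discretized linear-Gaussian SDE."""
    theta, Q = params
    if theta <= 0 or Q <= 0:
        return np.inf
    a = np.exp(-theta * dt)
    q = Q * (1 - a**2) / (2 * theta)
    m, P, nll = 0.0, 1.0, 0.0
    for obs in y:
        m, P = a * m, a**2 * P + q                 # predict
        S = P + R                                  # innovation variance
        nll += 0.5 * (np.log(2 * np.pi * S) + (obs - m)**2 / S)
        K = P / S                                  # Kalman gain, then update
        m, P = m + K * (obs - m), (1 - K) * P
    return nll

fit = minimize(neg_log_lik, x0=[1.0, 1.0], method="Nelder-Mead")
print("estimated (theta, Q):", fit.x)
```

For nonlinear SDEs the transition density is no longer Gaussian, which is exactly where the Laplace-approximation machinery of the thesis comes in.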
312
Southern African Climate Dynamics and Archaeology during the Last Glacial Maximum
Phillips, Anna, 09 December 2013
There is little consensus on what forced the climate of southern Africa to change during the Last Glacial Maximum (LGM). Because of southern Africa's latitudinal position, changes in seasonal precipitation can help resolve the influence of internal climate factors such as groundwater and external climate forcers such as large-scale atmospheric circulation patterns. This paper presents a simple model of groundwater discharge, based on permeability and topography, and compares it with general circulation model precipitation results and paleoenvironmental proxy records. Results show that during the LGM the Intertropical Convergence Zone (ITCZ) likely weakened and moved slightly further south, while the westerlies likely expanded slightly northward with no significant change in strength. The climate and groundwater results were compared to the distribution of LGM and pre-LGM archaeological sites. Results show that the Later Stone Age peoples of southern Africa were likely inhabiting a relatively wet environment rather than an arid one.
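The paper's discharge model is only described at a high level; as a rough illustration of what "based on permeability and topography" can mean, here is a minimal Darcy-style sketch in Python, with entirely hypothetical values:

```python
import numpy as np

# Darcy-style discharge per unit cross-sectional area: q = K * dh/dl, where
# K is hydraulic conductivity (derived from permeability) and the topographic
# gradient dh/dl is used as a proxy for the water-table gradient.
K = np.array([1e-5, 5e-4, 2e-6])            # m/s, one value per terrain cell
elevation = np.array([120.0, 95.0, 90.0])   # m
cell_size = 1000.0                          # m between cell centers

gradient = np.abs(np.gradient(elevation, cell_size))
q = K * gradient
print("relative discharge potential:", q / q.max())
```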
313
Likelihood-Based Tests for Common and Idiosyncratic Unit Roots in the Exact Factor Model
Solberger, Martin, January 2013
Dynamic panel data models are widely used by econometricians to study the economics of, for example, people, firms, regions, or countries over time, by pooling information across the cross-section. Though much of the panel research concerns inference in stationary models, macroeconomic data such as GDP, prices, and interest rates are typically trending over time and require, in one way or another, a nonstationary analysis. In time series analysis it is well established how autoregressive unit roots give rise to stochastic trends, implying that random shocks to a dynamic process are persistent rather than transitory. Because the implications of, say, government policy actions are fundamentally different if shocks to the economy are lasting than if they are temporary, there is now a vast number of univariate time series unit root tests available. Similarly, panel unit root tests have been designed to test for the presence of stochastic trends within a panel data set and to what degree they are shared by the panel individuals.

Today, growing data sets certainly offer new possibilities for panel data analysis, but they also pose new problems concerning double-indexed limit theory, unobserved heterogeneity, and cross-sectional dependencies. For example, economic shocks, such as technological innovations, are often global, making national aggregates cross-country dependent and related in international business cycles. To allow for strong cross-sectional dependence, panel unit root tests often assume that the unobserved panel errors follow a dynamic factor model. The errors will then contain one part that is shared by the panel individuals, a common component, and one part that is individual-specific, an idiosyncratic component. This is appealing from the perspective of economic theory, because unobserved heterogeneity may be driven by global common shocks, which are well captured by dynamic factor models. Yet, only a handful of tests have been derived for testing for unit roots in the common and in the idiosyncratic components separately. More importantly, likelihood-based methods, which are commonly used in classical factor analysis, have been ruled out for large dynamic factor models due to the considerable number of parameters.

This thesis consists of four papers in which we consider the exact factor model, where the idiosyncratic components are mutually independent, so that any cross-sectional dependence runs through the common factors only. Within this framework we derive likelihood-based tests for common and idiosyncratic unit roots. In doing so we address an important issue for dynamic factor models, because likelihood-based tests, such as the Wald test, the likelihood ratio test, and the Lagrange multiplier test, are well known to be asymptotically most powerful against local alternatives. Our approach is specific-to-general, meaning that we start with restrictions on the parameter space that allow us to use explicit maximum likelihood estimators, and then proceed by relaxing some of the assumptions, considering a more general framework that requires numerical maximum likelihood estimation. By simulation we compare the size and power of our tests with those of some established panel unit root tests. The simulations suggest that the likelihood-based tests are locally powerful and in some cases more robust in terms of size. / Solving Macroeconomic Problems Using Non-Stationary Panel Data
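The likelihood-based tests themselves are not reproduced here. As a simpler point of reference, the following Python sketch simulates data from an exact factor model with a common stochastic trend and applies a conventional two-step check (principal-components factor extraction followed by an ADF test) rather than the thesis's tests; all parameter values are hypothetical:

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(1)
T, N = 200, 20

# Exact factor model: y_it = lambda_i * F_t + e_it, with mutually
# independent (idiosyncratic) errors e_it and a random-walk common factor.
F = np.cumsum(rng.normal(size=T))                 # common stochastic trend
lam = rng.uniform(0.5, 1.5, N)                    # factor loadings
e = rng.normal(scale=0.5, size=(T, N))            # stationary idiosyncratic part
Y = F[:, None] * lam[None, :] + e

# Two-step benchmark: extract the common component by the first principal
# component of the demeaned panel, then ADF-test the estimated factor.
Yc = Y - Y.mean(axis=0)
_, _, Vt = np.linalg.svd(Yc, full_matrices=False)
F_hat = Yc @ Vt[0]
stat, pval, *_ = adfuller(F_hat)
print(f"ADF on estimated common factor: stat={stat:.2f}, p={pval:.3f}")
```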
314
Sensory Integration During Goal Directed Reaches: The Effects of Manipulating Target Availability
Khanafer, Sajida, 19 October 2012
When using visual and proprioceptive information to plan a reach, it has been proposed that the brain combines these cues to estimate the object's and/or limb's location. Specifically, according to the maximum-likelihood estimation (MLE) model, more reliable sensory inputs are assigned a greater weight (Ernst & Banks, 2002). In this research we examined whether the brain is able to adjust which sensory cue it weights the most. Specifically, we asked whether the brain changes how it weights sensory information when the availability of a visual cue is manipulated. Twenty-four healthy subjects reached to visual (V), proprioceptive (P), or visual + proprioceptive (VP) targets under different visual delay conditions (e.g., on V and VP trials the visual target was available for the entire reach, was removed with the go-signal, or was removed 1, 2, or 5 seconds before the go-signal). Subjects completed 5 blocks of trials, with 90 trials per block. For 12 subjects the visual delay was kept consistent within a block of trials, while for the other 12 subjects different visual delays were intermixed within a block. To establish which sensory cue subjects weighted the most, we compared endpoint positions achieved on V and P reaches to those on VP reaches. Results indicated that all subjects weighted sensory cues in accordance with the MLE model across all delay conditions and that these weights were similar regardless of the visual delay. Moreover, while errors increased with longer visual delays, there was no change in reaching variance. Thus, manipulating the visual environment was not enough to change subjects' weighting strategy.
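The MLE cue-combination rule of Ernst and Banks (2002) referenced above has a simple closed form: each cue is weighted by its inverse variance. A minimal sketch with hypothetical endpoint estimates:

```python
import numpy as np

def mle_combine(x_v, var_v, x_p, var_p):
    """Reliability-weighted cue combination (Ernst & Banks, 2002):
    weights are normalized inverse variances."""
    w_v = (1 / var_v) / (1 / var_v + 1 / var_p)
    w_p = 1 - w_v
    x_vp = w_v * x_v + w_p * x_p
    var_vp = 1 / (1 / var_v + 1 / var_p)   # combined estimate is less variable
    return x_vp, var_vp, w_v

# Hypothetical target estimates (cm): vision more reliable than proprioception.
x, var, w = mle_combine(x_v=10.0, var_v=0.5, x_p=11.0, var_p=2.0)
print(f"combined estimate={x:.2f} cm, variance={var:.2f}, visual weight={w:.2f}")
```

Comparing the endpoint bias on VP trials against the V-only and P-only endpoints, as the study does, amounts to recovering the weight w_v empirically.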
315
NICHE CONSERVATISM OR DIVERGENCE: INSIGHTS INTO THE EVOLUTIONARY HISTORIES OF Pinus taeda, Pinus rigida, AND Pinus pungens
Bolte, Constance E, 01 January 2017
Environmentally related selective pressures and community interactions are well-documented drivers of niche differentiation, as natural selection acts on the adaptive traits best fit for survival. Here, we investigated niche evolution between and within Pinus taeda, Pinus rigida, and Pinus pungens and sought to identify which climate variables contributed to species divergence. We also sought to describe niche differentiation across the genetic groupings previously identified for P. taeda and P. rigida. Ecological niche models were produced using Maximum Entropy, followed by statistical testing based on a measure of niche overlap, Schoener's D. Both niche conservatism and niche divergence were detected, leading us to conclude that directional or disruptive selection drove divergence of the P. taeda lineage from its common ancestor with P. rigida and P. pungens, while stabilizing selection was associated with the divergence of P. rigida and P. pungens. The latter implies that factors beyond climate are important drivers of speciation within Pinus.
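Schoener's D, the overlap statistic used here, compares two normalized suitability surfaces cell by cell. A minimal sketch with random stand-in rasters (real inputs would be the MaxEnt suitability predictions):

```python
import numpy as np

def schoeners_d(suit1, suit2):
    """Schoener's D niche-overlap statistic on gridded suitability surfaces:
    D = 1 - 0.5 * sum(|p1 - p2|), each surface normalized to sum to 1.
    D = 0 means no overlap; D = 1 means identical niches."""
    p1 = suit1 / suit1.sum()
    p2 = suit2 / suit2.sum()
    return 1 - 0.5 * np.abs(p1 - p2).sum()

# Hypothetical suitability rasters for two pine species.
rng = np.random.default_rng(2)
suit_taeda = rng.random((50, 50))
suit_rigida = rng.random((50, 50))
print(f"D = {schoeners_d(suit_taeda, suit_rigida):.3f}")
```

Significance is then typically assessed by comparing the observed D against a null distribution from randomized or background-shifted models.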
316
Statistical design of phase I clinical trials
Zhang, Weijia, 16 September 2016
My MSc thesis focuses on parametric designs of Phase I clinical trials using the continual reassessment method. A parametric model with unknown parameters is assumed. The observations are either toxic or nontoxic, and observations of toxicities are used to update the posterior distribution. Dose selection for the next patient is based on the estimated toxicity probability. The objective is to identify the maximum tolerated dose to be used in Phase II clinical trials. We introduce a new class of parametric functions for the continual reassessment method, built from the cumulative distribution function of the normal distribution. The major advantage is that we can choose different normal distributions to model different toxicity probability functions. We conduct simulation studies comparing the new design with existing parametric designs and find that, with appropriate choices of the mean and variance, our design performs better.
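The thesis's specific parametric class is not reproduced here, but the general shape of a normal-CDF continual reassessment method can be sketched with a one-parameter model and a grid posterior; the doses, target toxicity, and prior below are all hypothetical:

```python
import numpy as np
from scipy.stats import norm

# One-parameter CRM sketch: toxicity probability at dose d is modeled as
# p(d, theta) = Phi((d - theta) / sigma), with sigma fixed as a design choice.
doses = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
sigma, target = 1.5, 0.25
theta_grid = np.linspace(0.0, 10.0, 501)
prior = norm.pdf(theta_grid, loc=5.0, scale=2.0)   # prior density over theta

def posterior(prior, dose_hist, tox_hist):
    """Grid-based posterior over theta given (dose, toxicity) observations."""
    post = prior.copy()
    for d, tox in zip(dose_hist, tox_hist):
        p = norm.cdf((d - theta_grid) / sigma)
        post *= p if tox else (1 - p)
    return post / np.trapz(post, theta_grid)

def next_dose(post):
    """Pick the dose whose posterior-mean toxicity is closest to the target."""
    p_hat = [np.trapz(norm.cdf((d - theta_grid) / sigma) * post, theta_grid)
             for d in doses]
    return doses[np.argmin(np.abs(np.array(p_hat) - target))]

post = posterior(prior, dose_hist=[1.0, 2.0, 2.0], tox_hist=[0, 0, 1])
print("recommended next dose:", next_dose(post))
```

Varying the assumed mean and variance of the normal CDF changes the shape of the dose-toxicity curve, which is the flexibility the abstract highlights.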
317
Statistical Models and Analysis of Growth Processes in Biological Tissue
Xia, Jun, 15 December 2016
The mechanisms that control growth processes in biological tissues have attracted continuous research interest despite their complexity. With the emergence of big-data experimental approaches, there is an urgent need to develop statistical and computational models that fit the experimental data and can be used to make predictions to guide future research. In this work we apply statistical methods to the growth processes of different biological tissues, focusing on the development of neuron dendrites and the growth of tumor cells.
We first examine the neuron cell growth process, which has implications for neural tissue regeneration, by using a computational model with a uniform branching probability and a maximum overall length constraint. One crucial outcome is that we can relate the parameter fits from our model to real data from our experimental collaborators, in order to examine the usefulness of our model under different biological conditions. Our methods can now directly compare branching probabilities across experimental conditions and provide confidence intervals for these population-level measures. In addition, we have obtained analytical results showing that the underlying probability distribution for this process increases as a geometric progression at nearby distances and decreases as an approximately geometric series in faraway regions, which can be used to estimate the spatial location of the maximum of the probability distribution. This result is important, since we would expect the maximum number of dendrites in this region; the estimate is related to the probability of success in finding a neural target at that distance during a blind search.
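As a caricature of such a model, the following sketch grows trees with a uniform branching probability under a maximum total-length constraint and tallies segments by distance from the soma; all parameter values are invented:

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_tree(p_branch=0.3, max_total_len=200, step=1.0):
    """Each active tip advances one step (one new segment) and splits into
    two tips with uniform probability; growth stops once the summed length
    of all segments hits the maximum-length constraint."""
    tips, total, counts = [0], 0.0, {}
    while tips and total < max_total_len:
        new_tips = []
        for d in tips:
            total += step
            if total >= max_total_len:
                break
            counts[d + 1] = counts.get(d + 1, 0) + 1
            new_tips.extend([d + 1, d + 1] if rng.random() < p_branch else [d + 1])
        tips = new_tips
    return counts

# Average segment counts by distance over many trees: geometric-like growth
# at nearby distances, then decay once the length budget binds.
acc = {}
for _ in range(500):
    for d, c in simulate_tree().items():
        acc[d] = acc.get(d, 0) + c
for d in sorted(acc)[:8]:
    print(f"distance {d}: mean segments {acc[d] / 500:.2f}")
```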
We then examined tumor growth processes, which evolve similarly in the sense that an initial phase of rapid growth eventually becomes limited by resource constraints. For the tumor-cell data, we found that an exponential growth model best describes the experimental measurements, based on the accuracy and robustness of the candidate models. Furthermore, we incorporated this growth-rate model into logistic regression models that predict each patient's growth rate from biomarkers; this formulation can be very useful for clinical trials. Overall, this study aimed to assess the molecular and clinicopathological determinants of breast cancer (BC) growth rate in vivo.
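Fitting an exponential growth model reduces to ordinary least squares on log volume; a minimal sketch with hypothetical tumor-volume measurements (the per-patient logistic regression on biomarkers would then be fit on top of the resulting rates):

```python
import numpy as np

# Fit V(t) = V0 * exp(k t) by least squares on log volume; t in days,
# volumes in arbitrary units, all values hypothetical.
t = np.array([0, 7, 14, 21, 28], dtype=float)
volume = np.array([1.0, 1.6, 2.7, 4.2, 7.1])
k, log_v0 = np.polyfit(t, np.log(volume), 1)
print(f"growth rate k = {k:.4f} per day, doubling time = {np.log(2)/k:.1f} days")
```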
318
From Linkage to GWAS: A Multifaceted Exploration of the Genetic Risk for Alcohol Dependence
Adkins, Amy, 10 December 2012
Family, twin and adoption studies consistently suggest that genetic factors strongly influence the risk for alcohol dependence (AD). Although the literature supports the role of genetics in AD, identification of specific genes contributing to the etiology of AD has proven difficult. These difficulties are due in part to the complex set of risk factors contributing to the development of AD: comorbidities with other clinical diagnoses and behavioral phenotypes (e.g., major depression), physiological differences that contribute to differences between people in their level of response to ethanol (e.g., initial sensitivity), and the large number of biological pathways targeted by and involved in the processing of ethanol. These complexities have probably contributed to the limited success of linkage and candidate gene association studies in finding genes underlying AD. The powerful and unbiased genome-wide association study (GWAS) offers promise in the study of complex diseases. However, due to the complexities of known risk factors, GWAS data have yet to provide consistent, replicable results. In light of these difficulties, this dissertation has five specific aims that investigate genetic risk loci for AD and related phenotypes through improved methods for candidate gene selection, analysis of a pooled genome-wide association study, genome-wide analyses of initial sensitivity and of maximum alcohol consumption in a twenty-four-hour period, and, finally, creation of a multivariate AD/internalizing phenotype.
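The dissertation's own analyses are not reproduced here, but the elementary unit of a GWAS, a single-SNP association test with additive genotype coding, can be sketched on simulated data:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)

# Minimal single-SNP test: genotype coded 0/1/2 (minor-allele count) regressed
# against case/control status by logistic regression. A real GWAS repeats
# this per SNP, with ancestry covariates and multiple-testing control.
n = 2000
genotype = rng.binomial(2, 0.3, n)
logit = -1.0 + 0.25 * genotype                 # modest hypothetical effect
status = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(genotype.astype(float))
fit = sm.Logit(status, X).fit(disp=0)
print(f"per-allele OR = {np.exp(fit.params[1]):.2f}, p = {fit.pvalues[1]:.2e}")
```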
319
Optimal Control and Its Application to the Life-Cycle Savings Problem
Taylor, Tracy A, 01 January 2016
Throughout this thesis, we give an introduction to optimal control theory and its necessary conditions, prove Pontryagin's Maximum Principle, and present the optimal control problem of life-cycle saving under an uncertain lifetime. We present a detailed sensitivity analysis that determines how a change in the initial wealth, discount factor, or relative risk aversion coefficient may affect the model's terminal wealth-depletion time, optimal consumption path, and optimal wealth-accumulation path. Through simulation of the life-cycle saving model under uncertain lifetime, we not only present the model dynamics through time but also demonstrate the feasibility of the model.
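The uncertain-lifetime feature is omitted in the following sketch, which solves the deterministic life-cycle problem via the Euler equation implied by Pontryagin's Maximum Principle, plus a shooting step on initial consumption; all parameter values are hypothetical:

```python
import numpy as np

# Deterministic life-cycle sketch (no mortality hazard): maximize
# integral of e^{-rho t} c^{1-gamma}/(1-gamma) dt  subject to  W' = r W + y - c.
# The Maximum Principle gives the Euler equation c'/c = (r - rho)/gamma, so the
# whole path is pinned down by c(0); we shoot on c(0) until W(T) = 0.
r, rho, gamma, y, W0, T, dt = 0.04, 0.03, 2.0, 1.0, 5.0, 40.0, 0.01

def terminal_wealth(c0):
    W, t = W0, 0.0
    while t < T:
        c = c0 * np.exp((r - rho) / gamma * t)   # Euler-equation consumption path
        W += (r * W + y - c) * dt                # wealth dynamics, forward Euler
        t += dt
    return W

lo, hi = 0.01, 10.0          # bisection on initial consumption c(0)
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if terminal_wealth(mid) > 0 else (lo, mid)
print(f"optimal initial consumption: {mid:.3f}")
```

With lifetime uncertainty, as in the thesis, the mortality hazard enters the effective discount rate and shifts the consumption path; the shooting structure stays the same.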
320
SENSITIVITY ANALYSIS IN HANDLING DISCRETE DATA MISSING AT RANDOM IN HIERARCHICAL LINEAR MODELS VIA MULTIVARIATE NORMALITY
Zheng, Xiyu, 01 January 2016
In a two-level hierarchical linear model (HLM2), the outcome as well as the covariates may have missing values at any of the levels. One way to analyze all available data in the model is to estimate, by maximum likelihood (ML), a multivariate normal joint distribution of the variables subject to missingness (including the outcome) conditional on the completely observed covariates; to draw multiple imputations (MI) of the missing values given the estimated joint model; and to analyze the hierarchical model given the MI [1,2]. The assumption is that data are missing at random (MAR). While this method yields efficient estimation of the hierarchical model, it often estimates the model with discrete missing data handled under multivariate normality. In this thesis, we evaluate how robust this method is for estimating a hierarchical linear model given discrete missing data. We simulate incompletely observed data from a series of hierarchical linear models with discrete covariates MAR, estimate the models by the method, and assess the sensitivity of handling discrete missing data under the multivariate normal joint distribution by computing bias, root mean squared error, standard error, and coverage probability for the estimated hierarchical linear models in a series of simulation studies. Our aim is to evaluate the performance of the method in handling binary covariates MAR: we let the missing patterns of the level-1 and level-2 binary covariates depend on completely observed variables and assess how the method handles binary missing data given different values of the success probabilities and missing rates.
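The joint-normal ML-plus-MI workflow can be approximated with off-the-shelf tools. The sketch below substitutes chained-equations imputation (scikit-learn's IterativeImputer with posterior sampling, a normal-model-based approximation) for the joint ML step and fits the HLM with statsmodels MixedLM, on simulated data with a binary level-1 covariate MAR; the data-generating values are invented:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(5)

# Two-level structure: 30 groups with random intercepts, binary covariate x.
n_groups, n_per = 30, 20
g = np.repeat(np.arange(n_groups), n_per)
u = rng.normal(0, 1, n_groups)[g]
x = rng.binomial(1, 0.4, g.size).astype(float)
y = 1.0 + 0.8 * x + u + rng.normal(0, 1, g.size)
x[rng.random(g.size) < 0.2 * (y > y.mean())] = np.nan   # MAR: depends on observed y

# Normality-based MI, then the HLM fit on each completed data set.
estimates = []
for m in range(5):
    imputer = IterativeImputer(sample_posterior=True, random_state=m)
    completed = imputer.fit_transform(np.column_stack([y, x]))
    x_imp = (completed[:, 1] > 0.5).astype(float)   # round continuous draw to binary
    df = pd.DataFrame({"y": y, "x": x_imp, "g": g})
    fit = smf.mixedlm("y ~ x", df, groups="g").fit()
    estimates.append(fit.params["x"])
print(f"pooled fixed effect of x: {np.mean(estimates):.3f}")
```

Rounding a normal imputation back to 0/1, as above, is exactly the kind of normality-based handling of discrete data whose robustness the thesis investigates.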
Based on the simulation results, the missing-data analysis is robust under certain parameter settings. The efficient analysis performs very well for estimating level-1 fixed and random effects across varying success probabilities and missing rates. The level-2 binary covariate, however, is not well estimated when its missing rate is greater than 10%.
The rest of the thesis is organized as follows: Section 1 introduces background information, including conventional methods for hierarchical missing-data analysis, different missing-data mechanisms, and the innovation and significance of this study. Section 2 explains the efficient missing-data method. Section 3 presents the sensitivity analysis of the missing-data method and explains how we carry out the simulation study using SAS, the software package HLM7, and R. Section 4 illustrates the results and gives useful recommendations for researchers who want to use the missing-data method for binary covariates MAR in HLM2. Section 5 presents an illustrative analysis of the National Growth and Health Study (NGHS) using the missing-data method. The thesis ends with a list of useful references that will guide future study and the simulation codes we used.