121 |
Case studies in omniparametric simulation / Lundin, Fredrik, January 2006
Thesis (Ph. D.)--Chalmers tekniska högskola and Göteborgs universitet, 2006. / Includes bibliographical references (p. 219-224) and index.
|
122 |
Constrained expectation-maximization (EM), dynamic analysis, linear quadratic tracking, and nonlinear constrained expectation-maximization (EM) for the analysis of genetic regulatory networks and signal transduction networks / Xiong, Hao, 15 May 2009
Despite the immense progress made by molecular biology in cataloging and characterizing the molecular elements of life, and the success of genome sequencing, there have not been comparable advances in the functional study of complex phenotypes. This is because isolated study of one molecule, or one gene, at a time is not enough by itself to characterize the complex interactions in an organism and to explain the functions that arise out of these interactions. Mathematical modeling of biological systems is one way to meet the challenge.

My research formulates the modeling of gene regulation as a control problem and applies systems and control theory to the identification, analysis, and optimal control of genetic regulatory networks. The major contributions of my work include biologically constrained estimation, dynamical analysis, and optimal control of genetic networks. In addition, parameter estimation of nonlinear models of biological networks is also studied, as a parameter estimation problem of a general nonlinear dynamical system.

Results demonstrate the superior predictive power of biologically constrained state-space models, and that genetic networks can have differential dynamic properties when subjected to different environmental perturbations. Application of optimal control demonstrates the feasibility of regulating gene expression levels. For the difficult problem of parameter estimation, a generalized EM algorithm is deployed, and a set of explicit formulas based on the extended Kalman filter is derived. Application of the method to synthetic and real-world data shows promising results.
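The explicit EM formulas are not reproduced in the abstract, but the extended-Kalman-filter ingredient can be illustrated generically: augment the state vector with the unknown parameters (modelled as constants driven by small noise) and let the filter estimate states and parameters jointly. A minimal sketch, assuming the user supplies the model functions `f`, `h` and their Jacobians:

```python
import numpy as np

def ekf_joint_estimate(ys, f, h, F_jac, H_jac, x0, P0, Q, R, n_params):
    """Joint EKF: the state vector is augmented with the unknown parameters,
    so the filter estimates states and parameters simultaneously."""
    x, P = x0.copy(), P0.copy()
    history = []
    for yk in ys:
        # Predict: propagate the augmented state through the dynamics.
        x = f(x)
        Fk = F_jac(x)
        P = Fk @ P @ Fk.T + Q
        # Update: correct the prediction with the new measurement.
        Hk = H_jac(x)
        S = Hk @ P @ Hk.T + R
        K = P @ Hk.T @ np.linalg.inv(S)
        x = x + K @ (yk - h(x))
        P = (np.eye(len(x)) - K @ Hk) @ P
        history.append(x.copy())
    # The trailing n_params entries of the state hold the parameter estimates.
    return np.array(history)[:, -n_params:]
```

In a generalized EM setting, a filter/smoother pass of this kind provides the expectations used in the E-step, after which the M-step updates the parameter estimates.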
|
123 |
Parameter identification for biological models / Identification de paramètres de modèles biologiques / Fey, Dirk, 31 March 2011
This thesis concerns the identification of dynamic models in systems biology and is structured into two parts.
Both parts concern building dynamic models from observed data, but are quite different in perspective, rationale and mathematics.
The first part considers the development of novel identification techniques that are particularly tailored to (molecular) biology and considers two approaches. The first approach reformulates the parameter estimation problem as a feasibility problem. This reformulation allows the invalidation of models by analysing entire parameter regions. The second approach utilises nonlinear observers and a transformation of the model equations into parameter-free coordinates. The parameter-free coordinates allow the design of a globally convergent observer, which in turn estimates the parameter values and, further, makes it possible to identify modelling errors or unknown inputs/influences. Both are bottom-up approaches that require a mechanistic understanding of the underlying processes (in terms of a biochemical reaction network), leading to complex nonlinear models.
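The exact feasibility formulation is not given in the abstract; as a rough illustration of the invalidation idea, the sketch below samples a box-shaped parameter region and flags it as a candidate for invalidation when no sample reproduces the data within the measurement error bounds. Unlike the region-wise analysis described above, sampling can only suggest infeasibility, never prove it:

```python
import numpy as np

def region_seems_infeasible(simulate, lower, upper, data, tol,
                            n_samples=1000, seed=0):
    """Sample parameter vectors from the box [lower, upper]; if none
    reproduces every data point within tol, the region is a candidate
    for invalidation (simulate is a user-supplied model simulator)."""
    rng = np.random.default_rng(seed)
    for _ in range(n_samples):
        theta = rng.uniform(lower, upper)
        if np.all(np.abs(simulate(theta) - data) <= tol):
            return False  # found a feasible parameter vector
    return True
```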
The second part is an example of what can be done with classical, well-developed tools from system identification when applied to hitherto unattended problems. In particular, part two of my thesis develops a modelling framework for rat movements in an experimental setup that is widely used to study learning and memory. The approach is a top-down, data-driven one, resulting in simple linear models.
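The thesis's actual model structure is not specified in the abstract; a standard workhorse for simple linear, data-driven models is an ARX model fitted by least squares, sketched here with hypothetical model orders:

```python
import numpy as np

def fit_arx(y, u, na=2, nb=2):
    """Least-squares fit of an ARX model:
    y[k] = a1*y[k-1] + ... + a_na*y[k-na] + b1*u[k-1] + ... + b_nb*u[k-nb]."""
    n = max(na, nb)
    rows = [np.concatenate([y[k-na:k][::-1], u[k-nb:k][::-1]])
            for k in range(n, len(y))]
    Phi = np.array(rows)
    theta, *_ = np.linalg.lstsq(Phi, y[n:], rcond=None)
    return theta[:na], theta[na:]  # output coefficients, input coefficients
```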
|
124 |
Parameter Estimation and Uncertainty Analysis of Contaminant First Arrival Times at Household Drinking Water Wells / Kang, Mary, January 2007
Exposure assessment, which is an investigation of the extent of human exposure to a specific contaminant, must include estimates of the duration and frequency of exposure. For a groundwater system, the duration of exposure is controlled largely by the arrival time of the contaminant of concern at a drinking water well. This arrival time, which is normally estimated by using groundwater flow and transport models, can have a range of possible values due to the uncertainties that are typically present in real problems. Earlier arrival times generally represent low likelihood events, but play a crucial role in the decision-making process that must be conservative and precautionary, especially when evaluating the potential for adverse health impacts. Therefore, an emphasis must be placed on the accuracy of the leading tail region in the likelihood distribution of possible arrival times.
To demonstrate an approach to quantify the uncertainty of arrival times, a real contaminant transport problem which involves TCE contamination due to releases from the Lockformer Company Facility in Lisle, Illinois is used. The approach used in this research consists of two major components: inverse modelling or parameter estimation, and uncertainty analysis.
The parameter estimation process for this case study was selected based on insufficiencies in the model and observational data due to errors, biases, and limitations. Its purpose, which is to aid in characterising uncertainty, was also considered by including many possible variations in an attempt to minimize assumptions. A preliminary investigation was conducted using a well-accepted parameter estimation method, PEST, and the corresponding findings were used to define characteristics of the parameter estimation process applied to this case study. Numerous objective functions, which include the well-known L2-estimator, robust estimators (L1-estimators and M-estimators), penalty functions, and deadzones, were incorporated in the parameter estimation process to treat specific insufficiencies. The concept of equifinality was adopted, and multiple maximum likelihood parameter sets were accepted if pre-defined physical criteria were met. For each objective function, three procedures were implemented as part of the parameter estimation approach for the given case study: a multistart procedure, a stochastic search using the Dynamically-Dimensioned Search (DDS), and a test for acceptance based on the predefined physical criteria. The best performance in terms of the ability of parameter sets to satisfy the physical criteria was achieved using a Cauchy M-estimator that was modified for this study and designated the LRS1 M-estimator. Due to uncertainties, multiple parameter sets obtained with the LRS1 M-estimator, the L1-estimator, and the L2-estimator are recommended for use in uncertainty analysis. Penalty functions had to be incorporated into the objective function definitions to generate a sufficient number of acceptable parameter sets; in contrast, deadzones proved to produce negligible benefits. The characteristics of parameter sets were examined in terms of frequency histograms and plots of parameter value versus objective function value to infer the nature of the likelihood distributions of parameters. The correlation structure was estimated using Pearson's product-moment correlation coefficient. In the parameter space that results from the multistart procedure, the parameters are generally distributed uniformly or appear to be randomly scattered, with few correlations. The execution of the search procedure introduces many correlations and yields parameter distributions that appear to follow lognormal, normal, or uniform distributions. The application of the physical criteria refines the parameter characteristics in the parameter space resulting from the search procedure by reducing anomalies. The combined effect of optimization and the application of the physical criteria performs the function of behavioural thresholds by removing parameter sets with high objective function values.
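The LRS1 M-estimator is a study-specific modification of the Cauchy M-estimator whose exact form is not reproduced in the abstract; for reference, minimal sketches of the standard objective functions it is compared against (c is the usual Cauchy tuning constant):

```python
import numpy as np

def l2_objective(residuals):
    """Classical least-squares (L2) objective."""
    return np.sum(residuals**2)

def l1_objective(residuals):
    """Robust L1 objective: linear growth limits outlier influence."""
    return np.sum(np.abs(residuals))

def cauchy_objective(residuals, c=2.385):
    """Standard Cauchy M-estimator (not the modified LRS1 variant):
    logarithmic growth suppresses large residuals even more strongly."""
    return np.sum((c**2 / 2.0) * np.log1p((residuals / c)**2))
```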
Uncertainty analysis is performed with parameter sets obtained through two different sampling methodologies: the Monte Carlo sampling methodology, which randomly and independently samples from user-defined distributions, and the physically-based DDS-AU (P-DDS-AU) sampling methodology, which is developed based on the multiple parameter sets acquired during the parameter estimation process. Monte Carlo sampling is found to be inadequate for uncertainty analysis of this case study due to its inability to find parameter sets that meet the predefined physical criteria. Successful results are achieved using the P-DDS-AU sampling methodology, which inherently accounts for parameter correlations and does not require assumptions regarding parameter distributions. For the P-DDS-AU samples, uncertainty representation is performed using four definitions based on pseudo-likelihoods: two based on the Nash-Sutcliffe efficiency criterion, and two based on inverse error or residual variance. The definitions contain shaping factors that strongly affect the resulting likelihood distribution. In addition, some definitions are affected by the objective function definition. Therefore, all variations are considered in the development of likelihood distribution envelopes, which are designed to maximize the amount of information available to decision-makers. The considerations that are important to the creation of an uncertainty envelope are outlined in this thesis. In general, greater uncertainty appears to be present at the tails of the distribution. For a refinement of the uncertainty envelopes, the application of additional physical criteria is recommended.
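The four pseudo-likelihood definitions are not spelled out in the abstract; generic GLUE-style forms based on the Nash-Sutcliffe efficiency and on inverse residual variance, each with a shaping factor, might look like this:

```python
import numpy as np

def nse(simulated, observed):
    """Nash-Sutcliffe efficiency: 1 - SSE / total variance of observations."""
    sse = np.sum((observed - simulated)**2)
    return 1.0 - sse / np.sum((observed - observed.mean())**2)

def pseudo_likelihood_nse(simulated, observed, shaping_factor=1.0):
    """Pseudo-likelihood as a positive power of the NSE; the shaping
    factor controls how sharply the measure peaks around good fits."""
    return max(nse(simulated, observed), 0.0) ** shaping_factor

def pseudo_likelihood_inv_var(simulated, observed, shaping_factor=1.0):
    """Pseudo-likelihood proportional to a power of the inverse
    residual variance."""
    return np.var(observed - simulated) ** (-shaping_factor)
```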
The selection of likelihood and objective function definitions and their properties is made based on the needs of the problem; therefore, preliminary investigations should always be conducted to provide a basis for selecting appropriate methods and definitions. It is imperative to remember that communicating the assumptions and definitions used in both parameter estimation and uncertainty analysis is crucial in decision-making scenarios.
|
125 |
The Application of Markov Chain Monte Carlo Techniques in Non-Linear Parameter Estimation for Chemical Engineering Models / Mathew, Manoj, January 2013
Modeling of chemical engineering systems often necessitates using non-linear models. These models can range in complexity from a simple analytical equation to a system of differential equations. Regardless of what type of model is being utilized, determining parameter estimates is essential in everyday chemical engineering practice. One promising approach to non-linear regression is a technique called Markov Chain Monte Carlo (MCMC). This method produces reliable parameter estimates and generates joint confidence regions (JCRs) with correct shape and correct probability content. Despite these advantages, its application in the chemical engineering literature has been limited. Therefore, in this project, MCMC methods were applied to a variety of chemical engineering models. The objectives of this research are to (1) illustrate how to implement MCMC methods in complex non-linear models, (2) show the advantages of using MCMC techniques over classical regression approaches, and (3) provide practical guidelines on how to reduce the computational time.
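As a minimal illustration of the machinery referred to throughout (not the thesis's specific implementation), a random-walk Metropolis sampler over the log posterior of a non-linear model:

```python
import numpy as np

def metropolis(log_post, theta0, n_iter=50000, step=0.1, seed=0):
    """Random-walk Metropolis: log_post maps a parameter vector to the
    log posterior (log likelihood + log prior) of the non-linear model."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    lp = log_post(theta)
    chain = np.empty((n_iter, theta.size))
    for i in range(n_iter):
        proposal = theta + step * rng.standard_normal(theta.size)
        lp_new = log_post(proposal)
        if np.log(rng.random()) < lp_new - lp:  # accept/reject step
            theta, lp = proposal, lp_new
        chain[i] = theta
    return chain
```

After discarding burn-in, JCRs follow directly from the samples (e.g. as highest-density regions in a parameter plane) instead of from an elliptical approximation, which is why their shape and probability content come out correctly.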
MCMC methods were first applied to the biological oxygen demand (BOD) problem. In this case study, an implementation procedure was outlined using specific examples from the BOD problem. The results from the study illustrated the importance of estimating the pure error variance as a parameter rather than fixing its value based on the mean square error. In addition, a comparison was carried out between the MCMC results and the results obtained from using classical regression approaches. The findings show that although similar point estimates are obtained, JCRs generated from approximation methods cannot model the parameter uncertainty adequately.
Markov Chain Monte Carlo techniques were then applied to estimate reactivity ratios in the Mayo-Lewis model, the Meyer-Lowry model, the direct numerical integration model, and the triad fraction multiresponse model. The implementation steps for each of these models were discussed in detail, and the results were once again compared to previously used approximation methods. The conclusion drawn from this work was, again, that MCMC methods must be employed in order to obtain JCRs with the correct shape and correct probability content.
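For concreteness, the Mayo-Lewis (instantaneous copolymer composition) equation in which the reactivity ratios r1 and r2 appear can be coded directly; a model function of this kind would serve as the forward model inside the sampler's log posterior:

```python
def mayo_lewis_F1(f1, r1, r2):
    """Mayo-Lewis equation: instantaneous mole fraction of monomer 1 in
    the copolymer, given its feed fraction f1 and reactivity ratios."""
    f2 = 1.0 - f1
    return (r1 * f1**2 + f1 * f2) / (r1 * f1**2 + 2 * f1 * f2 + r2 * f2**2)
```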
MCMC methods were also applied in estimating kinetic parameters used in the solid oxide fuel cell study. More specifically, the kinetics of the water-gas shift reaction, which is used in generating hydrogen for the fuel cell, were studied. The results from this case study showed how the MCMC output can be analyzed in order to diagnose parameter observability and correlation. A significant portion of the model needed to be reduced due to these issues of observability and correlation. Point estimates and JCRs were then generated using the reduced model, and diagnostic checks were carried out to ensure the model was able to capture the data adequately.
A few select parameters in the Waterloo Polymer Simulator were estimated using the MCMC algorithm. Previous studies had shown that accurate parameter estimates and JCRs could not be obtained using classical regression approaches. However, when MCMC techniques were applied to the same problem, reliable parameter estimates and confidence regions with the correct shape and probability content were obtained. This case study offers a strong argument as to why classical regression approaches should be replaced by MCMC techniques.
Finally, a very brief overview of the computational times for each non-linear model used in this research was provided. In addition, a serial farming approach was proposed and a significant decrease in computational time was observed when this procedure was implemented.
|
126 |
Diagnosis of a compressed air system in a heavy vehicle / Diagnos av tryckluftssystem i ett tungt fordon / Martin, Kågebjer, January 2011
Compressed air has in the past been considered a free resource in heavy vehicles. Recent years' work to minimize fuel consumption has, however, made air consumption an interesting topic for manufacturers to investigate further. Compressed air has many different applications in heavy vehicles. One important consumer of compressed air is the brake system, which would not work at all without compressed air. The compressed air is produced by a compressor attached to the engine. A leakage in the system will force the compressor to work longer, which leads to increased fuel consumption. It is of large interest to have a diagnosis system that can detect leakages and, if possible, also provide information about where in the system the leakage is present. This information can then be used to repair the leakage at the next service stop. The diagnosis system developed in this thesis is based on model-based diagnosis and uses a recursive least-squares method to estimate the leakage area. The results from the validation show that the algorithm works well for leakages of the size 1-10 litres/minute. The innovative isolation algorithm gives full fault isolation for a five-circuit system with only three pressure sensors.
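The thesis's pressure-dynamics model is not reproduced in the abstract; the recursive least-squares core that such a leakage estimator builds on can be sketched generically (here theta would hold, e.g., an effective leakage area, and phi the model's regressors):

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=0.99):
    """One recursive least-squares step with forgetting factor lam:
    update the estimate theta from regressor phi and measurement y."""
    phi = phi.reshape(-1, 1)
    K = P @ phi / (lam + phi.T @ P @ phi)     # gain vector
    theta = theta + (K * (y - phi.T @ theta)).ravel()
    P = (P - K @ phi.T @ P) / lam             # covariance update
    return theta, P
```

Called once per pressure sample, the estimate tracks slow changes such as a growing leakage, which is what makes a recursive formulation attractive on board a vehicle.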
|
127 |
A New Third Compartment Significantly Improves Fit and Identifiability in a Model for Ace2p Distribution in Saccharomyces cerevisiae after Cytokinesis / Järvstråt, Linnea, January 2011
Asymmetric cell division is an important mechanism for the differentiation of cells during embryogenesis and cancer development. Saccharomyces cerevisiae divides asymmetrically and is therefore used as a model system for understanding the mechanisms behind asymmetric cell division. Ace2p is a transcription factor in yeast that localizes primarily to the daughter nucleus during cell division. The distribution of Ace2p is visualized using a fusion protein with yellow fluorescent protein (YFP) and confocal microscopy. Systems biology provides a new approach to investigating biological systems through the use of quantitative models. The localization of the transcription factor Ace2p in yeast during cell division has been modelled using ordinary differential equations, and such modelling is evaluated herein. A 2-compartment model for the localization of Ace2p in yeast post-cytokinesis, proposed in earlier work, was found to be insufficient when new data were included in the model evaluation. Ace2p localization in the dividing yeast cell pair before cytokinesis was investigated using a similar approach and was found not to explain the data to a significant degree. A 3-compartment model is proposed. The improvement in comparison to the 2-compartment model is statistically significant. Simulations of the 3-compartment model predict a fast decrease in the amount of Ace2p in the cytosol close to the nucleus during the first seconds after each bleaching of the fluorescence. Experimental investigation of the cytosol close to the nucleus could test whether the fast dynamics are present after each bleaching. The parameters in the model have been estimated using the profile likelihood approach in combination with global optimization by simulated annealing. Confidence intervals for the parameters have been found for the 3-compartment model of Ace2p localization post-cytokinesis. In conclusion, the profile likelihood approach has proven a good method of estimating parameters, and the new 3-compartment model allows for reliable parameter estimates in the post-cytokinesis situation. A new Matlab implementation of the profile likelihood method is appended.
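A minimal sketch of the profile likelihood procedure (function names and the inner optimizer are assumptions; the thesis pairs the profiles with simulated annealing for the global search):

```python
import numpy as np
from scipy.optimize import minimize

def profile_likelihood(neg_log_lik, theta_hat, idx, values):
    """Profile of parameter idx: fix it at each value in turn and
    re-optimize all remaining parameters. A flat profile signals a
    non-identifiable parameter; a parabola-like minimum supports a
    finite confidence interval via a chi-square threshold."""
    profile = []
    for v in values:
        def fixed(free):
            return neg_log_lik(np.insert(free, idx, v))
        res = minimize(fixed, np.delete(theta_hat, idx), method="Nelder-Mead")
        profile.append(res.fun)
    return np.array(profile)
```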
|
128 |
Parameter estimation of biological pathways / Svensson, Emil, January 2007
To determine parameter values for models of reactions in the human body, such as glycolysis, good methods for parameter estimation are needed. These models are often non-linear, and estimation of the parameters can be very time-consuming, if it is possible at all. The goal of this work is to test different methods to improve the calculation speed of the parameter estimation for an example system. If the parameter estimation speed for the example system can be improved, it is likely that the method could also be useful for systems similar to it. One approach to improving the calculation speed is to construct a new cost function whose evaluation does not require any simulation of the system. Simulation-free parameter estimation can be much quicker than using simulations to evaluate the cost function, since the cost function is evaluated many times. A modification of the simulated annealing optimization method has also been implemented and tested. It turns out that some of the methods significantly reduced the time needed for the parameter estimations; however, the quick methods have disadvantages in the form of reduced robustness. The most successful method used a spline approximation together with a separation of the model into several submodels, and repeated use of the simulated annealing optimization algorithm to estimate the parameters.
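A sketch of the simulation-free idea under stated assumptions (a scalar measured state and a hypothetical model right-hand side `rhs`): fit a spline to the measured time course and penalize the mismatch between the spline's derivative and the model equations, so that evaluating the cost requires no ODE integration:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def simulation_free_cost(theta, t, y, rhs):
    """Gradient-matching cost: compare the derivative of a spline fitted
    to the data against the model right-hand side rhs(t, y, theta)."""
    spline = CubicSpline(t, y)
    dy_spline = spline(t, 1)  # first derivative at the sample times
    dy_model = np.array([rhs(ti, spline(ti), theta) for ti in t])
    return np.sum((dy_spline - dy_model)**2)
```

Because no integration is performed, the cost is cheap enough to evaluate inside the many iterations of a simulated annealing search, at the price of the reduced robustness noted above (the spline must represent the true trajectory well).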
|