341

Parameter estimation and network identification in metabolic pathway systems

Chou, I-Chun, 25 August 2008
Cells are able to function and survive due to a delicate orchestration of the expression of genes and their downstream products at the genetic, transcriptomic, proteomic, and metabolic levels. Since metabolites are ultimately the causative agents for physiological responses and responsible for much of the functionality of the organism, a comprehensive understanding of cellular functioning mandates deep insights into how metabolism works. Gaining these insights is impeded by the fact that the regulation and dynamics of metabolic networks are often too complex to allow intuitive predictions, which thus renders mathematical modeling necessary as a means for assessing and understanding metabolic systems. The most difficult step of the modeling process is the extraction of information regarding the structure and regulation of the system from experimental data. The work presented here addresses this "inverse" task with three new methods that are applied to models within Biochemical Systems Theory (BST). Alternating Regression (AR) dissects the nonlinear estimation task into iterative steps of linear regression by utilizing the fact that power-law functions are linear in logarithmic space. Eigenvector Optimization (EO) is an extension of AR that is particularly well suited for the identification of model structure. Dynamic Flux Estimation (DFE) is a more general approach that can involve AR and EO and resolves open issues of model validity and quality beyond residual data fitting errors. The necessity of fast solutions to biological inverse problems is discussed in the context of concept map modeling, which allows the conversion of hypothetical network diagrams into mathematical models.
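The key fact Alternating Regression exploits can be shown in a few lines: a power-law rate term v = α·X₁^g₁·X₂^g₂ becomes linear after taking logarithms, so its parameters can be recovered by ordinary linear regression in log space. The sketch below illustrates only this log-linearization step, not the thesis's full iterative AR algorithm, and all names and the synthetic data are invented for the example.

```python
import numpy as np

def fit_power_law(X, v):
    """Fit v = alpha * prod_j X[:, j]**g_j by linear least squares,
    using the fact that log v = log alpha + sum_j g_j * log X_j."""
    A = np.column_stack([np.ones(len(v)), np.log(X)])
    coef, *_ = np.linalg.lstsq(A, np.log(v), rcond=None)
    return np.exp(coef[0]), coef[1:]   # alpha, exponents g

# Noise-free synthetic data: v = 2.0 * X1^0.5 * X2^-1.0
rng = np.random.default_rng(0)
X = rng.uniform(0.5, 2.0, size=(50, 2))
v = 2.0 * X[:, 0]**0.5 * X[:, 1]**-1.0
alpha, g = fit_power_law(X, v)
```

With noise-free data the regression recovers the rate constant and kinetic orders exactly; AR's contribution is iterating such linear fits so that the production and degradation terms of each equation are estimated alternately.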
342

Contributions to quantitative dynamic contrast-enhanced MRI

Garpebring, Anders, January 2011
Background: Dynamic contrast-enhanced MRI (DCE-MRI) has the potential to produce images of physiological quantities such as blood flow, blood vessel volume fraction, and blood vessel permeability. Such information is highly valuable, e.g., in oncology. The focus of this work was to improve the quantitative aspects of DCE-MRI in terms of better understanding of error sources and their effect on estimated physiological quantities. Methods: Firstly, a novel parameter estimation algorithm was developed to overcome a problem with sensitivity to the initial guess in parameter estimation with a specific pharmacokinetic model. Secondly, the accuracy of the arterial input function (AIF), i.e., the estimated arterial blood contrast agent concentration, was evaluated in a phantom environment for a standard magnitude-based AIF method commonly used in vivo. The accuracy was also evaluated in vivo for a phase-based method that has previously shown very promising results in phantoms and in animal studies. Finally, a method was developed for estimation of uncertainties in the estimated physiological quantities. Results: The new parameter estimation algorithm enabled significantly faster parameter estimation, thus making it more feasible to obtain blood flow and permeability maps from a DCE-MRI study. The evaluation of the AIF measurements revealed that inflow effects and non-ideal radiofrequency spoiling seriously degrade magnitude-based AIFs and that proper slice placement and improved signal models can reduce this effect. It was also shown that phase-based AIFs can be a feasible alternative provided that the observed difficulties in quantifying low concentrations can be resolved. The uncertainty estimation method was able to accurately quantify how a variety of different errors propagate to uncertainty in the estimated physiological quantities. Conclusion: This work contributes to a better understanding of parameter estimation and AIF quantification in DCE-MRI. 
The proposed uncertainty estimation method can be used to efficiently calculate uncertainties in the parametric maps obtained in DCE-MRI.
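For readers unfamiliar with DCE-MRI pharmacokinetics, the standard Tofts model is one widely used model relating the arterial input function Cp(t) to tissue concentration Ct(t); the abstract does not name the specific model used, so the following is an illustrative assumption, with a toy biexponential AIF invented for the example.

```python
import numpy as np

def tofts_concentration(t, cp, ktrans, ve):
    """Standard Tofts model evaluated by discrete convolution:
    Ct(t) = Ktrans * integral_0^t Cp(tau) * exp(-(Ktrans/ve)*(t-tau)) dtau
    on a uniform time grid t (units: minutes)."""
    dt = t[1] - t[0]
    kernel = np.exp(-(ktrans / ve) * t)
    return ktrans * np.convolve(cp, kernel)[:len(t)] * dt

t = np.linspace(0, 5, 501)           # minutes
cp = np.exp(-t) - np.exp(-3 * t)     # toy biexponential AIF (assumed)
ct = tofts_concentration(t, cp, ktrans=0.25, ve=0.3)
```

Fitting Ktrans and ve to measured Ct(t) voxel-by-voxel is the parameter-estimation step whose speed and uncertainty the thesis addresses; errors in cp propagate directly into the estimates, which is why AIF accuracy matters.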
343

Estimation of the parameters of stochastic differential equations

Jeisman, Joseph Ian, January 2006
Stochastic differential equations (SDEs) are central to much of modern finance theory and have been widely used to model the behaviour of key variables such as the instantaneous short-term interest rate, asset prices, asset returns and their volatility. The explanatory and/or predictive power of these models depends crucially on the particularisation of the model SDE(s) to real data through the choice of values for their parameters. In econometrics, optimal parameter estimates are generally considered to be those that maximise the likelihood of the sample. In the context of the estimation of the parameters of SDEs, however, a closed-form expression for the likelihood function is rarely available and hence exact maximum-likelihood (EML) estimation is usually infeasible. The key research problem examined in this thesis is the development of generic, accurate and computationally feasible estimation procedures based on the ML principle that can be implemented in the absence of a closed-form expression for the likelihood function. The overall recommendation to come out of the thesis is that an estimation procedure based on the finite-element solution of a reformulation of the Fokker-Planck equation in terms of the transitional cumulative distribution function (CDF) provides the best balance across all of the desired characteristics. The recommended approach involves the use of an interpolation technique proposed in this thesis which greatly reduces the required computational effort.
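The simplest approximate-ML baseline that such work improves upon is the Euler-Maruyama pseudo-likelihood, which replaces the unknown transition density with a Gaussian. The sketch below applies it to an Ornstein-Uhlenbeck process; it is a standard textbook baseline, not the thesis's recommended Fokker-Planck/CDF finite-element method, and all parameter values are invented for the example.

```python
import numpy as np
from scipy.optimize import minimize

def em_neg_log_likelihood(params, x, dt):
    """Approximate negative log-likelihood for the OU process
    dX = kappa*(theta - X) dt + sigma dW, using the Euler-Maruyama
    Gaussian approximation of each transition density."""
    kappa, theta, sigma = params
    if kappa <= 0 or sigma <= 0:
        return np.inf
    mean = x[:-1] + kappa * (theta - x[:-1]) * dt
    var = sigma**2 * dt
    resid = x[1:] - mean
    return 0.5 * np.sum(np.log(2 * np.pi * var) + resid**2 / var)

# Simulate a path with known parameters, then recover them
rng = np.random.default_rng(1)
dt, n = 0.01, 20000
kappa0, theta0, sigma0 = 2.0, 1.0, 0.5
x = np.empty(n); x[0] = theta0
for i in range(n - 1):
    x[i + 1] = (x[i] + kappa0 * (theta0 - x[i]) * dt
                + sigma0 * np.sqrt(dt) * rng.standard_normal())
res = minimize(em_neg_log_likelihood, x0=[1.0, 0.5, 1.0],
               args=(x, dt), method="Nelder-Mead")
```

The Gaussian approximation degrades as the sampling interval grows, which is precisely the regime where transitional-density (or transitional-CDF) solutions of the Fokker-Planck equation become attractive.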
344

Efficient Calibration and Predictive Error Analysis for Highly-Parameterized Models Combining Tikhonov and Subspace Regularization Techniques

Matthew James Tonkin, Unknown Date
The development and application of environmental models to help understand natural systems, and support decision making, is commonplace. A difficulty encountered in the development of such models is determining which physical and chemical processes to simulate, and on what temporal and spatial scale(s). Modern computing capabilities enable the incorporation of more processes, at increasingly refined scales, than at any time previously. However, the simulation of a large number of fine scale processes has undesirable consequences: first, the execution time of many environmental models has not declined despite advances in processor speed and solution techniques; and second, such complex models incorporate a large number of parameters, for which values must be assigned. Compounding these problems is the recognition that, since the inverse problem in groundwater modeling is non-unique, the calibration of a single parameter set does not assure the reliability of model predictions. Practicing modelers are, then, faced with complex models that incorporate a large number of parameters whose values are uncertain, and that make predictions that are prone to an unspecified amount of error. In recognition of this, there has been considerable research into methods for evaluating the potential for error in model predictions arising from errors in the values assigned to model parameters. Unfortunately, some common methods employed in the estimation of model parameters, and the evaluation of the potential error associated with model parameters and predictions, suffer from limitations in their application that stem from an emphasis on obtaining an over-determined, parsimonious, inverse problem. That is, common methods of model analysis exhibit artifacts from the propagation of subjective a priori parameter parsimony throughout the calibration and predictive error analyses.
This thesis describes theoretical and practical developments that enable the estimation of a large number of parameters, and the evaluation of the potential for error in predictions made by highly parameterized models. Since the focus of this research is on the use of models in support of decision making, the new methods are demonstrated by application to synthetic applications, where the performance of the method can be evaluated under controlled conditions; and to real-world applications, where the performance of the method can be evaluated in terms of trade-offs in computational effort versus calibration results and the ability to rigorously yet expediently investigate predictive error. The applications suggest that the new techniques are applicable to a range of environmental modeling disciplines. Mathematical innovations described in this thesis focus on combining complementary regularized inversion (calibration) techniques with novel methods for analyzing model predictive error. Several of the innovations are founded on explicit recognition of the existence of the calibration solution and null spaces – that is, that with the available observations there are some (combinations of) parameters that can be estimated; and there are some (combinations of) parameters that cannot. The existence of a non-trivial calibration null space is at the heart of the non-uniqueness problem in model calibration: this research expands upon this concept by recognizing that there are combinations of parameters that lie within the calibration null space yet possess non-trivial projections onto the predictive solution space, and these combinations of parameters are at the heart of predictive error analysis. The most significant contribution of this research is the attempt to develop a framework for model analysis that promotes computational efficiency in both the calibration and the subsequent analysis of the potential for error in model predictions. 
Fundamental to this framework is the use of a large number of parameters, the use of Tikhonov regularization, and the use of subspace techniques. Use of a large number of parameters enables parameter detail to be represented in the model at a scale approaching true variability; the use of Tikhonov constraints enables the modeler to incorporate preferred conditions on parameter values and/or their variation throughout the calibration and the predictive analysis; and, the use of subspace techniques enables model calibration and predictive analysis to be undertaken expediently, even when undertaken using a large number of parameters. This research focuses on the inability of the calibration process to accurately identify parameter values: it is assumed that the models in question accurately represent the relevant processes at the relevant scales so that parameter and predictive error depend only on parameter detail not represented in the model and/or accurately inferred through the calibration process. Contributions to parameter and predictive error arising from incorrect model identification are outside the scope of this research.
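The interplay of the subspace and Tikhonov ideas can be sketched concretely: project the inverse problem onto the leading singular vectors of the Jacobian (the calibration solution space) and apply damping there, leaving null-space parameter combinations at their preferred values. This is an illustrative toy combination of truncated SVD and Tikhonov damping under invented data, not the specific hybrid scheme developed in the thesis.

```python
import numpy as np

def regularized_solve(J, d, beta, k):
    """Solve J p ~ d for an under-determined problem by truncating to
    the leading k right singular vectors (the calibration solution
    space) and applying Tikhonov damping with weight beta there.
    Null-space components stay at their preferred (zero) values."""
    U, s, Vt = np.linalg.svd(J, full_matrices=False)
    s_k, U_k, V_k = s[:k], U[:, :k], Vt[:k].T
    coeffs = s_k * (U_k.T @ d) / (s_k**2 + beta)   # damped least squares
    return V_k @ coeffs

rng = np.random.default_rng(2)
J = rng.standard_normal((40, 100))   # far more parameters than data
p_true = rng.standard_normal(100)
d = J @ p_true
p_est = regularized_solve(J, d, beta=1e-8, k=40)
```

Here `p_est` reproduces the observations even though `p_true` itself is unrecoverable: the components of `p_true` lying in the 60-dimensional calibration null space leave no imprint on `d`, which is exactly the mechanism behind the predictive-error analysis described above.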
345

Modelling nonlinear time series using selection methods and information criteria

Nakamura, Tomomichi, January 2004
[Truncated abstract] Time series of natural phenomena usually show irregular fluctuations. Often we want to know the underlying system and to predict future phenomena. An effective way of tackling this task is by time series modelling. Originally, linear time series models were used. As it became apparent that nonlinear systems abound in nature, modelling techniques that take into account nonlinearity in time series were developed. A particularly convenient and general class of nonlinear models is the pseudolinear models, which are linear combinations of nonlinear functions. These models can be obtained by starting with a large dictionary of basis functions which one hopes will be able to describe any likely nonlinearity, selecting a small subset of it, and taking a linear combination of these to form the model. The major component of this thesis concerns how to build good models for nonlinear time series. In building such models, there are three important problems, broadly speaking. The first is how to select basis functions which reflect the peculiarities of the time series as much as possible. The second is how to fix the model size so that the models can reflect the underlying system of the data and the influences of noise included in the data are removed as much as possible. The third is how to provide good estimates for the parameters in the basis functions, considering that they may have significant bias when the noise included in the time series is significant relative to the nonlinearity. Although these problems are mentioned separately, they are strongly interconnected
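The first of the three problems, selecting a small subset of basis functions from a large dictionary, is often attacked by greedy forward selection: at each step, add the candidate that most reduces the residual sum of squares of the linear fit. The sketch below shows this generic scheme with an invented toy dictionary; it is one representative selection method, not necessarily the one developed in the thesis.

```python
import numpy as np

def forward_select(D, y, max_terms):
    """Greedy forward selection for a pseudolinear model: repeatedly
    add the dictionary column that most reduces the residual sum of
    squares of the least-squares fit."""
    chosen = []
    for _ in range(max_terms):
        best_j, best_rss = None, np.inf
        for j in range(D.shape[1]):
            if j in chosen:
                continue
            cols = D[:, chosen + [j]]
            coef, *_ = np.linalg.lstsq(cols, y, rcond=None)
            rss = np.sum((y - cols @ coef)**2)
            if rss < best_rss:
                best_j, best_rss = j, rss
        chosen.append(best_j)
    return chosen

# Toy dictionary of nonlinear basis functions of a scalar input x
x = np.linspace(-1, 1, 200)
D = np.column_stack([x, x**2, np.sin(3 * x), np.tanh(x), np.exp(x)])
y = 2.0 * x**2 - 0.5 * np.sin(3 * x)   # true model uses columns 1 and 2
picked = forward_select(D, y, max_terms=2)
```

Stopping this greedy search at the right model size is the second problem the abstract names, and is where information criteria enter: each added term is accepted only if the fit improvement outweighs the criterion's complexity penalty.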
346

Measurement selection and parameter estimation strategies for structural stiffness and mass updating using non-destructive test data

Javdekar, Chitra N., January 2004
Thesis (Ph.D.)--Tufts University, 2004. Adviser: Masoud Sanayei. Submitted to the Dept. of Civil Engineering. Includes bibliographical references (leaves 300-305). Access restricted to members of the Tufts University community. Also available via the World Wide Web.
347

Identification of stochastic continuous-time systems : algorithms, irregular sampling and Cramér-Rao bounds

Larsson, Erik, January 2004
Diss. Uppsala: Univ., 2004.
348

Model reduction and parameter estimation for diffusion systems

Bhikkaji, Bharath, January 2004
Diss. (summary) Uppsala: Univ., 2004. Includes 8 papers.
349

Statistical inference on binomial regression models in the presence of over-dispersion

Lorensu Hewa, Wimali Prasangika, January 1900
Thesis (M.Sc.)--Carleton University, 2008. Includes bibliographical references (p. 116-119). Also available in electronic format on the Internet.
350

Modelling the co-infection dynamics of HIV-1 and M. tuberculosis

Du Toit, Eben Francois, January 2008
Thesis (MEng (Electronic Engineering))--University of Pretoria, 2008. Includes bibliographical references.
