  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
141

Inverse Sturm-Liouville Systems Over The Whole Real Line

Altundag, Huseyin 01 November 2010 (has links) (PDF)
In this thesis we present a numerical algorithm to solve singular inverse Sturm-Liouville problems with symmetric potential functions. The singularity, which comes from the unbounded domain of the problem, is treated by considering the limiting case of the associated problem on a symmetric finite interval. In contrast to regular problems, which are posed on a finite interval, the singular inverse problem has an ill-conditioned structure despite this limiting treatment. We use regularization techniques to overcome the ill-posedness. Moreover, since the problem is nonlinear, iterative solution procedures are needed. Direct computation of the eigenvalues within each iteration is handled via pseudospectral methods. Numerical examples are given to illustrate the accuracy and convergence behaviour of the method.
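The direct eigenvalue computation at the heart of such an iterative scheme can be sketched as follows. This is a simple finite-difference stand-in for the pseudospectral discretization the thesis describes, using an illustrative symmetric potential (the harmonic oscillator, whose exact eigenvalues are 1, 3, 5, 7, ...); the truncation length and grid size are likewise illustrative choices:

```python
import numpy as np

# Sketch of the direct eigenvalue step inside an iterative inverse solver:
# approximate the lowest eigenvalues of  -u'' + q(x) u = lambda u  on a
# symmetric truncated interval [-L, L] with Dirichlet boundary conditions.
# Finite differences stand in for the pseudospectral method; q, L, and n
# are illustrative choices.

def sturm_liouville_eigs(q, L=10.0, n=400, k=4):
    """Return the k smallest eigenvalues of -u'' + q(x) u = lambda u."""
    x = np.linspace(-L, L, n + 2)[1:-1]      # interior grid points
    h = x[1] - x[0]
    # Tridiagonal -d^2/dx^2 plus the potential on the diagonal
    main = 2.0 / h**2 + q(x)
    off = -np.ones(n - 1) / h**2
    A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    return np.sort(np.linalg.eigvalsh(A))[:k]

# Symmetric harmonic-oscillator potential: exact eigenvalues 1, 3, 5, 7, ...
eigs = sturm_liouville_eigs(lambda x: x**2)
print(eigs)   # ≈ [1.0, 3.0, 5.0, 7.0]
```

In an inverse iteration, this forward solve would be called repeatedly while the potential `q` is updated to match target eigenvalues.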
142

Use Of Genetic Algorithm For Selection Of Regularization Parameters In Multiple Constraint Inverse Ecg Problem

Mazloumi Gavgani, Alireza 01 January 2011 (has links) (PDF)
The main goal in inverse and forward problems of electrocardiography (ECG) is to better understand the electrical activity of the heart. In the forward problem of ECG, one obtains the body surface potential (BSP) distribution (i.e., the measurements) when the electrical sources in the heart are assumed to be known. The result is a mathematical model that relates the sources to the measurements. In the inverse problem of ECG, the unknown cardiac electrical sources are estimated from the BSP measurements and the mathematical model of the torso. The inverse problem of ECG is ill-posed, and regularization should be applied in order to obtain a good solution. Tikhonov regularization is a well-known method, which introduces a trade-off between how well the solution fits the measurements and how well the constraints on the solution are satisfied. This trade-off is controlled by a regularization parameter, which can be easily calculated by the L-curve method. It is theoretically possible to include more than one constraint in the cost function; however, finding a separate regularization parameter for each constraint is a challenging problem. The aim of this thesis is to use the genetic algorithm (GA) optimization method to obtain regularization parameters for solving the inverse ECG problem when multiple constraints are used for regularization. Results are presented for two spatial constraints, for one spatial and one temporal constraint, and for two spatial and one temporal constraints; the performances of these three applications are compared to Tikhonov regularization results and to each other. In conclusion, it is possible to obtain correct regularization parameters using the GA method, and using more than one constraint yields improvements in the results.
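The idea of searching for several regularization parameters with a GA can be sketched on a toy problem. Everything below (the forward operator, constraints, and GA settings) is an illustrative stand-in, not the thesis's actual ECG model, and the fitness is scored against a known truth purely for demonstration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy multi-constraint Tikhonov regularization,
#     x = argmin ||A x - b||^2 + l1*||R1 x||^2 + l2*||R2 x||^2,
# with a miniature genetic algorithm (selection + mutation) searching
# (l1, l2) in log space. A, b, R1, R2 and all GA settings are
# illustrative stand-ins for the inverse ECG setup.

n = 40
A = np.tril(np.ones((n, n))) / n                  # smoothing, ill-conditioned forward operator
x_true = np.sin(np.linspace(0, 3 * np.pi, n))     # "source" to recover
b = A @ x_true + 1e-3 * rng.standard_normal(n)    # noisy "measurements"

R1 = np.eye(n)                                    # amplitude (zero-order) constraint
R2 = np.diff(np.eye(n), axis=0)                   # smoothness (first-difference) constraint

def solve(genes):
    l1, l2 = np.exp(genes)                        # genes are log regularization parameters
    M = A.T @ A + l1 * R1.T @ R1 + l2 * R2.T @ R2
    return np.linalg.solve(M, A.T @ b)

def fitness(genes):
    # For illustration we score against the known truth; in practice a
    # proxy such as cross-validation error would be used instead.
    return -np.linalg.norm(solve(genes) - x_true)

pop = rng.uniform(-14, 0, size=(20, 2))           # initial population of (log l1, log l2)
for _ in range(30):
    scores = np.array([fitness(p) for p in pop])
    elite = pop[np.argsort(scores)[-5:]]          # keep the 5 fittest
    children = elite[rng.integers(0, 5, size=15)] + 0.3 * rng.standard_normal((15, 2))
    pop = np.vstack([elite, children])            # next generation

best = max(pop, key=fitness)
rel_err = np.linalg.norm(solve(best) - x_true) / np.linalg.norm(x_true)
print(f"relative error with GA-selected parameters: {rel_err:.3f}")
```

The point of the sketch is that the GA only ever evaluates a scalar fitness per parameter pair, so it extends naturally from two regularization parameters to three or more, which is where L-curve-style methods become awkward.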
143

Statistical validation and calibration of computer models

Liu, Xuyuan 21 January 2011 (has links)
This thesis deals with modeling, validation, and calibration problems in experiments with computer models. Computer models are mathematical representations of real systems developed for understanding and investigating those systems. Before a computer model is used, it often needs to be validated by comparing the computer outputs with physical observations and calibrated by adjusting internal model parameters in order to improve the agreement between the computer outputs and physical observations. As computer models become more powerful and popular, the complexity of input and output data raises new computational challenges and stimulates the development of novel statistical modeling methods. One challenge is to deal with computer models with random inputs (random effects). Such computer models are very common in engineering applications. For example, in a thermal experiment at Sandia National Laboratories (Dowding et al. 2008), the volumetric heat capacity and thermal conductivity are random input variables. If input variables are randomly sampled from particular distributions with unknown parameters, the existing methods in the literature are not directly applicable, because integration over the random variable distribution is needed for the joint likelihood and the integration cannot always be expressed in closed form. In this research, we propose a new approach that combines the nonlinear mixed effects model and the Gaussian process model (kriging model). Different model formulations are also studied, using the thermal problem, to gain a better understanding of validation and calibration activities. Another challenge comes from computer models with functional outputs. While many methods have been developed for modeling computer experiments with a single response, the literature on modeling computer experiments with functional responses is sparse.
Dimension reduction techniques can be used to overcome the complexity of functional responses; however, they generally involve two steps. Models are first fit at each individual setting of the input to reduce the dimensionality of the functional data, and the estimated parameters of the models are then treated as new responses, which are further modeled for prediction. Alternatively, pointwise models are first constructed at each time point and functional curves are then fit to the parameter estimates obtained from the fitted models. In this research, we first propose a functional regression model that relates functional responses to both design and time variables in one single step. Secondly, we propose a functional kriging model that uses variable selection methods by imposing a penalty function. We show that the proposed model performs better than dimension-reduction-based approaches and the kriging model without regularization. In addition, non-asymptotic theoretical bounds on the estimation error are presented.
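The kriging surrogate underlying this kind of work can be sketched in a few lines. The kernel, its hyperparameters, and the toy "computer model" below are illustrative assumptions, not the thesis's actual formulation:

```python
import numpy as np

# Minimal kriging (Gaussian-process regression) sketch of a surrogate
# emulating a deterministic computer code. The RBF kernel, its
# hyperparameters, and the test function are illustrative choices.

def rbf(X1, X2, length=0.3, var=1.0):
    d2 = (X1[:, None] - X2[None, :]) ** 2
    return var * np.exp(-0.5 * d2 / length**2)

def krige(X, y, Xs, noise=1e-8):
    K = rbf(X, X) + noise * np.eye(len(X))      # training covariance (jittered)
    Ks = rbf(Xs, X)                             # test/train covariance
    alpha = np.linalg.solve(K, y)
    mean = Ks @ alpha                           # posterior (predictive) mean
    var = rbf(Xs, Xs).diagonal() - np.einsum('ij,ji->i', Ks, np.linalg.solve(K, Ks.T))
    return mean, var

X = np.linspace(0, 1, 8)                        # 8 "computer model" runs
y = np.sin(2 * np.pi * X)                       # deterministic code output
Xs = np.linspace(0, 1, 50)                      # prediction grid
mean, var = krige(X, y, Xs)
print(np.max(np.abs(mean - np.sin(2 * np.pi * Xs))))   # emulation error
```

Because the code is deterministic, the surrogate interpolates the runs (up to the jitter), and the posterior variance indicates where additional runs would be most informative.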
144

Pontryagin approximations for optimal design

Carlsson, Jesper January 2006 (has links)
<p>This thesis concerns the approximation of optimally controlled partial differential equations for applications in optimal design and reconstruction. Such optimal control problems are often ill-posed and need to be regularized to obtain good approximations. We here use the theory of the corresponding Hamilton-Jacobi-Bellman equations to construct regularizations and derive error estimates for optimal design problems. The constructed Pontryagin method is a simple and general method where the first, analytical, step is to regularize the Hamiltonian. Next, its stationary Hamiltonian system, a nonlinear partial differential equation, is computed efficiently with the Newton method using a sparse Jacobian. An error estimate for the difference between exact and approximate objective functions is derived, depending only on the difference of the Hamiltonian and its finite dimensional regularization along the solution path and its <em>L</em><sup>2</sup> projection, i.e. not on the difference of the exact and approximate solutions to the Hamiltonian systems. In the thesis we present solutions to applications such as optimal design and reconstruction of conducting materials and elastic structures.</p>
145

Regularization of Parameter Problems for Dynamic Beam Models

Rydström, Sara January 2010 (has links)
<p>The field of inverse problems is an area of applied mathematics that is of great importance in several scientific and industrial applications. Since an inverse problem is typically founded on non-linear and ill-posed models, it is very difficult to solve. To find a regularized solution it is crucial to have <em>a priori</em> information about the solution; therefore, general theories are not sufficient for new applications.</p><p>In this thesis we consider the inverse problem of determining the beam bending stiffness from measurements of the transverse dynamic displacement. Of special interest is localizing parts with reduced bending stiffness. Driven by requirements in the wood industry, it is not enough to consider time-efficient algorithms; the models must also be adapted to manage extremely short calculation times.</p><p>To develop efficient methods, inverse problems based on the fourth-order Euler-Bernoulli beam equation and the second-order string equation are studied. Important results are the transformation of a nonlinear regularization problem into a linear one and a convex procedure for finding parts with reduced bending stiffness.</p>
146

Ill-posedness of parameter estimation in jump diffusion processes

Düvelmeyer, Dana, Hofmann, Bernd 25 August 2004 (has links) (PDF)
In this paper, we consider as an inverse problem the simultaneous estimation of the five parameters of a jump diffusion process from return observations of a price trajectory. We show that ill-posedness phenomena occur in this parameter estimation problem, because the forward operator fails to be injective and small perturbations in the data may lead to large changes in the solution. We illustrate the instability effect with a numerical case study. To overcome the difficulty arising from ill-posedness we use a multi-parameter regularization approach that finds a trade-off between a least-squares approach based on empirical densities and a fitting of semi-invariants. In this context, a fixed point iteration is proposed that provides good results for the example considered in the case study.
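To make the setup concrete, the sketch below simulates returns of a Merton-type jump diffusion and checks the first two semi-invariants (cumulants) against their closed forms. The parameter values and discretization are illustrative, not those of the paper's case study:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate per-step returns of a jump diffusion with drift mu, volatility
# sigma, jump intensity lmbda, and normally distributed jump sizes N(m, d^2).
# All parameter values here are illustrative.

def jump_diffusion_returns(mu, sigma, lmbda, m, d, dt=1/252, n=200_000):
    diffusion = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n)
    n_jumps = rng.poisson(lmbda * dt, n)          # jump counts per step
    # Sum of n_jumps iid N(m, d^2) variables, drawn exactly:
    jumps = m * n_jumps + d * np.sqrt(n_jumps) * rng.standard_normal(n)
    return diffusion + jumps

r = jump_diffusion_returns(mu=0.05, sigma=0.2, lmbda=5.0, m=-0.02, d=0.05)

# First two semi-invariants in closed form vs. the sample estimates:
dt = 1 / 252
mean_theory = (0.05 - 0.5 * 0.2**2) * dt + 5.0 * (-0.02) * dt
var_theory = 0.2**2 * dt + 5.0 * (0.05**2 + 0.02**2) * dt
print(r.mean(), mean_theory)
print(r.var(), var_theory)
```

The semi-invariant fit in the paper's regularization approach matches such closed-form expressions to their sample counterparts; the simulation makes visible how noisy those sample estimates are, which is part of where the instability comes from.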
147

Parameter estimation in a generalized bivariate Ornstein-Uhlenbeck model

Krämer, Romy, Richter, Matthias, Hofmann, Bernd 07 October 2005 (has links) (PDF)
In this paper, we consider the inverse problem of calibrating a generalization of the bivariate Ornstein-Uhlenbeck model introduced by Lo and Wang. Even though the generalized Black-Scholes option pricing formula still holds, option prices change in comparison to the classical Black-Scholes model. The time-dependent volatility function and the other (real-valued) parameters in the model are calibrated simultaneously from option price data and from some empirical moments of the logarithmic returns. This gives an ill-posed inverse problem, which requires a regularization approach. Applying the theory of Engl, Hanke and Neubauer concerning Tikhonov regularization we show convergence of the regularized solution to the true data and study the form of source conditions which ensure convergence rates.
148

Towards Improved Identification of Spatially-Distributed Rainfall Runoff Models

Pokhrel, Prafulla January 2010 (has links)
Distributed rainfall runoff hydrologic models can be highly effective in improving flood forecasting capabilities at ungauged, interior locations of the watershed. However, their implementation in operational decision-making is hindered by the high dimensionality of the state-parameter space and by lack of methods/understanding on how to properly exploit and incorporate available spatio-temporal information about the system. This dissertation is composed of a sequence of five studies, whose overall goal is to improve understanding of problems relating to parameter identifiability in distributed models and to develop methodologies for their calibration. The first study proposes and investigates an approach for calibrating catchment-scale distributed rainfall-runoff models using conventionally available data. The process, called regularization, uses spatial information about soils and land-use that is embedded in prior parameter estimates (Koren et al. 2000) and knowledge of watershed characteristics, to constrain and reduce the dimensionality of the feasible parameter space. The methodology is further extended in the second and third studies to improve extraction of 'hydrologically relevant' information from the observed streamflow hydrograph. Hydrological relevance is provided by using signature measures (Yilmaz et al. 2008) that correspond to major watershed functions. While the second study applies a manual selection procedure to constrain parameter sets from the subset of post-calibrated solutions, the third develops an automatic procedure based on a penalty function optimization approach. The fourth paper investigates the relative impact of the commonly used multiplier approach to distributed model calibration, in comparison with other spatial regularization strategies, and also investigates whether calibration to data at the catchment outlet can provide improved performance at interior locations.
The model calibration study conducted for three mid-sized catchments in the US led to the important finding that basin outlet hydrographs might not generally contain information regarding spatial variability of the parameters, and that calibration of the overall mean of the spatially distributed parameter fields may be sufficient for flow forecasting at the outlet. This motivated the fifth paper, which investigates to what degree the spatial characteristics of parameter and rainfall fields are observable in catchment outlet hydrographs.
149

Regularization methods for prediction in dynamic graphs and e-marketing applications

Richard, Émile 21 November 2012 (has links) (PDF)
Predicting connections among objects, based either on a noisy observation or on a sequence of observations, is a problem of interest for numerous applications such as recommender systems for e-commerce and social networks, and also in systems biology, for inferring interaction patterns among proteins. This work formulates the graph prediction problem, in both dynamic and static scenarios, as a regularization problem. In the static scenario we encode a mixture of two different kinds of structural assumptions in a convex penalty involving the L1 and the trace norm. In the dynamic setting we assume that certain graph features, such as the node degree, follow a vector autoregressive model, and we propose to use this information to improve the accuracy of prediction. The solutions of the optimization problems are studied from both an algorithmic and a statistical point of view. Empirical evidence on synthetic and real data is presented showing the benefit of the suggested methods.
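For intuition, the trace-norm part of such a penalty is typically handled through its proximal operator, singular-value soft-thresholding. The sketch below (with illustrative matrix sizes and threshold, unrelated to the thesis's data) shows it producing a low-rank estimate from a noisy score matrix:

```python
import numpy as np

# Singular-value soft-thresholding: the proximal operator of tau*||X||_*
# (the trace/nuclear norm), the core step in proximal-gradient methods
# for trace-norm regularized problems. Sizes and tau are illustrative.

def svt(X, tau):
    """prox of tau * nuclear norm: soft-threshold the singular values."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 3)) @ rng.standard_normal((3, 8))  # rank-3 "link score" matrix
noisy = A + 0.1 * rng.standard_normal((8, 8))                  # noisy observation (full rank)
denoised = svt(noisy, tau=1.0)
print(np.linalg.matrix_rank(denoised, tol=1e-8))  # the threshold cuts the noise directions
```

Combining this step with elementwise soft-thresholding (the prox of the L1 term) is the standard way to handle mixed L1 plus trace-norm penalties of the kind described above.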
150

Application of stable signal recovery to seismic interpolation

Hennenfent, Gilles, Herrmann, Felix J. January 2006 (has links)
We propose a method for seismic data interpolation based on 1) the reformulation of the problem as a stable signal recovery problem and 2) the fact that seismic data are sparsely represented by curvelets. This method does not require information on the seismic velocities. Most importantly, this formulation potentially leads to an explicit recovery condition. We also propose a large-scale solver for the l1-regularized minimization involved in the recovery, and successfully illustrate the performance of our algorithm on 2D synthetic and real examples.
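As a toy analogue of this recovery formulation (with the curvelet transform replaced by an orthonormal DCT, and all sizes, sampling rates, and the threshold chosen for illustration), the sparsity-promoting interpolation can be sketched with the iterative soft-thresholding algorithm (ISTA):

```python
import numpy as np

# Toy analogue of sparsity-promoting seismic interpolation: recover a
# signal that is sparse in an orthonormal DCT basis from a random subset
# of its samples, by solving
#     min_c 0.5*||A c - b||^2 + lam*||c||_1,  A = subsampled synthesis matrix,
# with ISTA. The DCT stands in for the curvelet transform; sizes,
# sampling rate, and lam are illustrative.

rng = np.random.default_rng(3)
N = 128
n = np.arange(N)
T = np.sqrt(2.0 / N) * np.cos(np.pi * (n[:, None] + 0.5) * n[None, :] / N)
T[:, 0] /= np.sqrt(2.0)                      # orthonormal DCT-II synthesis matrix

c_true = np.zeros(N)
c_true[[3, 17, 40]] = [2.0, -1.5, 1.0]       # sparse coefficients ("curvelet side")
signal = T @ c_true                          # fully sampled "trace"

mask = np.sort(rng.choice(N, size=60, replace=False))   # acquired sample positions
A = T[mask, :]                               # measurement operator
b = signal[mask]                             # observed (decimated) data

lam, c = 0.01, np.zeros(N)
for _ in range(500):
    c = c + A.T @ (b - A @ c)                            # gradient step (||A|| <= 1)
    c = np.sign(c) * np.maximum(np.abs(c) - lam, 0.0)    # soft-thresholding

recon = T @ c                                # interpolated trace
print(np.linalg.norm(recon - signal) / np.linalg.norm(signal))
```

The step size of 1 is valid here because the rows of `A` come from an orthogonal matrix, so the Lipschitz constant of the data-fit gradient is at most 1; a practical curvelet-domain solver applies the same iteration with fast transforms in place of the explicit matrices.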
