1 |
Simulation of Batch Thickening Phenomenon for Young Sediments. Tiwari, Brajesh Kumar, 04 January 2005 (has links)
The present study develops a MATLAB version of a FORTRAN computer program written by Papanicolaou (1992) to solve the governing small strain consolidation equation, a second order non-linear transient partial differential equation of parabolic type. The program is modified to integrate the settling and consolidation processes so that a single MATLAB run provides continuous results from the start to the end of the process. The study also proposes a method to calculate the batch curve that accounts for the variation of solids concentration in the suspension region. Instead of the graphical approach available in the literature, the program uses a numerical approach (the Newton-Raphson method) to calculate the solids concentration in the suspension region at the interface of the suspension and sedimentation regions. This method uses the empirical relationship between solids flux and solids concentration. The study further proposes a method to calculate the solids concentration throughout the settling column using the concept of characteristics. The present work also simulates the large strain consolidation model of Gutierrez (2003); its results closely match those of the small strain model (Diplas & Papanicolaou, 1997) available in the literature. / Master of Science
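The Newton-Raphson interface solve described above can be sketched as follows. The flux law used here, f(C) = v0·C·exp(-k·C), and its constants are invented stand-ins for the thesis's empirical flux-concentration relationship; the root sought is the suspension concentration at which the flux condition at the interface is met.

```python
import math

def newton_raphson(g, dg, x0, tol=1e-10, max_iter=50):
    """Find a root of g by Newton-Raphson iteration."""
    x = x0
    for _ in range(max_iter):
        step = g(x) / dg(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("Newton-Raphson did not converge")

# Hypothetical Kynch-type flux law f(C) = v0*C*exp(-k*C); constants invented.
v0, k = 1.0, 10.0
f = lambda c: v0 * c * math.exp(-k * c)                 # solids flux
df = lambda c: v0 * math.exp(-k * c) * (1.0 - k * c)    # its derivative
q = 0.02        # flux continuity condition at the suspension interface

c_star = newton_raphson(lambda c: f(c) - q, df, x0=0.05)
```

Starting the iteration below the flux maximum (here C = 1/k) selects the dilute-side root, matching the suspension-region branch.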
|
2 |
Using regularization for error reduction in GRACE gravity estimation. Save, Himanshu Vijay, 02 June 2010 (has links)
The Gravity Recovery and Climate Experiment (GRACE) is a joint
National Aeronautics and Space Administration / Deutsches Zentrum für Luft-
und Raumfahrt (NASA/DLR) mission to map the time-variable and mean
gravity field of the Earth, launched on March 17, 2002. The nature
of the gravity field inverse problem amplifies the noise in the data, which creeps
into the mid and high degree and order harmonic coefficients of the Earth's
monthly gravity fields, making the GRACE estimation problem
ill-posed. These errors, due to the use of imperfect models and data noise,
manifest themselves in the gravity estimates as north-south striping
in the monthly global maps of equivalent water heights.
In order to reduce these errors, this study develops a methodology
based on the Tikhonov regularization technique, using the L-curve method in combination
with an orthogonal transformation method. The L-curve is a popular aid for determining a suitable value of the regularization parameter when solving
linear discrete ill-posed problems with Tikhonov regularization. However, the
computational effort required to determine the L-curve can be prohibitive for
a large-scale problem like GRACE. This study implements a parameter-choice
method using Lanczos bidiagonalization, a computationally inexpensive
approximation to the L-curve called the L-ribbon. This method projects the large
estimation problem onto a problem about two orders of magnitude
smaller. Using knowledge of the characteristics of the systematic errors in
the GRACE solutions, this study designs a new regularization matrix that reduces
the systematic errors without attenuating the signal. The regularization
matrix constrains the geopotential coefficients as a function of their
degree and order. The regularization algorithms are implemented in a parallel
computing environment for this study. A five-year time series of the candidate
regularized solutions shows markedly reduced systematic errors without any
reduction in the variability signal compared to the unconstrained solutions.
The variability signals in the regularized series show good agreement with
hydrological models in small and medium-sized river basins and also show
non-seasonal signals in the oceans without the need for post-processing. / text
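The L-curve idea described above can be illustrated on a toy problem. This sketch uses a small Hilbert matrix as a generic ill-posed operator (not the GRACE normal equations) and picks the corner of the log-log residual-norm vs. solution-norm curve by maximum curvature; the noise level and grid are invented for illustration.

```python
import numpy as np

n = 12
A = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1.0)  # Hilbert matrix
rng = np.random.default_rng(0)
x_true = np.ones(n)
b = A @ x_true + 1e-4 * rng.standard_normal(n)

U, s, Vt = np.linalg.svd(A)
beta = U.T @ b
lams = np.logspace(-12, 0, 200)
rho, eta = [], []                          # residual norm, solution norm
for lam in lams:
    phi = s**2 / (s**2 + lam**2)           # Tikhonov filter factors
    x = Vt.T @ (phi * beta / s)
    rho.append(np.linalg.norm(A @ x - b))
    eta.append(np.linalg.norm(x))

# Corner of the log-log L-curve = point of maximum curvature (ends excluded).
lr, le = np.log(rho), np.log(eta)
d1r, d1e = np.gradient(lr), np.gradient(le)
d2r, d2e = np.gradient(d1r), np.gradient(d1e)
kappa = (d1r * d2e - d2r * d1e) / (d1r**2 + d1e**2) ** 1.5
i_corner = np.argmax(np.abs(kappa[5:-5])) + 5
lam_corner = lams[i_corner]

phi_c = s**2 / (s**2 + lam_corner**2)
x_reg = Vt.T @ (phi_c * beta / s)          # regularized solution
```

The expensive part for a GRACE-sized problem is evaluating many points of this curve, which is what the Lanczos-based L-ribbon approximation avoids.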
|
3 |
Data-driven estimation for Aalen's additive risk model. Boruvka, Audrey, 02 August 2007 (has links)
The proportional hazards model developed by Cox (1972) is by far the most widely used method for regression analysis of censored survival data. Application of the Cox model to more general event history data has become possible through extensions using counting process theory (e.g., Andersen and Borgan (1985), Therneau and Grambsch (2000)). With its development based entirely on counting processes, Aalen’s additive risk model offers a flexible, nonparametric alternative. Ordinary least squares, weighted least squares and ridge regression have been proposed in the literature as estimation schemes for Aalen’s model (Aalen (1989), Huffer and McKeague (1991), Aalen et al. (2004)). This thesis develops data-driven parameter selection criteria for the weighted least squares and ridge estimators. Using simulated survival data, these new methods are evaluated against existing approaches. A survey of the literature on the additive risk model and a demonstration of its application to real data sets are also provided. / Thesis (Master, Mathematics & Statistics) -- Queen's University, 2007-07-18 22:13:13.243
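The flavor of such data-driven parameter selection can be illustrated with ordinary ridge regression, choosing the ridge parameter by generalized cross-validation (GCV) on synthetic data. This is a generic sketch for a plain linear model, not the weighted least squares or ridge estimator for Aalen's counting-process setting; all dimensions and noise levels are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 80, 10
X = rng.standard_normal((n, p))
beta_true = rng.standard_normal(p)
y = X @ beta_true + 0.5 * rng.standard_normal(n)

def gcv_score(lam):
    """GCV(lam) = n * ||y - H_lam y||^2 / trace(I - H_lam)^2."""
    H = X @ np.linalg.solve(X.T @ X + lam * np.eye(p), X.T)  # hat matrix
    resid = y - H @ y
    return n * (resid @ resid) / (n - np.trace(H)) ** 2

lams = np.logspace(-4, 3, 60)
lam_gcv = min(lams, key=gcv_score)                 # data-driven choice
beta_ridge = np.linalg.solve(X.T @ X + lam_gcv * np.eye(p), X.T @ y)
```

GCV needs no estimate of the noise variance, which is one reason criteria of this type are attractive for automatic, data-driven tuning.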
|
4 |
A general L-curve technique for ill-conditioned inverse problems based on the Cramer-Rao lower bound. Kattuparambil Sreenivasan, Sruthi; Farooqi, Simrah, January 2024 (has links)
This project concerns statistical methods for finding the unknown parameters of a model: a statistical investigation of the algorithm with respect to accuracy (the Cramer-Rao bound and the L-curve technique) and optimization of the algorithmic parameters. The aim is to estimate the true (final) temperature of a liquid in a container from initial readings of a temperature probe with a known time constant; that is, the final temperature of the liquid is estimated before the probe reaches its final reading. The probe obeys a simple first-order differential equation model. Based on the probe model and the measurement data, an estimate of the true temperature in the container was calculated using a maximum likelihood approach to parameter estimation. The initial temperature was also investigated. Modelling, analysis, calculations, and simulations of the problem were explored.
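For a first-order probe with known time constant tau, the reading follows T(t) = Tf + (T0 - Tf)·exp(-t/tau), which is linear in (Tf, T0); under i.i.d. Gaussian noise the maximum likelihood estimate is then an ordinary least squares fit. The sketch below uses invented values for tau, the temperatures, and the noise level, and is only a minimal illustration of the estimation idea.

```python
import numpy as np

tau = 5.0                        # known probe time constant [s] (invented)
T0, Tf = 20.0, 80.0              # "true" initial and final temperatures (invented)
rng = np.random.default_rng(2)
t = np.linspace(0.0, 4.0, 30)    # readings taken well before the probe settles
y = Tf + (T0 - Tf) * np.exp(-t / tau) + 0.1 * rng.standard_normal(t.size)

# First-order probe model, rewritten linearly in the unknowns (Tf, T0):
#   T(t) = Tf * (1 - exp(-t/tau)) + T0 * exp(-t/tau)
E = np.exp(-t / tau)
M = np.column_stack([1.0 - E, E])
Tf_hat, T0_hat = np.linalg.lstsq(M, y, rcond=None)[0]
```

Note that the data span less than one time constant, so the final temperature is genuinely extrapolated, which is exactly the setting the project studies.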
|
5 |
Parâmetro de regularização em problemas inversos: estudo numérico com a transformada de Radon. Pereira, Ivanildo Freire, 20 September 2013 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / In general, an inverse problem corresponds to finding the value of an element x in a suitable vector space, given a vector y that measures it in some sense. When we discretize the problem, it usually boils down to solving an equation system f(x) = y, where f : U ⊂ R^m → R^n represents the model function on a domain U of R^m. As a general rule, we arrive at an ill-posed problem. The resolution of inverse problems has been widely researched over the last decades, because many problems in science and industry consist in determining unknowns that we try to recover by observing their effects through certain indirect measurements.
The general subject of this dissertation is the choice of the Tikhonov regularization parameter for a poorly conditioned linear problem, discussed in Chapter 1, focusing on the three most popular methods in the current literature of the area. The more specific focus consists in the simulations reported in Chapter 2, which aim to compare the performance of the three methods in the recovery of images measured with the Radon transform and perturbed by additive i.i.d. Gaussian noise. We chose a difference operator as the regularizer of the problem.
The contribution we try to make in this dissertation consists mainly in the discussion of the numerical simulations we perform, presented in Chapter 2. We understand that the value of this dissertation lies much more in the questions it raises than in saying anything definitive about the subject: partly because it is based on numerical experiments with no new mathematical results attached, and partly because those experiments involve a single operator. On the other hand, given the literature of the area, the simulations yielded some observations that seemed interesting to us. In particular, we highlight the observations, summarized at the conclusion of this work, about the different vocations of methods such as GCV and the L-curve, and about the tendency of the optimal parameters observed for the L-curve method to cluster in a small interval, strongly correlated with the behavior of the generalized singular value decomposition curve of the operators involved, under reasonably broad regularity conditions on the images to be recovered. / Inverse problems usually reduce to solving some equation of the type f(x) = b, where each equation f_i(x) = b_i can be thought of as one measurement of a datum x to be recovered. They are usually ill-posed, in the sense that the corresponding equations may have no exact solution, may have many solutions, or, most commonly, may have solutions that are highly unstable under noise in the acquisition of b. There are several ways to regularize the solution of such problems, and the most popular is Tikhonov's, which corresponds to:
Minimize ||f(x) − b||² + λ ||L(x − x₀)||²   (I)
The intended regularization consists in choosing the parameter λ so that problem (I) has solutions that are stable under perturbations of b and that approximate solutions of the usual least squares problem as λ → 0. The first term of (I) represents the fit to the data, and the second term penalizes the solution so as to regularize the problem and produce a solution stable to noise. If λ = 0, we are simply seeking a least squares solution of the problem, which is usually insufficient for ill-posed problems. The added regularization term introduces a bias into the solution by penalizing the fit with an additional term. If L is the identity, for example, we are betting that the solution is relatively close to x₀; if L is the gradient operator, we are betting that the solution x is reasonably smooth. In applications, L is usually chosen as an operator adapted to the problem under study, so as to exploit whatever a priori information is available about the sought solutions.
The choice of the parameter λ > 0 is crucial in these methods: if λ is too large, the fit to the data is weakened excessively, pushing the solution toward x₀; if λ is too small, the intended regularization does not take place, and the solution of problem (I) usually ends up highly unstable and contaminated by noise. Several techniques are available in the literature for this choice, especially when f is a linear function f(x) = Ax. The aim of this dissertation is to study some of these techniques for adjusting the parameter λ in the case of discretized operators, that is, x in R^n. In particular, we highlight the parameter-choice methods known in the literature as the L-curve, GCV, and the discrepancy method, and we compare them in tests performed with the Radon transform, using a first-order derivative operator as regularizer. The results of the tests reveal interesting points in the relation between the different estimators of the regularization parameter, suggesting a theoretical deepening beyond the scope of this dissertation.
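Of the three criteria compared, the discrepancy principle is the simplest to sketch: choose λ so that the residual of the Tikhonov solution matches the noise level. The linear example below uses an invented row-normalized Gaussian-blur operator in place of the Radon transform, x₀ = 0, and the first-order difference operator as L, as in the thesis; all sizes and noise levels are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 40
i = np.arange(n)
# Invented forward operator: a row-normalized Gaussian blur (stand-in for Radon).
A = np.exp(-0.5 * ((i[:, None] - i[None, :]) / 2.0) ** 2)
A /= A.sum(axis=1, keepdims=True)
L = np.diff(np.eye(n), axis=0)          # first-order difference operator

x_true = np.sin(2.0 * np.pi * i / n)
sigma = 1e-3
b = A @ x_true + sigma * rng.standard_normal(n)
delta = sigma * np.sqrt(n)              # estimate of the noise norm

def tikhonov(lam):
    """Solve min ||Ax - b||^2 + lam ||Lx||^2 and return (residual norm, x)."""
    x = np.linalg.solve(A.T @ A + lam * (L.T @ L), A.T @ b)
    return np.linalg.norm(A @ x - b), x

# The residual grows monotonically with lam: bisect on log10(lam) until
# ||A x_lam - b|| crosses delta.
lo, hi = -12.0, 2.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if tikhonov(10.0 ** mid)[0] < delta:
        lo = mid
    else:
        hi = mid
lam_dp = 10.0 ** lo
r_dp, x_dp = tikhonov(lam_dp)
```

Unlike GCV and the L-curve, the discrepancy method needs an explicit noise-level estimate delta, which is the main practical difference between the three criteria compared here.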
|
6 |
Building and Evaluating a 3D Scanning System for Measurements and Estimation of Antennas and Propagation Channels. Aagaard Fransson, Erik Johannes; Wall-Horgen, Tobias, January 2012 (has links)
Wireless communications rely, among other things, on the understanding of the properties of the radio propagation channel, the antennas and their interplay. Adequate measurements are required to verify theoretical models and to gain knowledge of the channel behavior and antenna performance. As a result of this master thesis we built a 3D field scanner measurement system to predict multipath propagation and to measure antenna characteristics. The 3D scanner allows measuring a signal at the point of interest along a line, on a surface or within a volume in space. In order to evaluate the system, we have performed narrowband channel sounding measurements of the spatial distribution of waves impinging at an imaginary spherical sector. Data was used to estimate the Angle-of-Arrivals (AoA) and amplitude of the waves. An estimation method is presented to solve the resulting inverse problem by means of regularization with truncated singular value decomposition. The regularized solution was then further improved with the help of a successive interference cancellation algorithm. Before applying the method to measurement data, it was tested on synthetic data to evaluate its performance as a function of the noise level and the number of impinging waves. In order to minimize estimation errors it was also required to find the phase center of the horn antenna used in the channel measurements. The task was accomplished by direct measurements and by the regularization method, both results being in good agreement.
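Regularization by truncated singular value decomposition, the core of the estimation method above, can be sketched generically: invert only the singular components that stand above the noise and discard the rest. The operator below is a synthetic matrix with prescribed singular-value decay, not the actual wave model of the measurement setup, and the truncation level is chosen by hand for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)
m, n = 60, 40
# Synthetic operator with rapidly decaying singular values s_k = 10^-k.
U, _ = np.linalg.qr(rng.standard_normal((m, m)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = 10.0 ** -np.arange(n, dtype=float)
A = (U[:, :n] * s) @ V.T
x_true = rng.standard_normal(n)
b = A @ x_true + 1e-6 * rng.standard_normal(m)

def tsvd_solve(A, b, k):
    """Invert only the k largest singular components; the rest are truncated."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt[:k].T @ ((U[:, :k].T @ b) / s[:k])

# Keep components whose singular values lie above the noise level (~1e-6).
x_k = tsvd_solve(A, b, k=6)
```

Components with s_k below the noise floor would amplify noise by 1/s_k if inverted, which is why truncation stabilizes the solution at the cost of losing the weakly observed directions.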
|
7 |
Modélisation et identification de paramètres pour les empreintes des faisceaux de haute énergie / Modelling and parameter identification for energy beam footprints. Bashtova, Kateryna, 05 December 2016 (has links)
The technological progress demands more and more sophisticated and precise techniques of the treatment of materials. We study the machining of material with high energy beams: an abrasive waterjet, a focused ion beam and a laser.
Although the physics governing the energy beam interaction with the material is very different for the different applications, the same approach can be used for the mathematical modeling of these processes. The evolution of the material surface under the energy beam impact is modeled by a PDE. This equation contains a set of unknown parameters, the calibration parameters of the model. The unknown parameters can be identified by minimization of the cost function, i.e., the function that describes the difference between the result of modeling and the corresponding experimental data. As the modeled surface is a solution of the PDE problem, this minimization is an instance of a PDE-constrained optimization problem. The identification problem was regularized using Tikhonov regularization. The gradient of the cost function was obtained both by the variational (adjoint) approach and by means of automatic differentiation. Once the cost function and its gradient were available, the minimization was performed using an L-BFGS minimizer. For the abrasive waterjet application, the problem of non-uniqueness of the numerical solution is solved, and the impact of secondary effects not included in the model is avoided. The calibration procedure is validated on both synthetic and experimental data. For the laser application, we present a simple criterion that distinguishes between the thermal and non-thermal laser ablation regimes.
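The calibration loop can be sketched on a toy model: fit unknown parameters by minimizing a least-squares cost with L-BFGS, supplying an analytic gradient as a stand-in for the adjoint or automatic-differentiation gradients of the thesis. The model h(t) = a·(1 - exp(-b·t)) and all numbers below are invented; the actual footprint PDE is not reproduced here.

```python
import numpy as np
from scipy.optimize import minimize

t = np.linspace(0.0, 5.0, 50)
a_true, b_true = 2.0, 0.8
rng = np.random.default_rng(5)
data = a_true * (1.0 - np.exp(-b_true * t)) + 0.01 * rng.standard_normal(t.size)

def cost_and_grad(p):
    """Least-squares cost J(a, b) and its analytic gradient."""
    a, b = p
    e = np.exp(-b * t)
    r = a * (1.0 - e) - data                # residual against the data
    J = 0.5 * np.sum(r**2)
    dJda = np.sum(r * (1.0 - e))
    dJdb = np.sum(r * (a * t * e))
    return J, np.array([dJda, dJdb])

res = minimize(cost_and_grad, x0=[1.0, 1.0], jac=True, method="L-BFGS-B")
a_hat, b_hat = res.x
```

In the PDE-constrained setting the gradient comes from an adjoint solve rather than a closed formula, but the outer minimization loop has exactly this shape.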
|
8 |
Studies on two specific inverse problems from imaging and finance. Rückert, Nadja, 20 July 2012 (has links) (PDF)
This thesis deals with regularization parameter selection methods in the context of Tikhonov-type regularization with Poisson distributed data, in particular the reconstruction of images, as well as with the identification of the volatility surface from observed option prices.
In Part I we examine the choice of the regularization parameter when reconstructing an image, which is disturbed by Poisson noise, with Tikhonov-type regularization. This type of regularization is a generalization of the classical Tikhonov regularization in the Banach space setting and often called variational regularization. After a general consideration of Tikhonov-type regularization for data corrupted by Poisson noise, we examine the methods for choosing the regularization parameter numerically on the basis of two test images and real PET data.
In Part II we consider the estimation of the volatility function from observed call option prices with the explicit formula which has been derived by Dupire using the Black-Scholes partial differential equation. The option prices are only available as discrete noisy observations so that the main difficulty is the ill-posedness of the numerical differentiation. Finite difference schemes, as regularization by discretization of the inverse and ill-posed problem, do not overcome these difficulties when they are used to evaluate the partial derivatives. Therefore we construct an alternative algorithm based on the weak formulation of the dual Black-Scholes partial differential equation and evaluate the performance of the finite difference schemes and the new algorithm for synthetic and real option prices.
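The ill-posedness of numerical differentiation mentioned above is easy to demonstrate: central differences amplify data noise by a factor of order 1/h, while even a crude presmoothing reduces the error. The moving average below is only a simple stand-in for the weak-formulation algorithm of the thesis, and the test function and noise level are invented for illustration.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 201)
h = x[1] - x[0]
rng = np.random.default_rng(6)
f_clean = np.sin(2.0 * np.pi * x)
f_noisy = f_clean + 1e-3 * rng.standard_normal(x.size)

d_true = 2.0 * np.pi * np.cos(2.0 * np.pi * x[1:-1])
d_naive = (f_noisy[2:] - f_noisy[:-2]) / (2.0 * h)   # central differences

w = 11                                               # smoothing window width
kernel = np.ones(w) / w
f_smooth = np.convolve(f_noisy, kernel, mode="same") # crude presmoothing
d_smooth = (f_smooth[2:] - f_smooth[:-2]) / (2.0 * h)

err_naive = np.max(np.abs(d_naive - d_true))
err_smooth = np.max(np.abs(d_smooth[w:-w] - d_true[w:-w]))  # skip window edges
```

The noise in the naive derivative scales like sigma/h, so refining the grid makes it worse; this is the instability that regularization by discretization fails to overcome and that motivates the weak-formulation algorithm.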
|