41 |
Essays in econometrics and energy markets. Benatia, David, 05 1900 (has links)
No description available.
|
42 |
Restauração de imagens de AFM com o funcional de regularização de Tikhonov visando a avaliação de superfícies metálicas / Restoration of AFM images with functional Tikhonov regularization for evaluating metallic surfaces. Alexander Corrêa dos Santos, 29 August 2008 (links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / Problems during the AFM image acquisition process have led nanotechnology research to seek tools that minimize these degenerative effects. To this end, computational tools for restoring the degraded images were developed. This work uses a method based on Tikhonov regularization, whose application has so far been concentrated mainly on the restoration of biological images; here, the same regularizer is also applied to images of interest in engineering. In some cases, pre-processing prior to running the restoration algorithm improves the result. In the pre-processing stage several filters were used, such as the mean filter, the median filter, the Laplacian filter and a point-mean filter, as well as combinations of filters. By applying this regularizer to the images it was possible to obtain pixel distribution profiles showing that, as the dissolution charge of pure iron in sulphuric acid increases, the aspect ratio grows and surface features become more visible.
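The restoration described in this abstract is a classic Tikhonov-regularized deconvolution. As an illustration only, not the author's implementation, a minimal Python sketch is given below for the case of a known, spatially invariant blur, where the minimizer of ||Hx - y||^2 + lambda*||x||^2 has a closed form in the Fourier domain; the point spread function, the test image and the regularization parameter are placeholder assumptions.

```python
import numpy as np

def _otf(psf, shape):
    """Zero-pad the PSF to `shape` and center it at the origin (circular convolution)."""
    padded = np.zeros(shape, dtype=float)
    padded[:psf.shape[0], :psf.shape[1]] = psf
    return np.fft.fft2(np.roll(padded, (-(psf.shape[0] // 2), -(psf.shape[1] // 2)), axis=(0, 1)))

def tikhonov_deconvolve(blurred, psf, lam):
    """Minimize ||H x - y||^2 + lam * ||x||^2, with H the circular convolution by `psf`."""
    H = _otf(psf, blurred.shape)
    Y = np.fft.fft2(blurred)
    X = np.conj(H) * Y / (np.abs(H) ** 2 + lam)   # closed-form Tikhonov filter
    return np.real(np.fft.ifft2(X))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    image = rng.random((128, 128))              # stand-in for an AFM height map
    psf = np.ones((5, 5)) / 25.0                # hypothetical mean-filter-like blur
    blurred = np.real(np.fft.ifft2(_otf(psf, image.shape) * np.fft.fft2(image)))
    blurred += 0.01 * rng.standard_normal(image.shape)
    restored = tikhonov_deconvolve(blurred, psf, lam=1e-2)
```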
|
44 |
Automated Selection of Hyper-Parameters in Diffuse Optical Tomographic Image Reconstruction. Jayaprakash, *, January 2013 (links) (PDF)
Diffuse optical tomography is a promising imaging modality that provides functional information about soft biological tissues, with prime in-vivo imaging applications including breast and brain tissue. This modality uses near-infrared light (600-900 nm) as the probing medium, giving it the advantage of being non-ionizing.
The image reconstruction problem in diffuse optical tomography is typically posed as a least-squares problem that minimizes the difference between experimental and modeled data with respect to the optical properties. This problem is non-linear and ill-posed, owing to the multiple scattering of near-infrared light in biological tissue, and admits infinitely many possible solutions. Traditional methods employ a regularization term to constrain the solution space and stabilize the solution, with Tikhonov-type regularization being the most popular. The choice of this regularization parameter, also known as the hyperparameter, dictates the reconstructed optical image quality and is typically made empirically or based on prior experience.
In this thesis, a simple back-projection type image reconstruction algorithm is taken up, as such algorithms are known to provide computationally efficient solutions compared to regularized ones. In these algorithms, the hyperparameter becomes equivalent to a filter factor, whose choice typically depends on the sampling interval used for acquiring data in each projection and on the projection angle. Determining these parameters for diffuse optical tomography is not straightforward and requires advanced computational models. In this thesis, a computationally efficient simplex-method-based optimization scheme for automatically finding this filter factor is proposed, and its performance is evaluated on numerical and experimental phantom data. As back-projection type algorithms are approximations of traditional methods, the absolute quantitative accuracy of the reconstructed optical properties is poor. In scenarios such as dynamic imaging, where the emphasis is on recovering relative differences in the optical properties, these algorithms are effective in comparison with traditional methods, with the added advantage of being highly computationally efficient.
In the second part of this thesis, the hyperparameter choice for traditional Tikhonov-type regularization is addressed with the help of the Least-Squares QR decomposition (LSQR) method. Established techniques that enable the automated choice of the hyperparameter include Generalized Cross-Validation (GCV) and the regularized Minimal Residual Method (MRM); both carry a high computational overhead, which makes them prohibitive for real-time use. The proposed LSQR algorithm uses bidiagonalization of the system matrix, resulting in a lower computational cost. The proposed LSQR-based algorithm for automated hyperparameter choice is compared with the MRM method and is shown, on numerical and experimental phantom cases, to be the computationally optimal technique.
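For readers unfamiliar with the damped least-squares problem that LSQR handles through bidiagonalization, the sketch below uses scipy.sparse.linalg.lsqr, whose damp argument turns the solve into min ||Ax - b||^2 + damp^2 ||x||^2 (zeroth-order Tikhonov). The discrepancy-style selection rule, the random test matrix and the noise level are illustrative assumptions and do not reproduce the selection criterion developed in the thesis.

```python
import numpy as np
from scipy.sparse.linalg import lsqr

def tikhonov_via_lsqr(A, b, damp_values, noise_level):
    """Sweep damping values from large to small and return the first solution
    whose residual norm falls below the estimated noise level (discrepancy-style rule)."""
    for damp in sorted(damp_values, reverse=True):
        result = lsqr(A, b, damp=damp, atol=1e-8, btol=1e-8)
        x, r1norm = result[0], result[3]        # r1norm = ||b - A x||
        if r1norm <= noise_level:
            return damp, x
    return damp, x                              # fall back to the smallest damp tried

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A = rng.standard_normal((200, 100))         # placeholder sensitivity (Jacobian) matrix
    x_true = rng.standard_normal(100)
    b = A @ x_true + 0.05 * rng.standard_normal(200)
    damp, x_rec = tikhonov_via_lsqr(A, b, np.logspace(-3, 1, 20),
                                    noise_level=0.05 * np.sqrt(200))
```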
|
45 |
Iterative tensor factorization based on Krylov subspace-type methods with applications to image processing. UGWU, UGOCHUKWU OBINNA, 06 October 2021
No description available.
|
46 |
Nové typy a principy optimalizace digitálního zpracování obrazů v EIT / New Optimization Algorithms for a Digital Image Reconstruction in EIT. Kříž, Tomáš, January 2016 (links)
This doctoral thesis proposes a new algorithm for the reconstruction of impedance images of monitored objects. The algorithm eliminates the spatial-resolution problems present in existing reconstruction methods and exploits partial knowledge of both the configuration and the material composition of the monitored objects. The novel method is designed to recognize significant regions of interest, such as material defects in industrial images or blood clots and tumors in biological images. The reconstruction process comprises two phases: the former is focused on industry-related images, with the aim of detecting defects in conductive materials, while the latter concentrates on biomedical applications. The thesis also describes the numerical model used to test the algorithm. The testing procedure examined the resulting impedivity values, the influence of the regularization parameter, the initial impedivity of the numerical model, and the effect of noise on the voltage electrodes upon the overall reconstruction results. Another issue analyzed herein is the possibility of reconstructing impedance images from components of the magnetic flux density measured outside the investigated object, the magnetic field being generated by a current passing through the object. The corresponding algorithm is modeled on the proposed algorithm for EIT-based reconstruction of impedance images from voltages and was tested for stability, the influence of the regularization parameter, and the initial conductivity. Finally, the thesis describes the methodology for both magnetic field measurement via NMR and processing of the obtained data.
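EIT-style impedance reconstructions of this kind are commonly computed with Tikhonov-regularized Gauss-Newton updates. The minimal sketch below shows one such update as a generic illustration under that assumption, not the algorithm developed in the thesis; the Jacobian, the measured voltages and the regularization matrix are placeholders.

```python
import numpy as np

def regularized_gauss_newton_step(J, v_meas, v_sim, sigma, lam, L=None):
    """One Tikhonov-regularized Gauss-Newton update for an EIT-style problem:
    delta = argmin ||J delta - (v_meas - v_sim)||^2 + lam * ||L delta||^2,
    where J is the Jacobian of simulated voltages w.r.t. element conductivities."""
    if L is None:
        L = np.eye(J.shape[1])                  # zeroth-order Tikhonov prior
    rhs = J.T @ (v_meas - v_sim)
    delta = np.linalg.solve(J.T @ J + lam * (L.T @ L), rhs)
    return sigma + delta
```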
|
47 |
Modélisation et identification de paramètres pour les empreintes des faisceaux de haute énergie / Modelling and parameter identification for energy beam footprints. Bashtova, Kateryna, 05 December 2016 (links)
Technological progress demands ever more sophisticated and precise techniques for the treatment of materials. We study the machining of materials with high-energy beams: an abrasive waterjet, a focused ion beam and a laser. Although the physics governing the interaction of an energy beam with the material differs widely between applications, the same approach to mathematical modeling can be used for all of them. The evolution of the material surface under the impact of the energy beam is modeled by a PDE. This equation contains a set of unknown coefficients, the calibration parameters of the model. The unknown parameters can be identified by minimizing a cost function, i.e., a function that describes the difference between the result of the modeling and the corresponding experimental data. As the modeled surface is a solution of the PDE problem, this minimization is an instance of PDE-constrained optimization. The identification problem was made well posed by Tikhonov regularization. The gradient of the cost function was obtained by two methods: the adjoint (variational) approach and automatic differentiation. Once the cost function and its gradient were available, the minimization was performed with an L-BFGS minimizer. For the abrasive waterjet application, the non-uniqueness of the numerical solution was resolved, and the impact of secondary effects not included in the model was avoided. The calibration procedure was then validated on both synthetic and experimental data. Finally, for the laser application, we propose a simple criterion that makes it easy to distinguish between the thermal and non-thermal laser ablation regimes.
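As a schematic of the calibration loop described above (least-squares cost with a Tikhonov penalty, analytic gradient, L-BFGS), the Python sketch below calibrates a deliberately simple stand-in forward model; the Gaussian footprint profile, its Jacobian, the prior and the regularization weight are illustrative assumptions rather than the thesis's PDE model.

```python
import numpy as np
from scipy.optimize import minimize

def calibrate(forward, grad_forward, data, p0, lam, p_prior):
    """Tikhonov-regularized least-squares calibration of model parameters p:
    J(p) = 0.5*||forward(p) - data||^2 + 0.5*lam*||p - p_prior||^2, minimized
    with L-BFGS using an analytic gradient."""
    def cost(p):
        r = forward(p) - data
        return 0.5 * r @ r + 0.5 * lam * np.sum((p - p_prior) ** 2)

    def grad(p):
        r = forward(p) - data
        return grad_forward(p).T @ r + lam * (p - p_prior)

    return minimize(cost, p0, jac=grad, method="L-BFGS-B").x

if __name__ == "__main__":
    # Toy stand-in for the PDE forward model: footprint depth as a Gaussian profile.
    x = np.linspace(-1.0, 1.0, 200)
    def forward(p):                       # p = (amplitude, width)
        return p[0] * np.exp(-x ** 2 / p[1] ** 2)
    def grad_forward(p):                  # Jacobian of the toy model
        g = np.exp(-x ** 2 / p[1] ** 2)
        return np.column_stack([g, p[0] * g * 2 * x ** 2 / p[1] ** 3])
    true_p = np.array([0.8, 0.3])
    data = forward(true_p) + 0.01 * np.random.default_rng(2).standard_normal(x.size)
    p_hat = calibrate(forward, grad_forward, data, p0=np.array([0.5, 0.5]),
                      lam=1e-3, p_prior=np.array([0.5, 0.5]))
```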
|
48 |
Generalized estimation of the ventilatory distribution from the multiple-breath nitrogen washout. Motta-Ribeiro, Gabriel Casulari; Jandre, Frederico Caetano; Wrigge, Hermann; Giannella-Neto, Antonio, January 2016
Background: This work presents a generalized technique to estimate pulmonary ventilation-to-volume (v/V) distributions using the multiple-breath nitrogen washout, in which both the tidal volume (VT) and the end-expiratory lung volume (EELV) are allowed to vary during the maneuver. In addition, the volume of the series dead space (vd), unlike in the classical model, is considered a common series unit connected to a set of parallel alveolar units. Methods: The numerical solution for simulated data, either error-free or with the N2 measurement contaminated by additive Gaussian random noise of 3 or 5 % standard deviation, was tested under several conditions in a computational model constituted by 50 alveolar units with unimodal and bimodal v/V distributions. Non-negative least-squares regression with Tikhonov regularization was employed for parameter retrieval. The solution was obtained under either unconstrained or constrained (VT, EELV and vd) conditions. The Tikhonov gain was either fixed or estimated, and a weighting matrix (WM) was considered. The quality of the estimation was evaluated by the sum of squared errors (SSE) between the reference and recovered distributions and by the deviations of the first three moments calculated for both distributions. Additionally, a shape-classification method was tested to identify the solution as unimodal or bimodal, by counting the number of shape agreements after 1000 repetitions. Results: The accuracy of the results showed a high dependence on the noise amplitude. The best algorithm for SSE and moments included the constrained and WM solvers, whereas shape agreement improved without WM, reaching 97.2 % for unimodal and 90.0 % for bimodal distributions in the highest noise condition. Conclusions: This generalized method was able to identify v/V distributions from a lung model with a common series dead space even with variable VT. Although limitations remain in the presence of experimental noise, appropriate combinations of processing steps were found to reduce estimation errors.
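The core estimation step named in the Methods, non-negative least squares with Tikhonov regularization, can be realized by augmenting the system matrix with a scaled identity block and calling a standard NNLS solver. The sketch below shows that construction; the synthetic washout matrix, the unimodal test distribution and the regularization weight are placeholder assumptions, not the paper's data or tuning.

```python
import numpy as np
from scipy.optimize import nnls

def tikhonov_nnls(A, b, lam):
    """Non-negative least squares with zeroth-order Tikhonov regularization:
    min_{x >= 0} ||A x - b||^2 + lam * ||x||^2, solved by stacking sqrt(lam)*I
    under A and zeros under b before calling a standard NNLS solver."""
    n = A.shape[1]
    A_aug = np.vstack([A, np.sqrt(lam) * np.eye(n)])
    b_aug = np.concatenate([b, np.zeros(n)])
    x, _ = nnls(A_aug, b_aug)
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    # Hypothetical washout model: rows = breaths, columns = v/V compartments.
    A = np.abs(rng.standard_normal((60, 50)))
    x_true = np.exp(-0.5 * ((np.arange(50) - 20) / 5.0) ** 2)   # unimodal distribution
    b = A @ x_true + 0.03 * rng.standard_normal(60)
    x_rec = tikhonov_nnls(A, b, lam=1e-2)
```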
|
49 |
Contribution à l'analyse mathématique et à la résolution numérique d'un problème inverse de scattering élasto-acoustique / Contribution to the mathematical analysis and to the numerical solution of an inverse elasto-acoustic scattering problem. Estecahandy, Elodie, 19 September 2013 (links)
The determination of the shape of an elastic obstacle immersed in water from measurements of the scattered field is an important problem in many technologies such as sonar, geophysical exploration and medical imaging. This inverse obstacle problem (IOP) is very difficult to solve, especially from a numerical viewpoint, because of its nonlinear and ill-posed character. Moreover, its investigation requires an understanding of the theory of the associated direct scattering problem (DP) and mastery of the corresponding numerical solution methods. The work accomplished here pertains to the mathematical and numerical analysis of the elasto-acoustic DP and of the IOP. More specifically, we have developed an efficient numerical simulation code for wave propagation in this type of media, based on a DG-type method that uses higher-order finite elements and curved edges at the interface to better represent the fluid-structure interaction, and we apply it to the reconstruction of objects through the implementation of a regularized Newton method.
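A regularized Newton method of the kind mentioned here typically updates the unknown parameters by solving Tikhonov-regularized normal equations at each step. The generic sketch below illustrates that iteration for an abstract parameter-to-measurement map F; the fixed regularization weight, the fixed iteration count and the dense Jacobian are simplifying assumptions and do not reproduce the solver developed in the thesis.

```python
import numpy as np

def regularized_newton(F, jac, q0, u_meas, alpha, n_iter=20):
    """Generic regularized (Levenberg-Marquardt-type) Newton iteration for a
    nonlinear inverse problem F(q) = u_meas; alpha is the Tikhonov parameter."""
    q = np.array(q0, dtype=float)
    for _ in range(n_iter):
        r = u_meas - F(q)
        J = jac(q)
        # Tikhonov-regularized normal equations for the Newton update
        dq = np.linalg.solve(J.T @ J + alpha * np.eye(q.size), J.T @ r)
        q = q + dq
    return q
```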
|
50 |
Studies on two specific inverse problems from imaging and finance. Rückert, Nadja, 20 July 2012 (links) (PDF)
This thesis deals with regularization parameter selection methods in the context of Tikhonov-type regularization with Poisson distributed data, in particular the reconstruction of images, as well as with the identification of the volatility surface from observed option prices.
In Part I we examine the choice of the regularization parameter when reconstructing an image disturbed by Poisson noise with Tikhonov-type regularization. This type of regularization generalizes classical Tikhonov regularization to the Banach-space setting and is often called variational regularization. After a general consideration of Tikhonov-type regularization for data corrupted by Poisson noise, we examine methods for choosing the regularization parameter numerically on the basis of two test images and real PET data.
In Part II we consider the estimation of the volatility function from observed call option prices with the explicit formula derived by Dupire from the Black-Scholes partial differential equation. The option prices are only available as discrete, noisy observations, so the main difficulty is the ill-posedness of the numerical differentiation. Finite-difference schemes, as regularization by discretization of the inverse and ill-posed problem, do not overcome these difficulties when used to evaluate the partial derivatives. We therefore construct an alternative algorithm based on the weak formulation of the dual Black-Scholes partial differential equation and evaluate the performance of the finite-difference schemes and of the new algorithm on synthetic and real option prices.
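For orientation, Dupire's explicit formula with zero dividends reads sigma^2(K, T) = (dC/dT + r*K*dC/dK) / (0.5*K^2*d2C/dK2). The sketch below evaluates it by the direct finite differencing that the abstract identifies as ill-posed for noisy prices; the price grid, the interest rate and the floor guarding the denominator are illustrative assumptions.

```python
import numpy as np

def dupire_local_vol(C, K, T, r=0.0, floor=1e-12):
    """Naive finite-difference evaluation of Dupire's formula (zero dividends),
    where C[i, j] is the call price for strike K[i] and maturity T[j].
    This direct differentiation is the step that becomes unstable for noisy prices."""
    dC_dT = np.gradient(C, T, axis=1)
    dC_dK = np.gradient(C, K, axis=0)
    d2C_dK2 = np.gradient(dC_dK, K, axis=0)
    numerator = dC_dT + r * K[:, None] * dC_dK
    denominator = np.maximum(0.5 * (K[:, None] ** 2) * d2C_dK2, floor)
    return np.sqrt(np.maximum(numerator, 0.0) / denominator)
```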
|