31

Theoretical and Numerical Study of Tikhonov's Regularization and Morozov's Discrepancy Principle

Whitney, MaryGeorge L. 01 December 2009 (has links)
The concept of a well-posed problem was introduced by J. Hadamard in 1923, expressing the idea that every mathematical model should have a unique solution that is stable with respect to noise in the input data. If at least one of these properties is violated, the problem is ill-posed (and unstable). There are numerous examples of ill-posed problems in computational mathematics and its applications. Classical numerical algorithms, when applied to an ill-posed model, turn out to be divergent; hence one has to develop special regularization techniques, which take advantage of a priori information (normally available), in order to solve an ill-posed problem in a stable fashion. In this thesis, a theoretical and numerical investigation of Tikhonov's (variational) regularization is presented. The regularization parameter is computed by the discrepancy principle of Morozov, and a first-kind integral equation is used for the numerical simulations.
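For readers who want a concrete picture of the pairing described in this abstract, here is a minimal sketch in Python of Tikhonov regularization with the parameter chosen by Morozov's discrepancy principle. The Gaussian test kernel, grid size, noise level, and all names are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

def tikhonov_solve(A, b, lam):
    """Solve min ||A x - b||^2 + lam^2 ||x||^2 via the normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam**2 * np.eye(n), A.T @ b)

def morozov_lambda(A, b, delta, lam_lo=1e-10, lam_hi=1e2, tol=1e-8):
    """Bisect for lam such that ||A x_lam - b|| = delta (the noise level)."""
    f = lambda lam: np.linalg.norm(A @ tikhonov_solve(A, b, lam) - b) - delta
    while lam_hi - lam_lo > tol * lam_hi:
        mid = np.sqrt(lam_lo * lam_hi)   # geometric bisection on lambda
        if f(mid) < 0:
            lam_lo = mid                 # discrepancy below delta: enlarge lam
        else:
            lam_hi = mid
    return lam_hi

# Discretized first-kind Fredholm equation with an assumed Gaussian kernel.
n = 200
s = np.linspace(0, 1, n)
A = np.exp(-(s[:, None] - s[None, :])**2 / 0.02) / n
x_true = np.sin(np.pi * s)
noise = 1e-3 * np.random.default_rng(0).standard_normal(n)
b = A @ x_true + noise
delta = np.linalg.norm(noise)            # noise level assumed known

lam = morozov_lambda(A, b, delta)
x_reg = tikhonov_solve(A, b, lam)
```

Geometric bisection is used because the discrepancy ||A x_lam - b|| is monotonically increasing in lambda, so the Morozov equation has a unique root whenever the unregularized residual lies below delta.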
32

Extraction of Auditory Evoked Potentials from Ongoing EEG

Aydin, Serap 01 September 2005 (has links) (PDF)
In estimating auditory Evoked Potentials (EPs) from ongoing EEG, the number of sweeps should be reduced to decrease the experimental time and to increase the reliability of diagnosis. The first goal of this study is to demonstrate the use of basic estimation techniques in extracting auditory EPs (AEPs) from a small number of sweeps relative to ensemble averaging (EA). For this purpose, three groups of basic estimation techniques are compared to the traditional EA with respect to the signal-to-noise ratio (SNR) improvements in extracting the template AEP. Group A includes the combinations of the Subspace Method (SM) with the Wiener Filtering (WF) approaches (the conventional WF and the coherence weighted WF (CWWF)). Group B consists of standard adaptive algorithms (Least Mean Square (LMS), Recursive Least Square (RLS), and one-step Kalman filtering (KF)). The regularization techniques (the Standard Tikhonov Regularization (STR) and the Subspace Regularization (SR) methods) form Group C. All methods are tested in simulations and pseudo-simulations, which are performed with white noise and EEG measurements, respectively. The same methods are also tested with experimental AEPs. Comparisons based on the output SNR show that: 1) the KF and STR methods are the best among the algorithms tested in this study; 2) the SM can remove a large amount of the background EEG noise from the raw data; 3) the LMS and WF algorithms show poor performance compared to EA, and the SM should be used as a pre-filter to increase their performance; 4) the CWWF works better than the WF when it is combined with the SM; 5) the STR method is better than the SR method. It is observed that most of the basic estimation techniques perform definitely better than EA in extracting the EPs. The KF or the STR effectively reduces the experimental time (to one-fourth of that required by EA). The SM is a useful pre-filter to significantly reduce the noise in the raw data. The KF and STR are shown to be computationally inexpensive tools to extract the template AEPs and should be used instead of EA; they provide a clear template AEP for various analysis methods. To reduce the noise level on single sweeps, the SM can be used as a pre-filter before various single-sweep analysis methods. The second goal of this study is to present a new approach to extract single-sweep AEPs without using a template signal. The SM and a modified scale-space filter (MSSF) are applied consecutively: the SM is applied to the raw data to increase the SNR, and the less-noisy sweeps are then individually filtered with the MSSF. This new approach is assessed in both pseudo-simulations and experimental studies. The MSSF is also applied to actual auditory brainstem response (ABR) data to obtain a clear ABR from a relatively small number of sweeps. The wavelet transform coefficients (WTCs) corresponding to the signal and noise become distinguishable after the SM, and the MSSF is an effective filter for selecting out the WTCs of the noise. The estimated single-sweep EPs highly resemble the grand-average EP even though fewer sweeps are evaluated; small amplitude variations are observed among the estimations. The MSSF applied to the EA of 50 sweeps yields an ABR that best fits the grand average of 250 sweeps. We conclude that the combination of SM and MSSF is an efficient tool for obtaining clear single-sweep AEPs. The MSSF reduces the recording time to one-fifth of that required by EA in template ABR estimation. The proposed approach does not use a template signal (which is generally obtained by averaging a small number of sweeps). It provides unprecedented results that support the basic assumptions of the additive signal model.
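As a rough illustration of why a subspace pre-filter can beat plain ensemble averaging, the following hedged sketch contrasts EA with a rank-truncated SVD of the sweep matrix. This is only a generic stand-in for the SM: the waveform, noise level, and rank are invented for the example, and the thesis' actual SM differs in detail.

```python
import numpy as np

rng = np.random.default_rng(1)
n_sweeps, n_samples = 64, 512
t = np.arange(n_samples) / 1000.0
template = np.exp(-((t - 0.1) / 0.02)**2)       # assumed AEP-like waveform
sweeps = template + 2.0 * rng.standard_normal((n_sweeps, n_samples))

# Ensemble averaging: the SNR improves with the number of sweeps averaged.
ea_estimate = sweeps.mean(axis=0)

# Subspace pre-filter: keep only the dominant singular components of the
# sweep matrix, which concentrate the repeatable (evoked) activity.
U, s, Vt = np.linalg.svd(sweeps, full_matrices=False)
k = 1                                           # assumed signal-subspace rank
denoised = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]
sm_estimate = denoised.mean(axis=0)

def snr_db(est, ref):
    return 10 * np.log10(np.sum(ref**2) / np.sum((est - ref)**2))

print(f"EA SNR:    {snr_db(ea_estimate, template):.1f} dB")
print(f"SM+EA SNR: {snr_db(sm_estimate, template):.1f} dB")
```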
33

Efficient Calibration and Predictive Error Analysis for Highly-Parameterized Models Combining Tikhonov and Subspace Regularization Techniques

Matthew James Tonkin Unknown Date (has links)
The development and application of environmental models to help understand natural systems, and support decision making, is commonplace. A difficulty encountered in the development of such models is determining which physical and chemical processes to simulate, and on what temporal and spatial scale(s). Modern computing capabilities enable the incorporation of more processes, at increasingly refined scales, than at any time previously. However, the simulation of a large number of fine scale processes has undesirable consequences: first, the execution time of many environmental models has not declined despite advances in processor speed and solution techniques; and second, such complex models incorporate a large number of parameters, for which values must be assigned. Compounding these problems is the recognition that, since the inverse problem in groundwater modeling is non-unique, the calibration of a single parameter set does not assure the reliability of model predictions. Practicing modelers are then faced with complex models that incorporate a large number of parameters whose values are uncertain, and that make predictions that are prone to an unspecified amount of error. In recognition of this, there has been considerable research into methods for evaluating the potential for error in model predictions arising from errors in the values assigned to model parameters. Unfortunately, some common methods employed in the estimation of model parameters, and the evaluation of the potential error associated with model parameters and predictions, suffer from limitations in their application that stem from an emphasis on obtaining an over-determined, parsimonious, inverse problem. That is, common methods of model analysis exhibit artifacts from the propagation of subjective a priori parameter parsimony throughout the calibration and predictive error analyses. This thesis describes theoretical and practical developments that enable the estimation of a large number of parameters, and the evaluation of the potential for error in predictions made by highly parameterized models. Since the focus of this research is on the use of models in support of decision making, the new methods are demonstrated by application to synthetic applications, where the performance of the method can be evaluated under controlled conditions; and to real-world applications, where the performance of the method can be evaluated in terms of trade-offs in computational effort versus calibration results and the ability to rigorously yet expediently investigate predictive error. The applications suggest that the new techniques are applicable to a range of environmental modeling disciplines. Mathematical innovations described in this thesis focus on combining complementary regularized inversion (calibration) techniques with novel methods for analyzing model predictive error. Several of the innovations are founded on explicit recognition of the existence of the calibration solution and null spaces – that is, that with the available observations there are some (combinations of) parameters that can be estimated; and there are some (combinations of) parameters that cannot.
The existence of a non-trivial calibration null space is at the heart of the non-uniqueness problem in model calibration: this research expands upon this concept by recognizing that there are combinations of parameters that lie within the calibration null space yet possess non-trivial projections onto the predictive solution space, and these combinations of parameters are at the heart of predictive error analysis. The most significant contribution of this research is the attempt to develop a framework for model analysis that promotes computational efficiency in both the calibration and the subsequent analysis of the potential for error in model predictions. Fundamental to this framework is the use of a large number of parameters, the use of Tikhonov regularization, and the use of subspace techniques. Use of a large number of parameters enables parameter detail to be represented in the model at a scale approaching true variability; the use of Tikhonov constraints enables the modeler to incorporate preferred conditions on parameter values and/or their variation throughout the calibration and the predictive analysis; and the use of subspace techniques enables model calibration and predictive analysis to be undertaken expediently, even when undertaken with a large number of parameters. This research focuses on the inability of the calibration process to accurately identify parameter values: it is assumed that the models in question accurately represent the relevant processes at the relevant scales, so that parameter and predictive error depend only on parameter detail that is not represented in the model and/or not accurately inferred through the calibration process. Contributions to parameter and predictive error arising from incorrect model identification are outside the scope of this research.
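The solution-space/null-space split that this framework rests on can be made concrete with an SVD, as in the following hedged sketch (the Jacobian here is random and purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
n_obs, n_par = 30, 100                    # far more parameters than observations
J = rng.standard_normal((n_obs, n_par))   # assumed Jacobian of observations w.r.t. parameters

# The SVD splits parameter space into a calibration solution space (estimable
# combinations) and a null space (combinations the data cannot constrain).
U, s, Vt = np.linalg.svd(J, full_matrices=True)
k = np.sum(s > 1e-8 * s[0])               # numerical rank
V_sol, V_null = Vt[:k].T, Vt[k:].T

# A prediction's sensitivity vector y: its null-space projection is the part
# of predictive uncertainty that calibration can never reduce.
y = rng.standard_normal(n_par)
y_null = V_null @ (V_null.T @ y)
print(f"fraction of prediction sensitivity in the null space: "
      f"{np.linalg.norm(y_null)**2 / np.linalg.norm(y)**2:.2f}")
```

The printed fraction is exactly the quantity the abstract highlights: the part of a prediction's sensitivity that lies in the calibration null space and therefore can only be quantified, not reduced, by calibration.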
34

Restauração de imagens de AFM com o funcional de regularização de Tikhonov visando a avaliação de superfícies metálicas / Restoration of AFM images with the Tikhonov regularization functional for the evaluation of metallic surfaces

Alexander Corrêa dos Santos 29 August 2008 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / Problems during the acquisition of AFM images have led nanotechnology research to seek tools that minimize these degenerative effects. To this end, computational tools for restoring such degraded images have been developed. This work uses a method based on Tikhonov regularization, whose application has so far been concentrated mainly on the restoration of biological images; here, the use of this regularizer is extended to images of interest in engineering. In some cases, pre-processing prior to applying the algorithm improves the restoration. In the pre-processing stage, several filters were used, such as the mean filter, the median filter, the Laplacian filter, and a pointwise mean filter, as well as combinations of filters. By applying this regularizer to the images it was possible to obtain pixel-distribution profiles showing that, as the dissolution charge of pure iron in sulfuric acid increases, the aspect ratio increases and surface features become more visible.
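A minimal sketch of Tikhonov-regularized image restoration of the general kind described, assuming a Gaussian point-spread function and the identity as regularization operator (the thesis' degradation model, pre-processing filters, and parameter values may differ):

```python
import numpy as np

def tikhonov_deblur(g, psf, lam):
    """Tikhonov-regularized deconvolution in the Fourier domain:
    f = argmin ||h * f - g||^2 + lam^2 ||f||^2, solved per frequency."""
    H = np.fft.fft2(np.fft.ifftshift(psf))
    G = np.fft.fft2(g)
    F = np.conj(H) * G / (np.abs(H)**2 + lam**2)
    return np.real(np.fft.ifft2(F))

# Assumed degradation: Gaussian blur plus white noise on a test image.
n = 128
y, x = np.mgrid[-n//2:n//2, -n//2:n//2]
psf = np.exp(-(x**2 + y**2) / (2 * 2.0**2))
psf /= psf.sum()

rng = np.random.default_rng(3)
f_true = np.zeros((n, n)); f_true[40:90, 40:90] = 1.0   # assumed test surface
g = np.real(np.fft.ifft2(np.fft.fft2(f_true) * np.fft.fft2(np.fft.ifftshift(psf))))
g += 0.01 * rng.standard_normal((n, n))

f_restored = tikhonov_deblur(g, psf, lam=0.05)
```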
36

Métodos para problemas mal-postos discretos de grande porte / Methods for large-scale discrete ill-posed problems

Borges, Leonardo Silveira, 1983- 02 July 2013 (has links)
Advisors: Maria Cristina de Castro Cunha, Fermín Sinforiano Viloche Bazán / Doctoral thesis (2013) - Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica / Abstract: Discrete ill-posed problems need to be regularized in order to be solved stably. Among the many regularization methods in the literature, perhaps the most widely used is Tikhonov's method, whose effectiveness depends on a proper choice of the regularization parameter. A considerable number of parameter-choice rules exist, including the discrepancy principle of Morozov and heuristic methods such as the L-curve criterion of Hansen, Generalized Cross-Validation by Golub, Heath and Wahba, and a fixed-point method due to Bazán. Large-scale discrete ill-posed problems can be solved by iterative methods such as CGLS and LSQR, provided the iterations are stopped before the noise starts deteriorating the quality of the iterates. This is a difficult task which has not yet been addressed satisfactorily in the literature. In an attempt to alleviate the difficulty associated with selecting the regularization parameter, iterative methods can be combined with Tikhonov regularization, giving rise to the so-called hybrid methods such as GKB-FP and W-GCV (both using the identity matrix as the regularization matrix). The contributions of this thesis include further results concerning the theoretical properties of the GKB-FP algorithm, as well as the extension of GKB-FP to Tikhonov regularization with a general regularization matrix. As a second contribution, we propose an automatic stopping rule for iterative methods for large-scale problems, including the case where the methods are preconditioned via smoothing norms. Tikhonov regularization has been widely applied to linear ill-posed problems, but almost always with a single regularization parameter; some problems, however, have solutions with distinct characteristics that must be reflected in the regularized solution, which leads to multi-parameter Tikhonov regularization. The third contribution of the thesis is a fixed-point method to select the regularization parameters in this multi-parameter case, together with a GKB-FP-type algorithm well suited to large-scale problems. The proposed algorithms are illustrated numerically on several problems, including image restoration and super-resolution, scattering problems, and others derived from Fredholm integral equations. / Doctorate in Applied Mathematics
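As a concrete sketch of the fixed-point idea attributed to Bazán, the update below drives lambda toward a fixed point of the map lambda -> ||A x_lam - b|| / ||x_lam||. This is a simplified dense-matrix sketch with assumed defaults; the FP and GKB-FP algorithms of the thesis add safeguards and, for large-scale problems, evaluate the update on a small projected problem produced by Golub-Kahan bidiagonalization rather than on the full system.

```python
import numpy as np

def tikhonov(A, b, lam):
    """Tikhonov solution x_lam = argmin ||A x - b||^2 + lam^2 ||x||^2."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam**2 * np.eye(n), A.T @ b)

def fp_parameter(A, b, lam0=1.0, maxit=50, tol=1e-6):
    """Fixed-point update lam <- ||A x_lam - b|| / ||x_lam||; a sketch of the
    rule underlying Bazan's FP method (the real algorithm adds safeguards)."""
    lam = lam0
    for _ in range(maxit):
        x = tikhonov(A, b, lam)
        lam_new = np.linalg.norm(A @ x - b) / np.linalg.norm(x)
        if abs(lam_new - lam) <= tol * lam:
            return lam_new
        lam = lam_new
    return lam
```

The point of the hybrid construction is that each update only requires a Tikhonov solve; projecting onto a growing Golub-Kahan subspace makes each such solve cheap, so large-scale systems never need a dense factorization.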
37

Analise espectral do metodo de regularização de Tikhonov para resolver equações integrais de Fredholm de primeira especie aproximação por elementos finitos / Spectral analysis of the Tikhonov regularization method for solving first-kind Fredholm integral equations: finite element approximation

Viloche Bazan, Fermin Sinforiano 13 July 2018 (has links)
Advisor: Maria Cristina de Castro Cunha / Master's thesis (1991) - Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica / Abstract: Not informed / Master's in Applied Mathematics (Applied Analysis)
38

Krylov subspace type methods for the computation of non-negative or sparse solutions of ill-posed problems

Pasha, Mirjeta 10 April 2020 (has links)
No description available.
39

Multihypothesis Prediction for Compressed Sensing and Super-Resolution of Images

Chen, Chen 12 May 2012 (has links)
A process for the use of multihypothesis prediction in the reconstruction of images is proposed, for use both in compressed-sensing reconstruction and in single-image super-resolution. Specifically, for the compressed-sensing reconstruction of a single still image, multiple predictions for an image block are drawn from spatially surrounding blocks within an initial non-predicted reconstruction. The predictions are used to generate a residual in the domain of the compressed-sensing random projections. This residual, being typically more compressible than the original signal, leads to improved compressed-sensing reconstruction quality. To appropriately weight the hypothesis predictions, Tikhonov regularization of the resulting ill-posed least-squares optimization is proposed. An extension of this framework to the compressed-sensing reconstruction of hyperspectral imagery is also studied. Finally, the multihypothesis paradigm is employed for single-image super-resolution, wherein each patch of a low-resolution image is represented as a linear combination of spatially surrounding hypothesis patches.
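The hypothesis-weighting step can be sketched as a small Tikhonov-regularized least-squares problem in the projection domain. Everything below (block sizes, the random projection Phi, the identity Gamma) is an illustrative assumption; the identity regularization matrix in particular is a simplification, as distance-weighted choices of Gamma are common in this setting.

```python
import numpy as np

rng = np.random.default_rng(4)
m, n, K = 32, 256, 20             # measurements, block size, number of hypotheses
Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # assumed random projection matrix
H = rng.standard_normal((n, K))                   # columns: hypothesis patches
y = Phi @ (H @ np.full(K, 1.0 / K)) + 0.01 * rng.standard_normal(m)

# Weights solve the ill-posed least-squares problem in the projection domain:
#   w = argmin ||y - Phi H w||^2 + lam^2 ||Gamma w||^2
lam = 0.1
Gamma = np.eye(K)                 # identity for simplicity; distance-weighted is common
B = Phi @ H
w = np.linalg.solve(B.T @ B + lam**2 * Gamma.T @ Gamma, B.T @ y)
prediction = H @ w                # combined multihypothesis prediction for the block
```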
40

Lanczos and Golub-Kahan Reduction Methods Applied to Ill-Posed Problems

Onunwor, Enyinda Nyekachi 24 April 2018 (has links)
No description available.
