About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Joint inversion of Direct Current and Radiomagnetotelluric data

García Juanatey, María de los Ángeles January 2007 (has links)
No description available.
3

Evaluation Of Spatial And Spatio-temporal Regularization Approaches In Inverse Problem Of Electrocardiography

Onal, Murat 01 August 2008 (has links) (PDF)
Conventional electrocardiography (ECG) is an essential tool for investigating cardiac disorders such as arrhythmias or myocardial infarction. It consists of interpreting potentials recorded at the body surface that arise from the electrical activity of the heart. However, electrical signals originating at the heart suffer attenuation and smoothing within the thorax, so the ECG signal measured on the body surface lacks some important details. The goal of the forward and inverse ECG problems is to recover these lost details by estimating the heart's electrical activity non-invasively from body surface potential measurements. In the forward problem, one calculates the body surface potential distribution (i.e. torso potentials) using an appropriate source model for the equivalent cardiac sources. In the inverse problem of ECG, one estimates cardiac electrical activity based on measured torso potentials and a geometric model of the torso. Due to the attenuation and spatial smoothing that occur within the thorax, the inverse ECG problem is ill-posed and the forward model matrix is badly conditioned. Thus, small disturbances in the measurements lead to amplified errors in the inverse solutions. The ill-posed nature and high dimensionality of the problem make it difficult to solve for effective cardiac imaging. Tikhonov regularization, Truncated Singular Value Decomposition (TSVD) and Bayesian MAP estimation are some of the methods proposed in the literature to cope with the ill-posedness of the problem. The most common approach in these methods is to ignore the temporal relations of epicardial potentials and to solve the inverse problem at every time instant independently (the column-sequential approach). This is the fastest and easiest approach; however, it does not include temporal correlations. The goal of this thesis is to include temporal constraints as well as spatial constraints in solving the inverse ECG problem.
For this purpose, two methods are used. In the first, we solve the augmented problem directly; in the second, we solve the problem with the column-sequential approach after applying temporal whitening. The performance of each method is evaluated.
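As a rough numerical sketch of the column-sequential approach described above (this is not code from the thesis; the forward matrix, sizes, noise level, and regularization parameter are all invented for illustration), each time instant of a badly conditioned linear inverse problem is solved independently with Tikhonov regularization:

```python
import numpy as np

def tikhonov_solve(A, b, lam):
    """Solve min ||A x - b||^2 + lam^2 ||x||^2 via the normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam**2 * np.eye(n), A.T @ b)

rng = np.random.default_rng(0)
# Toy "forward model": rapidly decaying singular values mimic the
# attenuation/smoothing that makes the real torso matrix ill-conditioned.
A = rng.standard_normal((40, 30)) @ np.diag(1.0 / np.arange(1, 31) ** 2)
X_true = rng.standard_normal((30, 5))                  # columns = source potentials at 5 time instants
B = A @ X_true + 1e-4 * rng.standard_normal((40, 5))   # noisy "torso" measurements

# Column-sequential approach: each time instant solved independently,
# ignoring temporal correlations between the columns.
X_est = np.column_stack(
    [tikhonov_solve(A, B[:, t], lam=1e-3) for t in range(B.shape[1])]
)
```

A spatio-temporal (augmented) formulation would instead stack all time instants into one large system and regularize across both dimensions, at a higher computational cost.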
4

Efficient Calibration and Predictive Error Analysis for Highly-Parameterized Models Combining Tikhonov and Subspace Regularization Techniques

Matthew James Tonkin Unknown Date (has links)
The development and application of environmental models to help understand natural systems, and support decision making, is commonplace. A difficulty encountered in the development of such models is determining which physical and chemical processes to simulate, and on what temporal and spatial scale(s). Modern computing capabilities enable the incorporation of more processes, at increasingly refined scales, than at any time previously. However, the simulation of a large number of fine-scale processes has undesirable consequences: first, the execution time of many environmental models has not declined despite advances in processor speed and solution techniques; and second, such complex models incorporate a large number of parameters, for which values must be assigned. Compounding these problems is the recognition that, since the inverse problem in groundwater modeling is non-unique, the calibration of a single parameter set does not assure the reliability of model predictions. Practicing modelers are, then, faced with complex models that incorporate a large number of parameters whose values are uncertain, and that make predictions that are prone to an unspecified amount of error. In recognition of this, there has been considerable research into methods for evaluating the potential for error in model predictions arising from errors in the values assigned to model parameters. Unfortunately, some common methods employed in the estimation of model parameters, and the evaluation of the potential error associated with model parameters and predictions, suffer from limitations in their application that stem from an emphasis on obtaining an over-determined, parsimonious, inverse problem. That is, common methods of model analysis exhibit artifacts from the propagation of subjective a priori parameter parsimony throughout the calibration and predictive error analyses.
This thesis describes theoretical and practical developments that enable the estimation of a large number of parameters, and the evaluation of the potential for error in predictions made by highly parameterized models. Since the focus of this research is on the use of models in support of decision making, the new methods are demonstrated by application to synthetic applications, where the performance of the method can be evaluated under controlled conditions; and to real-world applications, where the performance of the method can be evaluated in terms of trade-offs in computational effort versus calibration results and the ability to rigorously yet expediently investigate predictive error. The applications suggest that the new techniques are applicable to a range of environmental modeling disciplines. Mathematical innovations described in this thesis focus on combining complementary regularized inversion (calibration) techniques with novel methods for analyzing model predictive error. Several of the innovations are founded on explicit recognition of the existence of the calibration solution and null spaces – that is, that with the available observations there are some (combinations of) parameters that can be estimated; and there are some (combinations of) parameters that cannot. The existence of a non-trivial calibration null space is at the heart of the non-uniqueness problem in model calibration: this research expands upon this concept by recognizing that there are combinations of parameters that lie within the calibration null space yet possess non-trivial projections onto the predictive solution space, and these combinations of parameters are at the heart of predictive error analysis. The most significant contribution of this research is the attempt to develop a framework for model analysis that promotes computational efficiency in both the calibration and the subsequent analysis of the potential for error in model predictions. 
Fundamental to this framework is the use of a large number of parameters, the use of Tikhonov regularization, and the use of subspace techniques. Use of a large number of parameters enables parameter detail to be represented in the model at a scale approaching true variability; the use of Tikhonov constraints enables the modeler to incorporate preferred conditions on parameter values and/or their variation throughout the calibration and the predictive analysis; and the use of subspace techniques enables model calibration and predictive analysis to be undertaken expediently, even when undertaken using a large number of parameters. This research focuses on the inability of the calibration process to accurately identify parameter values: it is assumed that the models in question accurately represent the relevant processes at the relevant scales, so that parameter and predictive error depend only on parameter detail that is not represented in the model and/or not accurately inferred through the calibration process. Contributions to parameter and predictive error arising from incorrect model identification are outside the scope of this research.
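The solution-space/null-space decomposition at the heart of the abstract above can be sketched numerically (a toy illustration, not the thesis code; the Jacobian and prediction sensitivities are random stand-ins for a real model): with more parameters than observations, the SVD of the Jacobian splits parameter space into combinations the data can constrain and combinations it cannot, and a null-space perturbation can still project onto a prediction.

```python
import numpy as np

rng = np.random.default_rng(1)
J = rng.standard_normal((8, 20))   # Jacobian: 8 observations, 20 parameters

# Full SVD: rows of Vt beyond the numerical rank span the calibration null space.
U, s, Vt = np.linalg.svd(J, full_matrices=True)
k = int(np.sum(s > 1e-8))          # numerical rank
V_sol = Vt[:k].T                   # calibration solution space
V_null = Vt[k:].T                  # calibration null space

# A parameter perturbation lying entirely in the null space ...
dp = V_null @ rng.standard_normal(V_null.shape[1])
# ... produces (numerically) no change in the simulated observations,
print(np.linalg.norm(J @ dp))      # ~ machine precision
# ... yet can still change a prediction with its own sensitivity vector y.
y = rng.standard_normal(20)
print(abs(y @ dp))                 # generally nonzero
```

This is the mechanism described in the abstract: parameter combinations invisible to calibration but with non-trivial projections onto the predictive solution space are exactly what drives predictive error.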
5

Lanczos and Golub-Kahan Reduction Methods Applied to Ill-Posed Problems

Onunwor, Enyinda Nyekachi 24 April 2018 (has links)
No description available.
6

Simulation of Complex Sound Radiation Patterns from Truck Components using Monopole Clusters

Calen, Titus, Wang, Xiaomo January 2023 (has links)
Pass-by noise testing is an important step in vehicle design and regulation compliance. Finite element analysis simulations have been used to cut costs on prototyping and testing, but the high computational cost of simulating surface vibrations from complex geometries and the resulting airborne noise propagation makes the switch to digital twin methods not viable. This paper aims to investigate the use of equivalent source methods as an alternative to the aforementioned simulations. Using a simple 2D model, difficulties such as ill-conditioning of the transfer matrix are examined, and the required regularisation techniques, such as TSVD and the Tikhonov L-curve method, are tested and then applied to a mesh of a 3D engine model. Source and pressure field errors are measured and their origins are explained. A heavy emphasis is put on the model geometry as a source of error. Finally, rules of thumb based on the regularisation balance and the wavelength-dependent pressure sampling positions are formulated in order to achieve usable results.
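A minimal sketch of tracing the Tikhonov L-curve mentioned in the abstract (not the thesis code: a Vandermonde matrix stands in for the ill-conditioned monopole-to-field-point transfer matrix, and all sizes and noise levels are invented). For each candidate regularization parameter, the residual norm and solution norm are computed via SVD filter factors; plotting one against the other on log axes gives the L-curve, whose corner balances the two:

```python
import numpy as np

rng = np.random.default_rng(2)
# Stand-in ill-conditioned "transfer matrix" from 15 sources to 25 field points.
G = np.vander(np.linspace(0.0, 1.0, 25), 15, increasing=True)
q_true = rng.standard_normal(15)
p = G @ q_true + 1e-3 * rng.standard_normal(25)   # noisy "measured pressures"

U, s, Vt = np.linalg.svd(G, full_matrices=False)
beta = U.T @ p

lams = np.logspace(-8, 0, 50)
res_norm, sol_norm = [], []
for lam in lams:
    f = s**2 / (s**2 + lam**2)        # Tikhonov filter factors
    q = Vt.T @ (f * beta / s)         # regularised source strengths
    res_norm.append(np.linalg.norm(G @ q - p))
    sol_norm.append(np.linalg.norm(q))

# Plotting log(res_norm) against log(sol_norm) traces the L-curve;
# the corner of the "L" marks the balance between data fit and
# amplified-noise suppression.
```

Small lam under-regularises (large, noise-dominated solution norm) and large lam over-regularises (large residual), which is the "regularisation balance" the abstract's rules of thumb address.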
