1

Algoritmo de tomografia por impedância elétrica utilizando programação linear como método de busca da imagem. / Algorithm for electrical impedance tomography using linear programming as the image search method.

Montoya Vallejo, Miguel Fernando 14 November 2007 (has links)
A Tomografia por Impedância Elétrica (TIE) tem como objetivo gerar imagens da distribuição de resistividade dentro de um domínio. A TIE injeta correntes em eletrodos alocados na fronteira do domínio e mede potenciais elétricos através dos mesmos eletrodos. A TIE é considerada um problema inverso, não-linear e mal posto. Atualmente, para gerar uma solução do problema inverso, existem duas classes de algoritmos para estimar a distribuição de resistividade no interior do domínio: os que estimam variações da distribuição de resistividade do domínio e os absolutos, que estimam a distribuição de resistividade. Variações da distribuição de resistividade são o resultado da solução de um sistema linear do tipo Ax = b. O objetivo do presente trabalho é avaliar o desempenho da Programação Linear (PL) na solução do sistema linear, avaliar o algoritmo quanto à propagação de erros numéricos e avaliar os efeitos de restringir o espaço solução através de restrições de PL. Os efeitos do uso de Programação Linear são avaliados tanto em métodos que geram imagens de diferenças, como a Matriz de Sensibilidade, como em métodos absolutos, como o Gauss-Newton. Mostra-se neste trabalho que o uso da PL diminui o erro numérico propagado quando comparado ao uso do algoritmo LU Decomposition. Resulta também que reduzir o espaço solução, diretamente através de restrições de PL, melhora a resolução em resistividade e a resolução espacial da imagem quando comparado com o uso de LU Decomposition. / Electrical impedance tomography (EIT) generates images of the resistivity distribution of a domain. The EIT method injects currents through electrodes placed on the boundary of the domain and measures electric potentials through the same electrodes. EIT is considered a non-linear, ill-posed inverse problem. There are two classes of algorithms to estimate the resistivity distribution inside the domain: difference-image algorithms, which estimate resistivity distribution variations, and absolute-image algorithms, which estimate the resistivity distribution itself. Resistivity distribution variations are the solution of a linear system, say Ax = b. The main objective of this work is to evaluate the performance of Linear Programming (LP) in solving an EIT linear system, from the point of view of numerical error propagation and of the ability to constrain the solution space. The impact of using LP to solve an EIT linear system is evaluated on a difference-image algorithm and on an absolute algorithm. This work shows that the use of LP diminishes the numerical error propagation compared to LU Decomposition. It is also shown that constraining the solution space through LP improves the resistivity resolution and the spatial resolution of the images when compared to LU Decomposition.
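As a rough, self-contained illustration of the two solution strategies compared in this abstract (it is not code from the thesis), the sketch below solves a small system Ax = b once by LU decomposition and once as a Linear Program that minimises the l1 residual under box constraints on x; the matrix, the bounds and the problem size are invented for the example.

```python
# Illustrative sketch only: a toy "difference imaging" update Ax = b solved
# by LU decomposition and by Linear Programming with box constraints on x.
# The matrix, bounds and sizes are arbitrary assumptions, not thesis data.
import numpy as np
from scipy.linalg import lu_factor, lu_solve
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n = 8                                            # number of unknowns (toy size)
A = rng.normal(size=(n, n)) + n * np.eye(n)      # well-conditioned stand-in system
x_true = rng.uniform(-0.5, 0.5, size=n)
b = A @ x_true

# 1) Direct solve via LU decomposition.
lu, piv = lu_factor(A)
x_lu = lu_solve((lu, piv), b)

# 2) LP formulation: minimise ||Ax - b||_1 subject to lo <= x <= hi.
#    Variables are z = [x, t] with the residual bounded elementwise by t.
lo, hi = -1.0, 1.0
c = np.concatenate([np.zeros(n), np.ones(n)])            # minimise the sum of the t's
A_ub = np.block([[A, -np.eye(n)], [-A, -np.eye(n)]])     # Ax - b <= t and -(Ax - b) <= t
b_ub = np.concatenate([b, -b])
bounds = [(lo, hi)] * n + [(0, None)] * n
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
x_lp = res.x[:n]

print("LU error:", np.linalg.norm(x_lu - x_true))
print("LP error:", np.linalg.norm(x_lp - x_true))
```

In this toy setting both solvers recover x; the point of the LP route, as the abstract notes, is that the solution space can be restricted directly through the bounds and other linear constraints.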
2

Identification de lois de comportement enrichie pour les géomatériaux en présence d'une localisation de la déformation / Identification of enriched constitutive models for geomaterials in the presence of strain localization

Moustapha, Khadijetou El 23 April 2014 (has links)
Modéliser la localisation de la déformation dans les géomatériaux de manière objective nécessite l'utilisation de méthodes capables de régulariser le problème aux limites en introduisant une longueur caractéristique. Dans le cadre de ce travail de thèse, nous avons choisi d'utiliser les milieux à microstructure de type second gradient. Une question se pose alors quant à l'identification des paramètres constitutifs qui interviennent dans la formulation de ces milieux. L'objectif de cette thèse est de mettre en place une méthode d'identification d'une loi de comportement enrichie de type second gradient. L'identification paramétrique d'une loi constitutive écrite dans un formalisme de milieu enrichi de type second gradient local est étudiée. Une partie de l'identification peut être réalisée à partir d'essais homogènes, mais l'identification complète nécessite de considérer des modes de déformation à forts gradients, comme cela est le cas en présence d'une localisation de la déformation. La procédure d'identification développée s'appuie sur des résultats expérimentaux d'essais mécaniques sur le grès des Vosges, pour lesquels le mode de déformation des échantillons a été caractérisé à l'aide de mesures de champs cinématiques, y compris en régime localisé. Un certain nombre d'observables peuvent être extraits de ces essais ; ils servent à appuyer la comparaison entre calculs numériques et observations expérimentales. L'identification nécessite le calcul d'une matrice de sensibilité pour l'optimisation des observables. Afin de calculer cette matrice, deux études de sensibilité sont effectuées. Ces études consistent à évaluer l'influence de la variation de chaque paramètre constitutif sur les données sélectionnées. La première étude de sensibilité porte sur la partie homogène des essais ; elle permet l'optimisation d'un certain nombre de paramètres qui jouent un rôle uniquement dans cette partie. La deuxième étude concerne le régime de déformation localisé ; celle-ci permet le calcul de la matrice de sensibilité. Grâce à cette matrice, il est possible de démarrer l'optimisation des observables. Ainsi, chaque observable pourra être optimisé indépendamment des autres. À l'issue de cette optimisation, un jeu de paramètres est proposé. Il permet de reproduire de manière fiable les essais expérimentaux à 20 et 30 MPa de confinement. / The objective modelling of strain localisation in geomaterials requires enhanced models able to regularise the boundary value problem by introducing a characteristic length. In this research work, we have chosen to use second gradient models. A question then arises concerning the identification of the second gradient constitutive parameters. This PhD research work aimed to develop an identification method to obtain these parameters. The study proposed here covers the parametric identification of a constitutive law written in the local second gradient formalism. Part of this identification may be performed through homogeneous tests; however, the complete identification requires considering high-gradient deformation modes, as is the case when localized deformation is observed. The identification procedure developed uses experimental results from mechanical tests on Vosges sandstone, for which the deformation mode was characterised by kinematic field measurements, including in the localized regime. A number of observables can be extracted from these tests; they are then used for the comparison between experimental and numerical data. It is necessary to compute a sensitivity matrix in order to optimise these observables. To this end, two sensitivity studies have been carried out, evaluating the influence of each constitutive parameter on the selected data. The first analysis concerns the homogeneous part of the tests; the constitutive parameters involved only in this part can then be optimized. The second analysis concerns the localized regime and the computation of the sensitivity matrix. Once this is achieved, the optimization of the observables can be conducted, each observable being optimised independently of the others. A set of constitutive parameters is proposed; it allows a good match between experimental and numerical results at two confining pressures, 20 and 30 MPa.
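The sketch below illustrates, with an invented stand-in model and numbers rather than the second gradient law of the thesis, the generic mechanics of the procedure described above: a finite-difference sensitivity matrix of the observables with respect to the constitutive parameters, reused in a Gauss-Newton style update.

```python
# Hypothetical sketch: finite-difference sensitivity matrix of observables with
# respect to parameters, followed by Gauss-Newton updates. The forward model,
# parameter values and "experimental" observables are invented for illustration.
import numpy as np

def observables(params):
    """Stand-in forward model mapping constitutive parameters to observables."""
    a, b, c = params
    return np.array([a + b**2, np.exp(-c) * a, b * c, a * b + c])

def sensitivity_matrix(params, rel_step=1e-6):
    """Finite-difference sensitivities d(observable_i)/d(parameter_j)."""
    base = observables(params)
    S = np.zeros((base.size, params.size))
    for j in range(params.size):
        step = rel_step * max(abs(params[j]), 1.0)
        perturbed = params.copy()
        perturbed[j] += step
        S[:, j] = (observables(perturbed) - base) / step
    return S

params = np.array([2.0, 1.5, 0.3])           # current parameter estimate (arbitrary)
target = np.array([4.3, 1.6, 0.5, 3.4])      # "experimental" observables (invented)

for _ in range(5):                            # a few Gauss-Newton iterations
    S = sensitivity_matrix(params)
    residual = target - observables(params)
    params = params + np.linalg.lstsq(S, residual, rcond=None)[0]

print("identified parameters:", params)
```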
3

Perceptually motivated speech recognition and mispronunciation detection

Koniaris, Christos January 2012 (has links)
This doctoral thesis is the result of a research effort performed in two fields of speech technology, i.e., speech recognition and mispronunciation detection. Although the two areas are clearly distinguishable, the proposed approaches share a common hypothesis based on psychoacoustic processing of speech signals. The conjecture implies that the human auditory periphery provides a relatively good separation of different sound classes. Hence, it is possible to use recent findings from psychoacoustic perception together with mathematical and computational tools to model the auditory sensitivities to small speech signal changes. The performance of an automatic speech recognition system strongly depends on the representation used for the front-end. If the extracted features do not include all relevant information, the performance of the classification stage is inherently suboptimal. The work described in Papers A, B and C is motivated by the fact that humans perform better at speech recognition than machines, particularly in noisy environments. The goal is to make use of knowledge of human perception in the selection and optimization of speech features for speech recognition. These papers show that maximizing the similarity of the Euclidean geometry of the features to the geometry of the perceptual domain is a powerful tool to select or optimize features. Experiments with a practical speech recognizer confirm the validity of the principle. An approach to improve mel-frequency cepstrum coefficients (MFCCs) through offline optimization is also shown. The method has three advantages: i) it is computationally inexpensive, ii) it does not use the auditory model directly, thus avoiding its computational cost, and iii) importantly, it provides better recognition performance than traditional MFCCs for both clean and noisy conditions. The second task concerns automatic pronunciation error detection. The research, described in Papers D, E and F, is motivated by the observation that almost all native speakers perceive, relatively easily, the acoustic characteristics of their own language when it is produced by speakers of the language. Small variations within a phoneme category, sometimes different for various phonemes, do not significantly change the perception of the language’s own sounds. Several methods are introduced, based on similarity measures between the Euclidean space spanned by the acoustic representations of the speech signal and the Euclidean space spanned by an auditory model output, to identify the problematic phonemes for a given speaker. The methods are tested for groups of speakers from different languages and evaluated against a theoretical linguistic study, showing that they can capture many of the problematic phonemes that speakers from each language mispronounce. Finally, a listening test on the same dataset verifies the validity of these methods. / European Union FP6-034362 research project ACORNS / Computer-Animated language Teachers (CALATea)
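A hedged sketch of the geometry-matching idea stated in this abstract: candidate feature subsets are scored by how well the Euclidean distances between their feature vectors reproduce the distances in a perceptual representation. The data, the "perceptual" proxy and the correlation-based score below are illustrative assumptions, not the auditory model or features of the thesis.

```python
# Illustrative sketch: select the feature subset whose Euclidean geometry best
# matches that of a (randomly generated stand-in) "perceptual" representation.
import numpy as np
from itertools import combinations
from scipy.spatial.distance import pdist

rng = np.random.default_rng(1)
n_frames, n_feats = 40, 10
features = rng.normal(size=(n_frames, n_feats))           # e.g. cepstral features per frame
perceptual = features[:, :4] @ rng.normal(size=(4, 6))    # proxy for an auditory-model output
perceptual += 0.1 * rng.normal(size=perceptual.shape)

d_percept = pdist(perceptual)                              # pairwise distances in the perceptual space

def geometry_score(subset):
    """Correlation between feature-space and perceptual-space distance geometries."""
    return np.corrcoef(pdist(features[:, list(subset)]), d_percept)[0, 1]

# Exhaustively pick the 3-feature subset whose geometry best matches the perceptual one.
best = max(combinations(range(n_feats), 3), key=geometry_score)
print("selected feature subset:", best, "score:", round(geometry_score(best), 3))
```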
4

Three-dimensional individual and joint inversion of direct current resistivity and electromagnetic data

Weißflog, Julia 06 April 2017 (has links) (PDF)
The objective of our studies is the combination of electromagnetic and direct current (DC) resistivity methods in a joint inversion approach to improve the reconstruction of a given conductivity distribution. We utilize the distinct sensitivity patterns of different methods to enhance the overall resolution power and ensure a more reliable imaging result. In order to simplify the work with more than one electromagnetic method and establish a flexible and state-of-the-art software basis, we developed new DC resistivity and electromagnetic forward modeling and inversion codes based on finite elements of second order on unstructured grids. The forward operators are verified using analytical solutions and convergence studies before we apply a regularized Gauss-Newton scheme and successfully invert synthetic data sets. Finally, we link both codes with each other in a joint inversion. In contrast to most widely used joint inversion strategies, where different data sets are combined in a single least-squares problem resulting in a large system of equations, we introduce a sequential approach that cycles through the different methods iteratively. This way, we avoid several difficulties such as the determination of the full set of regularization parameters or a weighting of the distinct data sets. The sequential approach makes use of a smoothness regularization operator which penalizes the deviation of the model parameters from a given reference model. In our sequential strategy, we use the result of the preceding individual inversion scheme as reference model for the following one. We successfully apply this approach to synthetic data sets and show that the combination of at least two methods yields a significantly improved parameter model compared to the individual inversion results. / Ziel der vorliegenden Arbeit ist die gemeinsame Inversion ("joint inversion") elektromagnetischer und geoelektrischer Daten zur Verbesserung des rekonstruierten Leitfähigkeitsmodells. Dabei nutzen wir die verschiedenartigen Sensitivitäten der Methoden aus, um die Auflösung zu erhöhen und ein zuverlässigeres Ergebnis zu erhalten. Um die Arbeit mit mehr als einer Methode zu vereinfachen und eine flexible Softwarebasis auf dem neuesten Stand der Forschung zu etablieren, wurden zwei Codes zur Modellierung und Inversion sowohl geoelektrischer als auch elektromagnetischer Daten neu entwickelt, die mit finiten Elementen zweiter Ordnung auf unstrukturierten Gittern arbeiten. Die Vorwärtsoperatoren werden mithilfe analytischer Lösungen und Konvergenzstudien verifiziert, bevor wir ein regularisiertes Gauß-Newton-Verfahren zur Inversion synthetischer Datensätze anwenden. Im Gegensatz zur meistgenutzten "joint inversion"-Strategie, bei der verschiedene Daten in einem einzigen Minimierungsproblem kombiniert werden, was in einem großen Gleichungssystem resultiert, stellen wir schließlich einen sequentiellen Ansatz vor, der zyklisch durch die einzelnen Methoden iteriert. So vermeiden wir u.a. eine komplizierte Wichtung der verschiedenen Daten und die Bestimmung aller Regularisierungsparameter in einem Schritt. Der sequentielle Ansatz wird über die Anwendung einer Glättungsregularisierung umgesetzt, bei der die Abweichung der Modellparameter zu einem gegebenen Referenzmodell bestraft wird. Wir nutzen das Ergebnis der vorangegangenen Einzelinversion als Referenzmodell für die folgende Inversion. Der Ansatz wird erfolgreich auf synthetische Datensätze angewendet und wir zeigen, dass die Kombination von mehreren Methoden eine erhebliche Verbesserung des Inversionsergebnisses im Vergleich zu den Einzelinversionen liefert.
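The toy sketch below (invented operators, noise level and regularisation weight; not the thesis code) mimics the sequential strategy described above: two linear "methods" with different sensitivities are inverted in turn with a penalty on the deviation from a reference model, and the result of each inversion becomes the reference for the next. A simple zeroth-order Tikhonov penalty stands in for the smoothness operator of the thesis.

```python
# Simplified sketch of a sequential joint inversion of two linear "methods".
# Kernels, data noise and the regularisation weight are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
n = 30
m_true = np.zeros(n)
m_true[10:20] = 1.0                                        # true (conductivity-like) model

# Two forward operators with complementary sensitivities (stand-ins for DC and EM).
G1 = np.tril(np.ones((n, n))) / n                          # smooth, cumulative sensitivity
G2 = np.eye(n) + 0.5 * np.eye(n, k=1)                      # more local sensitivity
d1 = G1 @ m_true + 0.01 * rng.normal(size=n)
d2 = G2 @ m_true + 0.01 * rng.normal(size=n)

def regularized_solve(G, d, m_ref, lam):
    """Minimise ||G m - d||^2 + lam * ||m - m_ref||^2 (one Gauss-Newton step of a linear problem)."""
    A = G.T @ G + lam * np.eye(G.shape[1])
    return np.linalg.solve(A, G.T @ d + lam * m_ref)

m_ref = np.zeros(n)                                        # starting reference model
for cycle in range(3):                                     # cycle through the two methods
    m_ref = regularized_solve(G1, d1, m_ref, lam=0.1)      # invert method 1 against current reference
    m_ref = regularized_solve(G2, d2, m_ref, lam=0.1)      # invert method 2, using method 1's result

print("model misfit after sequential inversion:", np.linalg.norm(m_ref - m_true))
```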
5

Les algorithmes de haute résolution en tomographie d'émission par positrons : développement et accélération sur les cartes graphiques / High-resolution algorithms for positron emission tomography: development and acceleration on graphics cards

Nassiri, Moulay Ali 05 1900 (has links)
La tomographie d’émission par positrons (TEP) est une modalité d’imagerie moléculaire utilisant des radiotraceurs marqués par des isotopes émetteurs de positrons permettant de quantifier et de sonder des processus biologiques et physiologiques. Cette modalité est surtout utilisée actuellement en oncologie, mais elle est aussi utilisée de plus en plus en cardiologie, en neurologie et en pharmacologie. En fait, c’est une modalité qui est intrinsèquement capable d’offrir avec une meilleure sensibilité des informations fonctionnelles sur le métabolisme cellulaire. Les limites de cette modalité sont surtout la faible résolution spatiale et le manque d’exactitude de la quantification. Par ailleurs, afin de dépasser ces limites qui constituent un obstacle pour élargir le champ des applications cliniques de la TEP, les nouveaux systèmes d’acquisition sont équipés d’un grand nombre de petits détecteurs ayant de meilleures performances de détection. La reconstruction de l’image se fait en utilisant les algorithmes stochastiques itératifs mieux adaptés aux acquisitions à faibles statistiques. De ce fait, le temps de reconstruction est devenu trop long pour une utilisation en milieu clinique. Ainsi, pour réduire ce temps, les données d’acquisition sont compressées et des versions accélérées d’algorithmes stochastiques itératifs qui sont généralement moins exactes sont utilisées. Les performances améliorées par l’augmentation du nombre de détecteurs sont donc limitées par les contraintes de temps de calcul. Afin de sortir de cette boucle et permettre l’utilisation des algorithmes de reconstruction robustes, de nombreux travaux ont été effectués pour accélérer ces algorithmes sur les dispositifs GPU (Graphics Processing Units) de calcul haute performance. Dans ce travail, nous avons rejoint cet effort de la communauté scientifique pour développer et introduire en clinique l’utilisation des algorithmes de reconstruction puissants qui améliorent la résolution spatiale et l’exactitude de la quantification en TEP. Nous avons d’abord travaillé sur le développement des stratégies pour accélérer sur les dispositifs GPU la reconstruction des images TEP à partir des données d’acquisition en mode liste. En fait, le mode liste offre de nombreux avantages par rapport à la reconstruction à partir des sinogrammes, entre autres : il permet d’implanter facilement et avec précision la correction du mouvement et le temps de vol (TOF : Time-Of-Flight) pour améliorer l’exactitude de la quantification. Il permet aussi d’utiliser les fonctions de base spatio-temporelles pour effectuer la reconstruction 4D afin d’estimer les paramètres cinétiques des métabolismes avec exactitude. Cependant, d’une part, l’utilisation de ce mode est très limitée en clinique, et d’autre part, il est surtout utilisé pour estimer la valeur normalisée de captation SUV qui est une grandeur semi-quantitative limitant le caractère fonctionnel de la TEP. Nos contributions sont les suivantes : - Le développement d’une nouvelle stratégie visant à accélérer sur les dispositifs GPU l’algorithme 3D LM-OSEM (List Mode Ordered-Subset Expectation-Maximization), y compris le calcul de la matrice de sensibilité intégrant les facteurs d’atténuation du patient et les coefficients de normalisation des détecteurs.
Le temps de calcul obtenu est non seulement compatible avec une utilisation clinique des algorithmes 3D LM-OSEM, mais il permet également d’envisager des reconstructions rapides pour les applications TEP avancées telles que les études dynamiques en temps réel et des reconstructions d’images paramétriques directement à partir des données d’acquisition. - Le développement et l’implantation sur GPU de l’approche Multigrilles/Multitrames pour accélérer l’algorithme LMEM (List-Mode Expectation-Maximization). L’objectif est de développer une nouvelle stratégie pour accélérer l’algorithme de référence LMEM qui est un algorithme convergent et puissant, mais qui a l’inconvénient de converger très lentement. Les résultats obtenus permettent d’entrevoir des reconstructions en temps quasi-réel aussi bien pour les examens utilisant un grand nombre de données d’acquisition que pour les acquisitions dynamiques synchronisées. Par ailleurs, en clinique, la quantification est souvent faite à partir de données d’acquisition en sinogrammes généralement compressés. Mais des travaux antérieurs ont montré que cette approche pour accélérer la reconstruction diminue l’exactitude de la quantification et dégrade la résolution spatiale. Pour cette raison, nous avons parallélisé et implémenté sur GPU l’algorithme AW-LOR-OSEM (Attenuation-Weighted Line-of-Response OSEM), une version de l’algorithme 3D OSEM qui effectue la reconstruction à partir de sinogrammes sans compression de données en intégrant les corrections de l’atténuation et de la normalisation dans les matrices de sensibilité. Nous avons comparé deux approches d’implantation : dans la première, la matrice système (MS) est calculée en temps réel au cours de la reconstruction, tandis que la seconde implantation utilise une MS pré-calculée avec une meilleure exactitude. Les résultats montrent que la première implantation offre une efficacité de calcul environ deux fois meilleure que celle obtenue dans la deuxième implantation. Les temps de reconstruction rapportés sont compatibles avec une utilisation clinique de ces deux stratégies. / Positron emission tomography (PET) is a molecular imaging modality that uses radiotracers labeled with positron-emitting isotopes in order to quantify many biological processes. The clinical applications of this modality are largely in oncology, but it has the potential to become a reference exam for many diseases in cardiology, neurology and pharmacology. In fact, it is intrinsically able to offer functional information on cellular metabolism with good sensitivity. The principal limitations of this modality are the limited spatial resolution and the limited accuracy of the quantification. To overcome these limits, recent PET systems use a large number of small detectors with better detection performance. The image reconstruction is also done using accurate algorithms such as iterative stochastic algorithms. As a consequence, the reconstruction time becomes too long for clinical use. The acquired data are therefore compressed, and accelerated versions of iterative stochastic algorithms, which are generally not convergent, are used to perform the reconstruction. Consequently, the obtained performance is compromised. In order to be able to use the complex reconstruction algorithms in clinical applications for the new PET systems, many previous studies have aimed to accelerate these algorithms on GPU devices.
Therefore, in this thesis, we joined the effort of the research community to develop and introduce into routine clinical use accurate reconstruction algorithms that improve the spatial resolution and the accuracy of quantification in PET. We first worked on developing strategies for accelerating, on GPU devices, the reconstruction of PET images from list-mode acquisition data. In fact, this mode offers many advantages over the histogram mode, such as easier and more precise motion correction, the possibility of using time-of-flight (TOF) information to improve the quantification accuracy, and the possibility of using spatio-temporal basis functions to perform 4D reconstruction and extract kinetic parameters with better accuracy directly from the acquired data. However, one of the main obstacles limiting the use of the list-mode reconstruction approach in routine clinical practice is the relatively long reconstruction time. To overcome this obstacle, we: - developed a new strategy to accelerate on GPU devices the fully 3D list-mode ordered-subset expectation-maximization (LM-OSEM) algorithm, including the calculation of the sensitivity matrix that accounts for the patient-specific attenuation and normalisation corrections. The reported reconstruction times are not only compatible with clinical use of 3D LM-OSEM algorithms, but also let us envision fast reconstructions for advanced PET applications such as real-time dynamic studies and parametric image reconstructions. - developed and implemented on GPU a multigrid/multiframe approach to the expectation-maximization algorithm for list-mode acquisitions (MGMF-LMEM). The objective is to accelerate the gold-standard LMEM (list-mode expectation-maximization) algorithm, which is convergent and powerful but converges very slowly. The GPU-based MGMF-LMEM algorithm processes data at a rate close to one million events per second per iteration and permits near real-time reconstructions for large acquisitions or low-count acquisitions such as gated studies. Moreover, for clinical use, quantification is often done from acquired data organized in sinograms, which are generally compressed in order to accelerate reconstruction. However, previous works have shown that this approach decreases the accuracy of quantification and degrades the spatial resolution. Ordered-subset expectation-maximization (OSEM) is the most widely used algorithm for sinogram-based reconstruction in the clinic. We therefore parallelized and implemented the attenuation-weighted line-of-response OSEM (AW-LOR-OSEM) algorithm, which allows PET image reconstruction from sinograms without any data compression and incorporates the attenuation and normalization corrections into the sensitivity matrices as weight factors. We compared two implementation strategies: in the first, the system matrix (SM) is calculated on the fly during the reconstruction, while the second implementation uses a more accurate precalculated SM. The results show that the computational efficiency is about twice as high for the implementation computing the SM on the fly as for the one using a precalculated SM, but the reported reconstruction times are compatible with clinical use for both strategies.
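As a minimal illustration of the core expectation-maximization update that LM-OSEM and MGMF-LMEM accelerate (list-mode handling, subsets, TOF, GPU parallelisation and physical corrections are all omitted), the sketch below runs plain MLEM on an invented toy system matrix; sizes and data are arbitrary assumptions, not material from the thesis.

```python
# Toy MLEM sketch: x <- x / s * A^T (y / (A x)), with sensitivity image s = A^T 1.
# The system matrix, data and sizes are invented; this is not the thesis GPU code.
import numpy as np

rng = np.random.default_rng(3)
n_lors, n_vox = 200, 50
A = rng.uniform(size=(n_lors, n_vox))             # stand-in system matrix (LOR x voxel)
A /= A.sum(axis=1, keepdims=True)
x_true = rng.uniform(0.5, 2.0, size=n_vox)        # true activity distribution
y = rng.poisson(A @ x_true * 50) / 50.0           # noisy projection data (scaled Poisson counts)

sens = A.T @ np.ones(n_lors)                      # sensitivity image (A^T 1)
x = np.ones(n_vox)                                # uniform initial estimate
for _ in range(50):                               # MLEM iterations
    proj = A @ x
    ratio = np.divide(y, proj, out=np.zeros_like(y), where=proj > 0)
    x = x / sens * (A.T @ ratio)

print("relative reconstruction error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```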
