251

Uncertainty Quantification for low-frequency Maxwell equations with stochastic conductivity models

Kamilis, Dimitrios January 2018 (has links)
Uncertainty Quantification (UQ) has been an active area of research in recent years, with a wide range of applications in data and imaging sciences. In many problems, the source of uncertainty stems from an unknown parameter in the model. In physical and engineering systems, for example, the parameters of the partial differential equation (PDE) that models the observed data may be unknown or incompletely specified. In such cases, one may use a probabilistic description based on prior information and formulate a forward UQ problem of characterising the uncertainty in the PDE solution and observations in response to that in the parameters. Conversely, inverse UQ encompasses the statistical estimation of the unknown parameters from the available observations, which can be cast as a Bayesian inverse problem. The contributions of the thesis focus on examining the aforementioned forward and inverse UQ problems for the low-frequency, time-harmonic Maxwell equations, where the model uncertainty emanates from the lack of knowledge of the material conductivity parameter. The motivation comes from the Controlled-Source Electromagnetic Method (CSEM), which aims to detect and image hydrocarbon reservoirs by using electromagnetic (EM) field measurements to obtain information about the conductivity profile of the sub-seabed. Traditionally, algorithms for deterministic models have been employed to solve the inverse problem in CSEM by optimisation and regularisation methods, which, aside from the image reconstruction, provide no quantitative information on the credibility of its features. This work instead employs stochastic models in which the conductivity is represented as a lognormal random field, with the objective of providing a more informative characterisation of the model observables and the unknown parameters. The variational formulation of these stochastic models is analysed and proved to be well-posed under suitable assumptions.
For computational purposes the stochastic formulation is recast as a deterministic, parametric problem with distributed uncertainty, which leads to an infinite-dimensional integration problem with respect to the prior and posterior measures. One of the main challenges is thus the approximation of these integrals, with the standard choice being some variant of the Monte-Carlo (MC) method. However, such methods typically fail to take advantage of the intrinsic properties of the model and suffer from unsatisfactory convergence rates. Based on recently developed theory on high-dimensional approximation, this thesis advocates the use of Sparse Quadrature (SQ) to tackle the integration problem. For the models considered here and under certain assumptions, we prove that for forward UQ, Sparse Quadrature can attain dimension-independent convergence rates that outperform MC. Typical CSEM models are large-scale, and thus additional effort is made in this work to reduce the cost of obtaining forward solutions at each parameter sample by utilising the weighted Reduced Basis (RB) method and the Empirical Interpolation Method (EIM). The proposed variant of a combined SQ-EIM-RB algorithm is based on an adaptive selection of training sets and a primal-dual, goal-oriented formulation for the EIM-RB approximation. Numerical examples show that the suggested computational framework can alleviate the computational costs associated with forward UQ for the pertinent large-scale models, thus providing a viable methodology for practical applications.
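The Monte-Carlo baseline that Sparse Quadrature is compared against can be sketched on a toy problem. Everything below is an illustrative assumption, not the thesis code: a 1-D lognormal "conductivity" parametrised by a few Gaussian expansion coefficients, an effective-conductivity quantity of interest, and a plain MC average over parameter samples.

```python
import numpy as np

# Hypothetical toy illustration (not the thesis code): the conductivity is a
# lognormal field sigma(x) = exp(g(x)), with g a truncated cosine expansion
# whose coefficients y_k play the role of the high-dimensional parameters.
# The quantity of interest Q is a harmonic mean, a stand-in for an effective
# conductivity. Plain Monte Carlo averages Q over parameter samples.

def sigma(x, y):
    # assumed, illustrative prior: standard-normal coefficients with
    # algebraically decaying mode weights 1/(k+1)^2
    g = sum(yk * np.cos((k + 1) * np.pi * x) / (k + 1) ** 2
            for k, yk in enumerate(y))
    return np.exp(g)

def qoi(y, n_grid=200):
    x = np.linspace(0.0, 1.0, n_grid)
    return 1.0 / np.mean(1.0 / sigma(x, y))  # effective conductivity

def mc_estimate(n_samples, dim=4, seed=0):
    rng = np.random.default_rng(seed)
    ys = rng.standard_normal((n_samples, dim))
    return np.mean([qoi(y) for y in ys])

est = mc_estimate(2000)
```

Sparse Quadrature replaces the random samples with a sparse, structured set of nodes and weights exploiting the decay of the modes; the point of the toy is only to show where the high-dimensional integral over the parameters y arises.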
252

Towards an "optimal" solution for nonlinear sparse inverse problems using binary variables on continuous dictionaries: applications in Astrophysics

Boudineau, Mégane 01 February 2019 (has links)
This thesis deals with solutions of nonlinear inverse problems using a sparsity prior; more specifically when the data can be modelled as a linear combination of a few functions, which depend non-linearly on a "location" parameter, e.g. frequencies for spectral analysis or time-delays for spike train deconvolution.
These problems are generally reformulated as linear sparse approximation problems, thanks to an evaluation of the nonlinear functions at location parameters discretised on a thin grid, building a "discrete dictionary". However, such an approach has two major drawbacks. On the one hand, the discrete dictionary is highly correlated; classical sub-optimal methods such as L1- penalisation or greedy algorithms can then fail. On the other hand, the estimated location parameter, which belongs to the discretisation grid, is necessarily discrete and that leads to model errors. To deal with these issues, we propose in this work an exact sparsity model, thanks to the introduction of binary variables, and an optimal solution of the problem with a "continuous dictionary" allowing a continuous estimation of the location parameter. We focus on two research axes, which we illustrate with problems such as spike train deconvolution and spectral analysis of unevenly sampled data. The first axis focusses on the "dictionary interpolation" principle, which consists in a linearisation of the continuous dictionary in order to get a constrained linear sparse approximation problem. The introduction of binary variables allows us to reformulate this problem as a "Mixed Integer Program" (MIP) and to exactly model the sparsity thanks to the "pseudo-norm L0". We study different kinds of dictionary interpolation and constraints relaxation, in order to solve the problem optimally thanks to MIP classical algorithms. For the second axis, in a Bayesian framework, the binary variables are supposed random with a Bernoulli distribution and we model the sparsity through a Bernoulli-Gaussian prior. This model is extended to take into account continuous location parameters (BGE model). We then estimate the parameters from samples drawn using Markov chain Monte Carlo algorithms. 
In particular, we show that marginalising the amplitudes allows us to improve the sampling of a Gibbs algorithm in a supervised case (when the model's hyperparameters are known). In an unsupervised case, we propose to take advantage of such a marginalisation through a "Partially Collapsed Gibbs Sampler." Finally, we adapt the BGE model and associated samplers to a topical science case in Astrophysics: the detection of exoplanets from radial velocity measurements. The efficiency of our method will be illustrated with simulated data, as well as actual astrophysical data.
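The role of the binary variables can be made concrete on a miniature example (a hypothetical sketch, not the thesis MIP formulation): a binary vector q selects which dictionary atoms are active, and the exact-L0 problem is solved by examining every support of size at most K, each time solving the easy least-squares subproblem in the amplitudes. A MIP solver explores the same binary space far more efficiently via branch-and-bound.

```python
import itertools
import numpy as np

# Brute-force illustration of the exact L0 / binary-variable idea: each
# binary vector (encoded here as a support tuple) says which atoms of the
# dictionary H are active; for every support we solve the least-squares
# subproblem and keep the best fit. All names are illustrative assumptions.

def best_k_sparse(H, z, K):
    n = H.shape[1]
    best = (np.inf, None, None)  # (squared error, support, amplitudes)
    for k in range(K + 1):
        for support in itertools.combinations(range(n), k):
            if support:
                Hs = H[:, list(support)]
                amp, *_ = np.linalg.lstsq(Hs, z, rcond=None)
                r = z - Hs @ amp
            else:
                amp, r = np.array([]), z
            err = float(r @ r)
            if err < best[0]:
                best = (err, support, amp)
    return best

rng = np.random.default_rng(1)
H = rng.standard_normal((20, 8))      # toy discrete dictionary
x_true = np.zeros(8)
x_true[[2, 5]] = [1.5, -2.0]          # 2-sparse ground truth
z = H @ x_true                        # noiseless observations
err, support, amp = best_k_sparse(H, z, K=2)
```

Enumeration costs O(n^K) and is only viable for tiny problems; the MIP formulation with the L0 pseudo-norm reaches the same optimum with certified bounds.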
253

Towards Reducing Structural Interpretation Uncertainties Using Seismic Data

Irakarama, Modeste 25 April 2019 (has links)
Subsurface structural models are routinely used for resource estimation, numerical simulations, and risk management; it is therefore important that subsurface models represent the geometry of geological objects accurately. The first step in building a subsurface model is usually to interpret structural features, such as faults and horizons, from a seismic image; the identified structural features are then used to build a subsurface model using interpolation methods. Subsurface models built this way therefore inherit interpretation uncertainties, since a single seismic image often supports multiple structural interpretations. In this manuscript, I study the problem of reducing interpretation uncertainties using seismic data.
In particular, I study the problem of using seismic data to determine which structural models are more likely than others in an ensemble of geologically plausible structural models. I refer to this problem as "appraising structural models using seismic data". I introduce and formalize the problem of appraising structural interpretations using seismic data. I propose to solve the problem by generating synthetic data for each structural interpretation and then computing misfit values for each interpretation; this allows us to rank the different structural interpretations. The main challenge of appraising structural models using seismic data is to propose appropriate data misfit functions. I derive a set of conditions that have to be satisfied by the data misfit function for a successful appraisal of structural models. I argue that since it is not possible to satisfy these conditions using vertical seismic profile (VSP) data, it is not possible to appraise structural interpretations using VSP data in the most general case. The conditions imposed on the data misfit function can in principle be satisfied for surface seismic data. In practice, however, it remains a challenge to propose and compute data misfit functions that satisfy those conditions. I conclude the manuscript by highlighting practical issues of appraising structural interpretations using surface seismic data. I propose a general data misfit function that is made of two main components: (1) a residual operator that computes data residuals, and (2) a projection operator that projects the data residuals from the data space into the image domain. This misfit function is therefore localized in space, as it outputs data misfit values in the image domain. However, I am still unable to propose a practical implementation of this misfit function that satisfies the conditions imposed for a successful appraisal of structural interpretations; this is a subject for further research.
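A minimal sketch of the proposed two-component misfit (toy operators of my own choosing, not the thesis implementation): the forward map is a 1-D blur, the residual operator subtracts synthetic from observed data, and the projection operator maps the residuals back into the image domain, here via the adjoint of the blur, so that the misfit map is localized in space.

```python
import numpy as np

# Assumed toy setup: the "subsurface" is a 1-D spike model, the forward
# operator F is a small blur, R computes data residuals, and P ~ F^T
# projects them back to the image domain, yielding a spatial misfit map.

def forward(model, kernel):
    return np.convolve(model, kernel, mode="same")    # F: image -> data

def residual(d_obs, d_syn):
    return d_obs - d_syn                              # R: data residuals

def project(r, kernel):
    return np.convolve(r, kernel[::-1], mode="same")  # P ~ F^T: data -> image

kernel = np.array([0.25, 0.5, 0.25])
true_model = np.zeros(50)
true_model[20] = 1.0        # reflector in the "true" interpretation
wrong_model = np.zeros(50)
wrong_model[30] = 1.0       # reflector misplaced by the candidate

d_obs = forward(true_model, kernel)
r = residual(d_obs, forward(wrong_model, kernel))
local_misfit = project(r, kernel) ** 2   # image-domain misfit map
```

The misfit map lights up both where the candidate interpretation misses a real reflector (index 20) and where it places a spurious one (index 30), while staying zero elsewhere, which is the "localized in space" behaviour the text asks of the misfit function.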
254

Surface Modified Capillaries in Capillary Electrophoresis Coupled to Mass Spectrometry : Method Development and Exploration of the Potential of Capillary Electrophoresis as a Proteomic Tool

Zuberovic, Aida January 2009 (has links)
Growing knowledge of the complexity of physiological processes increases the demands on the analytical techniques employed to explore them. A comprehensive analysis of the entire sample content is today the most common approach to investigating the molecular interplay behind a physiological deviation. For this purpose, a method is required that offers a number of important properties, such as speed and simplicity, high resolution and sensitivity, minimal sample volume requirements, cost efficiency and robustness, possibility of automation, high throughput, and a wide application range. Capillary electrophoresis (CE) coupled to mass spectrometry (MS) has great potential and fulfils many of these criteria. However, further developments and improvements of these techniques and their combination are required to meet the challenges of complex biological samples. Protein analysis using CE is a challenging task due to protein adsorption to the negatively charged fused-silica capillary wall, an effect that becomes more pronounced with increasing basicity and size of proteins and peptides. In this thesis, the adsorption problem was addressed by using an in-house developed, physically adsorbed polyamine coating, named PolyE-323. The coating procedure is fast and simple and generates a coating that is stable over a wide pH range (2-11). By coupling PolyE-323-modified capillaries to MS, using either electrospray ionisation (ESI) or matrix-assisted laser desorption/ionisation (MALDI), successful analyses of peptides, proteins, and complex samples, such as protein digests and crude human body fluids, were obtained. The possibilities of using CE-MALDI-MS/MS as a proteomic tool, combined with proper sample preparation, are further demonstrated by applying high-abundance protein depletion in combination with a peptide derivatisation step or isoelectric focusing (IEF).
These approaches were applied to profiling the proteomes of human cerebrospinal fluid (CSF) and human follicular fluid (hFF), respectively. Finally, a multiplexed quantitative proteomic analysis was performed on a set of ventricular cerebrospinal fluid (vCSF) samples from a patient with traumatic brain injury (TBI) to follow relative changes in protein patterns during the recovery process. The results presented in this thesis confirm the potential of CE, in combination with MS, as a valuable choice in the analysis of complex biological samples and clinical applications.
255

A multi-stack framework in magnetic resonance imaging

Shilling, Richard Zethward 02 April 2009 (has links)
Magnetic resonance imaging (MRI) is the preferred imaging modality for visualization of intracranial soft tissues. Surgical planning, and increasingly surgical navigation, use high-resolution 3-D patient-specific structural maps of the brain. However, MRI is a multi-parameter tomographic technique in which high-resolution imagery competes against high contrast and reasonable acquisition times. Resolution-enhancement techniques based on super-resolution are particularly well suited to this trade-off, when both high contrast and reasonable acquisition times are needed. Super-resolution is the concept of reconstructing a high-resolution image from a set of low-resolution images taken at different viewpoints or foci. The MRI encoding techniques that produce high-resolution imagery are often sub-optimal for the contrast needed to visualize some structures in the brain. A novel super-resolution reconstruction framework for MRI is proposed in this thesis. Its purpose is to produce images of both the high resolution and the high contrast desirable for image-guided, minimally invasive brain surgery. The input data are multiple 2-D multi-slice Inversion Recovery MRI scans acquired at orientations with regular angular spacing, rotated around a common axis. Inspired by the computed tomography domain, the reconstruction is a 3-D volume of isotropic high resolution, where the inversion process resembles a projection reconstruction problem. Iterative algorithms for reconstruction are based on the projection-onto-convex-sets formalism. Results demonstrate resolution enhancement in simulated phantom studies, and in ex vivo and in vivo human brain scans, carried out on clinical scanners. In addition, a novel motion correction method is applied to volume registration using an iterative technique in which the super-resolution reconstruction is estimated in a given iteration following motion correction in the preceding iteration.
A comparison study of our method with previously published methods in super-resolution shows favorable characteristics of the proposed approach.
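The projection-onto-convex-sets idea can be illustrated on a 1-D toy (an assumed setup; the thesis operates on rotated 3-D MRI stacks): two "low-resolution scans" observe a high-resolution signal as block averages taken at two different shifts, each observation defines an affine constraint set, and POCS alternately projects the current estimate onto each set until it is consistent with both.

```python
import numpy as np

# Toy POCS sketch (illustrative assumption, not the thesis 3-D MRI code).
# Each low-resolution observation fixes the means of disjoint pairs of
# high-resolution samples; projecting onto such an affine set just shifts
# each pair by the mismatch in its mean.

def project_pairs(x, y, start):
    # orthogonal projection onto {x : (x[i] + x[i+1]) / 2 = y_k}
    x = x.copy()
    for k, i in enumerate(range(start, len(x) - 1, 2)):
        delta = y[k] - 0.5 * (x[i] + x[i + 1])
        x[i] += delta
        x[i + 1] += delta
    return x

rng = np.random.default_rng(0)
x_true = rng.standard_normal(8)                                  # HR signal
y0 = [(x_true[i] + x_true[i + 1]) / 2 for i in range(0, 7, 2)]   # scan, shift 0
y1 = [(x_true[i] + x_true[i + 1]) / 2 for i in range(1, 7, 2)]   # scan, shift 1

x = np.zeros(8)                 # initial estimate
for _ in range(300):            # alternating projections
    x = project_pairs(x, y0, 0)
    x = project_pairs(x, y1, 1)
```

POCS returns some feasible point of the intersection, not necessarily the true signal: with 7 constraints on 8 unknowns, a one-dimensional family of consistent reconstructions remains, which is why additional scans or regularity assumptions matter in practice.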
256

A computational framework for the solution of infinite-dimensional Bayesian statistical inverse problems with application to global seismic inversion

Martin, James Robert, Ph. D. 18 September 2015 (has links)
Quantifying uncertainties in large-scale forward and inverse PDE simulations has emerged as a central challenge facing the field of computational science and engineering. The promise of modeling and simulation for prediction, design, and control cannot be fully realized unless uncertainties in models are rigorously quantified, since this uncertainty can potentially overwhelm the computed result. While statistical inverse problems can be solved today for smaller models with a handful of uncertain parameters, this task is computationally intractable using contemporary algorithms for complex systems characterized by large-scale simulations and high-dimensional parameter spaces. In this dissertation, I address issues regarding the theoretical formulation, numerical approximation, and algorithms for the solution of infinite-dimensional Bayesian statistical inverse problems, and apply the entire framework to a problem in global seismic wave propagation. Classical (deterministic) approaches to solving inverse problems attempt to recover the "best-fit" parameters that match given observation data, as measured in a particular metric. In the statistical inverse problem, we go one step further to return not only a point estimate of the best medium properties, but also a complete statistical description of the uncertain parameters. The result is a posterior probability distribution that describes our state of knowledge after learning from the available data, and provides a complete description of parameter uncertainty. In this dissertation, a computational framework for such problems is described that wraps around existing forward solvers for a given physical problem, provided they are appropriately equipped. A collection of tools, insights, and numerical methods may then be applied to solve the problem and interrogate the resulting posterior distribution, which describes our final state of knowledge.
We demonstrate the framework with numerical examples, including inference of a heterogeneous compressional wavespeed field for a problem in global seismic wave propagation with 10⁶ parameters.
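The contrast between the deterministic best-fit and the statistical formulation can be seen on a one-parameter toy (an illustrative assumption, not the dissertation's seismic solver): instead of only maximizing the data fit, the full posterior density over the parameter is tabulated, here simply on a grid.

```python
import numpy as np

# Assumed toy inverse problem: a single "medium" parameter m, a nonlinear
# forward map F, Gaussian observation noise, and a Gaussian prior. The
# posterior p(m | d) ~ exp(-||d - F(m)||^2 / (2 s^2)) p(m) is evaluated on
# a grid, giving the complete description of parameter uncertainty rather
# than only a best-fit point.

def forward(m):
    return np.array([m, m ** 2])            # toy nonlinear forward map

def posterior_on_grid(d, grid, noise=0.1, prior_std=1.0):
    like = np.array([np.exp(-np.sum((d - forward(m)) ** 2) / (2 * noise ** 2))
                     for m in grid])
    prior = np.exp(-grid ** 2 / (2 * prior_std ** 2))
    p = like * prior
    return p / (p.sum() * (grid[1] - grid[0]))   # normalized density

grid = np.linspace(-2.0, 2.0, 401)
d = forward(0.7)                             # noiseless data from m = 0.7
p = posterior_on_grid(d, grid)
m_map = grid[np.argmax(p)]                   # posterior mode (MAP point)
```

Grid evaluation is only feasible in one or two dimensions; the dissertation's point is precisely that for 10⁶ parameters one needs dedicated algorithms (and derivative information) to explore the posterior.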
257

Inverse problems in the mathematical theory of electro-magneto-encephalography

Χατζηλοϊζή, Δήμητρα 22 December 2009 (has links)
The electromagnetic activity of the human brain is studied via the non-invasive methods of Electroencephalography and Magnetoencephalography. It is well known that an electrochemically generated current in the interior of the brain generates an electric and a magnetic field, both in the interior and the exterior of the brain. The resulting electric and magnetic fields are measured on the surface and in the exterior of the head via the EEG and MEG, respectively. In the present thesis we study direct and inverse EEG and MEG problems in order to identify and characterize the source. In the First Part we describe the morphology and the functionality of the human brain and we state the physical and geometrical models that we use.
In the Second Part we solve the direct problem of EEG for the spherical homogeneous model of the brain in the case of a continuously distributed neuronal current. It turns out that the electric potential is independent of the solenoidal part of the tangential component of the neuronal current; consequently, the corresponding inverse problem is not uniquely solvable. Hence, we require that the current minimise a suitable norm, and in this case we obtain the complete expansions of the visible part of the current from knowledge of the electric field. In the Third Part we study direct problems of MEG in ellipsoidal geometry. In particular, we evaluate the octapolic term of the magnetic induction field that is produced in the exterior of the ellipsoidal model of the brain-head system; this term provides the highest-order terms that can be expressed in closed form. It is shown numerically that the silent source of the quadrupolic term of the magnetic induction field does contribute to the octapolic term. Therefore, knowledge of the quadrupolic and octapolic terms provides enough data to construct an effective algorithm for inversion. Finally, the direct problem of MEG is presented in the case where the cerebral tissue is considered as an ellipsoidal conductor surrounding a fluid ellipsoidal core of different conductivity. The fluid core is occupied by the cerebrospinal fluid, and the source lies in the cerebral shell. The electric field in every region and the exterior magnetic induction field are obtained. Furthermore, we compare analytically and numerically the results of the inhomogeneous model with those of the homogeneous ellipsoidal model. We observe that both the inhomogeneity inside the cerebral tissue and the location of the source enter the magnetic induction field of the inhomogeneous model in a decisive way. The existence of the fluid core affects the monotonicity of the components of the magnetic field as well as its magnitude.
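The EEG non-uniqueness statement can be written compactly (our notation, a standard Helmholtz-type decomposition on spherical shells, not copied from the thesis): splitting the neuronal current into a radial part and two tangential potentials,

```latex
\mathbf{J}(\mathbf{r})
  = J_r(\mathbf{r})\,\hat{\mathbf{r}}
  + \nabla_{\!s}\,\Phi(\mathbf{r})
  + \hat{\mathbf{r}} \times \nabla_{\!s}\,\Psi(\mathbf{r}),
```

the exterior electric potential carries no information about the stream function \(\Psi\) (the solenoidal tangential part), so EEG data alone cannot determine it; the minimum-norm requirement on the current is what restores uniqueness.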
258

Advanced Methods for Radial Data Sampling in Magnetic Resonance Imaging

Block, Kai Tobias 16 September 2008 (has links)
No description available.
259

Nonlinear Reconstruction Methods for Parallel Magnetic Resonance Imaging

Uecker, Martin 15 July 2009 (has links)
No description available.
260

Statistical Multiresolution Estimators in Linear Inverse Problems: Foundations and Algorithmic Aspects

Marnitz, Philipp 27 October 2010 (has links)
No description available.
