1

Bayesian M/EEG source localization with possible joint skull conductivity estimation

Costa, Facundo Hernan 02 March 2017 (has links) (PDF)
M/EEG measurements allow changes in brain activity to be detected, which is useful for diagnosing brain disorders such as epilepsy. They consist of measuring the electric potential at the scalp and the magnetic field around the head. The measurements are related to the underlying brain activity by a linear model that depends on the lead-field matrix. Localizing the sources, or dipoles, of M/EEG measurements consists of inverting this linear model. However, the non-uniqueness of the solution (a consequence of the fundamental laws of physics) and the low number of measurements relative to the number of candidate dipoles make the inverse problem ill-posed. Solving such a problem requires some sort of regularization to reduce the search space. The literature abounds with methods and techniques to solve this problem, especially variational approaches. This thesis develops Bayesian methods to solve ill-posed inverse problems, with application to M/EEG. The main idea underlying this work is to constrain the sources to be sparse, a hypothesis that is valid in many applications, including certain types of epilepsy. We develop different hierarchical models to account for the sparsity of the sources. Theoretically, enforcing sparsity is equivalent to minimizing a cost function penalized by the l0 pseudo-norm of the solution. However, since l0 regularization leads to NP-hard problems, the l1 approximation is usually preferred. Our first contribution combines the two norms in a Bayesian framework, using a Bernoulli-Laplace prior. A Markov chain Monte Carlo (MCMC) algorithm is used to estimate the parameters of the model jointly with the source locations and intensities. Comparisons in several scenarios with sLORETA and weighted l1-norm regularization show interesting performance, at the price of higher computational complexity. The Bernoulli-Laplace model solves the source localization problem at a single time instant. However, it is well known biophysically that brain activity follows spatiotemporal patterns, so exploiting the temporal dimension further constrains the problem. Our second contribution formulates a structured-sparsity model to exploit this phenomenon: a multivariate Bernoulli-Laplacian distribution is proposed as a prior for the dipole locations. A latent variable is introduced to handle the resulting complex posterior, and an original Metropolis-Hastings sampling algorithm is developed. The results show that the proposed sampling technique significantly improves convergence. A comparative analysis is performed between the proposed model, an l21 mixed-norm regularization, and the Multiple Sparse Priors (MSP) algorithm, with experiments on both synthetic and real data. The results show that our model has several advantages, including better recovery of the dipole locations. The previous two algorithms assume a fully known lead-field matrix. However, this is seldom the case in practice: the matrix results from approximation methods that introduce significant uncertainty. Our third contribution handles the uncertainty of the lead-field matrix by expressing it as a function of the skull conductivity, using a polynomial matrix interpolation technique; the conductivity is considered the main source of uncertainty in the lead-field matrix. The multivariate Bernoulli-Laplacian model is then extended to estimate the skull conductivity jointly with the brain activity. The resulting model is compared with other methods, including the techniques of Vallaghé et al. and Guttierez et al. Our method provides results of better quality without requiring knowledge of the active dipole positions, and it is not limited to a single dipole activation.
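The Bernoulli-Laplace sampler itself is beyond a short sketch, but the weighted l1-norm baseline the thesis compares against can be illustrated compactly. The following is a minimal sketch, assuming a random toy lead-field and synthetic measurements (not the author's model or data), of solving min ½||y − Gx||² + λ||x||₁ by iterative soft thresholding (ISTA):

```python
import numpy as np

def ista_l1(G, y, lam=0.5, n_iter=500):
    """ISTA for min 0.5*||y - G x||^2 + lam*||x||_1.
    A generic sparse baseline, not the thesis's Bernoulli-Laplace MCMC sampler."""
    L = np.linalg.norm(G, 2) ** 2                  # Lipschitz constant of the gradient
    x = np.zeros(G.shape[1])
    for _ in range(n_iter):
        grad = G.T @ (G @ x - y)                   # gradient of the data-fit term
        z = x - grad / L                           # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

# Toy example: 32 sensors, 200 candidate dipoles, 3 active sources.
rng = np.random.default_rng(0)
G = rng.standard_normal((32, 200))                 # stand-in for a lead-field matrix
x_true = np.zeros(200); x_true[[10, 80, 150]] = [1.0, -0.5, 0.8]
y = G @ x_true + 0.01 * rng.standard_normal(32)
x_hat = ista_l1(G, y)
print(np.flatnonzero(np.abs(x_hat) > 0.1))         # recovered support
```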
2

Reconstruction of mechanical properties from surface-based motion data for Digital Image Elasto-Tomography using an implicit surface representation of breast tissue structure

Kershaw, Helen Elizabeth January 2012 (has links)
There has been great interest in recent times in the use of elastography for the characterization of human tissue. Digital Image Elasto-Tomography (DIET) is a novel breast cancer pre-screening technique under development at the University of Canterbury, which aims to identify and locate stiff areas within the breast that require further investigation using images of the surface motion alone. A calibrated array of five digital cameras captures the surface motion of the breast under harmonic actuation. The forward problem, that is, computing the resulting motion for a given mechanical property distribution, is solved using the Finite Element Method. The inverse problem is to find the mechanical properties that reproduce the measured surface motion through numerical simulation. A reconstruction algorithm is developed using a shape-based description to reduce the number of parameters in the inverse problem, and a parallel Genetic Algorithm is developed for parameter optimization. A geometric method, termed Fitness Function Analysis, is shown to improve the inclusion-location optimization. The ensemble of solutions generated by the Genetic Algorithm is used to produce an optimal estimate and a credible region for the inclusion location. Successful single-frequency phantom reconstructions are presented. An effective way of combining information from multi-frequency phantom data, by examining the characteristics of the measured surface motion using data-quality metrics, is developed and used to produce improved reconstructions. Results from numerical simulation datasets and a two-inclusion phantom, used to test the optimization of multiple and ellipsoidal inclusions, indicate that although two inclusions can be successfully reconstructed, the single-inclusion assumption may suffice even in irregular, heterogeneous cases. This assumption was used to successfully locate the stiffest inclusion in a phantom containing multiple inclusions of differing stiffness, based on three multi-frequency datasets. The methods developed on phantoms are applied to three in vivo cases, for both single- and multi-frequency data, with limited success. This thesis builds on previous work undertaken at the University of Canterbury. The original contributions of this work are as follows. A new reconstruction algorithm combining a genetic algorithm with Fitness Function Analysis is developed. The most realistic tissue-mimicking phantoms to date are used. An ellipsoidal shape-based description is presented and applied to the first multi-inclusion reconstructions in DIET. This work presents the first reconstructions using meshes created directly from data, using a meshing algorithm developed by Jonas Biehler. A multi-frequency cost function is developed to produce the first multi-frequency and in vivo reconstructions using DIET data.
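A minimal sketch of the kind of genetic-algorithm search described above, with a toy fitness function standing in for the FEM surface-motion misfit; the bounds, mutation scale, and "true" inclusion centre are illustrative assumptions, not values from the thesis:

```python
import numpy as np

rng = np.random.default_rng(1)

def fitness(pop, target=np.array([0.03, -0.02, 0.05])):
    # Stand-in for the FEM misfit: distance of each candidate inclusion centre
    # to a "true" centre.  A real DIET fitness would compare simulated and
    # measured surface displacements.
    return -np.linalg.norm(pop - target, axis=1)

def ga(n_pop=40, n_gen=60, bounds=(-0.08, 0.08)):
    pop = rng.uniform(*bounds, size=(n_pop, 3))        # (x, y, z) inclusion centres
    for _ in range(n_gen):
        f = fitness(pop)
        parents = pop[np.argsort(f)[::-1][: n_pop // 2]]     # truncation selection
        a, b = parents[rng.integers(0, len(parents), (2, n_pop))]
        children = 0.5 * (a + b)                             # blend crossover
        children += rng.normal(0.0, 0.005, children.shape)   # Gaussian mutation
        pop = np.clip(children, *bounds)
    best = pop[np.argmax(fitness(pop))]
    return best, pop                # final ensemble can define a credible region

best, ensemble = ga()
print("best centre:", best)
```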
3

Theoretical investigation of non-invasive methods to identify origins of cardiac arrhythmias

Perez Alday, Erick Andres January 2016 (has links)
Cardiac disease is one of the leading causes of death in the world, with an increase in cardiac arrhythmias in recent years. In addition, myocardial ischemia, which arises from a lack of blood supply to the cardiac tissue, can lead to cardiac arrhythmias and even sudden cardiac death. Cardiac arrhythmias, such as atrial fibrillation, are characterised by abnormal excitation and repolarization patterns in the myocardial tissue. These abnormal patterns are usually diagnosed through non-invasive electrical measurements on the surface of the body, i.e., the electrocardiogram (ECG). However, the most common lead configuration, the 12-lead ECG, is limited in the information it provides for identifying and locating the origin of cardiac arrhythmias. There is therefore an increasing need for novel methods to diagnose and locate the origin of arrhythmic excitation, which would increase the efficacy of the treatment and diagnosis of cardiac arrhythmias. The objective of this research was to develop a family of multi-scale computational models of the human heart and thorax to simulate and investigate how arrhythmic electrical activity in the heart affects the electric and magnetic activity on the surface of the body. Based on these simulations, new theoretical algorithms were developed to non-invasively identify the origins of cardiac arrhythmias, such as the locations of ectopic activity in the atria or of ischemic regions within the ventricles, which are challenging for the clinician. These non-invasive diagnostic methods were based on multi-lead ECG systems, magnetocardiograms (MCGs) and electrocardiographic imaging.
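As a rough sketch of the forward-modelling ingredient such simulations rest on, the potential of a current dipole in an infinite homogeneous conductor is computed below; a real thorax model, as in the thesis, would use finite element or boundary element methods with inhomogeneous conductivities:

```python
import numpy as np

def dipole_potential(r_obs, r_dip, p, sigma=0.2):
    """Potential of a current dipole p (A*m) at r_dip, observed at r_obs, in an
    infinite homogeneous conductor of conductivity sigma (S/m):
        phi = p . (r_obs - r_dip) / (4 * pi * sigma * |r_obs - r_dip|^3)."""
    d = r_obs - r_dip
    return (p @ d) / (4 * np.pi * sigma * np.linalg.norm(d) ** 3)

# Potential at a chest "electrode" 10 cm above an ectopic-focus dipole.
phi = dipole_potential(np.array([0.0, 0.0, 0.10]),
                       np.zeros(3),
                       np.array([0.0, 0.0, 1e-7]))
print(f"{phi * 1e3:.4f} mV")
```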
4

Functional magnetic resonance imaging : an intermediary between behavior and neural activity

Vakorin, Vasily 28 June 2007 (has links)
Blood oxygen level dependent (BOLD) functional magnetic resonance imaging is a non-invasive technique used to trace changes in neural dynamics in reaction to mental activity caused by perceptual, motor or cognitive tasks. The BOLD response is a complex signal, the consequence of a series of physiological events triggered by increased neural activity. Chapter 2 proposes a method to infer the underlying neuronal activity from the BOLD signal (the hemodynamic inverse problem), assuming a previously proposed mathematical model of the transduction of neural activity into the BOLD signal. This chapter also clarifies the meaning of the neural activity function used as the input to an intrinsic dynamic system, which can be viewed as an advanced substitute for the impulse response function. Chapter 3 describes an approach for recovering neural timing information (mental chronometry) in an object-interaction decision task by solving the hemodynamic inverse problem. In contrast to the hemodynamic level, at the neural level we were able to determine statistically significant latencies in activation between the functional units in the model. In Chapter 4, two approaches for tuning the regularization parameter in a regularized-regression analysis are compared, in an attempt to find the optimal amount of smoothing to impose on fMRI data when determining an empirical hemodynamic response function. We found that the noise autocorrelation structure can be improved by tuning the regularization parameter, but the whitening-based criterion provides too much smoothing compared to cross-validation. Chapter 5 illustrates that the smoothing techniques proposed in Chapter 4 can be useful for correlating behavioral and hemodynamic characteristics. Specifically, building on those techniques, Chapter 5 correlates several parameters characterizing the hemodynamic response in Broca's area with behavioral measures in a naming task. In particular, a condition for independence between the two routes of converting print to speech in a dual-route cognitive model was verified in terms of hemodynamic parameters.
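As an illustration of the hemodynamic inverse problem described above (not the thesis's specific dynamic model), the sketch below convolves toy neural events with a canonical double-gamma HRF and deconvolves them by Tikhonov-regularized regression; the event times, noise level and regularization parameter are all illustrative assumptions:

```python
import numpy as np
from scipy.stats import gamma

TR, n = 1.0, 200
t = np.arange(0, 30, TR)
hrf = gamma.pdf(t, 6) - 0.35 * gamma.pdf(t, 12)    # canonical double-gamma HRF
hrf /= hrf.sum()

rng = np.random.default_rng(2)
u = np.zeros(n); u[[20, 80, 140]] = 1.0            # toy neural events
H = np.array([np.convolve(np.eye(n)[i], hrf)[:n] for i in range(n)]).T  # convolution matrix
y = H @ u + 0.01 * rng.standard_normal(n)          # simulated BOLD signal

lam = 1.0                                          # smoothing parameter (tune by CV)
u_hat = np.linalg.solve(H.T @ H + lam * np.eye(n), H.T @ y)  # Tikhonov deconvolution
print(np.sort(np.argsort(u_hat)[-3:]))             # peaks near the true event times
```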
5

The inverse problem in electrocardiography

Lopez Rincon, Alejandro 20 December 2013 (has links)
In the inverse problem of electrocardiography, the goal is to reconstruct the electrophysiological activity of the heart without measuring directly on its surface (i.e., without catheter intervention). It is important to note that the inverse problem is currently solved numerically using a quasi-static model. This model does not take the dynamics of the heart into account and can introduce errors into the reconstructed solution on the cardiac surface. This thesis investigates different methodologies for solving the inverse problem of electrocardiography, including artificial intelligence and dynamic models. It also investigates the effects of different regularization operators using boundary element and finite element methods.
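The quasi-static inverse problem the thesis starts from is typically regularized in Tikhonov form. The sketch below, with a random stand-in for the torso transfer matrix, shows how the choice of regularization operator (one instance of the "effects of different operators" studied) enters the solution; all sizes and values are illustrative assumptions:

```python
import numpy as np

def tikhonov(A, b, L, lam):
    """Solve min ||A x - b||^2 + lam * ||L x||^2; the operator L (identity,
    gradient, ...) is what BEM/FEM inverse-ECG studies typically compare."""
    return np.linalg.solve(A.T @ A + lam * L.T @ L, A.T @ b)

rng = np.random.default_rng(3)
A = rng.standard_normal((40, 100))                 # stand-in torso transfer matrix
x_true = np.sin(np.linspace(0, 3 * np.pi, 100))    # smooth epicardial potentials
b = A @ x_true + 0.01 * rng.standard_normal(40)

L0 = np.eye(100)                                   # zeroth-order operator
L1 = np.diff(np.eye(100), axis=0)                  # first-order difference operator
for L in (L0, L1):
    x = tikhonov(A, b, L, lam=1e-2)
    print(np.linalg.norm(x - x_true))              # reconstruction error per operator
```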
6

Regularization for MRI Diffusion Inverse Problem

Almabruk, Tahani 17 June 2008 (has links)
In this thesis, we introduce a novel method of reconstructing fibre directions from diffusion images. By modelling the Principal Diffusion Direction (PDD, the fibre direction) directly, we are able to apply regularization to the fibre direction explicitly, which was not possible before. Diffusion Tensor Imaging (DTI) is a technique that extracts information from multiple Magnetic Resonance Images about the amount and orientation of diffusion within the body. It is commonly used for brain connectivity studies, providing information about white matter structure. Many methods have been presented in the literature for estimating diffusion tensors with and without regularization. Previous methods applied regularization to the source images or to the diffusion tensors; extracting PDDs therefore required two or three numerical procedures, in which regularization (including filtering) is applied in earlier steps, before the PDD is extracted. Such methods require and/or impose smoothness on all components of the signal, which is inherently less efficient than using regularizing terms that penalize non-smoothness in the principal diffusion direction directly. Our model can be interpreted as a restriction of the diffusion tensor model in which the principal eigenvalue of the diffusion tensor is a model variable rather than a derived quantity. We test the model using a numerical phantom designed to test many fibre orientations in parallel, and process a set of thigh-muscle diffusion-weighted images. / Thesis / Master of Science (MSc)
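A minimal sketch of the standard DTI pipeline that the proposed model restricts: fit the tensor by log-linear least squares from S = S0·exp(−b gᵀDg), then take the principal eigenvector as the PDD. The gradient directions, b-value and toy tensor are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy tensor with PDD along z; 12 gradient directions at b = 1000 s/mm^2.
D_true = np.diag([0.3e-3, 0.3e-3, 1.7e-3])
g = rng.standard_normal((12, 3)); g /= np.linalg.norm(g, axis=1, keepdims=True)
b, S0 = 1000.0, 1.0
S = S0 * np.exp(-b * np.einsum('ij,jk,ik->i', g, D_true, g))  # S = S0 exp(-b g^T D g)

# Log-linear least-squares fit of the 6 unique tensor components.
B = np.column_stack([g[:, 0]**2, g[:, 1]**2, g[:, 2]**2,
                     2*g[:, 0]*g[:, 1], 2*g[:, 0]*g[:, 2], 2*g[:, 1]*g[:, 2]])
d = np.linalg.lstsq(-b * B, np.log(S / S0), rcond=None)[0]
D = np.array([[d[0], d[3], d[4]], [d[3], d[1], d[5]], [d[4], d[5], d[2]]])

w, V = np.linalg.eigh(D)
pdd = V[:, -1]                      # principal eigenvector = fibre direction
print(pdd)                          # approximately (0, 0, +/-1)
```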
7

Simulation-inversion of well logs

Vandamme, Thibaud 12 November 2018 (has links)
Geological formation evaluation consists of analyzing and synthesizing data from different sources, at different scales (microscopic to kilometric), acquired at very different times. The conventional evaluation process relies on the specialized physical interpretation of each of these data sources and their reconciliation through synthesis steps that are essentially statistical (correlation, learning, up-scaling...). One source of data, however, plays a central role: well logs. These physical measurements of different natures (nuclear, acoustic, electromagnetic...) are acquired along the borehole wall by various probes. They are sensitive to the in situ properties of the rocks on a scale of centimeters to meters, intermediate between core data and production tests. Because of their depth of investigation, log data are particularly sensitive to the mud-filtrate invasion that occurs in the near-wellbore region during drilling. Traditionally, invasion is modeled crudely at log-interpretation time as a simple piston effect. This simple model honors the volume balance but ignores the actual physics of invasion, thereby depriving the logs of any dynamic content. Attempts to model the invasion history jointly with log data have been made by several laboratories, and an abundant literature on the subject is available. The major limitations of these approaches lie in the under-determination of the inverse problems derived from such physical models, and in the fact that logs are generally acquired over a time interval ill-suited to the development of invasion. We propose a different approach that describes not the physics of the flow but the radial equilibrium of the fluids in the invaded zone at the time the logs are acquired. We show that, by introducing a few additional petrophysical constraints, it is possible to efficiently invert the distribution of dynamic properties for each geological facies. The inversion accounts for radial invasion in the water zone and for the vertical capillary equilibrium that characterizes the saturation profile in the reservoir for each facies. At each depth along the well, permeabilities, capillary pressures and cementation factors are thus obtained with their uncertainties, together with the petrophysical laws specific to each facies. The method was applied to two real wells. As validation, the inversion results were compared with laboratory measurements on core samples, and the inverted permeabilities were compared with pressure transients from mini-tests. The consistency of the results shows that, on the one hand, the basic hypotheses of the model are valid and, on the other hand, that the approach provides a reliable estimate of dynamic quantities at any scale for each reservoir facies, as soon as the log data are acquired. The proposed inversion approach overcomes a major limitation of previous attempts to predict dynamic properties from logs, by reframing the problem not as exact phenomenological modeling but globally, at the scale of a complete study workflow. It allows a very early reconciliation of the data, identification of the facies of interest, and qualification of the actual data requirements. The tool proves very powerful for qualifying and characterizing the petrophysical heterogeneities of formations, and thus helps to solve the problem of up-scaling dynamic quantities.
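As one small, hedged example of the kind of petrophysical constraint involved (not the thesis's full joint inversion), Archie's law links resistivity, porosity, the cementation factor m and water saturation; the numerical values below are illustrative:

```python
import numpy as np

def archie_sw(Rt, Rw, phi, m=2.0, n=2.0, a=1.0):
    """Water saturation from Archie's law:  Rt = a * Rw / (phi**m * Sw**n),
    where m is the cementation factor estimated per facies by the inversion.
    Illustrative only; the thesis jointly inverts k, Pc and m with uncertainties."""
    return (a * Rw / (Rt * phi ** m)) ** (1.0 / n)

# Deep-resistivity reading of 20 ohm.m, formation water 0.05 ohm.m, 20 % porosity.
print(f"Sw = {archie_sw(Rt=20.0, Rw=0.05, phi=0.20):.2f}")   # -> Sw = 0.25
```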
8

Development of an absolute image algorithm for electrical impedance tomography for clinical application

Camargo, Erick Darío León Bueno de 20 May 2013 (has links)
Electrical Impedance Tomography is a non-invasive imaging technique that can be used in clinical applications to infer living-tissue impeditivity from electrical measurements on the body surface. Mathematically this is a non-linear, ill-posed inverse problem. Usually a spatial high-pass Gaussian filter is used as a regularization method for solving the inverse problem. The main objective of this work is to propose the use of physiological and anatomical priors on the tissue resistivity distribution within the thorax, also known as an anatomical atlas, in conjunction with the Gaussian filter as regularization methods. The proposed methodology employs the finite element method and the Gauss-Newton algorithm to reconstruct three-dimensional resistivity images. Approximation Error Theory is used to reduce discretization and mesh-size errors. Electrical impedance tomography data and computed tomography images of physiological pulmonary changes, collected in vivo in a swine, were used to validate the proposed method. The images obtained are consistent with atelectasis, pneumothorax, pleural effusion and different ventilation pressures during mechanical ventilation. The results show that image reconstruction with clinically significant information is feasible when both the Gaussian filter and the anatomical atlas are used as regularization methods.
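A minimal sketch of a regularized Gauss-Newton iteration of the type described, with a toy exponential forward model standing in for the FEM solver and a difference operator standing in for the Gaussian-filter/atlas priors; all sizes and the damping value are illustrative assumptions:

```python
import numpy as np

def gauss_newton(f, jac, x0, y_meas, R, lam=1e-2, n_iter=20):
    """Regularized Gauss-Newton:  x <- x + (J^T J + lam R^T R)^{-1} J^T (y - f(x)).
    R plays the role of the spatial regularizer (filter or atlas prior)."""
    x = x0.copy()
    for _ in range(n_iter):
        r = y_meas - f(x)
        J = jac(x)
        x += np.linalg.solve(J.T @ J + lam * R.T @ R, J.T @ r)
    return x

rng = np.random.default_rng(5)
A = rng.standard_normal((30, 50))
f = lambda x: A @ np.exp(x)            # toy nonlinear "boundary voltage" model
jac = lambda x: A * np.exp(x)          # J_ij = A_ij * exp(x_j)
x_true = 0.1 * np.sin(np.linspace(0, 2 * np.pi, 50))   # smooth "resistivities"
y = f(x_true) + 1e-3 * rng.standard_normal(30)
R = np.diff(np.eye(50), axis=0)        # penalizes non-smooth updates
x_hat = gauss_newton(f, jac, np.zeros(50), y, R)
print(np.linalg.norm(x_hat - x_true))
```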
9

Sound Field Reconstruction for an Under-Determined System and its Application

Tongyang Shi (6580166) 10 June 2019 (has links)
Near-Field Acoustical Holography (NAH) is an inverse process in which sound pressure measurements made in the near field of an unknown sound source are used to reconstruct the sound field so that source locations can be identified. A large number of measurements is usually required by conventional NAH methods, since a large number of parameters in the source or field model must be determined. However, large-scale microphone measurements are costly and hard to perform, so the use of NAH is limited by practical experimental conditions. In the present work, motivated by the desire to decrease the number of microphone measurements required and thus to facilitate the measurement process, two sparse Equivalent Source Method (ESM) algorithms were studied: Wideband Acoustical Holography (WBH) and l1-norm minimization. Based on these two algorithms, a new hybrid NAH procedure was proposed and demonstrated. To study and verify these algorithms, simulations of different sources were conducted, followed by experiments on a loudspeaker cabinet and a diesel engine.
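A minimal sketch of the ESM forward model underlying both algorithms: free-field monopole propagators from a plane of equivalent sources to a sparse microphone array. The Tikhonov solve at the end is a placeholder where WBH or l1-norm minimization would be applied; the geometry, frequency and regularization are illustrative assumptions:

```python
import numpy as np

k = 2 * np.pi * 1000 / 343.0                       # wavenumber at 1 kHz in air

def transfer_matrix(mics, srcs):
    """Free-field monopole propagators g(r) = exp(-1j*k*r) / (4*pi*r) from each
    equivalent source to each microphone (the ESM forward model)."""
    r = np.linalg.norm(mics[:, None, :] - srcs[None, :, :], axis=2)
    return np.exp(-1j * k * r) / (4 * np.pi * r)

rng = np.random.default_rng(6)
srcs = np.stack(np.meshgrid(np.linspace(-0.5, 0.5, 10),
                            np.linspace(-0.5, 0.5, 10), [0.0]), -1).reshape(-1, 3)
mics = rng.uniform(-0.6, 0.6, (25, 3)); mics[:, 2] = 0.1   # 25 mics above 100 sources

G = transfer_matrix(mics, srcs)                    # 25 x 100: under-determined
q_true = np.zeros(100, complex); q_true[44] = 1.0  # one active equivalent source
p = G @ q_true + 1e-4 * rng.standard_normal(25)    # measured pressures

lam = 1e-6                                         # Tikhonov stand-in for WBH / l1
q = np.linalg.solve(G.conj().T @ G + lam * np.eye(100), G.conj().T @ p)
print(np.argmax(np.abs(q)))                        # ideally recovers index 44
```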
