1

A Bayesian inversion framework for subsurface seismic imaging problems

Urozayev, Dias, 11 1900
This thesis considers the reconstruction of subsurface models from seismic observations, a well-known high-dimensional and ill-posed problem. As a first regularization, a reduction of the parameter space is considered via a truncated Discrete Cosine Transform (DCT). This helps regularize the seismic inverse problem and alleviates its computational complexity. A second regularization based on Laplace priors, as a way of accounting for sparsity in the model, is further proposed to enhance the reconstruction quality. More specifically, two Laplace-based penalizations are applied: one on the DCT coefficients and another on the spatial variations of the subsurface model, which leads to an enhanced representation of the cross-correlations of the DCT coefficients. The Laplace priors are represented in hierarchical forms that are suitable for deriving efficient inversion schemes. The corresponding inverse problem, formulated within a Bayesian framework, consists in computing the joint posterior of the target model parameters and the hyperparameters of the introduced priors. This joint posterior is approximated using the Variational Bayesian (VB) approach, with a separable form of the marginals obtained by minimizing the Kullback-Leibler divergence. In contrast with classical deterministic optimization methods, the VB approach provides an efficient means of obtaining not only point estimates but also closed forms of the posterior probability distributions of the quantities of interest. The case in which the observations are contaminated with outliers is further considered; for that case, a robust inversion scheme is proposed based on a Student-t prior for the observation noise. The proposed approaches are applied to successfully reconstruct the subsurface acoustic impedance model of the Volve oilfield.
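A minimal sketch of the kind of truncated-DCT parameterization described above (illustrative only, not the thesis code; `truncate_dct` and the toy model are invented for the example):

```python
# Sketch: regularize a 2-D subsurface model by keeping only the
# lowest-frequency DCT coefficients, so the inverse problem is solved
# over a few coefficients instead of every grid cell.
import numpy as np
from scipy.fft import dctn, idctn

def truncate_dct(model, keep=16):
    """Keep only the `keep` x `keep` lowest-frequency DCT coefficients."""
    coeffs = dctn(model, norm="ortho")
    mask = np.zeros_like(coeffs)
    mask[:keep, :keep] = 1.0
    return idctn(coeffs * mask, norm="ortho")

# Toy impedance model: smooth depth trend plus one sharp layer.
model = np.tile(np.linspace(2.0, 4.0, 64), (64, 1)).T
model[30:34, :] += 1.5
approx = truncate_dct(model, keep=16)  # 256 unknowns instead of 4096
```

The truncation acts as the abstract's first regularization: it restricts the search space to smooth, low-frequency structures, and the Laplace priors would then act on the retained coefficients.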
2

Natural Language Processing, Statistical Inference, and American Foreign Policy

Lauretig, Adam M., 06 November 2019
No description available.
3

Small Blob Detection in Medical Images

January 2015
Recent advances in medical imaging technology have greatly enhanced imaging-based diagnosis, which requires computationally efficient and accurate algorithms to process the images (e.g., measure the objects) for quantitative assessment. In this dissertation, one type of imaging object is of interest: small blobs. Examples of small blob objects are cells in histopathology images, small breast lesions in ultrasound images, and glomeruli in kidney MR images. This problem is particularly challenging because small blobs often have an inhomogeneous intensity distribution and an indistinct boundary against the background. This research develops a generalized four-phase system for small blob detection. The system includes (1) raw image transformation, (2) Hessian pre-segmentation, (3) feature extraction and (4) unsupervised clustering for post-pruning. First, detecting blobs in 2D images is studied, and a Hessian-based Laplacian of Gaussian (HLoG) detector is proposed. Using scale-space theory as the foundation, the image is smoothed via LoG. Hessian analysis is then launched to identify the single optimal scale, based on which a pre-segmentation is conducted. Novel regional features are extracted from the pre-segmented blob candidates and fed to a Variational Bayesian Gaussian Mixture Model (VBGMM) for post-pruning. Sixteen cell histology images and two hundred cell fluorescence images are tested to demonstrate the performance of HLoG. Next, as an extension, a Hessian-based Difference of Gaussians (HDoG) detector is proposed, which is capable of identifying small blobs in 3D images. Specifically, kidney glomeruli segmentation from 3D MRI (6 rats, 3 humans) is investigated. The experimental results show that HDoG has the potential to detect glomeruli automatically, enabling new measurements of renal microstructure and pathology in preclinical and clinical studies. Recognizing that computation time is a key factor impacting clinical adoption, the last phase of this research investigates data reduction techniques for the VBGMM in HDoG to handle large-scale datasets. A new coreset algorithm is developed for variational Bayesian mixture models. Using the same MRI dataset, the four-phase system with coreset-VBGMM achieves performance similar to using the full dataset while running about 20 times faster. / Doctoral Dissertation, Industrial Engineering, 2015
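The four-phase pipeline lends itself to a compact sketch built from off-the-shelf pieces (an illustration under assumptions, not the dissertation's HLoG/HDoG implementation; the two features used here are toy stand-ins for the novel regional features):

```python
# Sketch: LoG blob candidates, then VBGMM clustering for post-pruning.
import numpy as np
from skimage.feature import blob_log
from sklearn.mixture import BayesianGaussianMixture

def detect_and_prune(image):
    # Phases 1-2: blob candidates as (row, col, sigma) from scale-space LoG.
    blobs = blob_log(image, min_sigma=1, max_sigma=5, threshold=0.05)
    # Phase 3: toy features per candidate: scale and local mean intensity.
    feats = []
    for r, c, s in blobs:
        r, c = int(r), int(c)
        patch = image[max(r - 2, 0):r + 3, max(c - 2, 0):c + 3]
        feats.append([s, patch.mean()])
    feats = np.asarray(feats)
    # Phase 4: a variational Bayesian GMM separates true blobs from clutter.
    vbgmm = BayesianGaussianMixture(n_components=2, random_state=0).fit(feats)
    labels = vbgmm.predict(feats)
    bright = np.argmax(vbgmm.means_[:, 1])  # keep the brighter cluster
    return blobs[labels == bright]
```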
4

A probabilistic approach to non-rigid medical image registration

Simpson, Ivor James Alexander, January 2012
Non-rigid image registration is an important tool for analysing morphometric differences in subjects with Alzheimer's disease from structural magnetic resonance images of the brain. This thesis describes a novel probabilistic approach to non-rigid registration of medical images, and explores the benefits of its use in this area of neuroimaging. Many image registration approaches have been developed for neuroimaging. The vast majority suffer from two limitations: firstly, the trade-off between image fidelity and regularisation must be selected in advance; secondly, only a point estimate of the mapping between images is inferred, overlooking the uncertainty in the estimation. This thesis introduces a novel probabilistic non-rigid registration model and inference scheme. This framework allows the parameters that control the level of regularisation and data fidelity to be inferred in a data-driven fashion. To allow greater flexibility, the model is extended so that the level of data fidelity can vary across space. A benefit of this approach is that the registration can adapt to anatomical variability and other image acquisition differences. A further advantage of the proposed registration framework is that it provides an estimate of the distribution of probable transformations. Additional novel contributions of this thesis include two proposals for exploiting the estimated registration uncertainty. The first estimates a local image smoothing filter based on the registration uncertainty. The second incorporates the distribution of transformations into an ensemble learning scheme for statistical prediction. These techniques are integrated into standard frameworks for morphometric analysis, and are demonstrated to improve the ability to distinguish subjects with Alzheimer's disease from healthy controls.
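One way to picture the first of those two proposals, sketched under loose assumptions (the blending scheme and `uncertainty_adaptive_smooth` are invented here; the thesis derives its filter from the inferred posterior):

```python
# Sketch: smooth an image with a spatially varying Gaussian whose width
# at each pixel follows the local registration uncertainty, so confident
# regions keep detail and uncertain regions are smoothed more heavily.
import numpy as np
from scipy.ndimage import gaussian_filter

def uncertainty_adaptive_smooth(image, uncertainty, sigmas=(0.5, 1.0, 2.0)):
    """Blend fixed-sigma blurs, picking a level per pixel by uncertainty."""
    u = (uncertainty - uncertainty.min()) / (np.ptp(uncertainty) + 1e-12)
    level = np.clip((u * len(sigmas)).astype(int), 0, len(sigmas) - 1)
    blurred = np.stack([gaussian_filter(image, s) for s in sigmas])
    rows, cols = np.indices(image.shape)
    return blurred[level, rows, cols]
```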
5

Algorithmes bayésiens variationnels accélérés et applications aux problèmes inverses de grande taille / Fast variational Bayesian algorithms and their application to large dimensional inverse problems

Zheng, Yuling, 04 December 2014
In this thesis, our main objective is to develop efficient unsupervised approaches for large dimensional problems. To do this, we consider Bayesian approaches, which allow us to jointly estimate regularization parameters and the object of interest. In this context, the main difficulty is that the posterior distribution is generally complex. To tackle this problem, we consider the variational Bayesian (VB) approximation, which provides a separable approximation of the posterior distribution. Nevertheless, classical VB methods suffer from slow convergence. The first contribution of this thesis is to transpose subspace optimization methods to the functional space involved in the VB framework, which allows us to propose a new VB approximation method. We have shown the efficiency of the proposed method through comparisons with state-of-the-art approaches. We then apply our new methodology to large dimensional problems in image processing, with a particular interest in piecewise smooth images. To that end, we consider a Total Variation (TV) prior and a Gaussian location mixture-like hidden variable model. With these two priors, using our VB approximation method, we develop two fast unsupervised approaches well adapted to piecewise smooth images. In fact, the priors introduced above are correlated, which makes the estimation of regularization parameters very complicated: we often face a non-explicit partition function. To sidestep this problem, we consider working in the wavelet domain. As the wavelet coefficients of natural images are generally sparse, we consider prior distributions from the Gaussian scale mixture (GSM) family to enforce sparsity. Another contribution is therefore the development of an unsupervised approach for a prior distribution from the GSM family whose density is explicitly known, using the proposed VB approximation method.
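The joint estimation this abstract refers to can be seen in miniature in a conjugate denoising model (a textbook mean-field VB sketch, not the thesis's subspace-accelerated algorithm; `vb_denoise` is invented for the example):

```python
# Sketch: denoise y = x + noise while *jointly* inferring the prior
# precision alpha and the noise precision beta. Conjugate Gamma
# hyperpriors give closed-form coordinate-ascent updates.
import numpy as np

def vb_denoise(y, iters=50, a0=1e-3, b0=1e-3):
    n = y.size
    alpha, beta = 1.0, 1.0                   # E[alpha], E[beta]
    for _ in range(iters):
        # q(x) = N(m, s*I): separable Gaussian posterior over the signal.
        s = 1.0 / (alpha + beta)
        m = beta * s * y
        ex2 = m @ m + n * s                  # E[||x||^2]
        eres = (y - m) @ (y - m) + n * s     # E[||y - x||^2]
        # q(alpha), q(beta): Gamma posteriors; only their means are kept.
        alpha = (a0 + n / 2) / (b0 + ex2 / 2)
        beta = (a0 + n / 2) / (b0 + eres / 2)
    return m, alpha, beta

rng = np.random.default_rng(0)
x_true = np.sign(np.sin(np.linspace(0, 6 * np.pi, 500)))  # piecewise signal
y = x_true + 0.3 * rng.standard_normal(500)
x_hat, alpha_hat, beta_hat = vb_denoise(y)   # beta_hat close to 1/0.3**2
```

Each pass updates the separable posterior over the signal and the Gamma posteriors over both precisions; it is exactly this kind of alternating scheme whose slow convergence the subspace-optimization transposition aims to accelerate.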
6

Modèles bayésiens pour la détection de synchronisations au sein de signaux électro-corticaux / Bayesian models for synchronizations detection in electrocortical signals

Rio, Maxime, 16 July 2013
This thesis proposes new methods for analyzing intracranial cerebral recordings (local field potentials) that overcome two limitations of the standard time-frequency analysis of event-related spectral perturbations: averaging over trials and relying on the activity in the pre-stimulus period. The first proposed method detects subsets of electrodes whose activity presents co-occurring synchronizations at the same point of the time-frequency plane, using Bayesian Gaussian mixture models. The relevant subsets of electrodes are validated with a stability measure computed over the results obtained from different trials. The second proposed method builds on the observation that white noise in the temporal domain becomes Rician noise in the amplitude domain of a time-frequency transform; this makes it possible to segment the signal of each trial, in each frequency band, into two possible levels, high or low, using two-component Bayesian Rician mixture models. From these two levels, a statistical analysis can detect time-frequency regions that are more or less active. To develop the Bayesian Rician mixture model, new variational Bayesian inference algorithms were created for the Rice distribution and the Rician mixture distribution. The performance of the new methods was evaluated on artificial data and on experimental data recorded in monkeys. The new methods generate fewer false positives and are more robust to the absence of data in the pre-stimulus period.
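A rough illustration of the two-level amplitude segmentation, with a plainly named substitution: a Gaussian mixture stands in for the thesis's Rician mixture (whose VB updates involve Bessel functions); the point here is the high/low segmentation, not the exact noise model.

```python
# Sketch: segment time-frequency amplitudes into "high" and "low" levels
# with a two-component mixture fit on simulated Rician amplitudes.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)

def rician_amplitudes(signal_level, n):
    """|signal + complex white noise|: Rician-distributed amplitudes."""
    noise = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    return np.abs(signal_level + noise)

# One frequency band of one trial: mostly low level, one high-level burst.
amp = np.concatenate([rician_amplitudes(0.0, 800),   # noise only (Rayleigh)
                      rician_amplitudes(4.0, 200)])  # synchronization burst
gmm = GaussianMixture(n_components=2, random_state=0).fit(amp[:, None])
high = np.argmax(gmm.means_.ravel())
labels = gmm.predict(amp[:, None]) == high           # True = "high" level
```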
7

Fusion pour la séparation de sources audio / Fusion for audio source separation

Jaureguiberry, Xabier, 16 June 2015
Underdetermined blind source separation is a complex mathematical problem that can be solved satisfactorily for some practical applications, provided that the right separation method has been selected and carefully tuned. In order to automate this selection step, this thesis proposes to resort to the principle of fusion, which has been widely used in the related field of classification yet is still marginally exploited in source separation. Fusion consists in combining several methods to solve a given problem instead of selecting a single one. To do so, we introduce a general fusion framework in which a source estimate is expressed as a linear combination of estimates of that same source given by different separation algorithms, each estimate being weighted by a fusion coefficient. For a given task, the fusion coefficients can be learned on a representative training dataset by minimizing a cost function related to the separation objective. To go further, we also propose two ways to adapt the fusion coefficients to the mixture to be separated. The first expresses the fusion of several non-negative matrix factorization (NMF) models in a Bayesian fashion similar to Bayesian model averaging. The second learns time-varying fusion coefficients with deep neural networks. All proposed methods have been evaluated on two distinct corpora, one dedicated to speech enhancement and the other to singing voice extraction. Experimental results show that fusion always outperforms simple selection in all considered cases, with the best results obtained by adaptive time-varying fusion with neural networks.
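In its simplest supervised form, the general fusion framework reduces to learning one weight per separator on training data (a sketch under that assumption; `learn_fusion_weights` and the toy separators are invented here, and squared error stands in for the separation cost):

```python
# Sketch: a fused source estimate as a weighted sum of estimates from
# several separators, with weights fit by least squares on training data.
import numpy as np

def learn_fusion_weights(estimates, reference):
    """estimates: (n_algorithms, n_samples); reference: (n_samples,)."""
    # Fusion coefficients: argmin_w ||estimates.T @ w - reference||^2
    w, *_ = np.linalg.lstsq(estimates.T, reference, rcond=None)
    return w

def fuse(estimates, w):
    return estimates.T @ w

# Toy example: three imperfect "separators" of the same target source.
rng = np.random.default_rng(2)
target = rng.standard_normal(4096)
ests = np.stack([target + 0.5 * rng.standard_normal(4096) for _ in range(3)])
w = learn_fusion_weights(ests, target)
fused = fuse(ests, w)  # lower error than any single estimate on average
```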
