About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
61

Understanding, Modeling and Detecting Brain Tumors : Graphical Models and Concurrent Segmentation/Registration methods / Compréhension, modélisation et détection de tumeurs cérébrales : modèles graphiques et méthodes de recalage/segmentation simultanés

Parisot, Sarah 18 November 2013 (has links)
The main objective of this thesis is the automatic modeling, understanding and segmentation of diffusively infiltrative tumors known as Diffuse Low-Grade Gliomas. Two approaches exploiting anatomical and spatial prior knowledge are proposed. We first present the construction of a tumor-specific probabilistic atlas describing the tumors' preferential locations in the brain. The proposed atlas constitutes an excellent tool for the study of the mechanisms behind the genesis of the tumors and provides strong spatial cues on where they are expected to appear. The latter characteristic is exploited in a Markov Random Field based segmentation method, where the atlas guides the segmentation process as well as characterizing the tumor's preferential location. Second, we introduce a method for concurrent tumor segmentation and registration with missing correspondences. The anatomical knowledge introduced by the registration process increases the segmentation quality, while progressively acknowledging the presence of the tumor ensures that the registration is not corrupted by the missing correspondences, without introducing a bias. The method is designed as a hierarchical grid-based Markov Random Field model in which the segmentation and registration parameters are estimated simultaneously at the grid's control points. The last contribution of this thesis is an uncertainty-driven adaptive sampling approach for such grid-based models that ensures precision and accuracy while maintaining robustness and computational efficiency. The potential of both methods has been demonstrated on a large dataset of heterogeneous Diffuse Low-Grade Gliomas. Thanks to their strong modularity, the proposed methods go beyond the presented clinical context and could easily be adapted to other clinical or computer vision problems.
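As a rough illustration of the kind of atlas-guided MRF segmentation this record describes (the thesis's actual model is far richer), the sketch below runs iterated conditional modes on a two-label Potts model whose unary term combines an intensity likelihood with an atlas prior. The function name, array layouts, and all parameter values are illustrative assumptions, not the author's implementation.

```python
import numpy as np

def icm_segment(likelihood, atlas, beta=1.0, n_iter=5):
    """Iterated conditional modes for a 2-label MRF segmentation.

    likelihood: (H, W, 2) per-pixel label likelihoods from intensity.
    atlas: (H, W) prior probability of the tumor label at each pixel.
    beta: strength of the Potts smoothness term.
    """
    H, W, _ = likelihood.shape
    prior = np.stack([1.0 - atlas, atlas], axis=-1)   # (H, W, 2) label prior
    # Unary energy: negative log of (likelihood * atlas prior).
    unary = -np.log(likelihood * prior + 1e-12)
    labels = unary.argmin(axis=-1)                    # initial guess
    for _ in range(n_iter):
        for i in range(H):
            for j in range(W):
                # Collect the 4-neighbour labels of pixel (i, j).
                nbrs = []
                if i > 0:     nbrs.append(labels[i - 1, j])
                if i < H - 1: nbrs.append(labels[i + 1, j])
                if j > 0:     nbrs.append(labels[i, j - 1])
                if j < W - 1: nbrs.append(labels[i, j + 1])
                nbrs = np.asarray(nbrs)
                # Potts pairwise term: penalise disagreeing neighbours.
                energies = [unary[i, j, k] + beta * np.sum(nbrs != k)
                            for k in (0, 1)]
                labels[i, j] = int(np.argmin(energies))
    return labels
```

In the thesis the atlas term is learned from data and the optimisation is done over a sparse grid with discrete optimisation methods; ICM here just keeps the toy self-contained.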
62

Discretisation-invariant and computationally efficient correlation priors for Bayesian inversion

Roininen, L. (Lassi) 05 June 2015 (has links)
We are interested in studying Gaussian Markov random fields as correlation priors for Bayesian inversion. We construct the correlation priors to be discretisation-invariant, which means, loosely speaking, that the discrete priors converge to continuous priors at the discretisation limit. We construct the priors with stochastic partial differential equations, which guarantees computational efficiency via sparse matrix approximations. The stationary correlation priors have a clear statistical interpretation through the autocorrelation function. We also consider how to build structural models of an unknown object with anisotropic and inhomogeneous Gaussian Markov random fields. Finally, we consider these fields on unstructured meshes, which are needed on complex domains. The publications in this thesis contain fundamental mathematical and computational results on correlation priors. We consider one application, electrical impedance tomography. These fundamental results and this application provide a platform for engineers and researchers to use correlation priors in other inverse problem applications.
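A minimal sketch of the SPDE-to-sparse-precision idea mentioned above: discretising the 1-D operator (a − d²/dx²) with step h yields a tridiagonal, hence sparse, precision matrix, from which prior samples can be drawn via a Cholesky factor. This is an illustrative toy (dense storage, ad hoc boundary treatment), not the construction used in the thesis.

```python
import numpy as np

def spde_precision(n, h, a=1.0):
    """Tridiagonal precision of a discretised 1-D SPDE prior
    (a - d^2/dx^2) X = white noise. The banded structure is what makes
    sparse-matrix computations cheap; it is stored densely here only
    for brevity."""
    Q = np.zeros((n, n))
    idx = np.arange(n)
    Q[idx, idx] = a + 2.0 / h**2          # diagonal: a - (-2/h^2)
    Q[idx[:-1], idx[:-1] + 1] = -1.0 / h**2   # super-diagonal
    Q[idx[:-1] + 1, idx[:-1]] = -1.0 / h**2   # sub-diagonal
    return Q

def sample_gmrf(Q, rng):
    """Draw X ~ N(0, Q^{-1}) directly from the precision matrix."""
    L = np.linalg.cholesky(Q)
    z = rng.standard_normal(Q.shape[0])
    # Solve L^T x = z, so Cov(x) = (L L^T)^{-1} = Q^{-1}.
    return np.linalg.solve(L.T, z)
```

The key point the abstract makes is that the prior is specified through Q (local differential structure) rather than through a dense covariance, so the cost of sampling and posterior computation scales with the bandwidth, not n².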
63

Information Retrieval using Markov Random Fields and Restricted Boltzmann Machines

Monika Kamma (10276277) 06 April 2021 (has links)
When a user types a search query into an Information Retrieval system, the system returns a list of the top 'n' ranked documents relevant to the query. Relevant means not just returning documents that belong to the same category as the search query, but also returning documents that provide a concise answer to it. Determining the relevance of the documents is a significant challenge, as classic indexing techniques based on term/word frequencies consider neither term (word) dependencies, nor the impact of previous terms on the current ones, nor the meaning of the words in the document. There is a need to model the dependencies between the terms in the text data and to learn the underlying statistical patterns in order to measure the similarity between the user query and the documents and thereby determine relevancy.

This research proposes a solution based on Markov Random Fields (MRF) and Restricted Boltzmann Machines (RBM) to solve the problem of term dependencies and learn the underlying patterns, so as to return documents that are very similar to the user query.
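For concreteness, a bare-bones Bernoulli RBM trained with one-step contrastive divergence (CD-1) might look like the following. This is a generic textbook sketch, not the retrieval model proposed in the thesis; the class and all hyperparameters are illustrative.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """Bernoulli RBM trained with one step of contrastive divergence."""

    def __init__(self, n_vis, n_hid, rng):
        self.rng = rng
        self.W = 0.01 * rng.standard_normal((n_vis, n_hid))
        self.b = np.zeros(n_vis)   # visible bias
        self.c = np.zeros(n_hid)   # hidden bias

    def _h_given_v(self, v):
        return sigmoid(v @ self.W + self.c)

    def _v_given_h(self, h):
        return sigmoid(h @ self.W.T + self.b)

    def cd1_step(self, v0, lr=0.1):
        """One CD-1 update on a batch v0 of binary visible vectors."""
        ph0 = self._h_given_v(v0)
        h0 = (self.rng.random(ph0.shape) < ph0).astype(float)  # sample hiddens
        pv1 = self._v_given_h(h0)                              # reconstruction
        ph1 = self._h_given_v(pv1)
        # Gradient ascent on the CD-1 approximation of the log-likelihood.
        self.W += lr * (v0.T @ ph0 - pv1.T @ ph1) / len(v0)
        self.b += lr * (v0 - pv1).mean(axis=0)
        self.c += lr * (ph0 - ph1).mean(axis=0)
        return np.mean((v0 - pv1) ** 2)   # reconstruction error
```

In an IR setting the visible units would typically encode term occurrences of a document or query, and the learned hidden representation would feed the similarity computation.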
64

FUZZY MARKOV RANDOM FIELDS FOR OPTICAL AND MICROWAVE REMOTE SENSING IMAGE ANALYSIS : SUPER RESOLUTION MAPPING (SRM) AND MULTISOURCE IMAGE CLASSIFICATION (MIC) / ファジーマルコフ確率場による光学およびマイクロ波リモートセンシング画像解析 : 超解像度マッピングと複数センサ画像分類

Duminda Ranganath Welikanna 24 September 2014 (has links)
Kyoto University / Doctor of Philosophy (Engineering), Graduate School of Engineering, Kyoto University. Degree no. 甲第18561号 (工博第3922号); library call no. 新制||工||1603 (University Library); conferred under Article 4, Paragraph 1 of the Degree Regulations. Examination committee: Prof. Masayuki Tamura (chair), Assoc. Prof. Junichi Susaki, Assoc. Prof. Kenji Tanaka. / DFAM
65

Inferring RNA 3D Motifs from Sequence

Roll, James Elwood 05 September 2019 (has links)
No description available.
66

Inference in ERGMs and Ising Models.

Xu, Yuanzhe January 2023 (has links)
Discrete exponential families have drawn a lot of attention in probability, statistics, and machine learning, both classically and in the recent literature. This thesis studies in depth two discrete exponential families of concrete interest: (i) Exponential Random Graph Models (ERGMs) and (ii) Ising models. In the ERGM setting, the thesis considers a “degree corrected” version of standard ERGMs, and in the Ising model setting it focuses on Ising models on dense regular graphs, both from the point of view of statistical inference. The first part of the thesis studies the problem of testing for sparse signals present on the vertices of ERGMs. It proposes computationally efficient tests for a wide class of ERGMs. Focusing on the two-star ERGM, it shows that the tests studied are “asymptotically efficient” in all parameter regimes except one, which is referred to as the “critical point”. In the critical regime, it is shown that improved detection is possible. This shows that, contrary to standard belief, dependence is actually beneficial to the inference problem in this setting. The main proof idea for analyzing the two-star ERGM is a correlation estimate between degrees under local alternatives, which is possibly of independent interest. In the second part of the thesis, we derive the limit of experiments for a class of one-parameter Ising models on dense regular graphs. In particular, we show that the limiting experiment is Gaussian in the “low temperature” regime, non-Gaussian in the “critical” regime, and an infinite collection of Gaussians in the “high temperature” regime. We also derive the limiting distributions of commonly studied estimators, and study limiting power for tests of hypotheses against contiguous alternatives (whose scaling changes across the regimes). To the best of our knowledge, this is the first attempt at establishing the classical limits of experiments for Ising models (and, more generally, Markov random fields).
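A toy version of the dense-regular-graph Ising setting studied in the second part is the Curie-Weiss model on the complete graph, which a Gibbs sampler handles easily. The sketch below is a generic illustration of that model, not the thesis's analysis, and all parameter values are assumptions.

```python
import numpy as np

def gibbs_curie_weiss(n, beta, h, n_sweeps, rng):
    """Gibbs sampler for the Curie-Weiss (complete-graph) Ising model:

        P(x) ∝ exp( beta/(2n) * (sum_i x_i)^2 + h * sum_i x_i ),  x_i ∈ {-1, +1}.

    Each site's conditional law depends on the other spins only through
    their sum, which is what makes dense-graph models tractable."""
    x = rng.choice([-1.0, 1.0], size=n)
    for _ in range(n_sweeps):
        for i in range(n):
            s = x.sum() - x[i]                    # field from all other spins
            field = beta * s / n + h
            # log-odds of x_i = +1 versus -1 is 2 * field
            p_plus = 1.0 / (1.0 + np.exp(-2.0 * field))
            x[i] = 1.0 if rng.random() < p_plus else -1.0
    return x
```

Under a strong external field h the magnetisation (mean spin) settles near the solution of m = tanh(beta*m + h), which is the mean-field fixed point this model is exactly solvable by.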
67

Computational Reconstruction and Quantification of Aerospace Materials

Long, Matthew Thomas 14 May 2024 (has links)
Microstructure reconstruction is a necessary tool for use in multi-scale modeling, as it allows for the analysis of the microstructure of a material without the cost of measuring all of the required data for the analysis. For microstructure reconstruction to be effective, the synthetic microstructure needs to predict what a small sample of measured data would look like on a larger domain. The Markov Random Field (MRF) algorithm is a method of generating statistically similar microstructures for this process. In this work, two key factors of the MRF algorithm are analyzed. The first factor explored is how the base features of the microstructure related to orientation and grain/phase topology information influence the selection of the MRF parameters to perform the reconstruction. The second focus is on the analysis of the numerical uncertainty (epistemic uncertainty) that arises from the use of the MRF algorithm. This is done by first removing the material uncertainty (aleatoric uncertainty), which is the noise that is inherent in the original image representing the experimental data. The epistemic uncertainty that arises from the MRF algorithm is analyzed through the study of the percentage of isolated pixels and the difference in average grain sizes between the initial image and the reconstructed image. This research mainly focuses on two different microstructures, B4C-TiB2 and Ti-7Al, which are a ceramic composite and a metallic alloy, respectively. Both of them are candidate materials for many aerospace systems owing to their desirable mechanical performance under large thermo-mechanical stresses. / Master of Science / Microstructure reconstruction is a necessary tool for use in multi-scale modeling, as it allows for the analysis of the microstructure of a material without the cost of measuring all of the required data for the analysis. 
For microstructure reconstruction to be effective, the synthetic microstructure needs to predict what a small sample of measured data would look like on a larger domain. The Markov Random Field (MRF) algorithm is a method of generating statistically similar microstructures for this process. In this work, two key factors of the MRF algorithm are analyzed. The first factor explored is how the base features of the microstructures related to orientation and grain/phase topology information influence the selection of the MRF parameters to perform the reconstruction. The second focus is on the analysis of the numerical uncertainty that arises from the use of the MRF algorithm. This is done by first removing the material uncertainty, which is the noise that is inherent in the original image representing the experimental data. This research mainly focuses on two different microstructures, B4C-TiB2 and Ti-7Al, which are a ceramic composite and a metallic alloy, respectively. Both of them are candidate materials for many aerospace systems owing to their desirable mechanical performance under large thermo-mechanical stresses.
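One of the uncertainty metrics mentioned above, the percentage of isolated pixels, is straightforward to compute. The sketch below is one plausible reading of that metric (a pixel whose label differs from all of its 4-neighbours), not necessarily the exact definition used in the thesis.

```python
import numpy as np

def isolated_pixel_fraction(labels):
    """Fraction of pixels whose label differs from every 4-neighbour.

    labels: (H, W) integer array of grain/phase labels."""
    H, W = labels.shape
    count = 0
    for i in range(H):
        for j in range(W):
            nbrs = []
            if i > 0:     nbrs.append(labels[i - 1, j])
            if i < H - 1: nbrs.append(labels[i + 1, j])
            if j > 0:     nbrs.append(labels[i, j - 1])
            if j < W - 1: nbrs.append(labels[i, j + 1])
            if all(n != labels[i, j] for n in nbrs):
                count += 1
    return count / (H * W)
```

Comparing this fraction between the measured image and the MRF reconstruction gives one simple, label-topology-based handle on the numerical (epistemic) uncertainty of the algorithm.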
68

Statistical computation and inference for functional data analysis

Jiang, Huijing 09 November 2010 (has links)
My doctoral dissertation focuses on two aspects of functional data analysis (FDA): FDA under spatial interdependence and FDA for multi-level data. The first part of my thesis develops modeling and inference procedures for functional data under spatial dependence. The methodology introduced in this part is motivated by a research study on inequities in accessibility to financial services. The first research problem is concerned with a novel model-based method for clustering random time functions which are spatially interdependent. A cluster consists of time functions which are similar in shape. The time functions are decomposed into spatial global and time-dependent cluster effects using a semi-parametric model. We also assume that the clustering membership is a realization from a Markov random field. Under these model assumptions, we borrow information across curves from nearby locations, resulting in enhanced estimation accuracy of the cluster effects and of the cluster membership. In a simulation study, we assess the estimation accuracy of our clustering algorithm under a series of settings: small number of time points, high noise level, and varying dependence structures. Over all simulation settings, the spatial-functional clustering method outperforms existing model-based clustering methods. In the case study presented in this project, we estimate and classify service accessibility patterns varying over a large geographic area (California and Georgia) and over a period of 15 years. The focus of this study is on financial services, but it applies generally to any other service operation. The second research project of this part develops an association analysis of space-time varying processes which is rigorous, computationally feasible, and implementable with standard software. We introduce general measures to model different aspects of the temporal and spatial association between processes varying in space and time.
Using a nonparametric spatiotemporal model, we show that the proposed association estimators are asymptotically unbiased and consistent. We complement the point association estimates with simultaneous confidence bands to assess the uncertainty in the point estimates. In a simulation study, we evaluate the accuracy of the association estimates with respect to the sample size, as well as the coverage of the confidence bands. In the case study in this project, we investigate the association between service accessibility and income level. The primary objective of this association analysis is to assess whether there are significant changes in the income-driven equity of financial service accessibility over time and to identify potentially under-served markets. The second part of the thesis discusses novel statistical methodology for analyzing multilevel functional data, including a clustering method based on a functional ANOVA model and a spatio-temporal model for functional data with a nested hierarchical structure. In this part, I introduce and compare a series of clustering approaches for multilevel functional data. For brevity, I present the clustering methods for two-level data: multiple samples of random functions, each sample corresponding to a case and each random function within a sample/case corresponding to a measurement type. A cluster consists of cases which have similar within-case means (level-1 clustering) or similar between-case means (level-2 clustering). Our primary focus is to compare a model-based clustering approach with more straightforward hard clustering methods. The clustering model is based on a multilevel functional principal component analysis. In a simulation study, we assess the estimation accuracy of our clustering algorithm under a series of settings: small vs. moderate number of time points, high noise level, and small number of measurement types.
We demonstrate the applicability of the clustering analysis on a real data set consisting of time-varying sales for multiple products sold by a large retailer in the U.S. My ongoing research in multilevel functional data analysis involves developing a statistical model for estimating temporal and spatial associations of a series of time-varying variables with an intrinsic nested hierarchical structure. This work has great potential in many real applications where the data are areal data collected from different data sources and over geographic regions of different spatial resolution.
69

Stochastic m-estimators: controlling accuracy-cost tradeoffs in machine learning

Dillon, Joshua V. 15 November 2011 (has links)
m-Estimation represents a broad class of estimators, including least-squares and maximum likelihood, and is a widely used tool for statistical inference. Its successful application, however, often requires negotiating physical resources for desired levels of accuracy. These limiting factors, which we abstractly refer to as costs, may be computational, such as time-limited cluster access for parameter learning, or they may be financial, such as purchasing human-labeled training data under a fixed budget. This thesis explores these accuracy-cost tradeoffs by proposing a family of estimators that maximizes a stochastic variation of the traditional m-estimator. Such "stochastic m-estimators" (SMEs) are constructed by stitching together different m-estimators, at random. Each such instantiation resolves the accuracy-cost tradeoff differently, and taken together they span a continuous spectrum of accuracy-cost tradeoff resolutions. We prove the consistency of the estimators and provide formulas for their asymptotic variance and statistical robustness. We also assess their cost for two concerns typical of machine learning: computational complexity and labeling expense. For the sake of concreteness, we discuss experimental results in the context of a variety of discriminative and generative Markov random fields, including Boltzmann machines, conditional random fields, and model mixtures. The theoretical and experimental studies demonstrate the effectiveness of the estimators when computational resources are insufficient or when obtaining additional labeled samples is necessary. We also demonstrate that in some cases the stochastic m-estimator is associated with robustness, thereby increasing its statistical accuracy and representing a win-win.
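The "stitching together different m-estimators at random" idea can be caricatured as a stochastic-gradient scheme that, at each step, applies the gradient of a loss drawn at random from a pool of m-estimator losses (here, cheap least-squares versus robust least-absolute-deviation for a simple location problem). This is a loose illustrative sketch under assumed names and losses, not the estimator family analysed in the thesis.

```python
import numpy as np

def stochastic_m_estimate(data, grads, probs, n_steps, rng, lr=0.5):
    """Location estimate whose update at each step applies the gradient of
    one m-estimator loss drawn at random with the given probabilities."""
    theta, avg = 0.0, 0.0
    for t in range(1, n_steps + 1):
        x = data[rng.integers(len(data))]            # one observation
        g = grads[rng.choice(len(grads), p=probs)]   # one loss, at random
        theta -= (lr / np.sqrt(t)) * g(theta, x)     # decaying-step SGD
        avg += (theta - avg) / t                     # iterate averaging
    return avg

grad_sq = lambda th, x: 2.0 * (th - x)     # least squares: fast, less robust
grad_abs = lambda th, x: np.sign(th - x)   # least absolute deviation: robust
```

Shifting `probs` toward the cheaper or more robust loss is the knob that trades statistical accuracy against cost, which is the spirit (though not the letter) of the SME construction.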
70

Introduction to graphical models with an application in finding coplanar points

Roux, Jeanne-Marie 03 1900 (has links)
Thesis (MSc (Applied Mathematics))--University of Stellenbosch, 2010. / ENGLISH ABSTRACT: This thesis provides an introduction to the statistical modeling technique known as graphical models. Since graph theory and probability theory are the two legs of graphical models, these two topics are presented, and then combined to produce two examples of graphical models: Bayesian Networks and Markov Random Fields. Furthermore, the max-sum, sum-product and junction tree algorithms are discussed. The graphical modeling technique is then applied to the specific problem of finding coplanar points in stereo images, taken with an uncalibrated camera. Although it is discovered that graphical models might not be the best method, in terms of speed, to use for this application, it does illustrate how to apply this technique to a real-life problem. / National Research Foundation (South Africa)
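Of the three algorithms this abstract mentions, the sum-product algorithm is the easiest to sketch: on a chain-structured MRF it reduces to a forward-backward message pass. The function below is a generic illustration; the names and the per-step normalisation are my choices, not the thesis's.

```python
import numpy as np

def chain_marginals(unary, pairwise):
    """Sum-product (forward-backward) on a chain MRF with
    p(x) ∝ prod_i unary[i][x_i] * prod_i pairwise[x_i, x_{i+1}].

    unary: (n, k) node potentials; pairwise: (k, k) edge potential.
    Returns the (n, k) exact node marginals."""
    n, k = unary.shape
    fwd = np.zeros((n, k))
    bwd = np.zeros((n, k))
    fwd[0] = unary[0]
    for i in range(1, n):                     # forward messages
        fwd[i] = unary[i] * (fwd[i - 1] @ pairwise)
        fwd[i] /= fwd[i].sum()                # normalise for stability
    bwd[-1] = 1.0
    for i in range(n - 2, -1, -1):            # backward messages
        bwd[i] = pairwise @ (unary[i + 1] * bwd[i + 1])
        bwd[i] /= bwd[i].sum()
    marg = fwd * bwd                          # combine and renormalise
    return marg / marg.sum(axis=1, keepdims=True)
```

The junction tree algorithm generalises exactly this pass from chains to arbitrary graphs by first clustering nodes into a tree of cliques; max-sum replaces the sums with maximisations to recover the single most probable configuration instead of marginals.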
