271 |
Topics in living cell multiphoton laser scanning microscopy (MPLSM) image analysis. Zhang, Weimin, 30 October 2006
Multiphoton laser scanning microscopy (MPLSM) is an advanced fluorescence imaging technology that produces less noisy microscope images and minimizes damage to living tissue. The MPLSM images in this research show dehydroergosterol (DHE, a fluorescent sterol whose behavior closely mimics that of cholesterol in lipoproteins and membranes) in the plasma membrane region of a living cell. The objective is to use statistical image analysis methods to describe how cholesterol is distributed on a living cell's membrane. The statistical image analysis methods applied in this research include image segmentation/classification and spatial analysis. For image segmentation, we design a supervised learning method that combines a smoothing technique with rank statistics. This approach is especially useful when we have only very limited information about the classes we want to segment. We also apply unsupervised learning methods to the image data. For the spatial analysis of the image data, we explore the spatial correlation of the segmented data with a Monte Carlo test. Our research shows that the distribution of DHE exhibits a spatially aggregated pattern. We fit two aggregated point pattern models, an area-interaction process model and a Poisson cluster process model, to the data. For the area-interaction process model, we design algorithms for the maximum pseudo-likelihood estimator and the Monte Carlo maximum likelihood estimator in a lattice data setting. For the Poisson cluster process, parameters are estimated with a method for implicit statistical models. A group of simulation studies shows that the Monte Carlo maximum likelihood method produces consistent parameter estimates. Goodness-of-fit tests show that neither model can be rejected. We propose to use the area-interaction process model in further research.
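As a hedged illustration of the kind of Monte Carlo test for spatial aggregation described in the abstract (not the author's code or data), the sketch below compares the mean nearest-neighbour distance of an observed point pattern against patterns simulated under complete spatial randomness; the clustered test pattern and all parameters are made up.

```python
import numpy as np

def mean_nn_distance(points):
    """Mean distance from each point to its nearest neighbour."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    return d.min(axis=1).mean()

def csr_monte_carlo_test(points, n_sim=999, rng=None):
    """Monte Carlo test against complete spatial randomness (CSR) in the unit square.

    A small p-value with an observed statistic below the simulated ones indicates
    aggregation (points closer together than expected under CSR).
    """
    rng = np.random.default_rng(rng)
    n = len(points)
    observed = mean_nn_distance(points)
    simulated = np.array([mean_nn_distance(rng.random((n, 2))) for _ in range(n_sim)])
    # One-sided p-value for aggregation (observed unusually small).
    p_value = (1 + np.sum(simulated <= observed)) / (n_sim + 1)
    return observed, p_value

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical "segmented DHE" pattern: clustered points in the unit square.
    centres = rng.random((10, 2))
    pts = np.vstack([c + 0.02 * rng.standard_normal((20, 2)) for c in centres])
    stat, p = csr_monte_carlo_test(np.clip(pts, 0, 1), n_sim=499, rng=1)
    print(f"mean NN distance = {stat:.4f}, Monte Carlo p-value = {p:.3f}")
```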
|
272 |
High-resolution spectral analysis of irregularly sampled signals: application to astrophysics. Bourguignon, Sébastien, 14 December 2006
The study of many astronomical phenomena relies on the search for periodicities in time series (light curves or radial velocity curves). Because of observational constraints, the time coverage of the resulting data is often incomplete, with periodic gaps and irregular sampling. Analysing the frequency content of such series with the Fourier spectrum is then ineffective, and the heuristic CLEAN-type deconvolution methods commonly used in astronomy are not fully satisfactory. This thesis follows the formalism frequently encountered since the 1990s in which spectral analysis is posed as an inverse problem, the spectrum being discretised on an arbitrarily fine frequency grid. Its regularisation is then addressed by expressing the a priori sparse nature of the object to be reconstructed: we focus here on the search for spectral lines.

A first approach belongs to the field of optimisation and consists in minimising a least-squares criterion penalised by a function that favours sparse solutions. Penalisation by the l1 norm is studied in particular, extended to complex variables, and proves satisfactory in terms of modelling. We propose particularly efficient algorithmic solutions that make analysis at very high frequency resolution feasible.

We then study a probabilistic model of the spectral amplitudes as a Bernoulli-Gaussian process, whose parameters are estimated in the posterior-mean sense using stochastic sampling techniques, allowing a fully unsupervised estimation. The probabilistic interpretation of the result, together with the joint estimation of the associated variances, is of major astrophysical interest, as it can be read in terms of confidence levels on the detected spectral components. We first propose improvements to the Gibbs sampling algorithm that accelerate the exploration of the sampled distribution. We then introduce continuous-valued frequency-shift variables, which increase the precision of the estimation without excessively increasing the associated computational cost.

For each proposed method, we illustrate on simulations the quality of the estimation and the performance of the developed algorithms. Their application to a data set from astrophysical observations is finally presented, highlighting the contribution of this methodology compared with the spectral analysis methods usually employed.
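A minimal sketch of the first, optimisation-based approach: an l1-penalised least-squares spectral estimate on an arbitrarily fine frequency grid, solved here with plain iterative soft-thresholding (ISTA). The grid, penalty weight, and test signal are illustrative assumptions, not the thesis's algorithms or data.

```python
import numpy as np

def ista_sparse_spectrum(t, y, freqs, lam=1.0, n_iter=300):
    """l1-penalised least-squares spectral estimate on an arbitrary frequency grid.

    Solves min_x 0.5 * ||y - A x||^2 + lam * ||x||_1 with complex amplitudes x,
    where A contains complex exponentials evaluated at the (irregular) times t.
    """
    A = np.exp(2j * np.pi * t[:, None] * freqs[None, :])   # (n_samples, n_freqs)
    L = np.linalg.norm(A, 2) ** 2                          # Lipschitz constant of the gradient
    x = np.zeros(len(freqs), dtype=complex)
    for _ in range(n_iter):
        grad = A.conj().T @ (A @ x - y)
        z = x - grad / L
        # Complex soft-thresholding: shrink the modulus, keep the phase.
        mag = np.abs(z)
        x = np.where(mag > lam / L, (1 - lam / (L * mag + 1e-12)) * z, 0)
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    t = np.sort(rng.uniform(0, 30, 120))                   # irregular sampling times
    y = np.cos(2 * np.pi * 0.37 * t) + 0.5 * np.sin(2 * np.pi * 1.21 * t)
    y = y + 0.1 * rng.standard_normal(t.size)
    freqs = np.linspace(0.01, 2.0, 2000)                   # arbitrarily fine grid
    x = ista_sparse_spectrum(t, y.astype(complex), freqs, lam=5.0)
    top = np.argsort(np.abs(x))[-4:]
    print("strongest recovered frequencies:", np.round(freqs[top], 3))
```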
|
273 |
Hessian-based response surface approximations for uncertainty quantification in large-scale statistical inverse problems, with applications to groundwater flow. Flath, Hannah Pearl, 11 September 2013
Subsurface flow phenomena characterize many important societal issues in energy and the environment. A key feature of these problems is that subsurface properties are uncertain, due to the sparsity of direct observations of the subsurface. The Bayesian formulation of this inverse problem provides a systematic framework for inferring uncertainty in the properties given uncertainties in the data, the forward model, and prior knowledge of the properties. We address the problem: given noisy measurements of the head, the pdf describing the noise, prior information in the form of a pdf of the hydraulic conductivity, and a groundwater flow model relating the head to the hydraulic conductivity, find the posterior probability density function (pdf) of the parameters describing the hydraulic conductivity field. Unfortunately, conventional sampling of this pdf to compute statistical moments is intractable for problems governed by large-scale forward models and high-dimensional parameter spaces. We construct a Gaussian process surrogate of the posterior pdf based on Bayesian interpolation between a set of "training" points. We employ a greedy algorithm to find the training points by solving a sequence of optimization problems where each new training point is placed at the maximizer of the error in the approximation. Scalable Newton optimization methods solve this "optimal" training point problem. We tailor the Gaussian process surrogate to the curvature of the underlying posterior pdf according to the Hessian of the log posterior at a subset of training points, made computationally tractable by a low-rank approximation of the data misfit Hessian. A Gaussian mixture approximation of the posterior is extracted from the Gaussian process surrogate, and used as a proposal in a Markov chain Monte Carlo method for sampling both the surrogate and the true posterior. The Gaussian process surrogate is used as a first-stage approximation in a two-stage delayed acceptance MCMC method. We provide evidence for the viability of the low-rank approximation of the Hessian through numerical experiments on a large-scale atmospheric contaminant transport problem and analysis of an infinite-dimensional model problem. We provide similar results for our groundwater problem. We then present results from the proposed MCMC algorithms.
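A minimal sketch of the two-stage delayed acceptance Metropolis-Hastings idea mentioned above, with a deliberately misspecified Gaussian surrogate standing in for the Gaussian process surrogate and a toy two-dimensional posterior in place of the groundwater model; all functions and tuning values are hypothetical, not the thesis implementation.

```python
import numpy as np

def delayed_acceptance_mh(log_post, log_surrogate, x0, n_steps=5000, step=0.2, rng=None):
    """Two-stage delayed-acceptance Metropolis-Hastings with a random-walk proposal.

    Stage 1 screens proposals with a cheap surrogate log-density; only proposals that
    survive are evaluated with the expensive true log-density in stage 2.
    """
    rng = np.random.default_rng(rng)
    x = np.asarray(x0, dtype=float)
    lp, ls = log_post(x), log_surrogate(x)
    chain, expensive_calls = [x.copy()], 1
    for _ in range(n_steps):
        prop = x + step * rng.standard_normal(x.shape)
        ls_prop = log_surrogate(prop)
        # Stage 1: accept/reject with the surrogate only (symmetric proposal).
        if np.log(rng.random()) < ls_prop - ls:
            lp_prop = log_post(prop)
            expensive_calls += 1
            # Stage 2: correct with the true posterior so the chain targets it exactly.
            if np.log(rng.random()) < (lp_prop - lp) - (ls_prop - ls):
                x, lp, ls = prop, lp_prop, ls_prop
        chain.append(x.copy())
    return np.array(chain), expensive_calls

if __name__ == "__main__":
    # Hypothetical 2-D example: the surrogate is a slightly misspecified Gaussian.
    true_cov = np.array([[1.0, 0.8], [0.8, 1.0]])
    prec = np.linalg.inv(true_cov)
    log_post = lambda x: -0.5 * x @ prec @ x          # "expensive" model
    log_surrogate = lambda x: -0.5 * x @ x            # cheap approximation
    chain, calls = delayed_acceptance_mh(log_post, log_surrogate, np.zeros(2), rng=0)
    print("posterior mean estimate:", chain[1000:].mean(axis=0))
    print("expensive-model evaluations:", calls, "of", len(chain) - 1, "proposals")
```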
|
274 |
Particle filters and Markov chains for learning of dynamical systems. Lindsten, Fredrik, January 2013
Sequential Monte Carlo (SMC) and Markov chain Monte Carlo (MCMC) methods provide computational tools for systematic inference and learning in complex dynamical systems, such as nonlinear and non-Gaussian state-space models. This thesis builds upon several methodological advances within these classes of Monte Carlo methods. Particular emphasis is placed on the combination of SMC and MCMC in so-called particle MCMC algorithms. These algorithms rely on SMC for generating samples of the often highly autocorrelated state trajectory. A specific particle MCMC algorithm, referred to as particle Gibbs with ancestor sampling (PGAS), is suggested. By making use of backward sampling ideas, albeit implemented in a forward-only fashion, PGAS enjoys good mixing even when using seemingly few particles in the underlying SMC sampler. This results in a computationally competitive particle MCMC algorithm. As illustrated in this thesis, PGAS is a useful tool for both Bayesian and frequentist parameter inference as well as for state smoothing. The PGAS sampler is successfully applied to the classical problem of Wiener system identification, and it is also used for inference in the challenging class of non-Markovian latent variable models.

Many nonlinear models encountered in practice contain some tractable substructure. As a second problem considered in this thesis, we develop Monte Carlo methods capable of exploiting such substructures to obtain more accurate estimators than would otherwise be possible. For the filtering problem, this can be done with the well-known Rao-Blackwellized particle filter (RBPF). The RBPF is analysed in terms of asymptotic variance, resulting in an expression for the performance gain offered by Rao-Blackwellization. Furthermore, a Rao-Blackwellized particle smoother is derived, capable of addressing the smoothing problem in so-called mixed linear/nonlinear state-space models. The idea of Rao-Blackwellization is also used to develop an online algorithm for Bayesian parameter inference in nonlinear state-space models with affine parameter dependencies.
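A minimal bootstrap particle filter, the basic SMC ingredient that particle MCMC methods such as PGAS build on (this is not PGAS itself and not one of the thesis case studies); the toy nonlinear model, noise levels, and particle count are made-up assumptions.

```python
import numpy as np

def bootstrap_particle_filter(y, n_particles, f, g_loglik, sample_x0, rng=None):
    """Bootstrap particle filter for a state-space model with transition x_t = f(x_{t-1})
    plus noise and observation log-density g_loglik(y_t, x_t).

    Returns filtered posterior means and a log-likelihood estimate; the (non-log)
    likelihood estimate is the quantity particle MCMC plugs into an MH acceptance ratio.
    """
    rng = np.random.default_rng(rng)
    x = sample_x0(n_particles, rng)              # particles for the first time step
    log_lik, means = 0.0, []
    for yt in y:
        logw = g_loglik(yt, x)
        m = logw.max()
        w = np.exp(logw - m)
        log_lik += m + np.log(w.mean())          # incremental likelihood contribution
        w /= w.sum()
        means.append(np.sum(w * x))              # weighted filtering mean
        idx = rng.choice(n_particles, size=n_particles, p=w)   # multinomial resampling
        x = f(x[idx], rng)                       # propagate to the next time step
    return np.array(means), log_lik

if __name__ == "__main__":
    # Hypothetical nonlinear/non-Gaussian toy model (not one of the thesis examples).
    rng = np.random.default_rng(0)
    T, x_true, ys = 100, 0.0, []
    for _ in range(T):
        x_true = 0.9 * x_true + 0.5 * rng.standard_normal()
        ys.append(0.5 * x_true**2 + 0.3 * rng.standard_normal())
    f = lambda x, r: 0.9 * x + 0.5 * r.standard_normal(x.shape)
    g_loglik = lambda yt, x: -0.5 * ((yt - 0.5 * x**2) / 0.3) ** 2 - np.log(0.3 * np.sqrt(2 * np.pi))
    sample_x0 = lambda n, r: r.standard_normal(n)
    means, ll = bootstrap_particle_filter(np.array(ys), 500, f, g_loglik, sample_x0, rng=1)
    print(f"log-likelihood estimate: {ll:.1f}")
```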
|
275 |
Detection and characterization of exoplanets: development and operation of the Nulltimate nulling interferometry bench and design of an automated system for ranking the transits detected by CoRoT. Demangeon, Olivier, 28 June 2013
Among the methods for detecting exoplanets, transit photometry is the one that has grown the most in recent years, thanks to the arrival of the space telescopes CoRoT (in 2006) and then Kepler (in 2009). These two satellites have detected thousands of potentially planetary transits. Given their number and the effort required to confirm their nature, it is essential to rank them from the photometric data in a way that identifies the most promising transits and can be carried out in a reasonable time. For my thesis, I developed a fast, automated software tool called BART (Bayesian Analysis for the Ranking of Transits) that produces such a ranking by estimating the probability that each transit is of planetary origin. To do so, the tool relies in particular on the Bayesian formalism of probabilities and on exploration of the free-parameter space by Markov chain Monte Carlo (MCMC).

Once exoplanets have been detected, the next step is to characterize them. The study of the solar system has shown us, if proof were needed, that spectral information is a key to understanding the physics and history of a planet. Nulling interferometry is a very promising technological solution that could make this possible. For my thesis, I worked on the Nulltimate optical bench to study the feasibility of certain technological objectives related to this technique. Beyond achieving a nulling ratio of 3.7×10^-5 in monochromatic light and 6.3×10^-4 in polychromatic light in the near infrared, as well as a stability of σN30 ms = 3.7×10^-5 estimated over 1 hour, my work clarified the situation by producing a detailed error budget, a Gaussian-optics simulation of the bench transmission, and a complete redesign of the control software. All of this finally allowed me to identify the weaknesses of Nulltimate.
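Not the BART pipeline itself, but a hedged toy sketch of the MCMC exploration it relies on: random-walk Metropolis sampling of the parameters of a simple box-shaped transit in a noisy light curve; the model, priors, and numbers are all made up.

```python
import numpy as np

def box_transit(t, t0, depth, duration):
    """Box-shaped transit model: flux drops by `depth` for |t - t0| < duration / 2."""
    flux = np.ones_like(t)
    flux[np.abs(t - t0) < duration / 2] -= depth
    return flux

def mcmc_transit(t, y, sigma, n_steps=20000, rng=None):
    """Random-walk Metropolis sampling of (t0, depth, duration) for a box transit."""
    rng = np.random.default_rng(rng)
    theta = np.array([t.mean(), 0.005, 0.1])           # initial guess
    steps = np.array([0.005, 0.0005, 0.005])           # proposal scales
    def log_post(p):
        t0, depth, duration = p
        if depth <= 0 or duration <= 0:                # flat priors on a physical range
            return -np.inf
        resid = y - box_transit(t, t0, depth, duration)
        return -0.5 * np.sum((resid / sigma) ** 2)
    lp, chain = log_post(theta), []
    for _ in range(n_steps):
        prop = theta + steps * rng.standard_normal(3)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        chain.append(theta.copy())
    return np.array(chain)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    t = np.linspace(0, 1, 500)
    y = box_transit(t, 0.48, 0.008, 0.12) + 0.002 * rng.standard_normal(t.size)
    chain = mcmc_transit(t, y, sigma=0.002, rng=1)[5000:]      # discard burn-in
    print("posterior means (t0, depth, duration):", chain.mean(axis=0).round(4))
```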
|
276 |
A Bayesian solution for the Law of Categorical Judgment with category boundary variability and examination of robustness to model violations. King, David R., 12 January 2015
Previous solutions for the Law of Categorical Judgment with category boundary variability have either constrained the standard deviations of the category boundaries in some way or have violated the assumptions of the scaling model. In the current work, a fully Bayesian Markov chain Monte Carlo solution for the Law of Categorical Judgment is given that estimates all model parameters (i.e., scale values, category boundaries, and the associated standard deviations). The importance of measuring category boundary standard deviations is discussed in the context of previous research in signal detection theory, which gives evidence of interindividual variability in how respondents perceive category boundaries and even intraindividual variability in how a respondent perceives category boundaries across trials. Although the measurement of category boundary standard deviations appears to be important for describing the way respondents perceive category boundaries on the latent scale, the inclusion of category boundary standard deviations in the scaling model exposes an inconsistency between the model and the rating method. Namely, with category boundary variability, the scaling model suggests that a respondent could experience disordinal category boundaries on a given trial. However, the idea that a respondent actually experiences disordinal category boundaries seems unlikely. The discrepancy between the assumptions of the scaling model and the way responses are made at the individual level indicates that the assumptions of the model will likely not be met. Therefore, the current work examined how well model parameters could be estimated when the assumptions of the model were violated in various ways as a consequence of disordinal category boundary perceptions. A parameter recovery study examined the effect of model violations on estimation accuracy by comparing estimates obtained from three response processes that violated the assumptions of the model with estimates obtained from a novel response process that did not. Results suggest that all parameters of the Law of Categorical Judgment can be estimated reasonably well when these particular model violations occur, albeit with less accuracy than when the assumptions of the model are met.
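A hedged sketch of the response process the abstract analyses (not the Bayesian MCMC solution): on each trial a percept and normally distributed category boundaries are drawn, and the proportion of trials with disordinal boundaries illustrates the model/response-process tension discussed above; the scale value, boundary locations, and standard deviations are illustrative assumptions.

```python
import numpy as np

def simulate_ratings(scale_value, boundary_means, boundary_sds,
                     percept_sd=1.0, n_trials=10000, rng=None):
    """Simulate ratings under a Law of Categorical Judgment with boundary variability.

    On each trial, a percept and a set of category boundaries are drawn from normal
    distributions; the rating is the number of boundaries the percept exceeds.
    """
    rng = np.random.default_rng(rng)
    means = np.asarray(boundary_means, dtype=float)
    sds = np.asarray(boundary_sds, dtype=float)
    percepts = rng.normal(scale_value, percept_sd, size=(n_trials, 1))
    bounds = rng.normal(means, sds, size=(n_trials, means.size))
    ratings = (percepts > bounds).sum(axis=1)                  # categories 0..K
    # Fraction of trials on which the sampled boundaries come out disordinal,
    # the situation the abstract argues respondents are unlikely to experience.
    disordinal = np.any(np.diff(bounds, axis=1) < 0, axis=1).mean()
    return ratings, disordinal

if __name__ == "__main__":
    ratings, frac = simulate_ratings(scale_value=1.2,
                                     boundary_means=[-1.0, 0.0, 1.0, 2.0],
                                     boundary_sds=[0.4, 0.4, 0.4, 0.4], rng=0)
    counts = np.bincount(ratings, minlength=5)
    print("rating frequencies:", counts / counts.sum())
    print(f"trials with disordinal boundaries: {frac:.1%}")
```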
|
277 |
Inverse Modeling of Cloud-Aerosol Interactions. Partridge, Daniel, January 2011
The role of aerosols and clouds is one of the largest sources of uncertainty in understanding climate change. The primary scientific goal of this thesis is to improve the understanding of cloud-aerosol interactions by applying inverse modeling using Markov chain Monte Carlo (MCMC) simulation. Through a set of synthetic tests using a pseudo-adiabatic cloud parcel model, it is shown that a self-adaptive MCMC algorithm can efficiently find the correct optimal values of meteorological and aerosol physiochemical parameters for a specified droplet size distribution and determine the global sensitivity of these parameters. For an updraft velocity of 0.3 m s^-1, a shift towards an increased relative importance of chemistry compared to the accumulation mode number concentration is shown to occur somewhere between the marine (~75 cm^-3) and rural continental (~450 cm^-3) aerosol regimes. Examination of in-situ measurements from the Marine Stratus/Stratocumulus Experiment (MASE II) shows that for air masses with higher number concentrations of accumulation mode (Dp = 60-120 nm) particles (~450 cm^-3), an accurate simulation of the measured droplet size distribution requires an accurate representation of the particle chemistry. The chemistry is relatively more important than the accumulation mode particle number concentration, and similar in importance to the particle mean radius. This result is somewhat at odds with current theory, which suggests that chemistry can be ignored in all but the most polluted environments. Under anthropogenic influence, we must consider particle chemistry also in marine environments that may be deemed relatively clean. The MCMC algorithm can successfully reproduce the observed marine stratocumulus droplet size distributions. However, optimising towards the broadness of the measured droplet size distribution resulted in a discrepancy between the updraft velocity and the mean radius/geometric standard deviation of the accumulation mode. This suggests that we are missing a dynamical process in the pseudo-adiabatic cloud parcel model. / At the time of the doctoral defense, the following papers were unpublished and had a status as follows: Paper 3: Submitted. Paper 4: Manuscript.
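A generic sketch of a self-adaptive MCMC sampler in the spirit described above (adaptive Metropolis in the style of Haario et al. 2001, with a toy correlated Gaussian posterior standing in for the cloud parcel model); the particular self-adaptive algorithm used in the thesis may differ, and all names and values here are hypothetical.

```python
import numpy as np

def adaptive_metropolis(log_post, x0, n_steps=20000, adapt_start=500, eps=1e-6, rng=None):
    """Adaptive Metropolis: the random-walk proposal covariance is periodically
    re-estimated from the chain history, so the sampler tunes itself to the
    correlation structure of the target."""
    rng = np.random.default_rng(rng)
    d = len(x0)
    x = np.asarray(x0, dtype=float)
    lp = log_post(x)
    chain = [x.copy()]
    cov = np.eye(d) * 0.1
    sd = 2.4 ** 2 / d                                   # standard scaling factor
    for i in range(n_steps):
        if i >= adapt_start and i % 100 == 0:           # refresh proposal covariance
            cov = np.cov(np.array(chain).T) + eps * np.eye(d)
        prop = rng.multivariate_normal(x, sd * cov)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:
            x, lp = prop, lp_prop
        chain.append(x.copy())
    return np.array(chain)

if __name__ == "__main__":
    # Hypothetical 3-parameter posterior standing in for (updraft velocity,
    # accumulation-mode number concentration, mean radius): a correlated Gaussian toy.
    cov = np.array([[1.0, 0.6, 0.0], [0.6, 1.0, 0.3], [0.0, 0.3, 1.0]])
    prec = np.linalg.inv(cov)
    log_post = lambda x: -0.5 * x @ prec @ x
    chain = adaptive_metropolis(log_post, np.zeros(3), rng=0)[2000:]
    print("posterior covariance estimate:\n", np.round(np.cov(chain.T), 2))
```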
|
278 |
Efficient Bayesian Inference for Multivariate Factor Stochastic Volatility Models. Kastner, Gregor; Frühwirth-Schnatter, Sylvia; Lopes, Hedibert Freitas, 24 February 2016
We discuss efficient Bayesian estimation of dynamic covariance matrices in multivariate time series through a factor stochastic volatility model. In particular, we propose two interweaving strategies (Yu and Meng, Journal of Computational and Graphical Statistics, 20(3), 531-570, 2011) to substantially accelerate convergence and mixing of standard MCMC approaches. Similar to marginal data augmentation techniques, the proposed acceleration procedures exploit non-identifiability issues which frequently arise in factor models. Our new interweaving strategies are easy to implement and come at almost no extra computational cost; nevertheless, they can boost estimation efficiency by several orders of magnitude as is shown in extensive simulation studies. To conclude, the application of our algorithm to a 26-dimensional exchange rate data set illustrates the superior performance of the new approach for real-world data. / Series: Research Report Series / Department of Statistics and Mathematics
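As a hedged illustration of the model class (not the interweaving MCMC estimation strategy proposed in the paper), the sketch below simulates data from a basic factor stochastic volatility model in which both the latent factors and the idiosyncratic errors have AR(1) log-volatilities; all dimensions and parameter values are made up.

```python
import numpy as np

def simulate_factor_sv(T, m, r, mu=-1.0, phi=0.95, sigma=0.2, rng=None):
    """Simulate from a factor stochastic volatility model:

        y_t = Lambda f_t + e_t,  f_{j,t} ~ N(0, exp(hf_{j,t})),  e_{i,t} ~ N(0, exp(he_{i,t})),

    where each log-variance follows a stationary AR(1):
        h_t = mu + phi (h_{t-1} - mu) + sigma eta_t.
    """
    rng = np.random.default_rng(rng)
    Lam = rng.normal(0.0, 0.5, size=(m, r))              # factor loadings

    def ar1_logvar(n_series):
        h = np.empty((T, n_series))
        h[0] = rng.normal(mu, sigma / np.sqrt(1 - phi**2), n_series)  # stationary start
        for t in range(1, T):
            h[t] = mu + phi * (h[t - 1] - mu) + sigma * rng.standard_normal(n_series)
        return h

    hf, he = ar1_logvar(r), ar1_logvar(m)
    f = np.exp(hf / 2) * rng.standard_normal((T, r))     # latent factors
    y = f @ Lam.T + np.exp(he / 2) * rng.standard_normal((T, m))
    return y, f, Lam

if __name__ == "__main__":
    # Small synthetic example: 6 series driven by 2 latent factors.
    y, f, Lam = simulate_factor_sv(T=1000, m=6, r=2, rng=0)
    print("sample correlation matrix of the simulated returns:")
    print(np.round(np.corrcoef(y.T), 2))
```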
|
279 |
Bayesian estimation of performance measures of diagnostic tests. Pinho, Eloísa Moralles do, 05 January 2006
In the medical field, diagnostic tests are used to classify a patient as positive or negative with respect to a given disease. There are simple tests and more elaborate ones, each with its own misclassification rates. To assess the accuracy of a medical test, it can be compared with a "gold standard", that is, a test with no error. In many situations no gold standard is available, for ethical reasons, because the individual may be disease free, or because of the high cost of the test. Joseph et al. (1999) introduce a Bayesian approach that overcomes the lack of a gold standard by using latent variables. In this work, we present this Bayesian methodology and generalize it to the presence of covariates. A comparative study is carried out with and without a gold standard to check the accuracy of the medical tests. Different proportions of patients not verified by the gold standard are considered in a simulation study. Numerical examples illustrate the proposed methodology.
We conclude the dissertation by allowing dependence among two or more tests. / In the medical field, diagnostic tests are used to classify a patient as positive or negative with respect to a given condition or disease. There are simpler tests and more elaborate ones, each yielding different chances of misclassifying patients. To quantify the accuracy of diagnostic tests, we can compare them with gold standard tests, a term used for tests with satisfactory exactness, such as biopsies, surgical inspections, and others. Some conditions have no test considered a gold standard; others do, but it is not ethical to apply it to individuals without evidence of the disease, or its use may be unfeasible because of its high cost or the risk it poses to the patient. Joseph et al. (1999) [16] propose a Bayesian approach that overcomes the problem of patients not verified by the gold standard test by introducing latent variables. We also present this methodology in the presence of covariates, which supports medical decision making. A comparative study is carried out for situations in which the gold standard is absent for all, some, or none of the patients, and we discuss the importance of having a percentage of patients verified by the gold standard test in order to obtain better estimates of the performance measures of the diagnostic tests. We introduce a new parameter that classifies the group as verified or not verified by the gold standard test. The proposed methodologies are demonstrated through numerical examples. As a suggestion for future work, we demonstrate the methodology for checking conditional dependence between diagnostic tests.
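A minimal sketch of the latent-variable idea cited above for a single diagnostic test with no gold standard (conditional independence, no covariates, so not the generalized model of this dissertation); identification comes from informative Beta priors on sensitivity and specificity, and the data, priors, and true values are made up.

```python
import numpy as np

def gibbs_no_gold_standard(y, se_prior=(9.0, 1.0), sp_prior=(9.5, 0.5),
                           pi_prior=(1.0, 1.0), n_iter=5000, rng=None):
    """Gibbs sampler for the prevalence, sensitivity and specificity of one diagnostic
    test when no gold standard is available.

    The latent true disease status of each patient is sampled at every sweep;
    informative Beta priors on sensitivity/specificity carry the identification,
    since a single test alone cannot identify all three parameters.
    """
    rng = np.random.default_rng(rng)
    y = np.asarray(y).astype(bool)
    n = y.size
    pi, se, sp = 0.5, 0.8, 0.8                               # initial values
    draws = np.empty((n_iter, 3))
    for it in range(n_iter):
        # 1) Latent disease status given the current parameters.
        p_pos = pi * se / (pi * se + (1 - pi) * (1 - sp))        # P(D=1 | y=1)
        p_neg = pi * (1 - se) / (pi * (1 - se) + (1 - pi) * sp)  # P(D=1 | y=0)
        d = rng.random(n) < np.where(y, p_pos, p_neg)
        # 2) Conjugate Beta updates given the latent status.
        pi = rng.beta(pi_prior[0] + d.sum(), pi_prior[1] + n - d.sum())
        se = rng.beta(se_prior[0] + (y & d).sum(), se_prior[1] + (~y & d).sum())
        sp = rng.beta(sp_prior[0] + (~y & ~d).sum(), sp_prior[1] + (y & ~d).sum())
        draws[it] = pi, se, sp
    return draws

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical data: 2000 patients, true prevalence 0.30, Se 0.90, Sp 0.95.
    d_true = rng.random(2000) < 0.30
    y = np.where(d_true, rng.random(2000) < 0.90, rng.random(2000) < 0.05)
    draws = gibbs_no_gold_standard(y, rng=1)[1000:]
    print("posterior means (prevalence, sensitivity, specificity):",
          draws.mean(axis=0).round(3))
```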
|
280 |
A Bayesian analysis for compositional data. Obage, Simone Cristina, 03 March 2005
Compositional data are given by vectors of positive numbers that sum to one. Data of this kind are common in many applications, such as geology, biology, and economics, among others. In this paper, we introduce a Bayesian analysis for compositional data considering the additive log-ratio (ALR) and Box-Cox transformations, assuming a multivariate normal distribution for correlated errors. These results generalize some existing Bayesian approaches that assume uncorrelated errors. We also consider the use of exponential power distributions for uncorrelated errors under the additive log-ratio (ALR) transformation. We illustrate the proposed methodology with a real data set. / Compositional data are vectors whose positive elements sum to one. Typical examples of data of this nature are found in many different areas, such as geology, biology, and economics, among others. In this work, we introduce a Bayesian analysis for compositional data considering the additive log-ratio and Box-Cox transformations, assuming a multivariate normal distribution for correlated errors. These results generalize a Bayesian approach that assumes uncorrelated errors. We also consider the use of the exponential power distribution for uncorrelated errors, assuming the additive log-ratio transformation. We illustrate the proposed methodology with a real data set.
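A hedged sketch of the additive log-ratio (ALR) transformation mentioned above, which maps compositions off the simplex so that a multivariate normal (or exponential power) model for the transformed errors can be entertained; the three-part compositions are invented for illustration.

```python
import numpy as np

def alr(x):
    """Additive log-ratio transform of compositions (rows summing to one),
    using the last component as the reference part."""
    x = np.asarray(x, dtype=float)
    return np.log(x[..., :-1] / x[..., -1:])

def alr_inverse(z):
    """Map ALR coordinates back to the simplex."""
    z = np.asarray(z, dtype=float)
    e = np.exp(np.concatenate([z, np.zeros(z.shape[:-1] + (1,))], axis=-1))
    return e / e.sum(axis=-1, keepdims=True)

if __name__ == "__main__":
    # Hypothetical 3-part compositions (e.g. mineral proportions in a rock sample).
    comps = np.array([[0.2, 0.3, 0.5],
                      [0.1, 0.6, 0.3]])
    z = alr(comps)                      # unconstrained coordinates for the normal model
    print("ALR coordinates:\n", np.round(z, 3))
    print("back-transformed:\n", np.round(alr_inverse(z), 3))
```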
|