About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
211

Méthodes de Monte Carlo en Vision Stéréoscopique (Monte Carlo Methods in Stereoscopic Vision)

Senegas, Julien, 13 September 2002
This thesis studies the uncertainty attached to the estimation of the geometry of a scene from a stereoscopic pair of images. Matching the homologous points of a pair assumes local similarity between the two images and requires discriminative radiometric information. In many situations, however (geometric distortions, acquisition noise, lack of contrast, ...), these assumptions break down, and the resulting matching errors depend strongly on the information contained in the pair rather than on the stereoscopic system itself. To address this problem, we propose a Bayesian framework and the application of Markov chain Monte Carlo methods. These consist in simulating the conditional distribution of the disparity field given the stereoscopic pair, and they make it possible to locate the regions where large errors may occur, possibly with low probability. Several stochastic models are compared and tested on SPOT stereoscopic scenes, and we give some directions for extending these models to other types of images. We also address the problem of estimating the parameters of these models and propose a number of algorithms for automatic estimation. Finally, a substantial part of the work is devoted to the study of simulation algorithms based on Markov chain theory. The main contribution is the extension of the Metropolis-Hastings algorithm to a multi-dimensional setting. An efficient application based on the Gaussian distribution is given. In addition, we show how importance sampling techniques can substantially reduce the computation time.
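The thesis's multi-dimensional extension is its own contribution, but the underlying sampler is the standard random-walk Metropolis-Hastings algorithm with a Gaussian proposal. The sketch below is a minimal generic version of that sampler; the toy target `log_target` and the step size are illustrative assumptions, not the thesis's disparity-field model.

```python
import numpy as np

def metropolis_hastings(log_target, x0, n_steps=10000, step=0.5, seed=0):
    """Random-walk Metropolis-Hastings with an isotropic Gaussian proposal."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    logp = log_target(x)
    samples = np.empty((n_steps, x.size))
    for i in range(n_steps):
        prop = x + step * rng.standard_normal(x.size)
        logp_prop = log_target(prop)
        # accept with probability min(1, target(prop) / target(x))
        if np.log(rng.random()) < logp_prop - logp:
            x, logp = prop, logp_prop
        samples[i] = x
    return samples

# toy target: standard bivariate Gaussian (a stand-in for a disparity posterior)
draws = metropolis_hastings(lambda x: -0.5 * x @ x, x0=np.zeros(2))
print(draws.mean(axis=0), draws.var(axis=0))  # ≈ 0 and ≈ 1 per coordinate
```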
212

Physiologically based pharmacokinetic modeling in risk assessment - Development of Bayesian population methods

Jonsson, Fredrik, January 2001
In risk assessment of risk chemicals, variability in susceptibility in the population is an important aspect. The health hazard of a pollutant is related to the internal exposure to the chemical, i.e. the target dose, rather than the external exposure. The target dose may be calculated by physiologically based pharmacokinetic (PBPK) modeling. Furthermore, variability in target dose may be estimated by introducing variability in the physiological, anatomical, and biochemical parameters of the model. Data on these toxicokinetic model parameters may be found in the scientific literature. Since the early seventies, a large number of experimental inhalation studies of the kinetics of several volatiles in human volunteers have been performed at the National Institute for Working Life in Solna. To this day, only very limited analyses of these extensive data have been performed. A Bayesian analysis makes it possible to merge a priori knowledge from the literature with the information in experimental data. If combined with population PBPK modeling, the Bayesian approach may yield posterior estimates of the toxicokinetic parameters for each subject, as well as for the population. One way of producing these estimates is by so-called Markov-chain Monte Carlo (MCMC) simulation. The aim of the thesis was to apply the MCMC technique on previously published experimental data. Another objective was to assess the reliability of PBPK models in general by the combination of the extensive data and Bayesian population techniques. The population kinetics of methyl chloride, dichloromethane, toluene and styrene were assessed. The calibrated model for dichloromethane was used to predict cancer risk in a simulated Swedish population. In some cases, the respiratory uptake of volatiles was found to be lower than predicted from reference values on alveolar ventilation. The perfusion of fat tissue was found to be a complex process that needs special attention in PBPK modeling. These results provide a significant contribution to the field of PBPK modeling of risk chemicals. Appropriate statistical treatment of uncertainty and variability may increase confidence in model results and ultimately contribute to an improved scientific basis for the estimation of occupational health risks.
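The core Bayesian step here, merging a literature prior with experimental data, can be illustrated on a toy one-compartment elimination model (the thesis's actual PBPK models have many compartments). The sketch below calibrates a single elimination rate on a grid by combining a lognormal "literature" prior with a Gaussian measurement likelihood; all numbers and the model itself are invented for illustration.

```python
import numpy as np

# toy one-compartment elimination model: C(t) = C0 * exp(-k * t)
t_obs = np.array([0.5, 1.0, 2.0, 4.0, 8.0])   # hours (illustrative)
c_obs = np.array([7.8, 6.1, 3.9, 1.5, 0.3])   # mg/L  (illustrative)
C0, sigma = 10.0, 0.5                         # assumed known dose scale and noise

# "literature" prior on the elimination rate k: lognormal, median 0.4/h, GSD 1.5
mu_prior, sd_prior = np.log(0.4), np.log(1.5)

k_grid = np.linspace(0.05, 2.0, 400)
log_prior = -0.5 * ((np.log(k_grid) - mu_prior) / sd_prior) ** 2 - np.log(k_grid)
pred = C0 * np.exp(-np.outer(k_grid, t_obs))  # model prediction for every k
log_lik = -0.5 * np.sum(((c_obs - pred) / sigma) ** 2, axis=1)

# posterior ∝ prior × likelihood, normalized numerically on the grid
log_post = log_prior + log_lik
post = np.exp(log_post - log_post.max())
dk = k_grid[1] - k_grid[0]
post /= post.sum() * dk
k_mean = (k_grid * post).sum() * dk
print(f"posterior mean k = {k_mean:.3f} per hour")
```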
213

On perfect simulation and EM estimation

Larson, Kajsa, January 2010
Perfect simulation and the EM algorithm are the main topics in this thesis. In Paper I, we present coupling from the past (CFTP) algorithms that generate perfectly distributed samples from the multi-type Widom--Rowlinson (W--R) model and some generalizations of it. The classical W--R model is a point process in the plane or in space consisting of points of several different types. Points of different types are not allowed to be closer than some specified distance, whereas points of the same type can be arbitrarily close. A stick model and soft-core generalizations are also considered. Further, we generate samples without edge effects, and give a bound on sufficiently small intensities (of the points) for the algorithm to terminate. In Paper II, we consider the forestry problem of how to estimate seedling dispersal distributions and effective plant fecundities from spatial data on adult trees and seedlings, when the origins of the seedlings are unknown. Traditional models for fecundities build on allometric assumptions, where the fecundity is related to some characteristic of the adult tree (e.g. diameter). However, the allometric assumptions are generally too restrictive and lead to unrealistic estimates. Therefore we present a new model, the unrestricted fecundity (UF) model, which uses no allometric assumptions. We propose an EM algorithm to estimate the unknown parameters. Evaluations on real and simulated data indicate better performance for the UF model. In Paper III, we propose EM algorithms to estimate the passage time distribution on a graph. Data are obtained by observing a flow only at the nodes -- what happens on the edges is unknown. Therefore the sample of passage times, i.e. the times it takes for the flow to stream between two neighbors, consists of right-censored and uncensored observations, where it sometimes is unknown which is which. For discrete passage time distributions, we show that the maximum likelihood (ML) estimate is strongly consistent under certain weak conditions. We also show that our proposed EM algorithm converges to the ML estimate if the sample size is sufficiently large and the starting value is sufficiently close to the true parameter. In a special case we show that it always converges. In the continuous case, we propose an EM algorithm for fitting phase-type distributions to data.
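Coupling from the past is easiest to see on a toy finite-state Markov chain rather than the W--R point process of the thesis. The sketch below is a minimal CFTP implementation returning an exact draw from the stationary distribution of a small ergodic chain; the transition matrix is an invented example, not anything from the thesis.

```python
import random

def step(row_probs, u):
    """Inverse-CDF map: the same uniform u drives every copy of the chain."""
    acc = 0.0
    for j, p in enumerate(row_probs):
        acc += p
        if u < acc:
            return j
    return len(row_probs) - 1

def cftp(P, rng):
    """Coupling from the past: exact draw from the stationary distribution of P."""
    n = len(P)
    us = []                      # us[k] drives the transition at time -(k+1)
    T = 1
    while True:
        while len(us) < T:
            us.append(rng.random())
        states = list(range(n))  # start every possible state at time -T
        for t in range(T, 0, -1):
            u = us[t - 1]
            states = [step(P[s], u) for s in states]
        if len(set(states)) == 1:
            return states[0]     # all copies coalesced: exact stationary draw
        T *= 2                   # go further back, reusing the same randomness

P = [[0.5, 0.3, 0.2],
     [0.3, 0.4, 0.3],
     [0.2, 0.3, 0.5]]
rng = random.Random(42)
draws = [cftp(P, rng) for _ in range(5000)]
print([draws.count(s) / len(draws) for s in range(3)])  # ≈ stationary probabilities
```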
214

Bayesian Methods in Gaussian Graphical Models

Mitsakakis, Nikolaos, 31 August 2010
This thesis contributes to the field of Gaussian Graphical Models by exploring, both numerically and theoretically, various topics in Bayesian methods for Gaussian Graphical Models, and by providing a number of interesting results whose further exploration would be promising, pointing to numerous future research directions. Gaussian Graphical Models are statistical methods for the investigation and representation of interdependencies between components of continuous random vectors. This thesis aims to investigate some issues related to the application of Bayesian methods for Gaussian Graphical Models. We adopt the popular $G$-Wishart conjugate prior $W_G(\delta,D)$ for the precision matrix. We propose an efficient sampling method for the $G$-Wishart distribution based on the Metropolis-Hastings algorithm and show its validity through a number of numerical experiments. We show that this method can easily be used to estimate the Deviance Information Criterion, providing a computationally inexpensive approach to model selection. In addition, we look at the marginal likelihood of a graphical model given a set of data. This is proportional to the ratio of the posterior to the prior normalizing constant. We explore methods for the estimation of this ratio, focusing primarily on applying the Monte Carlo simulation method of path sampling. We also explore numerically the effect of the completion of the incomplete matrix $D^{\mathcal{V}}$, a hyperparameter of the $G$-Wishart distribution, on the estimation of the normalizing constant. We also derive a series of exact and approximate expressions for the Bayes Factor between two graphs that differ by one edge. A new theoretical result regarding the limit of the normalizing constant multiplied by the hyperparameter $\delta$ is given, and its implications for the validity of an improper prior and of the subsequent Bayes Factor are discussed.
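Path sampling, the method explored here for the ratio of normalizing constants, can be illustrated on a toy one-dimensional family where the answer is known in closed form; this is a generic illustration, not the $G$-Wishart computation itself. The sketch estimates $\log(Z_1/Z_0)$ along a linear path between two Gaussian unnormalized densities $q_t(x) = \exp(-a_t x^2/2)$, where in real problems the draws at each $t$ would come from MCMC rather than exact sampling.

```python
import numpy as np

rng = np.random.default_rng(1)
a0, a1 = 1.0, 0.25           # precisions of the two endpoint Gaussians
n_t, n_x = 50, 2000          # temperature grid size, draws per temperature

# thermodynamic integration: log(Z1/Z0) = ∫ E_t[ d/dt log q_t(x) ] dt
ts = (np.arange(n_t) + 0.5) / n_t                      # midpoint rule on (0, 1)
estimate = 0.0
for t in ts:
    a_t = (1 - t) * a0 + t * a1
    # exact draws from q_t are available here; in general this is an MCMC sample
    x = rng.normal(0.0, 1.0 / np.sqrt(a_t), size=n_x)
    estimate += np.mean(-0.5 * (a1 - a0) * x**2) / n_t

print(f"path sampling estimate: {estimate:.4f}, exact: {0.5 * np.log(a0 / a1):.4f}")
```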
216

Bayesian Inference in Structural Second-Price Auctions

Wegmann, Bertil, January 2011
The aim of this thesis is to develop efficient and practically useful Bayesian methods for statistical inference in structural second-price auctions. The models are applied to a carefully collected coin auction dataset with bids and auction-specific characteristics from one thousand Internet auctions on eBay. Bidders are assumed to be risk-neutral and symmetric, and to compete for a single object using the same game-theoretic strategy. A key contribution of the thesis is the derivation of very accurate approximations of the otherwise intractable equilibrium bid functions under different model assumptions. These easily computed and numerically stable approximations are shown to be crucial for statistical inference, where the inverse bid functions typically need to be evaluated several million times. In the first paper, the approximate bid is a linear function of a bidder's signal and a Gaussian common value model is estimated. We find that the publicly available book value and the condition of the auctioned object are important determinants of bidders' valuations, while eBay's detailed seller information is essentially ignored by the bidders. In the second paper, the Gaussian model in the first paper is contrasted with a Gamma model that allows intrinsically non-negative common values. The Gaussian model performs slightly better than the Gamma model on the eBay data, which we attribute to an almost normal or at least symmetrical distribution of valuations. The third paper compares the model in the first paper to a directly comparable model for private values. We find many interesting empirical regularities between the models, but no strong and consistent evidence in favor of one model over the other. In the last paper, we consider auctions with both private-value and common-value bidders. The equilibrium bid function is given as the solution to an ordinary differential equation, from which we derive an approximate inverse bid as an explicit function of a given bid. The paper proposes an elaborate model where the probability of being a common value bidder is a function of covariates at the auction level. The model is estimated by a Metropolis-within-Gibbs algorithm, and the results point strongly to an active influx of both private-value and common-value bidders.

At the time of the doctoral defense, the following papers were unpublished and had a status as follows: Paper 1: Epub ahead of print. Paper 2: Manuscript. Paper 3: Manuscript. Paper 4: Manuscript.
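The last paper's estimation uses a Metropolis-within-Gibbs algorithm; a generic skeleton of that scheme is sketched below, cycling through parameter blocks and updating each with a univariate random-walk Metropolis step. The block structure, step sizes, and placeholder target are assumptions for illustration, not the auction model itself.

```python
import numpy as np

def metropolis_within_gibbs(log_post, theta0, n_iter=5000, steps=None, seed=0):
    """Cycle through coordinates, updating each with a random-walk MH step."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float).copy()
    steps = np.full(theta.size, 0.3) if steps is None else steps
    chain = np.empty((n_iter, theta.size))
    logp = log_post(theta)
    for i in range(n_iter):
        for j in range(theta.size):      # one MH update per coordinate block
            prop = theta.copy()
            prop[j] += steps[j] * rng.standard_normal()
            logp_prop = log_post(prop)
            if np.log(rng.random()) < logp_prop - logp:
                theta, logp = prop, logp_prop
        chain[i] = theta
    return chain

# placeholder posterior: a correlated bivariate Gaussian
cov_inv = np.linalg.inv(np.array([[1.0, 0.8], [0.8, 1.0]]))
chain = metropolis_within_gibbs(lambda th: -0.5 * th @ cov_inv @ th, np.zeros(2))
print(chain.mean(axis=0))  # ≈ (0, 0)
```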
217

Automatic Markov Chain Monte Carlo Procedures for Sampling from Multivariate Distributions

Karawatzki, Roman; Leydold, Josef, January 2005
Generating samples from multivariate distributions efficiently is an important task in Monte Carlo integration and many other stochastic simulation problems. Markov chain Monte Carlo has been shown to be very efficient compared to "conventional methods", especially when many dimensions are involved. In this article we propose a Hit-and-Run sampler in combination with the Ratio-of-Uniforms method. We show that it is well suited as an algorithm for generating points from quite arbitrary distributions, including all log-concave distributions. The algorithm works automatically in the sense that only the mode (or an approximation of it) and an oracle are required, i.e., a subroutine that returns the value of the density function at any point x. We show that the number of evaluations of the density increases slowly with dimension. (Author's abstract.) Series: Preprint Series, Department of Applied Statistics and Data Processing.
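To make the combination concrete, here is a minimal sketch for a one-dimensional target: the ratio-of-uniforms region of the standard normal, A = {(u, v) : 0 < u ≤ √f(v/u)} with f(x) = exp(-x²/2), explored by hit-and-run, with each accepted point mapped back to a sample x = v/u. The bounding box constants are specific to this normal example, and the paper's actual algorithm and analysis are more general.

```python
import numpy as np

def sqrt_f(x):
    return np.exp(-x * x / 4.0)        # sqrt of the unnormalized N(0,1) density

def in_A(p):
    u, v = p                           # acceptance region 0 < u <= sqrt(f(v/u))
    return 0.0 < u <= sqrt_f(v / u)

lo = np.array([0.0, -0.86])            # bounding box of A for this target:
hi = np.array([1.0,  0.86])            # u_max = 1, |v|_max = sqrt(2)e^{-1/2}

def hit_and_run_rou(n_samples, seed=0):
    rng = np.random.default_rng(seed)
    p = np.array([0.5, 0.0])           # interior starting point
    xs = np.empty(n_samples)
    for i in range(n_samples):
        d = rng.standard_normal(2)
        d /= np.linalg.norm(d)         # uniform random direction
        # chord of the bounding box along direction d through p
        t = np.concatenate([(lo - p) / d, (hi - p) / d])
        lam_lo, lam_hi = t[t < 0].max(), t[t > 0].min()
        while True:                    # uniform point on chord ∩ A by rejection
            lam = rng.uniform(lam_lo, lam_hi)
            if in_A(p + lam * d):
                p = p + lam * d
                break
        xs[i] = p[1] / p[0]            # x = v/u is a draw from f (after burn-in)
    return xs

samples = hit_and_run_rou(20000)
print(samples.mean(), samples.var())   # ≈ 0 and ≈ 1 for the normal target
```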
218

Bayesian Analysis and Computational Methods for Dynamic Modeling

Niemi, Jarad, January 2009
Dynamic models, also termed state space models, comprise an extremely rich model class for time series analysis. This dissertation focuses on building state space models for a variety of contexts and on computationally efficient methods for Bayesian inference, allowing simultaneous estimation of latent states and unknown fixed parameters.

Chapter 1 introduces state space models and methods of inference in these models. Chapter 2 describes a novel method for jointly sampling the entire latent state vector in a nonlinear Gaussian state space model using a computationally efficient adaptive mixture modeling procedure. This method is embedded in an overall Markov chain Monte Carlo algorithm for estimating fixed parameters as well as states. In Chapter 3 the method of the previous chapter is implemented in a few illustrative nonlinear models and compared to standard existing methods. This chapter also looks at the effect of the number of mixture components as well as the length of the time series on the efficiency of the method. I then turn to a biological application in Chapter 4. I discuss modeling choices as well as the derivation of the state space model to be used in this application. Parameter and state estimation are analyzed in these models for both simulated and real data. Chapter 5 extends the methodology introduced in Chapter 2 from nonlinear Gaussian models to general state space models. The method is then applied to a financial stochastic volatility model on US dollar -- British pound exchange rates. Bayesian inference in the previous chapters is accomplished through Markov chain Monte Carlo, which is suitable for batch analyses but computationally limiting in sequential analysis. Chapter 6 introduces sequential Monte Carlo. It discusses two methods currently available for simultaneous sequential estimation of latent states and fixed parameters and then introduces a novel algorithm that reduces the key, limiting degeneracy issue while being usable in a wide model class. Chapter 7 implements the novel algorithm in a disease surveillance context, modeling influenza epidemics. Finally, Chapter 8 suggests areas for future work in both modeling and Bayesian inference. Several appendices provide detailed technical support material as well as relevant related work.
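The sequential Monte Carlo methods of Chapter 6 build on the bootstrap particle filter; a minimal version for a toy stochastic-volatility-style state space model is sketched below. The model, its parameters, and the simulated data are placeholders for illustration, not the dissertation's algorithms.

```python
import numpy as np

def bootstrap_filter(y, n_particles=1000, phi=0.95, sig=0.3, seed=0):
    """Bootstrap filter for x_t = phi*x_{t-1} + sig*eps_t, y_t ~ N(0, exp(x_t))."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, sig / np.sqrt(1 - phi**2), n_particles)  # stationary init
    means = np.empty(len(y))
    for t, yt in enumerate(y):
        x = phi * x + sig * rng.standard_normal(n_particles)     # propagate
        logw = -0.5 * (x + yt**2 * np.exp(-x))                   # log N(y; 0, e^x)
        w = np.exp(logw - logw.max())
        w /= w.sum()
        means[t] = w @ x                                         # filtered mean of x_t
        x = rng.choice(x, size=n_particles, p=w)                 # multinomial resample
    return means

# simulate data from the same toy model, then run the filter
rng = np.random.default_rng(1)
x_true = np.zeros(100)
for t in range(1, 100):
    x_true[t] = 0.95 * x_true[t - 1] + 0.3 * rng.standard_normal()
y = np.exp(x_true / 2) * rng.standard_normal(100)
print(bootstrap_filter(y)[:5])  # filtered means of the log-volatility
```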
219

Comparison Of Missing Value Imputation Methods For Meteorological Time Series Data

Aslan, Sipan, 01 September 2010
Dealing with missing data in spatio-temporal time series constitutes an important branch of the general missing data problem. Since the statistical properties of time-dependent data are characterized by the sequentiality of observations, any interruption of consecutiveness in a time series will cause severe problems. In order to make reliable analyses in this case, missing data must be handled cautiously, without disturbing the statistical properties of the series, mainly its temporal and spatial dependencies. In this study we aim to compare several imputation methods for the appropriate completion of missing values in spatio-temporal meteorological time series. For this purpose, several imputation methods are assessed on their performance for artificially created missing data in monthly total precipitation and monthly mean temperature series obtained from the climate stations of the Turkish State Meteorological Service. The artificially created missing data are estimated using six methods. Single Arithmetic Average (SAA), Normal Ratio (NR) and NR Weighted with Correlations (NRWC) are the three simple methods used in the study. On the other hand, we use two computationally intensive methods for missing data imputation, namely the Multi Layer Perceptron type Neural Network (MLPNN) and Markov chain Monte Carlo based on the Expectation-Maximization algorithm (EM-MCMC). In addition to these, we propose a modification of the EM-MCMC method in which the results of the simple imputation methods are used as auxiliary variables. Besides using an accuracy measure based on squared errors, we propose the Correlation Dimension (CD) technique for an appropriate evaluation of imputation performance, which is also an important subject of nonlinear dynamic time series analysis.
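The two simplest estimators compared in the study have compact closed forms. The sketch below implements the standard Single Arithmetic Average and Normal Ratio methods for one missing observation at a target station; the station values are invented example numbers.

```python
import numpy as np

def single_arithmetic_average(neighbor_obs):
    """SAA: impute as the plain mean of the simultaneous neighbor observations."""
    return np.mean(neighbor_obs)

def normal_ratio(neighbor_obs, neighbor_normals, target_normal):
    """NR: scale each neighbor by the ratio of long-term (normal) means."""
    neighbor_obs = np.asarray(neighbor_obs, dtype=float)
    neighbor_normals = np.asarray(neighbor_normals, dtype=float)
    return np.mean(target_normal / neighbor_normals * neighbor_obs)

# invented monthly precipitation values (mm) at three neighboring stations
obs = [42.0, 55.0, 38.0]       # this month's observations at the neighbors
normals = [50.0, 60.0, 45.0]   # long-term monthly means at the neighbors
target_mean = 52.0             # long-term monthly mean at the target station

print(single_arithmetic_average(obs))           # 45.0
print(normal_ratio(obs, normals, target_mean))  # ≈ 45.1
```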
220

Additive Latent Variable (ALV) Modeling: Assessing Variation in Intervention Impact in Randomized Field Trials

Toyinbo, Peter Ayo, 23 October 2009
In order to personalize or tailor treatments to maximize impact among different subgroups, there is a need to model not only the main effects of an intervention but also the variation in intervention impact by baseline individual-level risk characteristics. To this end, a suitable statistical model will allow researchers to answer a major research question: who benefits from, or is harmed by, this intervention program? Commonly in social and psychological research, the baseline risk may be unobservable and has to be estimated from observed indicators that are measured with error; it may also have a nonlinear relationship with the outcome. Most of the existing nonlinear structural equation models (SEMs) developed to address such problems employ polynomial or fully parametric nonlinear functions to define the structural equations. These methods are limited because they require the functional forms to be specified beforehand, and even if the models include higher-order polynomials, there may be problems when the focus of interest relates to the function over its whole domain. The goal of this work is to develop a more flexible statistical modeling technique for assessing complex relationships between a proximal/distal outcome and 1) baseline characteristics measured with error, and 2) the baseline-treatment interaction, such that the shapes of these relationships are data-driven and need not be determined a priori. In the ALV model structure, the nonlinear components of the regression equations are represented as a generalized additive model (GAM) or a generalized additive mixed-effects model (GAMM). Replication study results show that the ALV model estimates of the underlying relationships in the data are sufficiently close to the true pattern. The ALV modeling technique allows researchers to assess how an intervention affects individuals differently as a function of baseline risk that is itself measured with error, and to uncover complex relationships in the data that might otherwise be missed. Although the ALV approach is computationally intensive, it relieves its users of the need to decide functional forms before the model is run. It can be extended to examine complex nonlinearity between growth factors and distal outcomes in a longitudinal study.
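The flavor of a data-driven baseline-by-treatment interaction can be illustrated with a small spline-basis regression: a smooth function of baseline risk interacted with a treatment indicator. This is a generic sketch under invented data, not the ALV estimator itself, which additionally handles measurement error in the baseline risk.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
risk = rng.uniform(0, 1, n)                 # baseline risk (here observed directly)
trt = rng.integers(0, 2, n)                 # randomized treatment indicator
# invented truth: treatment helps low-risk and harms high-risk individuals
y = np.sin(2 * np.pi * risk) + trt * (1.0 - 2.0 * risk) + rng.normal(0, 0.3, n)

def spline_basis(x, knots):
    """Truncated-power cubic spline basis: 1, x, x^2, x^3, (x - k)_+^3."""
    cols = [np.ones_like(x), x, x**2, x**3]
    cols += [np.clip(x - k, 0, None) ** 3 for k in knots]
    return np.column_stack(cols)

knots = np.linspace(0.1, 0.9, 5)
B = spline_basis(risk, knots)
# additive structure: smooth main effect of risk + treatment-specific smooth
X = np.hstack([B, trt[:, None] * B])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# estimated treatment effect as a smooth function of baseline risk
grid = np.linspace(0, 1, 11)
effect = spline_basis(grid, knots) @ beta[B.shape[1]:]
print(np.round(effect, 2))                  # ≈ 1 - 2*risk on the grid
```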
