1 |
Objective Bayesian analysis of Kriging models with anisotropic correlation kernel. Muré, Joseph, 05 October 2018
A recurring problem in surrogate modelling is the scarcity of available data, which hinders efforts to estimate model parameters. The Bayesian paradigm offers an elegant way to circumvent the problem by describing knowledge of the parameters by a posterior probability distribution instead of a pointwise estimate. However, it involves defining a prior distribution on the parameters. In the absence of expert opinion, finding an adequate prior can be a trying exercise. The Objective Bayesian school proposes default priors for such situations, like the Berger-Bernardo reference prior. Such a prior was derived by Berger, De Oliveira and Sansó [2001] for the Kriging surrogate model with an isotropic covariance kernel. Directly extending it to anisotropic kernels poses theoretical as well as practical problems, because the reference prior framework requires ordering the parameters, and any ordering would in this case be arbitrary. Instead, we propose an Objective Bayesian solution for Kriging models with anisotropic covariance kernels based on conditional reference posterior distributions. This solution is made possible by a theory of compromise between incompatible conditional distributions. The approach is also shown to be compatible with Trans-Gaussian Kriging. It is applied to an industrial case with nonstationary data in order to derive Probability Of defect Detection (POD) by non-destructive tests in steam generator tubes of nuclear power plants.
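To make the model under discussion concrete, here is a minimal sketch (not taken from the thesis) of simple Kriging with an anisotropic squared-exponential kernel, i.e. one correlation length per input dimension. The function names, the nugget term, and the fixed kernel parameters are assumptions for illustration only; the thesis places a prior on those parameters rather than fixing them as done here:

```python
import numpy as np

def anisotropic_se_kernel(X1, X2, length_scales, variance=1.0):
    # Squared-exponential kernel with one length scale per input
    # dimension: the anisotropic case discussed above.
    d = (X1[:, None, :] - X2[None, :, :]) / length_scales
    return variance * np.exp(-0.5 * np.sum(d ** 2, axis=-1))

def kriging_predict(X, y, Xnew, length_scales, nugget=1e-6):
    # Zero-mean simple Kriging with *fixed* kernel parameters; a full
    # Bayesian treatment would integrate the parameters out under a prior.
    K = anisotropic_se_kernel(X, X, length_scales) + nugget * np.eye(len(X))
    k = anisotropic_se_kernel(Xnew, X, length_scales)
    alpha = np.linalg.solve(K, y)           # K^{-1} y
    v = np.linalg.solve(K, k.T)             # K^{-1} k^T
    mean = k @ alpha
    var = (anisotropic_se_kernel(Xnew, Xnew, length_scales).diagonal()
           - np.einsum('ij,ji->i', k, v))
    return mean, var

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(20, 2))
# Dimension 0 drives most of the variation: an anisotropic response.
y = np.sin(3.0 * X[:, 0]) + 0.1 * X[:, 1]
mean, var = kriging_predict(X, y, X[:2], length_scales=np.array([0.3, 1.0]))
```

With a small nugget, the predictor nearly interpolates the training data, and the predictive variance collapses at observed points.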
|
2 |
Noninformative Prior Bayesian Analysis for Statistical Calibration Problems. Eno, Daniel R., 24 April 1999
In simple linear regression, it is assumed that two variables are linearly related, with unknown intercept and slope parameters. In particular, a regressor variable is assumed to be precisely measurable, and a response is assumed to be a random variable whose mean depends on the regressor via a linear function. For the simple linear regression problem, interest typically centers on estimation of the unknown model parameters, and perhaps on applying the resulting estimated linear relationship to predict future response values corresponding to given regressor values. The linear statistical calibration problem (or, more precisely, the absolute linear calibration problem) bears a resemblance to simple linear regression. It is still assumed that the two variables are linearly related, with unknown intercept and slope parameters. However, in calibration, interest centers on estimating an unknown value of the regressor corresponding to an observed value of the response variable.
We consider Bayesian methods of analysis for the linear statistical calibration problem, based on noninformative priors. Posterior analyses are assessed and compared with classical inference procedures. It is shown that noninformative prior Bayesian analysis is a strong competitor, yielding posterior inferences that can, in many cases, be correctly interpreted in a frequentist context.
We also consider extensions of the linear statistical calibration problem to polynomial models and multivariate regression models. For these models, noninformative priors are developed, and posterior inferences are derived. The results are illustrated with analyses of published data sets. In addition, a certain type of heteroscedasticity is considered, which relaxes the traditional assumptions made in the analysis of a statistical calibration problem. It is shown that the resulting analysis can yield more reliable results than an analysis of the homoscedastic model. / Ph. D.
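To fix ideas on the calibration step, the following is a simplified sketch, not the thesis's method: it conditions on least-squares point estimates of the intercept, slope, and error scale instead of integrating them out under a noninformative prior, and it approximates the posterior of the unknown regressor x0 on a grid under a flat prior. All names and simulated values are assumptions for the example:

```python
import numpy as np

# Simulated calibration data: known regressor x, observed response y.
rng = np.random.default_rng(1)
a_true, b_true, sigma = 2.0, 3.0, 0.5
x = np.linspace(0.0, 10.0, 30)
y = a_true + b_true * x + rng.normal(0.0, sigma, x.size)

# Least-squares fit; a noninformative-prior Bayesian analysis would
# integrate (a, b, sigma) out rather than plugging in estimates.
b_hat, a_hat = np.polyfit(x, y, 1)
resid = y - (a_hat + b_hat * x)
s = np.sqrt(resid @ resid / (x.size - 2))

# Calibration: observe a new response y0, infer the unknown x0
# under a flat prior on x0 (grid approximation to the posterior).
y0 = a_true + b_true * 4.0          # true x0 = 4.0, unknown to the analyst
grid = np.linspace(0.0, 10.0, 2001)
log_post = -0.5 * ((y0 - (a_hat + b_hat * grid)) / s) ** 2
post = np.exp(log_post - log_post.max())
post /= post.sum()
x0_mean = grid @ post               # posterior mean of x0
```

The posterior over x0 concentrates near (y0 - a_hat) / b_hat, and its spread reflects the residual error scale divided by the slope.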
|
3 |
Some Bayesian Methods in the Estimation of Parameters in the Measurement Error Models and Crossover Trial. Wang, Guojun, 31 March 2004
No description available.
|
4 |
Estimation of the Binomial parameter: in defence of Bayes (1763). Tuyl, Frank Adrianus Wilhelmus Maria, January 2007
Research Doctorate - Doctor of Philosophy (PhD) / Interval estimation of the Binomial parameter θ, representing the true probability of a success, is a problem of long standing in statistical inference. The landmark work is by Bayes (1763), who applied the uniform prior to derive the Beta posterior that is the normalised Binomial likelihood function. It is not well known that Bayes favoured this ‘noninformative’ prior as a result of considering the observable random variable x as opposed to the unknown parameter θ, which is an important difference. In this thesis we develop additional arguments in favour of the uniform prior for estimation of θ. We start by describing the frequentist and Bayesian approaches to interval estimation. It is well known that for common continuous models, while different in interpretation, frequentist and Bayesian intervals are often identical, which is directly related to the existence of a pivotal quantity. The Binomial model, and also its Poisson sister, lacks a pivotal quantity, despite having sufficient statistics. Lack of a pivotal quantity, more so than discreteness, is the reason why there is no consensus on one particular estimation method: frequentist (unconditional) coverage depends on θ. Exact methods guarantee minimum coverage to be at least equal to nominal, and approximate methods aim for mean coverage to be close to nominal. We agree with what seems like the majority of frequentists that exact methods are too conservative in practice, and we show that they have additional undesirable properties. This includes more recent ‘short’ exact intervals. We argue that Bayesian intervals based on noninformative priors are preferable to the family of frequentist approximate intervals, some of which are wider than exact intervals for particular data values. A particular property of the interval based on the uniform prior is that its mean coverage is exactly equal to nominal.
However, once committed to the Bayesian approach there is no denying that the current preferred choice, by ‘objective’ Bayesians, is the U-shaped Jeffreys prior which results from various methods aimed at finding noninformative priors. The most successful such method seems to be reference analysis which has led to sensible priors in previously unsolved problems, concerning multiparameter models that include ‘nuisance’ parameters. However, we argue that there is a class of models for which the Jeffreys/reference prior may be suboptimal and that in the case of the Binomial distribution the requirement of a uniform prior predictive distribution leads to a more reasonable ‘consensus’ prior.
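The uniform-prior interval and its mean-coverage property can be checked numerically. This sketch (illustrative, with hypothetical function names) builds the equal-tailed Beta(x+1, n-x+1) credible interval and averages its frequentist coverage over a grid of θ values; the average should come out at the nominal level, as the abstract states:

```python
import numpy as np
from scipy import stats

def uniform_prior_interval(x, n, level=0.95):
    # Equal-tailed credible interval for the Binomial parameter theta
    # under the uniform prior of Bayes (1763): posterior Beta(x+1, n-x+1).
    post = stats.beta(x + 1, n - x + 1)
    lo = (1.0 - level) / 2.0
    return post.ppf(lo), post.ppf(1.0 - lo)

def mean_coverage(n, level=0.95, grid_size=2000):
    # Frequentist coverage of the interval, averaged over theta on a
    # midpoint grid: the 'mean coverage' discussed in the abstract.
    thetas = (np.arange(grid_size) + 0.5) / grid_size
    xs = np.arange(n + 1)
    los, his = zip(*(uniform_prior_interval(x, n, level) for x in xs))
    los, his = np.array(los), np.array(his)
    pmf = stats.binom.pmf(xs[None, :], n, thetas[:, None])   # (grid, n+1)
    inside = (los[None, :] <= thetas[:, None]) & (thetas[:, None] <= his[None, :])
    return (pmf * inside).sum(axis=1).mean()

cov = mean_coverage(20)
```

For each fixed θ the coverage oscillates with the usual Binomial discreteness, but its average over θ equals the nominal 95% up to grid error.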
|
5 |
Approximation of improper priors and applications. Bioche, Christèle, 27 November 2015
The purpose of this thesis is to study the approximation of improper priors by proper priors. We define a convergence mode on the strictly positive Radon measures for which a sequence of probability measures can converge to an improper limiting measure. This convergence mode, called q-vague convergence, is independent of the statistical model. It explains the origin of the Jeffreys-Lindley paradox. Then, we focus on the estimation of the size of a population. We consider the removal sampling model. We give necessary and sufficient conditions on the hyperparameters of a certain class of priors in order to obtain proper posterior distributions and well-defined estimators of abundance. In the light of q-vague convergence, we show that the use of vague priors is not appropriate in removal sampling, since the estimates obtained depend crucially on the hyperparameters.
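An elementary instance of approximating an improper prior by proper ones, chosen here purely for illustration (q-vague convergence is a general statement about Radon measures, not about this specific model): in the conjugate normal model y_i ~ N(μ, 1) with prior μ ~ N(0, τ²), letting τ grow flattens the proper priors towards the improper uniform prior on μ, and the posterior mean approaches the flat-prior answer, the sample mean:

```python
import numpy as np

def posterior_mean(ybar, n, tau):
    # Conjugate normal model y_i ~ N(mu, 1), prior mu ~ N(0, tau^2):
    # the posterior mean is a precision-weighted combination of the
    # prior mean (0) and the sample mean ybar.
    return (n * ybar) / (n + 1.0 / tau ** 2)

ybar, n = 2.0, 10
# As tau grows, the proper priors N(0, tau^2) flatten out and the
# posterior mean climbs towards the flat-prior limit, ybar itself.
means = [posterior_mean(ybar, n, tau) for tau in (0.1, 1.0, 10.0, 1000.0)]
```

Here the limit is benign, but the abstract's point about removal sampling is precisely that such limits cannot be taken for granted: vague proper priors can leave the estimates hostage to the hyperparameters.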
|