1

Comparison of Bayesian and frequentist approaches / Srovnání bayesovského a četnostního přístupu

Ageyeva, Anna January 2010
The thesis deals with the Bayesian approach to statistics and its comparison with the frequentist approach. The main aim is to compare the two approaches by analyzing their statistical inferences and examining the question of subjectivity and objectivity in statistics. Another goal is to draw attention to the importance of teaching Bayesian statistics more thoroughly at our university. The first chapter presents the Bayesian approach to statistics and its main notions and principles. Statistical inference is treated in the second chapter. The third chapter compares the Bayesian and frequentist approaches, and the final chapter concerns the place of the Bayesian approach in contemporary science. The appendix contains a list of Bayesian textbooks and free Bayesian software.
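
As a minimal illustration of the contrast the thesis draws (a sketch on invented data, not an analysis from the thesis), the two paradigms can be compared on the simplest possible problem, estimating a success probability: the frequentist answer is a point estimate with a confidence interval, while the Bayesian answer is a posterior distribution obtained by combining a Beta prior with the binomial likelihood.

```python
import numpy as np
from scipy import stats

# Observed data: 7 successes out of 20 trials (illustrative numbers only).
successes, trials = 7, 20

# Frequentist: maximum-likelihood estimate and a Wald 95% confidence interval.
p_hat = successes / trials
se = np.sqrt(p_hat * (1 - p_hat) / trials)
wald_ci = (p_hat - 1.96 * se, p_hat + 1.96 * se)

# Bayesian: a Beta(1, 1) (uniform) prior combined with the binomial likelihood
# gives a Beta(1 + successes, 1 + failures) posterior.
posterior = stats.beta(1 + successes, 1 + trials - successes)
post_mean = posterior.mean()
cred_int = posterior.interval(0.95)          # 95% equal-tailed credible interval

print(f"MLE: {p_hat:.3f}, 95% CI: ({wald_ci[0]:.3f}, {wald_ci[1]:.3f})")
print(f"Posterior mean: {post_mean:.3f}, 95% credible interval: "
      f"({cred_int[0]:.3f}, {cred_int[1]:.3f})")
```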
2

Model Uncertainty & Model Averaging Techniques

Amini Moghadam, Shahram 24 August 2012
The primary aim of this research is to shed more light on the issue of model uncertainty in applied econometrics in general, and in cross-country growth as well as happiness and well-being regressions in particular. Model uncertainty consists of three main types: theory uncertainty, focusing on which principal determinants of economic growth or happiness should be included in a model; heterogeneity uncertainty, relating to whether or not the parameters that describe growth or happiness are identical across countries; and functional form uncertainty, relating to which growth and well-being regressors enter the model linearly and which ones enter nonlinearly. Model averaging methods, including Bayesian model averaging and Frequentist model averaging, are the main statistical tools that incorporate theory uncertainty into the estimation process. To address functional form uncertainty, a variety of techniques have been proposed in the literature. One suggestion, for example, involves adding regressors that are nonlinear functions of the initial set of theory-based regressors, or adding regressors whose values are zero below some threshold and non-zero above that threshold. In recent years, however, there has been rising interest in using a nonparametric framework to address nonlinearities in growth and happiness regressions. The goal of this research is twofold. First, while Bayesian approaches are the dominant methods used in economic empirics to average over the model space, I take a fresh look at Frequentist model averaging techniques and propose statistical routines that computationally ease the implementation of these methods. I provide empirical examples showing that Frequentist estimators can compete with their Bayesian peers. The second objective is to use recently developed nonparametric techniques to overcome the issue of functional form uncertainty while analyzing the variance of the distribution of per capita income. The nonparametric paradigm allows for addressing nonlinearities in growth and well-being regressions by relaxing both the functional form assumptions and the traditional assumptions on the structure of the error terms. / Ph. D.
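
A rough sketch of the frequentist model averaging idea described above, using simulated data and smoothed-AIC weights (one standard FMA weighting scheme; the dissertation's own routines are not reproduced here): every subset of candidate regressors is fitted and the coefficient of interest is averaged across models with weights proportional to exp(-AIC/2).

```python
import numpy as np
import statsmodels.api as sm
from itertools import combinations

rng = np.random.default_rng(0)

# Simulated "growth" data: 3 candidate regressors, only two of which matter.
n = 120
X = rng.normal(size=(n, 3))
y = 1.0 + 0.8 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=1.0, size=n)

# Enumerate every subset of regressors as a candidate model.
results = []
for k in range(0, 4):
    for subset in combinations(range(3), k):
        cols = list(subset)
        Xs = sm.add_constant(X[:, cols]) if cols else np.ones((n, 1))
        results.append((subset, sm.OLS(y, Xs).fit()))

# Smoothed-AIC weights: w_m proportional to exp(-AIC_m / 2), normalised
# over the model space (subtracting the minimum for numerical stability).
aic = np.array([fit.aic for _, fit in results])
w = np.exp(-(aic - aic.min()) / 2)
w /= w.sum()

# Model-averaged coefficient for regressor 0 (zero when it is excluded).
beta0 = np.array([
    fit.params[list(subset).index(0) + 1] if 0 in subset else 0.0
    for subset, fit in results
])
print("FMA estimate of beta_0:", np.sum(w * beta0))
```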
3

Parametric Resampling Methods for Retrospective Changepoint Analysis

Duggins, Jonathan William 07 July 2010
Changepoint analysis is a useful tool in environmental statistics in that it provides a methodology for threshold detection and for modeling processes subject to periodic changes in the underlying model due to anthropogenic effects or natural phenomena. Several applications of changepoint analysis are investigated here. The use of inappropriate changepoint detection methods is first discussed, the need for a simple, flexible, correct method is established, and such a method is proposed for the mean-shift model. Data from the Everglades, Florida, USA are used to showcase the methodology in a real-world setting. An extension to the case of time-series data represented via transition matrices is presented as a result of joint work with Matt Williams (Department of Statistics, Virginia Tech), and rainfall data from Kenya are presented as a case study. Finally, the multivariate changepoint problem is addressed by a two-stage approach beginning with dimension reduction via principal component analysis (PCA). After the dimension reduction step, the location of the changepoint in principal component space is estimated and, assuming at most one change in a mean-shift setting, all possible sub-models are investigated. / Ph. D.
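
A minimal sketch of the at-most-one-change, mean-shift setting on simulated data (not the Everglades or Kenya series): scan every candidate split point and keep the one that most reduces the residual sum of squares; significance could then be assessed by parametric resampling from a fitted no-change model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated series with a mean shift from 0 to 2 after observation 60.
y = np.concatenate([rng.normal(0.0, 1.0, 60), rng.normal(2.0, 1.0, 40)])

def amoc_mean_shift(y):
    """At-most-one-change estimator for a shift in mean (least-squares scan)."""
    n = len(y)
    total_ss = np.sum((y - y.mean()) ** 2)
    best_tau, best_ss = None, total_ss
    for tau in range(1, n):                  # split the series into y[:tau] and y[tau:]
        left, right = y[:tau], y[tau:]
        ss = np.sum((left - left.mean()) ** 2) + np.sum((right - right.mean()) ** 2)
        if ss < best_ss:
            best_tau, best_ss = tau, ss
    return best_tau, total_ss - best_ss      # estimated location and RSS reduction

tau_hat, gain = amoc_mean_shift(y)
print(f"Estimated changepoint after observation {tau_hat} (RSS reduction {gain:.1f})")
```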
4

Comparison of Different Methods for Estimating Log-normal Means

Tang, Qi 01 May 2014
The log-normal distribution is a popular model in many areas, especially in biostatistics and survival analysis, where the data tend to be right skewed. In our research, a total of ten different estimators of the log-normal mean are compared theoretically. Simulations are done using different parameter values and sample sizes. The comparison shows that a "degree-of-freedom adjusted" maximum likelihood estimator and a Bayesian estimator under quadratic loss are the best when the mean square error (MSE) is used as the criterion. The ten estimators are applied to a real dataset, an environmental study from the Naval Construction Battalion Center (NCBC) Superfund site in Rhode Island.
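
For context, a sketch (with simulated data; the thesis's ten estimators are not reproduced) of two of the simplest candidates for the log-normal mean: the plain sample mean versus the plug-in maximum-likelihood estimator exp(mu_hat + sigma_hat^2 / 2).

```python
import numpy as np

rng = np.random.default_rng(2)

mu, sigma = 1.0, 0.8
true_mean = np.exp(mu + sigma**2 / 2)        # E[X] for X ~ LogNormal(mu, sigma^2)

x = rng.lognormal(mean=mu, sigma=sigma, size=50)

# Estimator 1: plain sample mean of the log-normal observations.
sample_mean = x.mean()

# Estimator 2: plug-in MLE, exp(mu_hat + sigma2_hat / 2), where mu_hat and
# sigma2_hat are the sample mean and ML variance of log(x).
logx = np.log(x)
mu_hat = logx.mean()
sigma2_hat = logx.var()                      # ML version (divides by n)
mle_mean = np.exp(mu_hat + sigma2_hat / 2)

print(f"True mean {true_mean:.3f} | sample mean {sample_mean:.3f} | "
      f"MLE-based {mle_mean:.3f}")
```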
5

Bayesovská statistika - limity a možnosti využití v sociologii / Bayesian Statistics - Limits and its Application in Sociology

Krčková, Anna January 2014
The purpose of this thesis is to find out how Bayesian statistics can be used in the analysis of sociological data and to compare the outcomes of the frequentist and Bayesian approaches. Bayesian statistics places probability distributions on statistical parameters. At the beginning of a Bayesian analysis, a prior probability (chosen on the basis of relevant information) is attached to the parameters. Combining the prior probability with the observed data yields the posterior probability, from which statistical conclusions are drawn. The approaches were compared both in terms of their theoretical foundations and procedures and by means of an analysis of sociological data. Point estimates, interval estimates, hypothesis testing (on the example of a two-sample t-test) and multiple linear regression analysis were compared. The outcome of this thesis is that, given its philosophy and its interpretational simplicity, Bayesian analysis is more suitable for sociological data analysis than the common frequentist approach. The comparison showed that there is no difference between the outcomes of frequentist and objective Bayesian analysis regardless of the sample size. For hypothesis testing we can use Bayesian credible intervals. Using subjective Bayesian analysis on...
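
The thesis's finding that objective Bayesian and frequentist results coincide can be illustrated with a small sketch on simulated data (not the sociological survey analysed in the thesis): with flat priors and a normal approximation, the 95% credible interval for a difference in means essentially reproduces the frequentist confidence interval.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Two simulated groups (stand-ins for two survey subsamples).
a = rng.normal(5.0, 2.0, 80)
b = rng.normal(5.8, 2.0, 90)

# Frequentist: Welch two-sample t-test and a normal-approximation 95% interval
# for the difference in means.
t_res = stats.ttest_ind(a, b, equal_var=False)
diff = a.mean() - b.mean()
se = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
ci = (diff - 1.96 * se, diff + 1.96 * se)

# "Objective" Bayes sketch: flat priors on both means and a normal approximation
# give a posterior for the difference centred at the sample difference with the
# same standard error, so the credible interval essentially matches the CI.
posterior = stats.norm(loc=diff, scale=se)
cred = posterior.interval(0.95)

print(f"t = {t_res.statistic:.2f}, p = {t_res.pvalue:.4f}")
print(f"95% CI: ({ci[0]:.3f}, {ci[1]:.3f}); "
      f"95% credible interval: ({cred[0]:.3f}, {cred[1]:.3f})")
```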
6

Estimation of the Effects of Parental Measures on Child Aggression Using Structural Equation Modeling

Pyper, Jordan Daniel 08 June 2012
A child's parents are the primary source of knowledge and learned behaviors for developing children, and the benefits or repercussions of certain parental practices can be long-lasting. Although parenting practices affect behavioral outcomes for children, families tend to be diverse in their circumstances and needs. Research attempting to ascertain cause and effect relationships between parental influences and child behavior can be difficult due to the complex nature of family dynamics and the intricacies of real life. Structural equation modeling (SEM) is an appropriate method for this research as it is able to account for the complicated nature of child-parent relationships. Both Frequentist and Bayesian methods are used to estimate the effect of latent parental behavior variables on child aggression and anxiety in order to allow for comparison and contrast between the two statistical paradigms in the context of structural equation modeling. Estimates produced from both methods prove to be comparable, but subtle differences do exist in those coefficients and in the conclusions at which a researcher would arrive. Although model estimates between the two paradigms generally agree, they diverge in the model selection process. The mother's behaviors are estimated to be the most influential on child aggression, while the influence of the father, socio-economic status, parental involvement, and the relationship quality of the couple also prove to be significant in predicting child aggression.
7

Generalized Empirical Bayes: Theory, Methodology, and Applications

Fletcher, Douglas January 2019
The two key issues of modern Bayesian statistics are: (i) establishing a principled approach for distilling a statistical prior distribution that is consistent with the given data from an initial believable scientific prior; and (ii) developing a consolidated Bayes-frequentist data analysis workflow that is more effective than either of the two separately. In this thesis, we propose generalized empirical Bayes as a new framework for exploring these fundamental questions, along with a wide range of applications spanning fields as diverse as clinical trials, metrology, insurance, medicine, and ecology. Our research marks a significant step towards bridging the "gap" between the Bayesian and frequentist schools of thought that has plagued statisticians for over 250 years. Chapters 1 and 2, based on Mukhopadhyay and Fletcher (2018), introduce the core theory and methods of the proposed generalized empirical Bayes (gEB) framework, which solves a long-standing puzzle of modern Bayes originally posed by Herbert Robbins (1980). One of the main contributions of this research is to introduce and study a new class of nonparametric priors DS(G, m) that allows exploratory Bayesian modeling. At a practical level, the major advantages of our proposal are: (i) computational ease (it does not require Markov chain Monte Carlo (MCMC), variational methods, or any other sophisticated computational techniques); (ii) simplicity and interpretability of the underlying theoretical framework, which is general enough to include almost all commonly encountered models; and (iii) easy integration with mainstream Bayesian analysis, which makes it readily applicable to a wide range of problems. Connections with other Bayesian cultures are also presented. Chapter 3 approaches the topic of measurement uncertainty from a new angle by introducing the foundations of nonparametric meta-analysis; the proposed methodology is applied to real data examples from astronomy, physics, and medicine. Chapter 4 discusses further extensions and applications of the theory to distributed big-data modeling and the missing-species problem. The dissertation concludes by highlighting two important areas of future work: a full Bayesian implementation workflow and potential applications in cybersecurity. / Statistics
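
The gEB framework itself is not reproduced here; the following sketch only shows the classical parametric empirical Bayes idea it generalizes, on simulated normal means: the prior variance is estimated from the marginal distribution of the data and each observation is shrunk toward the grand mean.

```python
import numpy as np

rng = np.random.default_rng(4)

# Many parallel "studies": true effects theta_i ~ N(0, tau^2), observed with unit noise.
k, tau = 200, 1.5
theta = rng.normal(0.0, tau, k)
x = theta + rng.normal(0.0, 1.0, k)          # x_i | theta_i ~ N(theta_i, 1)

# Parametric empirical Bayes: estimate the prior variance from the marginal
# relation Var(x) = tau^2 + 1, then shrink each observation toward the grand mean.
tau2_hat = max(x.var(ddof=1) - 1.0, 0.0)
shrink = tau2_hat / (tau2_hat + 1.0)         # posterior weight placed on the data
eb = x.mean() + shrink * (x - x.mean())

mse_raw = np.mean((x - theta) ** 2)
mse_eb = np.mean((eb - theta) ** 2)
print(f"MSE of raw estimates {mse_raw:.3f} vs empirical-Bayes estimates {mse_eb:.3f}")
```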
8

Noninformative Prior Bayesian Analysis for Statistical Calibration Problems

Eno, Daniel R. 24 April 1999
In simple linear regression, it is assumed that two variables are linearly related, with unknown intercept and slope parameters. In particular, a regressor variable is assumed to be precisely measurable, and a response is assumed to be a random variable whose mean depends on the regressor via a linear function. For the simple linear regression problem, interest typically centers on estimation of the unknown model parameters, and perhaps application of the resulting estimated linear relationship to make predictions about future response values corresponding to given regressor values. The linear statistical calibration problem (or, more precisely, the absolute linear calibration problem), bears a resemblance to simple linear regression. It is still assumed that the two variables are linearly related, with unknown intercept and slope parameters. However, in calibration, interest centers on estimating an unknown value of the regressor, corresponding to an observed value of the response variable. We consider Bayesian methods of analysis for the linear statistical calibration problem, based on noninformative priors. Posterior analyses are assessed and compared with classical inference procedures. It is shown that noninformative prior Bayesian analysis is a strong competitor, yielding posterior inferences that can, in many cases, be correctly interpreted in a frequentist context. We also consider extensions of the linear statistical calibration problem to polynomial models and multivariate regression models. For these models, noninformative priors are developed, and posterior inferences are derived. The results are illustrated with analyses of published data sets. In addition, a certain type of heteroscedasticity is considered, which relaxes the traditional assumptions made in the analysis of a statistical calibration problem. It is shown that the resulting analysis can yield more reliable results than an analysis of the homoscedastic model. / Ph. D.
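
A sketch of the linear calibration setup on simulated data (the noninformative-prior Bayesian posterior for the unknown regressor derived in the thesis is not reproduced; only the classical point estimate it is compared against is shown): fit the calibration line, then invert it at a newly observed response.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)

# Calibration experiment: precisely known regressor values x, noisy responses y.
x = np.linspace(0, 10, 25)
y = 2.0 + 1.5 * x + rng.normal(scale=0.7, size=x.size)

fit = sm.OLS(y, sm.add_constant(x)).fit()
a, b = fit.params                            # estimated intercept and slope

# Calibration step: a new response is observed and the unknown regressor value
# that produced it is estimated by inverting the fitted line (classical estimator).
y_new = 11.0
x0_hat = (y_new - a) / b
print(f"Classical calibration estimate of x0: {x0_hat:.3f}")
```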
9

Modelování Výnosů Akcií s Ohledem na Nejistotu: Frekventistická Průměrovací Metoda / Stock Return Predictability and Model Uncertainty: A Frequentist Model Averaging Approach

Pacák, Vojtěch January 2019
Model uncertainty is a phenomenon in which there is no general consensus about the form of a specific model. Stock returns meet this condition perfectly, as an extensive literature offers diverse methods and potential drivers without a clear winner among them. Relatively recently, averaging techniques have emerged as a possible solution to such scenarios. The two major averaging branches, Bayesian model averaging (BMA) and Frequentist model averaging (FMA), naturally deal with uncertainty by averaging over all candidate models rather than choosing the "best" one of them. I focus on FMA and apply this method to data on the S&P 500 index from the U.S. market, which I explain with a set of eleven explanatory variables chosen in accordance with the related literature. To preserve real-world applicability, I use a rolling-window scheme that regularly updates the data used to fit the model for quarterly re-estimation, so that predictions are obtained with the most recent data. I first find that the simple historical-average model can be beaten with a standard model selection approach based on the AIC, with variables such as Dividend Yield, Earnings ratio, and Book-to-Market value consistently proving most significant across the quarterly models. With FMA techniques, I was not able to consistently beat the benchmark...
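
A sketch of the rolling-window, out-of-sample comparison described above, on simulated quarterly data with a single made-up predictor (the thesis's eleven variables and its FMA weighting are not reproduced): each quarter the predictive regression is re-fitted on the most recent window and its one-step forecast is compared against the historical-average benchmark.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)

# Simulated quarterly returns driven weakly by a lagged predictor (a stand-in
# for something like dividend yield).
T = 200
z = rng.normal(size=T)
eps = rng.normal(scale=0.08, size=T)
r = np.empty(T)
r[0] = 0.01 + eps[0]
r[1:] = 0.01 + 0.05 * z[:-1] + eps[1:]

window = 80
errs_model, errs_mean = [], []
for t in range(window, T - 1):
    # Fit r_{s+1} ~ z_s on the most recent `window` observations, then forecast r_{t+1}.
    zs = z[t - window:t]
    rs = r[t - window + 1:t + 1]
    fit = sm.OLS(rs, sm.add_constant(zs)).fit()
    pred = fit.params[0] + fit.params[1] * z[t]
    errs_model.append((r[t + 1] - pred) ** 2)
    errs_mean.append((r[t + 1] - rs.mean()) ** 2)     # historical-average benchmark

print(f"Out-of-sample MSE: predictive model {np.mean(errs_model):.5f} "
      f"vs historical mean {np.mean(errs_mean):.5f}")
```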
10

Cryptosporidiumutbrottet i Östersunds kommun 2010 : Påverkan på kommunens barn / The 2010 Cryptosporidium outbreak in the municipality of Östersund: Its impact on the municipality's children

Jansson, Nils-Henrik, Pavlov, Patrik January 2013
The purpose of this study is to analyze how children under the age of 15 were affected by the 2010 Östersund Cryptosporidium outbreak. The data consist of responses to a questionnaire from 514 children concerning their health in relation to the outbreak. The questionnaire was developed by the Swedish Institute for Infectious Disease Control shortly after the outbreak, and the study is carried out on behalf of that agency. The analyses of risk factors and of symptoms associated with infection were performed using logistic regression models based on both a Bayesian and a frequentist approach; using the two approaches we consider the dataset from different angles and at the same time try to identify the differences between them. Another part of the paper presents nonresponse-calibrated estimates of the number of Cryptosporidium infections, both in total and on a monthly basis, as well as estimates of the prevalence of cases in various domain groups. Finally, associations between the symptoms are investigated using logistic regression, and the symptoms are grouped by variable clustering using the fuzzy clustering approach. The results show that higher water intake, receiving water through the municipal water distribution system, former loose stools, and gender could be identified as risk factors, while the best-explaining symptoms were watery diarrhea, abdominal or stomach pain, fever, and tiredness/exhaustion.
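
A sketch of the frequentist side of the risk-factor analysis on simulated questionnaire data (the variable names are only illustrative stand-ins for the study's): a logistic regression of infection status on candidate risk factors, reported as odds ratios; a Bayesian counterpart would add priors on the coefficients.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)

# Simulated questionnaire data: two candidate risk factors and a binary outcome.
n = 514
water_glasses = rng.poisson(3, n)            # daily glasses of tap water
municipal_water = rng.binomial(1, 0.7, n)    # connected to the municipal supply
logit = -2.0 + 0.3 * water_glasses + 0.8 * municipal_water
infected = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(np.column_stack([water_glasses, municipal_water]))
fit = sm.Logit(infected, X).fit(disp=False)

# Odds ratios with 95% confidence intervals for the two risk factors.
odds_ratios = np.exp(fit.params[1:])
conf = np.exp(fit.conf_int()[1:])
print("Odds ratios:", np.round(odds_ratios, 2))
print("95% CIs:\n", np.round(conf, 2))
```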
