About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
231

The relationships between crime rate and income inequality : evidence from China

Zhang, Wenjie, active 2013 05 December 2013 (has links)
The main purpose of this study is to determine whether a Bayesian approach can better capture, and provide reasonable predictions for, the complex linkage between crime and income inequality. In this research, we conduct a model comparison between classical and Bayesian inference. Conventional studies on the relationship between crime and income inequality usually employ regression analysis to demonstrate whether the two are associated; however, Bayesian approaches have seen little use on this question. Studying panel data for China from 1993 to 2009, we found that, in addition to a linear mixed effects model, a Bayesian hierarchical model with an informative prior also describes the linkage between crime rate and income inequality well. The choice of model ultimately depends on research needs and data availability. / text
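As a rough, hedged illustration of the comparison described above, the sketch below contrasts an ordinary least squares slope with a conjugate Bayesian posterior for the slope in a simple regression of crime rate on income inequality. The data, prior settings, and assumed-known residual standard deviation are all invented for illustration; the thesis itself uses Chinese panel data and a full hierarchical model.

```python
import numpy as np

# Hypothetical cross-sectional data (Gini coefficient and crime rate);
# the actual study uses Chinese panel data from 1993 to 2009.
rng = np.random.default_rng(0)
gini = rng.uniform(0.30, 0.50, size=100)
crime = 2.0 + 8.0 * gini + rng.normal(0.0, 1.0, size=100)

# Centre both variables so the slope posterior has a simple conjugate form.
x = gini - gini.mean()
y = crime - crime.mean()

sigma = 1.0            # residual SD, assumed known for this sketch
mu0, tau0 = 0.0, 10.0  # weakly informative normal prior on the slope

# Conjugate normal posterior for the slope beta given the data
post_prec = 1.0 / tau0**2 + np.sum(x**2) / sigma**2
post_mean = (mu0 / tau0**2 + np.sum(x * y) / sigma**2) / post_prec
post_sd = np.sqrt(1.0 / post_prec)

ols_slope = np.sum(x * y) / np.sum(x**2)
print(f"OLS slope:       {ols_slope:.3f}")
print(f"Posterior slope: {post_mean:.3f} +/- {post_sd:.3f}")
```

With a weak prior the two estimates nearly coincide; an informative prior, as in the abstract, pulls the posterior toward prior belief when the data are sparse.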
232

Bayesian mediation analysis for partially clustered designs

Chu, Yiyi 05 December 2013 (has links)
Partially clustered designs are common in medicine, the social sciences, and intervention and psychological research. With some participants clustered and others not, the structure of partially clustered data is not parallel. Despite its common occurrence in practice, limited attention has been given to the evaluation of intervention effects in partially clustered data. Mediation analysis is used to identify the mechanism underlying the relationship between an independent variable and a dependent variable via a mediator variable. While most of the literature focuses on conventional frequentist mediation models, no research has yet studied a Bayesian mediation model in the context of a partially clustered design. Therefore, the primary objectives of this paper are to address conceptual considerations in estimating mediation effects in partially clustered randomized designs, and to examine the performance of the proposed model using both simulated data and real data from the Early Childhood Longitudinal Study, Kindergarten Class of 1998-99 (ECLS-K). A small-scale simulation study was also conducted; the results indicate that, under large sample sizes, negligible relative parameter bias was found in the Bayesian estimates of the indirect effects and of the covariance between the components of the indirect effect. Coverage rates for the 95% credible interval for these two estimates were close to the nominal level. These results support the use of the proposed Bayesian model for partially clustered mediation when the sample size is moderately large. / text
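To make the indirect-effect summary concrete, here is a minimal sketch of how a Bayesian mediation analysis reports the indirect effect from posterior draws of the two paths (treatment-to-mediator and mediator-to-outcome). The draws below are simulated stand-ins, not output from the partially clustered model described in the abstract.

```python
import numpy as np

# Illustrative stand-in for MCMC output: posterior draws of the a-path
# (treatment -> mediator) and b-path (mediator -> outcome).
rng = np.random.default_rng(1)
a_draws = rng.normal(0.40, 0.08, size=4000)
b_draws = rng.normal(0.30, 0.10, size=4000)

# The indirect effect is formed draw by draw, so its posterior reflects
# the generally non-normal distribution of the product a * b.
indirect = a_draws * b_draws

lo, hi = np.percentile(indirect, [2.5, 97.5])
print(f"Posterior mean indirect effect: {indirect.mean():.3f}")
print(f"95% credible interval: ({lo:.3f}, {hi:.3f})")
```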
233

Bayesian analysis in Markov regime-switching models

Koh, You Beng., 辜有明. January 2012 (has links)
van Norden and Schaller (1996) develop a standard regime-switching model to study stock market crashes. In their seminal paper, they use maximum likelihood estimation to estimate the model parameters and show that a two-regime speculative bubble model has significant explanatory power for stock market returns in some observed periods. However, it is well known that maximum likelihood estimation can lead to bias if the likelihood has multiple local maxima or the estimation starts from poor initial values. Therefore, a better approach to estimating the parameters of regime-switching models is needed. One possibility is the Bayesian Gibbs-sampling approach, whose advantages are discussed in Albert and Chib (1993). In this thesis, Bayesian Gibbs-sampling estimation is examined using two U.S. stock datasets: the CRSP monthly value-weighted index from Jan 1926 to Dec 2010 and the S&P 500 index from Jan 1871 to Dec 2010. It is found that the Gibbs-sampling estimation explains the U.S. data better than maximum likelihood estimation. Moreover, the existing standard regime-switching speculative behaviour model is extended by allowing time-varying transition probabilities governed by a first-order Markov chain. It is shown that time-varying first-order transition probabilities of Markov regime-switching speculative rational bubbles can lead stock market returns to have a second-order Markov regime. In addition, a Bayesian Gibbs-sampling algorithm is developed to estimate the parameters in the second-order two-state Markov regime-switching model. / published_or_final_version / Statistics and Actuarial Science / Doctoral / Doctor of Philosophy
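The filtering step that sits inside a Gibbs sampler for regime paths can be sketched with a Hamilton filter for a two-state Markov-switching Gaussian model. Everything below (returns, state means and variances, transition matrix) is an assumed toy configuration; a full sampler would add backward sampling of the state path and conjugate draws of the parameters, in the spirit of Albert and Chib (1993).

```python
import numpy as np
from scipy.stats import norm

def hamilton_filter(y, mu, sigma, P):
    """Forward filter for a two-state Markov-switching Gaussian model.

    y     : observed returns, shape (T,)
    mu    : state-specific means, shape (2,)
    sigma : state-specific standard deviations, shape (2,)
    P     : 2x2 transition matrix, P[i, j] = Pr(s_t = j | s_{t-1} = i)
    Returns the filtered probabilities Pr(s_t | y_1..y_t), shape (T, 2).
    """
    filt = np.zeros((len(y), 2))
    state = np.array([0.5, 0.5])          # flat initial state distribution
    for t, obs in enumerate(y):
        pred = state @ P                  # one-step-ahead state probabilities
        post = pred * norm.pdf(obs, loc=mu, scale=sigma)
        state = post / post.sum()
        filt[t] = state
    return filt

# Toy returns with a calm regime followed by a volatile "crash" regime.
rng = np.random.default_rng(2)
y = np.concatenate([rng.normal(0.01, 0.02, 150), rng.normal(-0.03, 0.08, 50)])
P = np.array([[0.95, 0.05], [0.10, 0.90]])
filt = hamilton_filter(y, mu=np.array([0.01, -0.03]),
                       sigma=np.array([0.02, 0.08]), P=P)
print("Filtered Pr(volatile regime) at end of sample:", round(float(filt[-1, 1]), 3))
```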
234

Estimating phylogenetic trees from discrete morphological data

Wright, April Marie 04 September 2015 (has links)
Morphological characters have a long history of use in the estimation of phylogenetic trees. Datasets consisting of morphological characters are most often analyzed using the maximum parsimony criterion, which seeks to minimize the amount of character change across a phylogenetic tree. When combined with molecular data, characters are often analyzed using model-based methods, such as maximum likelihood or, more commonly, Bayesian estimation. The efficacy of likelihood and Bayesian methods using the Mk model, a common model for estimating topology from discrete morphological characters, is poorly explored. In Chapter One, I explore the efficacy of Bayesian estimation of phylogeny, using the Mk model, under conditions that are commonly encountered in paleontological studies. Using simulated data, I describe the relative performances of parsimony and the Mk model under a range of realistic conditions that include common scenarios of missing data and rate heterogeneity. I further examine the use of the Mk model in Chapter Two. Like any model, the Mk model makes a number of assumptions. One is that transitions between character states are symmetric (i.e., there is an equal probability of changing from state 0 to state 1 and from state 1 to state 0). Many characters, including alleged Dollo characters and extremely labile characters, may not fit this assumption. I tested methods for relaxing this assumption in a Bayesian context. Using empirical datasets, I performed model fitting to demonstrate cases in which modelling asymmetric transitions among characters is preferred. I used simulated datasets to demonstrate that choosing the best-fit model of transition state symmetry can improve model fit and phylogenetic estimation. In my final chapter, I looked at the use of partitions to model datasets more appropriately. Common in molecular studies, partitioning breaks the dataset into pieces that evolve according to similar mechanisms. These pieces, called partitions, are then modeled separately. This practice has not been widely adopted in morphological studies. I extended the PartitionFinder software, which is used in molecular studies to score different possible partitioning schemes and find the one that best models the dataset. I used empirical datasets to demonstrate the effects of partitioning datasets on model likelihoods and on the phylogenetic trees estimated from those datasets. / text
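For readers unfamiliar with the Mk model mentioned above, a hedged sketch: under the symmetric Mk model, every change between the k character states occurs at the same rate, so the transition probabilities along a branch are fully determined by k, a rate, and the branch length. The state count, rate, and branch length below are arbitrary; relaxing the symmetry assumption examined in Chapter Two amounts to allowing unequal off-diagonal rates.

```python
import numpy as np
from scipy.linalg import expm

def mk_rate_matrix(k, mu=1.0):
    """Symmetric Mk rate matrix: equal rates between all k character states."""
    Q = np.full((k, k), mu)
    np.fill_diagonal(Q, -(k - 1) * mu)
    return Q

k, mu, t = 3, 1.0, 0.4                 # 3-state character, rate mu, branch length t
P = expm(mk_rate_matrix(k, mu) * t)    # P[i, j] = Pr(end in state j | start in state i)

# Closed-form check for the symmetric model: P_ii(t) = 1/k + (k-1)/k * exp(-k*mu*t)
same_state = 1.0 / k + (k - 1) / k * np.exp(-k * mu * t)
print(P.round(4))
print("closed-form diagonal:", round(same_state, 4))
```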
235

Bayesian variable selection for GLM

Wang, Xinlei 28 August 2008 (has links)
Not available / text
236

A comparison of the performance of testlet-based computer adaptive tests and multistage tests

Keng, Leslie, 1974- 29 August 2008 (has links)
Computer adaptive testing (CAT) has grown both in research and implementation. Test construction and security issues, however, have led many to reconsider the merits of CAT. Multistage testing (MST) is an alternative adaptive test design that purportedly addresses CAT's shortcomings, yet considerably less research has been conducted on MST. Also, most research in adaptive testing has been based on item response theory (IRT). Many tests now make use of testlets -- bundles of items administered together, often based on a common stimulus. The use of testlets violates local independence, a fundamental assumption of IRT. Testlet response theory (TRT) is a relatively new measurement model designed to measure testlet-based tests. Few studies, though, have examined its use in testlet-based CAT and MST designs. This dissertation investigated the performance of testlet-based CATs and MSTs measured using the TRT model. The test designs compared included a CAT that is adaptive at the testlet level only (testlet-level CAT), a CAT that is adaptive at both the testlet and item levels (item-level CAT), and an MST design (MST). Test conditions manipulated included test length, item pool size, and examinee ability distribution. Examinee data were generated using TRT-calibrated item parameters based on data from a large-scale reading assessment. The three test designs were evaluated based on measurement effectiveness and exposure control properties. The study found that all three adaptive test designs yielded similar and good measurement accuracy. Overall, the item-level CAT produced the best measurement precision, followed by the MST design; however, the MST and CAT designs yielded better measurement precision at different areas of the ability scale. All three test designs yielded acceptable exposure control properties at the testlet level. At the item level, the testlet-level CAT produced the best overall result. The item-level CAT had less than ideal pool utilization, but was able to meet its pre-specified maximum exposure control rate and maintain low item exposure rates. The MST had excellent pool utilization, but a higher percentage of items with high exposure rates. Skewing the underlying ability distribution also had a particularly notable negative effect on the exposure control properties of the MST. / text
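As a concrete, simplified view of the item-level adaptation compared above, the sketch below selects the next item by maximum Fisher information under a plain 2PL model. The item pool and current ability estimate are invented, and the TRT model used in the dissertation additionally includes testlet effects that this sketch deliberately omits.

```python
import numpy as np

def prob_2pl(theta, a, b):
    """2PL response probability; TRT would add a testlet-specific effect, omitted here."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of a 2PL item at ability theta: I = a^2 * P * (1 - P)."""
    p = prob_2pl(theta, a, b)
    return a**2 * p * (1.0 - p)

# Hypothetical item pool (discrimination a, difficulty b) and ability estimate.
rng = np.random.default_rng(3)
a_params = rng.uniform(0.8, 2.0, size=200)
b_params = rng.uniform(-2.5, 2.5, size=200)
theta_hat = 0.4
administered = [17, 42]                     # items already given to this examinee

info = item_information(theta_hat, a_params, b_params)
info[administered] = -np.inf                # never reuse an administered item
next_item = int(np.argmax(info))            # maximum-information selection
print("Next item:", next_item, "information:", round(float(info[next_item]), 3))
```

Exposure control methods such as those evaluated in the study would temper this purely information-greedy rule.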
237

Bayesian methods to improve the assessment and management advice of anchovy in the Bay of Biscay

Contreras, Leire Ibaibarriaga January 2012 (has links)
No description available.
238

Municipal-level estimates of child mortality for Brazil : a new approach using Bayesian statistics

McKinnon, Sarah Ann 14 December 2010 (has links)
Current efforts to measure child mortality for municipalities in Brazil are hampered by the relative rarity of child deaths, which often results in unstable and unreliable estimates. As a result, it is not possible to accurately assess true levels of child mortality for many areas, hindering efforts to construct and implement effective policy initiatives for the reduction of child mortality. However, with a spatial smoothing process based upon Bayesian statistics, it is possible to “borrow” information from neighboring areas in order to generate more stable and accurate estimates of mortality in smaller areas. The objective of this study is to use this spatial smoothing process to derive estimates of child mortality at the level of the municipality in Brazil. Using data from the 2000 Brazil Census, I derive both Bayesian and non-Bayesian estimates of mortality for each municipality. In comparing the smoothed and raw estimates of this parameter, I find that the Bayesian estimates yield a clearer spatial pattern of child mortality, with smaller variances in less populated municipalities, thus more accurately reflecting the true mortality situation of those municipalities. Ultimately, these estimates can inform more effective policies and health initiatives aimed at reducing child mortality in Brazil. / text
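The "borrowing" of information described above can be illustrated, in a deliberately simplified form, with beta-binomial shrinkage of a municipality's raw death rate toward the pooled rate of its neighbours. The counts and prior strength below are hypothetical, and the actual study uses a fully spatial Bayesian model rather than this two-area toy.

```python
# Hypothetical counts: one small municipality plus its pooled neighbours.
deaths, births = 3, 180            # small municipality: raw rate is unstable
nbr_deaths, nbr_births = 95, 4200  # pooled counts from neighbouring municipalities

raw_rate = deaths / births

# Beta prior centred on the neighbours' rate; `strength` acts as a prior
# sample size and controls how much information is borrowed.
strength = 200
prior_mean = nbr_deaths / nbr_births
alpha0, beta0 = prior_mean * strength, (1 - prior_mean) * strength

# Beta-binomial conjugacy: the posterior mean shrinks the raw rate toward the prior.
post_mean = (alpha0 + deaths) / (alpha0 + beta0 + births)
print(f"raw rate: {raw_rate:.4f}   smoothed rate: {post_mean:.4f}")
```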
239

UTeach summer masters statistics course : a journey from traditional to Bayesian analysis

Fitzpatrick, Daniel Lee 05 January 2011 (has links)
This paper will outline some of the key parts of the Statistics course offered through the UTeach Summer Master’s Program as taught by Dr. Martha K. Smith. The paper begins with an introduction to the normal probability density function, which is derived using calculus techniques and Euclidean geometry. Probability is discussed at great length in Smith’s course, and the importance of understanding probability in statistical analysis is demonstrated through a reference to a study on how medical doctors confuse false positives in breast cancer testing. The frequentist perspective concludes with a proof that the normal probability density function is zero. The shift from traditional to Bayesian inference begins with a brief introduction to the terminology involved, as well as an example with patient testing. The pros and cons of Bayesian inference are discussed, and a proof using the normal probability density function shows how to find a Bayes estimate for µ. It will be argued that a Statistics course moving from traditional to Bayesian analysis, such as that offered by the UTeach Summer Master’s Program and Smith, would supplement the traditional Statistics course offered at most universities. Such a course would be relevant for mathematics majors, mathematics educators, professionals in the medical industry, and individuals seeking new ways to understand data sets. / text
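The Bayes estimate for µ mentioned in the abstract has a closed form in the conjugate normal case (known σ, normal prior on µ): the posterior mean is a precision-weighted average of the prior mean and the sample mean. The numbers below are assumed for illustration and are not taken from the course.

```python
import numpy as np

# Conjugate normal model: y_i ~ N(mu, sigma^2) with sigma assumed known,
# and prior mu ~ N(mu0, tau0^2).
rng = np.random.default_rng(4)
y = rng.normal(5.0, 2.0, size=25)
sigma = 2.0
mu0, tau0 = 0.0, 10.0

n = len(y)
post_prec = 1.0 / tau0**2 + n / sigma**2
post_mean = (mu0 / tau0**2 + y.sum() / sigma**2) / post_prec  # Bayes estimate of mu
post_sd = np.sqrt(1.0 / post_prec)

print(f"sample mean:            {y.mean():.3f}")
print(f"posterior (Bayes) mean: {post_mean:.3f} +/- {post_sd:.3f}")
```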
240

Development of a method for model calibration with non-normal data

Wang, Dongyuan 09 May 2011 (has links)
Not available / text
