1. Bayesian Predictive Inference and Multivariate Benchmarking for Small Area Means

Toto, Ma. Criselda Santos, 20 April 2010
Direct survey estimates for small areas are likely to yield unacceptably large standard errors due to the small sample sizes in the areas. This makes it necessary to use models to “borrow strength” from related areas to find a more reliable estimate for a given area or, simultaneously, for several areas. For instance, in many applications, data on multiple related characteristics and auxiliary variables are available. Thus, multivariate modeling of related characteristics with multiple regression can be implemented. However, while model-based small area estimates are very useful, one potential difficulty is that the combined estimate from all small areas does not usually match the direct estimate for the large area. Benchmarking addresses this by applying a constraint to ensure that the “total” of the small areas matches the “grand total”. Benchmarking can help to prevent model failure, an important issue in small area estimation. It can also lead to improved prediction for most areas because the additional constraint incorporates information into the sample space. We describe both the univariate and multivariate Bayesian nested error regression models and develop a Bayesian predictive inference with a benchmarking constraint to estimate the finite population means of small areas. Our models are unique in the sense that our benchmarking constraint involves unit-level sampling weights and the prior distribution for the covariance of the area effects follows a specific structure. We use Markov chain Monte Carlo procedures to fit our models. Specifically, we use Gibbs sampling to fit the multivariate model; our univariate benchmarking model requires only random samples. We use two datasets, namely the crop data (corn and soybeans) from the LANDSAT and Enumerative survey and the NHANES III data (body mass index and bone mineral density), to illustrate our results. We also conduct a simulation study to assess the frequentist properties of our models.
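For orientation, here is a minimal sketch of the kind of model and constraint this abstract refers to, written with illustrative notation; the dissertation's exact formulation, priors, and weighting may differ. The standard univariate nested error regression model for unit j in area i is typically written as

y_{ij} = x_{ij}^\top \beta + \nu_i + e_{ij}, \qquad \nu_i \sim N(0, \sigma_\nu^2), \quad e_{ij} \sim N(0, \sigma_e^2),

and a weight-based benchmarking constraint ties the small area finite population means \bar{Y}_i to the direct estimate for the large area, for example

\sum_{i} W_i \, \bar{Y}_i = \hat{\bar{Y}}, \qquad W_i = \frac{\sum_{j} w_{ij}}{\sum_{i}\sum_{j} w_{ij}},

where w_{ij} are the unit-level sampling weights and \hat{\bar{Y}} is the direct survey estimate of the overall mean.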
2. Generating Predictive Inferences when Multiple Alternatives are Available

Cranford, Edward Andrew, 09 December 2016
The generation of predictive inferences may be difficult when a story leads to multiple possible consequences. Prior research has shown that readers generate predictive inferences automatically, under normal reading conditions, only when the story is based on familiar events for which the reader has readily available knowledge about what may happen next, when there is enough constraining information in the text that the inference is highly predictable, and when few or no alternative inferences are available (McKoon & Ratcliff, 1992). However, some evidence shows that predictive inferences were generated when the likelihood of the targeted inference was reduced and the story implied that an alternative consequence could occur (Klin, Murray, Levine, & Guzmán, 1999). It is possible, though, that the alternative was not a likely enough consequence to affect processing of the targeted inference. Prior research did not examine whether the alternative inference was drawn or whether multiple inferences could be entertained simultaneously. The experiments in this dissertation were designed to further assess the nature of interference when multiple consequences are possible by increasing the likelihood of the alternative so that both inferences were more equally likely to occur. The first two experiments used a word-naming task and showed that neither inference was activated when probed at 500 ms after the story (Experiment 1A) or at 1000 ms (Experiment 1B), suggesting that the alternative inference interferes with activation of the targeted inference. Experiments 2 and 3 used a contradictory reading paradigm to assess whether the inferences were activated, but only at a level too minimal to be detected in a word-naming task. Reading time was slower when a sentence contradicted both inferences but not when it contradicted only one inference, even after reading a lengthy filler text. Reading time was also slower in Experiment 3, where the filler text was removed. These results imply that both inferences were generated at a minimal level of activation that does not strengthen over time. The results are discussed in light of comprehension theories that could account for the representation of minimally encoded inferences (Kintsch, 1998; Myers & O'Brien, 1998).
