  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Assessing the Potential of a Locally Adapted Conservation Agriculture Production System to Reduce Rural Poverty in Uganda's Tororo District

Farris, Jarrad 26 June 2015 (has links)
This paper demonstrates the utility of small area estimation (SAE) of poverty methods for researchers who wish to conduct a detailed welfare analysis as part of a larger survey of a small geographic area of interest. Researchers studying context-specific technologies or interventions can incorporate the survey-based SAE of poverty approach to conduct detailed poverty analyses of their specific area of interest without the expense of collecting household consumption data. This study applies SAE methods as part of an impact assessment of a locally adapted conservation agriculture production system in Uganda's Tororo District. Using SAE, I assess the Tororo District's Foster-Greer-Thorbecke (FGT) rural poverty indices, estimate the effects of per acre farm profit increases to poor households on the district's rural poverty indices, and compare the findings to current estimates of the net returns from conservation agriculture in the Tororo District. The SAE results suggest that increasing the farm profits of the bottom 30% of households by two U.S. dollars per acre per season could reduce the district's rural poverty incidence by one percentage point. The available data on the net returns to conservation agriculture in the Tororo District, however, indicate that these modest increases may only be achievable for adopting households that face high land preparation costs. / Master of Science
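The FGT indices referenced in this abstract have a standard closed form: P_alpha = (1/N) * sum over the poor of ((z - y_i)/z)^alpha, where z is the poverty line. A minimal sketch with made-up consumption values and a made-up poverty line (not the thesis data):

```python
def fgt_index(incomes, z, alpha):
    """Foster-Greer-Thorbecke poverty index P_alpha.

    alpha = 0 gives the headcount ratio, alpha = 1 the poverty gap,
    alpha = 2 the squared gap (poverty severity)."""
    return sum(((z - y) / z) ** alpha for y in incomes if y < z) / len(incomes)

# Hypothetical per-adult-equivalent consumption values and poverty line
incomes = [40, 55, 70, 90, 120, 150]
z = 100
headcount = fgt_index(incomes, z, 0)  # 4 of 6 households fall below z
gap = fgt_index(incomes, z, 1)        # average normalized shortfall
```

Raising the incomes of poor households (as the profit-increase scenario in the abstract does) lowers all three indices, with alpha > 0 weighting the poorest households more heavily.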
2

Bayesian Predictive Inference and Multivariate Benchmarking for Small Area Means

Toto, Ma. Criselda Santos 20 April 2010 (has links)
Direct survey estimates for small areas are likely to yield unacceptably large standard errors due to the small sample sizes in the areas. This makes it necessary to use models to "borrow strength" from related areas to find a more reliable estimate for a given area or, simultaneously, for several areas. For instance, in many applications, data on related multiple characteristics and auxiliary variables are available. Thus, multivariate modeling of related characteristics with multiple regression can be implemented. However, while model-based small area estimates are very useful, one potential difficulty with such estimates when models are used is that the combined estimate from all small areas does not usually match the value of the single estimate on the large area. Benchmarking is done by applying a constraint to ensure that the "total" of the small areas matches the "grand total". Benchmarking can help to prevent model failure, an important issue in small area estimation. It can also lead to improved prediction for most areas because of the information incorporated in the sample space due to the additional constraint. We describe both the univariate and multivariate Bayesian nested error regression models and develop a Bayesian predictive inference with a benchmarking constraint to estimate the finite population means of small areas. Our models are unique in the sense that our benchmarking constraint involves unit-level sampling weights and the prior distribution for the covariance of the area effects follows a specific structure. We use Markov chain Monte Carlo procedures to fit our models. Specifically, we use Gibbs sampling to fit the multivariate model; our univariate benchmarking only needs random samples. We use two datasets, namely the crop data (corn and soybeans) from the LANDSAT and Enumerative survey and the NHANES III data (body mass index and bone mineral density), to illustrate our results.
We also conduct a simulation study to assess frequentist properties of our models.
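The "total matches grand total" idea can be illustrated with simple multiplicative (ratio) benchmarking. The dissertation's actual constraint is Bayesian and uses unit-level sampling weights, so this is only a schematic analogue with hypothetical numbers:

```python
def ratio_benchmark(estimates, weights, grand_total):
    """Multiplicative (ratio) benchmarking: scale model-based small
    area estimates by a common factor so that their weighted sum
    matches the direct estimate of the large-area total."""
    implied = sum(w * t for w, t in zip(weights, estimates))
    factor = grand_total / implied
    return [t * factor for t in estimates]

areas = [10.0, 12.0, 8.0]   # model-based small area means (hypothetical)
shares = [0.5, 0.3, 0.2]    # population shares of the three areas
direct = 10.5               # direct survey estimate for the large area
adj = ratio_benchmark(areas, shares, direct)
```

After adjustment the weighted sum of the small area estimates equals the direct large-area estimate exactly, which is the defining property of a benchmarked set of estimates.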
3

Bayesian Analysis of Crime Survey Data with Nonresponse

Liu, Shiao 26 April 2018 (has links)
Bayesian hierarchical models are effective tools for small area estimation by pooling small datasets together. The pooling procedures allow individual areas to “borrow strength” from each other to improve the estimation. This work is an extension of Nandram and Choi (2002), NC, to perform inference on finite population proportions when there exists non-identifiability of the missing pattern for nonresponse in binary survey data. We review the small-area selection model (SSM) in NC, which is able to incorporate the non-identifiability. Moreover, the proposed SSM, the individual-area selection model (ISM), and the small-area pattern-mixture model (SPM) are evaluated using real crime data from Stasny (1991). Furthermore, the methodology is compared to the ISM and SPM using simulated small area datasets. Computational issues related to the MCMC are also discussed.
4

Efficient Small Area Estimation in the Presence of Measurement Error in Covariates

Singh, Trijya August 2011 (has links)
Small area estimation is a field that has seen rapid development in the past 50 years, due to its widespread applicability in government projects, marketing research and many other areas. However, it is often difficult to obtain error-free data for this purpose. In this dissertation, each project describes a model used for small area estimation in which the covariates are measured with error. We applied different methods of bias correction to improve the estimates of the parameter of interest in the small areas. There is a variety of methods available for bias correction of estimates in the presence of measurement error. We applied the simulation extrapolation (SIMEX), ordinary corrected scores and Monte Carlo corrected scores methods of bias correction in the Fay-Herriot model, and investigated the performance of the bias-corrected estimators. The performance of the estimators in the presence of non-normal measurement error and of the SIMEX estimator in the presence of non-additive measurement error was also studied. For each of these situations, we presented simulation studies to observe the performance of the proposed correction procedures. In addition, we applied our proposed methodology to analyze a real-life, nontrivial data set and present the results. We showed that the Lohr-Ybarra estimator is slightly inefficient and that applying methods of bias correction like SIMEX, corrected scores or Monte Carlo corrected scores (MCCS) increases the efficiency of the small area estimates. In particular, we showed that the simulation-based bias correction methods like SIMEX and MCCS provide a greater gain in efficiency. We also showed that the SIMEX method of bias correction is robust with respect to departures from normality or additivity of measurement error. We showed that the MCCS method is robust with respect to departures from normality of measurement error.
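SIMEX, named in this abstract, is a general-purpose idea: deliberately add extra measurement error at several multiples lambda of the known error variance, watch how the naive estimate degrades, then extrapolate the trend back to lambda = -1 (no error). A sketch for a simple-regression slope with synthetic data; the lambda grid, number of replicates, and quadratic extrapolant are illustrative choices, not the dissertation's:

```python
import numpy as np

def simex_slope(x, y, sigma_u, lambdas=(0.5, 1.0, 1.5, 2.0), B=200, seed=0):
    """SIMEX correction of a regression slope when the covariate x
    carries additive measurement error with known variance sigma_u**2.

    Simulation step: for each lam, add noise with variance
    lam * sigma_u**2 and refit the naive OLS slope (averaged over B
    replicates). Extrapolation step: fit a quadratic in lam and
    evaluate it at lam = -1, the no-measurement-error pseudo-point."""
    rng = np.random.default_rng(seed)
    lams = np.concatenate(([0.0], np.asarray(lambdas)))
    slopes = []
    for lam in lams:
        reps = 1 if lam == 0.0 else B
        acc = 0.0
        for _ in range(reps):
            xb = x + rng.normal(0.0, np.sqrt(lam) * sigma_u, size=len(x))
            acc += np.polyfit(xb, y, 1)[0]  # naive slope on noisier x
        slopes.append(acc / reps)
    coef = np.polyfit(lams, slopes, 2)      # quadratic extrapolant
    return np.polyval(coef, -1.0)

# Synthetic demonstration: true slope 2, measurement-error sd 0.7
rng = np.random.default_rng(42)
x_true = rng.normal(0.0, 1.0, 500)
y = 2.0 * x_true + rng.normal(0.0, 0.5, 500)
x_obs = x_true + rng.normal(0.0, 0.7, 500)
naive = np.polyfit(x_obs, y, 1)[0]          # attenuated toward zero
corrected = simex_slope(x_obs, y, sigma_u=0.7)
```

The naive slope is attenuated by roughly sigma_x^2 / (sigma_x^2 + sigma_u^2); the SIMEX extrapolation recovers much of the lost magnitude without needing the true covariate.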
5

A Spatial Analysis of the Relationship between Obesity and the Built Environment in Southern Illinois

Deitz, Shiloh Leah 01 May 2016 (has links)
Scholars have established that our geographic environments – including infrastructure for walking and food availability – contribute to the current obesity epidemic in the United States. However, the relationship between food, walkability, and obesity has largely only been investigated in large urban areas. Further, many studies have not taken an in-depth look at the spatial fabric of walkability, food, and obesity. The purpose of this study was two-fold: 1) to explore reliable methods, using sociodemographic census data, for estimating obesity at the neighborhood level in one region of the U.S. made up of rural areas and small towns – southern Illinois; and 2) to investigate the ways that the food environment and walkability correlate with obesity across neighborhoods with different geographies, population densities, and socio-demographic characteristics. This study uses spatial analysis techniques and GIS, namely geographically weighted multivariate linear regression and cluster analysis, to estimate obesity at the census block group level. Walkability and the food environment are investigated in depth before the relationship between obesity and the built environment is analyzed using GIS and spatial analysis. The study finds that the influence of various food and walkability measures on obesity is spatially varying and significantly mediated by socio-demographic factors. The study concludes that the relationship between obesity and the built environment can be studied quantitatively in study areas of any size or population density, but an open-minded approach toward measures must be taken and geographic variation cannot be ignored. This work is timely and important because of the dearth of small area obesity data, as well as the absence of research on obesogenic physical environments outside of large urban areas.
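Geographically weighted regression, the technique named in this abstract, fits a separate weighted regression at each location, with nearby observations weighted more heavily via a distance-decay kernel. A schematic sketch with a Gaussian kernel and hypothetical distances (bandwidth and data are illustrative, not the study's):

```python
import math

def gaussian_kernel_weights(dists, bandwidth):
    """Gaussian distance-decay weights for one GWR calibration point:
    w_i = exp(-0.5 * (d_i / bandwidth) ** 2)."""
    return [math.exp(-0.5 * (d / bandwidth) ** 2) for d in dists]

def weighted_slope(x, y, w):
    """Weighted least-squares slope of y on x -- the local fit GWR
    performs at each calibration point."""
    sw = sum(w)
    mx = sum(wi * xi for wi, xi in zip(w, x)) / sw
    my = sum(wi * yi for wi, yi in zip(w, y)) / sw
    num = sum(wi * (xi - mx) * (yi - my) for wi, xi, yi in zip(w, x, y))
    den = sum(wi * (xi - mx) ** 2 for wi, xi in zip(w, x))
    return num / den

# Hypothetical distances from one block group to its neighbors
w = gaussian_kernel_weights([0.0, 1.0, 2.0, 5.0], bandwidth=2.0)
```

Repeating the local fit at every block group yields a surface of coefficients, which is how a study like this one detects spatially varying effects of walkability or food-environment measures.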
6

Decision Support for Operational Plantation Forest Inventories through Auxiliary Information and Simulation

Green, Patrick Corey 25 October 2019 (has links)
Informed forest management requires accurate, up-to-date information. Ground-based forest inventory is commonly conducted to generate estimates of forest characteristics with a predetermined level of statistical confidence. As the importance of monitoring forest resources has increased, budgetary and logistical constraints often limit the resources needed for precise estimates. In this research, the incorporation of ancillary information in planted loblolly pine (Pinus taeda L.) forest inventory was investigated. Additionally, a simulation study using synthetic populations provided the basis for investigating the effects of plot and stand-level inventory aggregations on predictions and projections of future forest conditions. Forest regeneration surveys are important for assessing conditions immediately after plantation establishment. An unmanned aircraft system was evaluated for its ability to capture imagery that could be used to automate seedling counting using two computer vision approaches. The imagery was found to be unreliable for consistent detection in the conditions evaluated. Following establishment, conditions are assessed throughout the lifespan of forest plantations. Using small area estimation (SAE) methods, the incorporation of light detection and ranging (lidar) and thinning status improved the precision of inventory estimates compared with ground data alone. Further investigation found that reduced density lidar point clouds and lower resolution elevation models could be used to generate estimates with similar increases in precision. Individual tree detection estimates of stand density were found to provide minimal improvements in estimation precision when incorporated into the SAE models. Plot and stand level inventory aggregations were found to provide similar estimates of future conditions in simulated stands without high levels of spatial heterogeneity. Significant differences were noted when spatial heterogeneity was high. 
Model form was found to have a more significant effect on the observed differences than plot size or thinning status. The results of this research are of interest to forest managers who regularly conduct forest inventories and generate estimates of future stand conditions. The incorporation of auxiliary data in mid-rotation stands using SAE techniques improved estimate precision in most cases. Further, guidance on strategies for using this information for predicting future conditions is provided. / Doctor of Philosophy / Informed forest management requires accurate, up-to-date information. Ground-based sampling (inventory) is commonly used to generate estimates of forest characteristics such as total wood volume, stem density per unit area, heights, and regeneration survival. As the importance of assessing forest resources has increased, resources are often not available to conduct proper assessments. In this research, the incorporation of ancillary information in planted loblolly pine (Pinus taeda L.) forest inventory was investigated. Additionally, a simulation study investigated the effects of two forest inventory data aggregation methods on predictions and projections of future forest conditions. Forest regeneration surveys are important for assessing conditions immediately after tree planting. An unmanned aircraft system was evaluated for its ability to capture imagery that could be used to automate seedling counting. The imagery was found to be unreliable for use in accurately detecting seedlings in the conditions evaluated. Following establishment, forest conditions are assessed at additional points in forest development. Using a class of statistical estimators known as small-area estimation, a combination of ground and light detection and ranging data generated more confident estimates of forest conditions. Further investigation found that coarser ancillary information can be used with similar confidence in the conditions evaluated.
Forest inventory data are used to generate estimates of future conditions needed for management decisions. The final component of this research found that there are significant differences between two inventory data aggregation strategies when forest conditions are highly spatially variable. The results of this research are of interest to forest managers who regularly assess forest resources with inventories and models. The incorporation of ancillary information has potential to enhance forest resource assessments. Further, managers have guidance on strategies for using this information for estimating future conditions.
7

On Small Area Estimation Problems with Measurement Errors and Clustering

Torkashvand, Elaheh 05 October 2016 (has links)
In this dissertation, we first develop new statistical methodologies for small area estimation problems with measurement errors. The prediction of small area means for the unit-level regression model with the functional measurement error in the area-specific covariate is considered. We obtain the James-Stein (JS) estimate of the true area-specific covariate. Consequently, we construct the pseudo Bayes (PB) and pseudo empirical Bayes (PEB) predictors of small area means and estimate the mean squared prediction error (MSPE) associated with each predictor. Secondly, we modify the point estimation of the true area-specific covariate obtained earlier so that the histogram of the predictors of the small area means gets closer to the true one. We propose the constrained Bayes (CB) estimate of the true area-specific covariate. We show the superiority of the CB over the maximum likelihood (ML) estimate in terms of the Bayes risk. We also show that the PB predictor of the small area mean based on the CB estimate of the true area-specific covariate dominates its counterpart based on the ML estimate in terms of the Bayes risk. We compare the performance of different predictors of the small area means using measures such as sensitivity, specificity, positive predictive value, negative predictive value, and MSPE. We believe that using the PEB and pseudo hierarchical Bayes predictors of small area means based on the constrained empirical Bayes (CEB) and constrained hierarchical Bayes (CHB) estimates offers higher precision in recognizing socio-economic groups that are at risk of prehypertension. Clustering the small areas to understand the behavior of the random effects better and, accordingly, to predict the small area means is the final problem we address. We consider the Fay-Herriot model for this problem. We design a statistical test to evaluate the assumption of the equality of the variance components in different clusters.
In the case of rejection of the null hypothesis of the equality of the variance components, we implement a modified version of Tukey's method. We calculate the MSPE to evaluate the effect of the clustering on the precision of predictors of the small area means. We apply our methodologies to real data sets. / February 2017
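The James-Stein estimate referenced in this abstract shrinks a vector of noisy estimates toward a common point. A minimal sketch of the classical positive-part estimator shrinking toward zero (the dissertation's formulation for the area-specific covariate is more elaborate; the input values here are hypothetical):

```python
def james_stein(x, sigma2=1.0):
    """Positive-part James-Stein estimator for p >= 3 normal means
    observed with common variance sigma2, shrinking toward zero:
    theta_hat = max(0, 1 - (p - 2) * sigma2 / ||x||^2) * x."""
    p = len(x)
    ssq = sum(v * v for v in x)
    factor = max(0.0, 1.0 - (p - 2) * sigma2 / ssq)
    return [factor * v for v in x]

shrunk = james_stein([1.0, 2.0, 3.0, 4.0, 5.0])  # common factor 1 - 3/55
```

For p >= 3 this dominates the unshrunk (maximum likelihood) estimate in total squared-error risk, which is the same dominance notion the abstract invokes for its CB versus ML comparison.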
8

Bayesian Nonparametric Models for Multi-Stage Sample Surveys

Yin, Jiani 27 April 2016 (has links)
It is a standard practice in small area estimation (SAE) to use a model-based approach to borrow information from neighboring areas or from areas with similar characteristics. However, survey data tend to have gaps, ties and outliers, and parametric models may be problematic because statistical inference is sensitive to parametric assumptions. We propose nonparametric hierarchical Bayesian models for multi-stage finite population sampling to robustify the inference and allow for heterogeneity, outliers, skewness, etc. Bayesian predictive inference for SAE is studied by embedding a parametric model in a nonparametric model. The Dirichlet process (DP) has attractive properties such as clustering that permits borrowing information. We exemplify by considering in detail two-stage and three-stage hierarchical Bayesian models with DPs at various stages. The computational difficulties of the predictive inference when the population size is much larger than the sample size can be overcome by the stick-breaking algorithm and approximate methods. Moreover, the model comparison is conducted by computing log pseudo marginal likelihood and Bayes factors. We illustrate the methodology using body mass index (BMI) data from the National Health and Nutrition Examination Survey and simulated data. We conclude that a nonparametric model should be used unless there is a strong belief in the specific parametric form of a model.
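The Dirichlet process weights used in models like these can be generated by the stick-breaking construction the abstract mentions: draw V_k ~ Beta(1, alpha) and set pi_k = V_k times the remaining stick length. A truncated sketch (the tolerance-based truncation is an illustrative choice, not the thesis's algorithm):

```python
import random

def stick_breaking_weights(alpha, tol=1e-6, seed=0):
    """Truncated stick-breaking construction of Dirichlet process
    weights: V_k ~ Beta(1, alpha), pi_k = V_k * prod_{j<k} (1 - V_j).
    Stop once the remaining stick length falls below tol."""
    rng = random.Random(seed)
    weights, remaining = [], 1.0
    while remaining > tol:
        v = rng.betavariate(1.0, alpha)
        weights.append(v * remaining)
        remaining *= 1.0 - v
    return weights

w = stick_breaking_weights(alpha=2.0)
```

Smaller alpha concentrates mass on a few weights (strong clustering of ties, as in survey data with gaps and ties); larger alpha spreads it over many, which is the property that lets the DP borrow information across areas.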
9

Three Essays of Applied Bayesian Modeling: Financial Return Contagion, Benchmarking Small Area Estimates, and Time-Varying Dependence

Vesper, Andrew Jay 27 September 2013 (has links)
This dissertation is composed of three chapters, each an application of Bayesian statistical models to particular research questions. In Chapter 1, we evaluate systemic risk exposure of financial institutions. Building upon traditional regime switching approaches, we propose a network model for volatility contagion to assess linkages between institutions in the financial system. Focusing empirical analysis on the financial sector, we find that network connectivity has dynamic properties, with linkages between institutions increasing immediately before the recent crisis. Out-of-sample forecasts demonstrate the ability of the model to predict losses during distress periods. We find that institutional exposure to crisis events depends upon the structure of linkages, not strictly the number of linkages. In Chapter 2, we develop procedures for benchmarking small area estimates. In sample surveys, precision can be increased by introducing small area models which "borrow strength" by incorporating auxiliary covariate information. One consequence of using small area models is that small area estimates at lower geographical levels typically will not aggregate to the estimate at the corresponding higher geographical levels. Benchmarking is the statistical procedure for reconciling these differences. Two new approaches to Bayesian benchmarking are introduced, one procedure based on Minimum Discrimination Information, and another for Bayesian self-consistent conditional benchmarking. Notably the proposed procedures construct adjusted posterior distributions whose moments all satisfy benchmarking constraints. In the context of the Fay-Herriot model, simulations are conducted to assess benchmarking performance. In Chapter 3, we exploit the Pair Copula Construction (PCC) to develop a flexible multivariate model for time-varying dependence. The PCC is an extremely flexible model for capturing complex, but static, multivariate dependency. 
We use a Bayesian framework to extend the PCC to account for time dynamic dependence structures. In particular, we model the time series of a transformation of parameters of the PCC as an autoregressive model, conducting inference using a Markov Chain Monte Carlo algorithm. We use financial data to illustrate empirical evidence for the existence of time dynamic dependence structures, show improved out-of-sample forecasts for our time dynamic PCC, and assess performance of dynamic PCC models for forecasting Value-at-Risk. / Statistics
10

A Comparison of Methods for Estimating State Subgroup Performance on the National Assessment of Educational Progress:

Bamat, David January 2021 (has links)
Thesis advisor: Henry Braun / The State NAEP program only reports the mean achievement estimate of a subgroup within a given state if it samples at least 62 students who identify with the subgroup. Since some subgroups of students constitute small proportions of certain states’ general student populations, these low-incidence groups of students are seldom sufficiently sampled to meet this rule-of-62 requirement. As a result, education researchers and policymakers are frequently left without a full understanding of how states are supporting the learning and achievement of different subgroups of students.Using grade 8 mathematics results in 2015, this dissertation addresses the problem by comparing the performance of three different techniques in predicting mean subgroup achievement on NAEP. The methodology involves simulating scenarios in which subgroup samples greater or equal to 62 are treated as not available for calculating mean achievement estimates. These techniques comprise an adaptation of Multivariate Imputation by Chained Equations (MICE), a common form of Small Area Estimation known as the Fay-Herriot model (FH), and a Cross-Survey analysis approach that emphasizes flexibility in model specification, referred to as Flexible Cross-Survey Analysis (FLEX CS) in this study. Data used for the prediction study include public-use state-level estimates of mean subgroup achievement on NAEP, restricted-use student-level achievement data on NAEP, public-use state-level administrative data from Education Week, the Common Core of Data, the U.S. Census Bureau, and public-use district-level achievement data in NAEP-referenced units from the Stanford Education Data Archive. To evaluate the accuracy of the techniques, a weighted measure of Mean Absolute Error and a coverage indicator quantify differences between predicted and target values. 
To evaluate whether a technique could be recommended for use in practice, accuracy measures for each technique are compared to benchmark values established as markers of successful prediction based on results from a simulation analysis with example NAEP data. Results indicate that both the FH and FLEX CS techniques may be suitable for use in practice and that the FH technique is particularly appealing. However, before definitive recommendations are made, the analyses from this dissertation should be conducted employing math achievement data from other years, as well as data from NAEP Reading. / Thesis (PhD) — Boston College, 2021. / Submitted to: Boston College. Lynch School of Education. / Discipline: Educational Research, Measurement and Evaluation.
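The Fay-Herriot model appears in several records above (Singh, Torkashvand, Vesper, Bamat). Its core is a composite estimate: a convex combination of the direct survey estimate and a regression-synthetic estimate, weighted by gamma_i = sigma_v^2 / (sigma_v^2 + D_i). A one-area sketch with hypothetical values:

```python
def fay_herriot_composite(direct, synthetic, D, sigma2_v):
    """Fay-Herriot composite estimate for one small area: combine the
    direct estimate with the regression-synthetic estimate using
    gamma = sigma2_v / (sigma2_v + D), where D is the known sampling
    variance of the direct estimate and sigma2_v the variance of the
    area-level random effects."""
    gamma = sigma2_v / (sigma2_v + D)
    return gamma * direct + (1.0 - gamma) * synthetic

# A noisy direct estimate (large D) is shrunk toward the synthetic
# value; a precise one (small D) keeps most of its own weight.
est = fay_herriot_composite(direct=10.0, synthetic=6.0, D=2.0, sigma2_v=2.0)
```

In practice sigma2_v is estimated from all areas jointly, which is how the model "borrows strength": areas with unreliable direct estimates lean on the regression fit built from the whole collection.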
