  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Nonresponse in industrial censuses in developing countries : some proposals for the correction of biased estimators

Sogunro, Babatunde Oluwasegun January 1988 (has links)
No description available.
2

"The Application of Multiple Imputation in Correcting for Unit Nonresponse Bias"

Arntsen, Stian Fagerli January 2010 (has links)
No description available.
3

Student Satisfaction Surveys and Nonresponse: Ignorable Survey, Ignorable Nonresponse

Boyer, Luc January 2009 (has links)
With an increasing reliance on satisfaction exit surveys to measure how university alumni qualify their experiences during their degree program, it is uncertain whether satisfaction is sufficiently salient, for some alumni, to generate distinguishable satisfaction scores between respondents and nonrespondents. This thesis explores whether, to what extent, and why nonresponse to student satisfaction surveys makes any difference to our understanding of student university experiences. A modified version of Michalos’ multiple discrepancies theory was utilized as the conceptual framework to ascertain which aspects of the student experience are likely to be nonignorable, and which are likely to be ignorable. In recognition of the hierarchical structure of educational organizations, the thesis explores the impact of alumnus and departmental characteristics on nonresponse error. The impact of survey protocols on nonresponse error is also explored. Nonignorable nonresponse was investigated using a multi-method approach. Quantitative analyses were based on a combined dataset gathered by the Graduate Student Exit Survey, conducted at each convocation over a period of three years. These data were compared against basic enrolment variables, departmental characteristics, and the public version of Statistics Canada’s National Graduate Survey. Analyses were conducted to ascertain whether nonresponse is nonignorable at the descriptive and analytical levels (form resistant hypothesis). Qualitative analyses were based on nine cognitive interviews from both recent and soon-to-be alumni. Results were severely weakened by external and internal validity issues, and are therefore indicative but not conclusive. The findings suggest that nonrespondents are different from respondents, satisfaction intensity is weakly related to response rate, and that the ensuing nonresponse error in the marginals can be classified, albeit not fully, as missing at random.
The form resistant hypothesis remains unaffected for variations in response rates. Cognitive interviews confirmed the presence of measurement errors which further weakens the case for nonignorability. An inadvertent methodological alignment of response pool homogeneity, a misspecified conceptual model, measurement error (dilution), and a non-salient, bureaucratically-inspired, survey topic are proposed as the likely reasons for the findings of ignorability. Methodological and organizational implications of the results are also discussed.
5

Estimating the Effect of Nonresponse Bias in a Survey of Hospital Organizations

Lewis, Emily F., Hardy, Maryann L., Snaith, Beverly 01 August 2013 (has links)
Nonresponse bias in survey research can result in misleading or inaccurate findings, and assessment of nonresponse bias is advocated to determine response sample representativeness. Four methods of assessing nonresponse bias (analysis of known characteristics of a population, subsampling of nonresponders, wave analysis, and linear extrapolation) were applied to the results of a postal survey of U.K. hospital organizations. The purpose was to establish whether validated methods for assessing nonresponse bias at the individual level can be successfully applied to an organizational level survey. The aim of the initial survey was to investigate trends in the implementation of radiographer abnormality detection schemes, and a response rate of 63.7% (325/510) was achieved. This study identified conflicting trends in the outcomes of analysis of nonresponse bias between the different methods applied, and we were unable to validate the continuum of resistance theory as applied to organizational survey data. Further work is required to ensure established nonresponse bias analysis approaches can be successfully applied to organizational survey data. Until then, it is suggested that a combination of methods should be used to enhance the rigor of survey analysis.
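Two of the four assessment methods this abstract names, wave analysis and linear extrapolation, can be sketched in a few lines. The sketch below is illustrative only; the wave means are invented and are not data from the study:

```python
# Illustrative sketch of wave analysis / linear extrapolation for
# nonresponse bias assessment (hypothetical data, not the study's).
import statistics

# Mean outcome score among respondents grouped by response wave
# (wave 1 = replied to first mailing, wave 3 = replied after two reminders).
wave_means = {1: 4.2, 2: 3.9, 3: 3.6}

# Wave analysis: the latest responders are treated as proxies for
# nonrespondents, so the last wave's mean stands in for them.
late_wave_mean = wave_means[max(wave_means)]

# Linear extrapolation: fit mean = a + b * wave by least squares, then
# project one wave beyond the last observed wave ("the wave that never
# responded").
xs = list(wave_means)
ys = [wave_means[w] for w in xs]
x_bar, y_bar = statistics.mean(xs), statistics.mean(ys)
b = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys)) / \
    sum((x - x_bar) ** 2 for x in xs)
a = y_bar - b * x_bar
extrapolated_nonrespondent_mean = a + b * (max(xs) + 1)

print(late_wave_mean)                             # 3.6
print(round(extrapolated_nonrespondent_mean, 2))  # 3.3
```

If the two estimates (and the respondent mean) disagree, as this study found across methods, that is itself a signal that the bias assessment is method-dependent.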
6

A Bayesian Analysis of BMI Data of Children from Small Domains: Adjustment for Nonresponse

Zhao, Hong 21 December 2006 (has links)
"We analyze data on body mass index (BMI) in the third National Health and Nutrition Examination survey, predict finite population BMI stratified by different domains of race, sex and family income, and investigate what adjustment needed for nonresponse mechanism. We built two types of models to analyze the data. In the ignorable nonresponse models, each model is within the hierarchical Bayesian framework. For Model 1, BMI is only related to age. For Model 2, the linear regression is height on weight, and weight on age. The parameters, nonresponse and the nonsampled BMI values are generated from each model. We mainly use the composition method to obtain samples for Model 1, and Gibbs sampler to generate samples for Model 2. We also built two nonignorable nonresponse models corresponding to the ignorable nonresponse models. Our nonignorable nonresponse models have one important feature: the response indicators are not related to BMI and neither weight nor height, but we use the same parameters corresponding to the ignorable nonresponse models. We use sample important resampling (SIR) algorithm to generate parameters and nonresponse, nonsample values. Our results show that the ignorable nonresponse Model 2 (modeling height and weight) is more reliable than Model 1 (modeling BMI), since the predicted finite population mean BMI of Model 1 changes very little with age. The predicted finite population mean of BMI is affected by different domain of race, sex and family income. Our results also show that the nonignorable nonresponse models infer smaller standard deviation of regression coefficients and population BMI than in the ignorable nonresponse models. It is due to the fact that we are incorporating information from the response indicators, and there are no additional parameters. Therefore, the nonignorable nonresponse models allow wider inference."
7

A Comparison of the Impact of Two Different Levels of Item Response Effort Upon the Return Rate of Mailed Questionnaires

Rodgers, Philip L. 01 May 1997 (has links)
Mail questionnaires are a popular and valuable method of data collection. Nonresponse bias is, however, a potentially serious threat to their validity. The best way to combat this threat is to obtain the highest possible return rate. To this end, many factors that are believed to influence return rates have been empirically studied. One factor that has not been empirically examined is the impact of item response effort on return rates, where response effort is defined as the amount of effort that is required by a respondent to answer questionnaire items. The purpose of this study was to determine if the type of item response effort required to complete a questionnaire had any differential impact on the response rate of a mailed questionnaire. For this study, two questionnaires that differed only in the level of item response effort were sent to two randomly selected and assigned groups. The first group received a mailed questionnaire with seven questions that were answered by a simple item response type (5-point Likert scale). The second group received a mailed questionnaire with seven questions that required a more difficult item response type (short answer). A large difference between the return rates of the two questionnaires was observed, with the questionnaire containing questions that could be answered on a Likert scale having a higher return rate (56%) than the questionnaire containing questions requiring a short written response (30%). The results of this study provide evidence that the difficulty of item response effort affects the response rate of mailed questionnaires. The practical application of this finding is that researchers should endeavor to keep the types of item response on mailed questionnaires as simple as possible, to maximize response rates (unless, of course, the needed information can only be elicited by providing written responses).
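The size of the reported return-rate gap (56% vs. 30%) can be checked with a two-proportion z-test. The group sizes below are assumptions for illustration, since the abstract does not report them:

```python
# Hedged check of the reported return-rate gap using a two-proportion
# z-test; the group sizes n1 and n2 are assumed, not taken from the study.
import math

n1, p1 = 100, 0.56   # Likert-scale group (size assumed)
n2, p2 = 100, 0.30   # short-answer group (size assumed)

# Pooled proportion under the null hypothesis of equal return rates.
p_pool = (p1 * n1 + p2 * n2) / (n1 + n2)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
z = (p1 - p2) / se
print(round(z, 2))  # 3.71, well above the 1.96 cutoff at the 5% level
```

Even under these modest assumed sample sizes, a 26-point gap is far too large to attribute to chance, consistent with the study's conclusion that response effort matters.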
8

A Monte Carlo Study of Single Imputation in Survey Sampling

Xu, Nuo January 2013 (has links)
Missing values in sample surveys can lead to biased estimation if not treated. Imputation has been posed as a popular way to deal with missing values. In this paper, based on Särndal's (1994, 2005) research, a Monte Carlo simulation is conducted to study how the estimators work in different situations and how different imputation methods work for different response distributions.
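The kind of Monte Carlo comparison this abstract describes can be sketched as follows. This is a minimal illustration of single (mean) imputation under an MCAR response mechanism, not Särndal's estimators, and the population parameters are invented:

```python
# A minimal Monte Carlo sketch of single (mean) imputation under MCAR
# nonresponse; illustrative only, not the paper's simulation design.
import random

random.seed(42)
population = [random.gauss(50, 10) for _ in range(10_000)]
true_mean = sum(population) / len(population)

n, response_rate, reps = 200, 0.7, 500
errors = []
for _ in range(reps):
    sample = random.sample(population, n)
    # MCAR: each sampled unit responds with the same probability,
    # independently of its value.
    respondents = [y for y in sample if random.random() < response_rate]
    # Single imputation: replace every missing value with the respondent
    # mean, then estimate as if the sample were complete.
    r_mean = sum(respondents) / len(respondents)
    imputed = respondents + [r_mean] * (n - len(respondents))
    errors.append(sum(imputed) / n - true_mean)

bias = sum(errors) / reps
print(round(bias, 2))  # close to 0: mean imputation is unbiased under MCAR
```

Replacing the MCAR response draw with one that depends on the value of y would reproduce the biased scenarios such simulations are designed to expose.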
9

Auxiliary variables, a weight against nonresponse bias : A simulation study

Lindberg, Mattias, Guban, Peter January 2014 (has links)
Today’s surveys face a growing problem of increasing nonresponse. Rising nonresponse rates create a need for better and more effective ways to reduce nonresponse bias. There are three major scientific orientations in today’s research dealing with nonresponse: one examines social factors, the second studies different data collection methods, and the third investigates the use of weights to adjust estimators for nonresponse. We contribute to the third orientation by using simulation to evaluate estimators that use and adjust weights based on auxiliary variables to balance survey nonresponse. For the simulation we use an artificial population of 35,455 participants from the Representativity Indicators for Survey Quality project. We model three nonresponse mechanisms (MCAR, MAR and MNAR) with three different coefficients of determination between our study variable and the auxiliary variables, under three response rates, resulting in 63 simulation scenarios. Each scenario is replicated 1,000 times. We outline our findings and results for each estimator in all scenarios with the help of bias measures.
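The weighting adjustments this abstract evaluates can be illustrated with the simplest member of the family, a weighting-class estimator under a MAR mechanism. The population, classes, and response model below are invented for illustration and are not the thesis's simulation design:

```python
# Hedged sketch of a weighting-class nonresponse adjustment using one
# auxiliary variable; all data-generating choices here are hypothetical.
import random

random.seed(1)

# x is an auxiliary variable known for every sampled unit; y is the study
# variable, observed for respondents only. Response depends on x (MAR).
sample = []
for _ in range(5_000):
    x = random.choice([0, 1])
    y = 30 + 20 * x + random.gauss(0, 5)
    responds = random.random() < (0.4 if x == 0 else 0.8)
    sample.append((x, y, responds))

full_mean = sum(y for _, y, _ in sample) / len(sample)
resp = [(x, y) for x, y, r in sample if r]

# Unadjusted respondent mean: biased upward, because high-y units (x == 1)
# respond more often.
naive = sum(y for _, y in resp) / len(resp)

# Weighting-class estimator: within each class of x, weight respondents by
# (sampled units in the class) / (respondents in the class).
adjusted_total = 0.0
for cls in (0, 1):
    n_cls = sum(1 for x, _, _ in sample if x == cls)
    y_cls = [y for x, y in resp if x == cls]
    adjusted_total += (n_cls / len(y_cls)) * sum(y_cls)
weighted = adjusted_total / len(sample)

print(round(naive - full_mean, 2), round(weighted - full_mean, 2))
```

Under MAR the weighted estimate lands much closer to the full-sample mean than the naive respondent mean; under MNAR (response depending on y itself, even given x) this correction would no longer suffice, which is why the thesis compares mechanisms.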
10

Bayesian Analysis of Crime Survey Data with Nonresponse

Liu, Shiao 26 April 2018 (has links)
Bayesian hierarchical models are effective tools for small area estimation by pooling small datasets together. The pooling procedures allow individual areas to “borrow strength” from each other to desirably improve the estimation. This work is an extension of Nandram and Choi (2002), NC, to perform inference on finite population proportions when there exists non-identifiability of the missing pattern for nonresponse in binary survey data. We review the small-area selection model (SSM) in NC which is able to incorporate the non-identifiability. Moreover, the proposed SSM, together with the individual-area selection model (ISM), and the small-area pattern-mixture model (SPM) are evaluated by real crime data in Stasny (1991). Furthermore, the methodology is compared to ISM and SPM using simulated small area datasets. Computational issues related to the MCMC are also discussed.
