1. A Comparison of the Impact of Two Different Levels of Item Response Effort Upon the Return Rate of Mailed Questionnaires. Rodgers, Philip L., 01 May 1997.
Mail questionnaires are a popular and valuable method of data collection. Nonresponse bias, however, is a potentially serious threat to their validity. The best way to combat this threat is to obtain the highest possible return rate. To this end, many factors believed to influence return rates have been studied empirically. One factor that has not been empirically examined is the impact of item response effort on return rates, where response effort is defined as the amount of effort a respondent must expend to answer questionnaire items.
The purpose of this study was to determine if the type of item response effort required to complete a questionnaire had any differential impact on the response rate of a mailed questionnaire. For this study, two questionnaires that differed only in the level of item response effort were sent to two randomly selected and assigned groups. The first group received a mailed questionnaire with seven questions that were answered by a simple item response type (5-point Likert scale). The second group received a mailed questionnaire with seven questions that required a more difficult item response type (short answer).
A large difference between the return rates of the two questionnaires was observed, with the questionnaire containing questions that could be answered on a Likert scale having a higher return rate (56%) than the questionnaire containing questions requiring a short written response (30%). The results of this study provide evidence that the difficulty of item response effort affects the response rate of mailed questionnaires. The practical application of this finding is that researchers should endeavor to keep the types of item response on mailed questionnaires as simple as possible, to maximize response rates (unless, of course, the needed information can only be elicited by providing written responses).
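A return-rate gap like the one reported above (56% vs. 30%) can be checked with a two-proportion z-test. The sketch below is illustrative only: the group sizes (100 per group) are hypothetical, since the abstract does not report them.

```python
import math

def two_proportion_z(successes1, n1, successes2, n2):
    """z statistic for the difference between two independent proportions."""
    p1, p2 = successes1 / n1, successes2 / n2
    pooled = (successes1 + successes2) / (n1 + n2)   # pooled response proportion
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Likert-scale group (56 of 100 returned) vs. short-answer group (30 of 100):
z = two_proportion_z(56, 100, 30, 100)
print(round(z, 2))  # roughly 3.7, well beyond the conventional 1.96 cutoff
```

At these assumed group sizes, a z value of this magnitude would indicate that a 26-percentage-point gap is very unlikely to be sampling noise.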
2. Estimating the Effect of Nonresponse Bias in a Survey of Hospital Organizations. Lewis, Emily F., Hardy, Maryann L., Snaith, Beverly, 01 August 2013.
Nonresponse bias in survey research can result in misleading or inaccurate findings, and assessment of nonresponse bias is advocated to determine how representative the response sample is. Four methods of assessing nonresponse bias (analysis of known characteristics of a population, subsampling of nonresponders, wave analysis, and linear extrapolation) were applied to the results of a postal survey of U.K. hospital organizations. The purpose was to establish whether methods validated for assessing nonresponse bias at the individual level can be successfully applied to an organizational-level survey. The aim of the initial survey was to investigate trends in the implementation of radiographer abnormality detection schemes, and a response rate of 63.7% (325/510) was achieved. This study identified conflicting trends in the outcomes of nonresponse bias analysis between the different methods applied, and we were unable to validate the continuum of resistance theory as applied to organizational survey data. Further work is required to ensure established nonresponse bias analysis approaches can be successfully applied to organizational survey data. Until then, it is suggested that a combination of methods be used to enhance the rigor of survey analysis.
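The linear-extrapolation method named above can be illustrated with a minimal sketch: fit a least-squares line to a characteristic's mean across successive response waves, then project it one wave beyond the last, treating nonresponders as the "final wave" under the continuum-of-resistance assumption. The wave means below are invented for illustration and are not data from the survey itself.

```python
def extrapolate_to_nonresponders(wave_means):
    """Least-squares line over waves 1..k, evaluated at wave k+1."""
    k = len(wave_means)
    xs = list(range(1, k + 1))
    x_bar = sum(xs) / k
    y_bar = sum(wave_means) / k
    slope = (sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, wave_means))
             / sum((x - x_bar) ** 2 for x in xs))
    intercept = y_bar - slope * x_bar
    return intercept + slope * (k + 1)   # projected mean for nonresponders

# e.g. a proportion that declines across three mail-out waves:
print(round(extrapolate_to_nonresponders([0.60, 0.55, 0.50]), 2))  # 0.45
```

Comparing the projected nonresponder value with the respondent mean gives a rough sense of the direction and size of the bias, which is one of the methods the study above set out to compare.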
3. Auxiliary variables a weight against nonresponse bias: A simulation study. Lindberg, Mattias, Guban, Peter, January 2014.
Today’s surveys face a growing problem of increasing nonresponse. Rising nonresponse rates create a need for better and more effective ways to reduce nonresponse bias. There are three major scientific orientations in today’s research on nonresponse: one examines social factors, the second studies different data collection methods, and the third investigates the use of weights to adjust estimators for nonresponse. We contribute to the third orientation by using simulations to evaluate estimators that adjust weights based on auxiliary variables to balance survey nonresponse. For the simulation we use an artificial population of 35,455 participants from the Representativity Indicators for Survey Quality project. We model three nonresponse mechanisms (MCAR, MAR, and MNAR), with three different coefficients of determination between our study variable and the auxiliary variables, and under three response rates, resulting in 63 simulation scenarios. Each scenario is replicated 1,000 times, and we outline the findings for each estimator in all scenarios with the help of bias measures.
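The three response mechanisms named above can be illustrated with a toy generator. Everything below (the population, the response probabilities, the seed) is invented for demonstration and is unrelated to the RISQ data used in the study; it only shows why MNAR produces bias that an auxiliary variable cannot fully correct.

```python
import random

random.seed(1)
# Artificial population: study variable y, auxiliary variable x correlated with y.
pop = [{"y": random.gauss(50, 10)} for _ in range(1000)]
for unit in pop:
    unit["x"] = unit["y"] + random.gauss(0, 5)

def respond(unit, mechanism, base_rate=0.6):
    """Simulate a response indicator under one of three mechanisms."""
    if mechanism == "MCAR":                      # independent of everything
        p = base_rate
    elif mechanism == "MAR":                     # depends only on observed x
        p = base_rate + 0.3 * (unit["x"] > 50) - 0.15
    else:                                        # MNAR: depends on y itself
        p = base_rate + 0.3 * (unit["y"] > 50) - 0.15
    return random.random() < p

respondents = [u for u in pop if respond(u, "MNAR")]
bias = (sum(u["y"] for u in respondents) / len(respondents)
        - sum(u["y"] for u in pop) / len(pop))
print(round(bias, 2))   # under MNAR the respondent mean is pulled above the population mean
```

Repeating such a draw many times per scenario, as the study replicates each scenario 1,000 times, lets the bias of different weighting estimators be compared.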
4. Epidemiological Analysis of SARS-CoV-2: Three Papers Examining Health Status, Response Bias, and Strategies for Engagement. Duszynski, Thomas J., 02 1900.
Indiana University-Purdue University Indianapolis (IUPUI)
The emergence of the global SARS-CoV-2 pandemic had a tremendous impact on humanity beginning in late 2019. Public health researchers at the Indiana University Richard M. Fairbanks School of Public Health responded by conducting research into the etiological profile of the virus, including a large Indiana statewide population-based prevalence study in early 2020.
Methods
Data on demographics, tobacco use, health status, and reasons for participating in the population prevalence study were used to conduct three retrospective cross-sectional studies. The first study assessed the association of self-reported health and tobacco behaviors with COVID-19 infection (n = 8,241). The second study used successive-wave analysis to assess nonresponse bias (n = 3,658). Finally, participant demographics were characterized by response mode (text, email, phone call, or postcard) and by the number of prompts needed to elicit participation (n = 3,658).
Results
The first study found that those reporting "poor," "fair," or "good" self-identified health status had a higher risk of past or current infection than those reporting "very good" or "excellent" health status (p < 0.02). Positive smoking status was inversely associated with SARS-CoV-2 infection (p < 0.001). When assessing the sample for nonresponse bias (n = 3,658), 40.9% responded in wave 1 of recruitment, 34.1% in wave 2, and 25.0% in wave 3, for an overall participation rate of 23.6%. There were no significant differences in response by wave across demographics, recent exposure, or reasons for participating. In the final study, females made up 54.6% of the sample and responded at a higher rate than males to postcards (8.2% vs. 7.5%) and texts/emails (28.1% vs. 24.6%; χ² = 7.43, p = 0.025), and responded at a higher percentage after one contact (21.4% vs. 17.9%; χ² = 7.6, p = 0.023).
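The χ² statistics reported above come from tests of independence on contingency tables. A minimal sketch of Pearson's χ² statistic follows; the example counts are hypothetical, since the abstract reports only percentages and the resulting test statistics.

```python
def chi_square(table):
    """Pearson chi-square statistic for a 2-D contingency table."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand
            stat += (observed - expected) ** 2 / expected
    return stat

# Hypothetical female/male counts by contact mode (postcard, text/email, phone):
print(round(chi_square([[164, 562, 274], [136, 448, 330]]), 2))
```

Comparing the statistic against a χ² distribution with (rows − 1) × (columns − 1) degrees of freedom gives the p-values quoted above.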
Conclusion
This research contributed to the scientific understanding of the etiological picture of SARS-CoV-2. Additionally, it demonstrated a novel method that public health practitioners can easily implement to detect nonresponse bias in primary data collection without advanced statistical methods. Finally, it allows researchers to focus not only on the modality of inviting participants but also on the frequency of invitations needed to secure specific populations, reducing time and resources.