1 |
Student Satisfaction Surveys and Nonresponse: Ignorable Survey, Ignorable Nonresponse. Boyer, Luc (January 2009)
With an increasing reliance on exit satisfaction surveys to measure how university alumni characterize their experiences during their degree program, it is uncertain whether satisfaction is sufficiently salient, for some alumni, to produce satisfaction scores that would distinguish respondents from nonrespondents.
This thesis explores whether, to what extent, and why nonresponse to student satisfaction surveys makes any difference to our understanding of student university experiences. A modified version of Michalos’ multiple discrepancies theory was used as the conceptual framework to ascertain which aspects of the student experience are likely to be nonignorable and which are likely to be ignorable. In recognition of the hierarchical structure of educational organizations, the thesis explores the impact of alumnus-level and departmental characteristics on nonresponse error, as well as the impact of survey protocols.
Nonignorable nonresponse was investigated using a multi-method approach. Quantitative analyses were based on a combined dataset gathered by the Graduate Student Exit Survey, conducted at each convocation over a period of three years. These data were compared against basic enrolment variables, departmental characteristics, and the public version of Statistics Canada’s National Graduate Survey. Analyses were conducted to ascertain whether nonresponse is nonignorable at both the descriptive and the analytical level (the form-resistant hypothesis). Qualitative analyses were based on nine cognitive interviews with both recent and soon-to-be alumni.
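As a purely illustrative sketch of the descriptive-level comparison just described, the following Python fragment contrasts the respondents' marginal distribution on one basic enrolment variable with that of the full graduating cohort; the file names, the 'faculty' variable, and the chi-square check are hypothetical stand-ins, not the thesis' actual data or procedure.

    # Hypothetical check of whether respondents resemble the frame on a
    # variable known for everyone (a descriptive-level nonignorability check).
    import pandas as pd
    from scipy.stats import chisquare

    frame = pd.read_csv("graduating_cohort.csv")             # all eligible alumni
    respondents = pd.read_csv("exit_survey_respondents.csv")

    frame_counts = frame["faculty"].value_counts().sort_index()
    resp_counts = respondents["faculty"].value_counts().reindex(
        frame_counts.index, fill_value=0)

    # Expected respondent counts if response were unrelated to faculty
    expected = frame_counts / frame_counts.sum() * resp_counts.sum()

    # A large, significant discrepancy suggests nonresponse is not ignorable
    # for this marginal distribution
    stat, p_value = chisquare(f_obs=resp_counts, f_exp=expected)
    print(f"chi-square = {stat:.1f}, p = {p_value:.3f}")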
Results were severely weakened by external and internal validity issues and are therefore indicative rather than conclusive. The findings suggest that nonrespondents differ from respondents, that satisfaction intensity is weakly related to response rate, and that the ensuing nonresponse error in the marginals can be classified, albeit not fully, as missing at random. The form-resistant hypothesis is unaffected by variations in response rates. Cognitive interviews confirmed the presence of measurement errors, which further weakens the case for nonignorability. An inadvertent methodological alignment of response-pool homogeneity, a misspecified conceptual model, measurement error (dilution), and a non-salient, bureaucratically inspired survey topic are proposed as the likely reasons for the findings of ignorability. Methodological and organizational implications of the results are also discussed.
2 |
Non-réponse totale dans les enquêtes de surveillance épidémiologique / Unit Nonresponse in Epidemiologic Surveillance Surveys. Santin, Gaëlle (9 February 2015)
Nonresponse occurs in most epidemiologic surveys and may generate selection bias (in this case, nonresponse bias) when it is related to the outcome variables. Epidemiologic surveillance, one of whose purposes is to estimate prevalences, often relies on survey sampling; unit nonresponse then arises, and methods from survey statistics can be used to correct for it. Nonresponse bias can be expressed as the product of the inverse of the response rate and the covariance between the probability of response and the outcome variable. Two types of solution are therefore generally available to reduce this bias. The first is to increase the response rate through appropriate strategies at the study design stage; maximizing the response rate, however, can introduce other kinds of bias, such as measurement bias. In the second, after data collection, information related a priori to both the outcome variables and the probability of response, and available for respondents as well as nonrespondents, is used to compute corrective factors. This solution requires information on the whole drawn sample (whether people responded or not), and such information is generally scarce. Recent possibilities for accessing administrative databases (in particular those of the health insurance system) open new perspectives in this respect.
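Written out in standard survey-sampling notation (ours, not the author's), with p_i the response probability of sampled unit i, p̄ the average response probability (the expected response rate), y_i the value of the outcome variable, and ȳ_r the respondent mean, this decomposition is approximately

    Bias(ȳ_r) ≈ (1 / p̄) · Cov(p_i, y_i)

so the bias vanishes, whatever the response rate, whenever the response probability is unrelated to the outcome; this is the sense in which nonresponse is said to be ignorable.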
The objectives of this work, centred on nonresponse bias, were to study the contribution of supplementary data (a complementary survey among nonrespondents and administrative databases) and to discuss the influence of the response rate on nonresponse error and measurement error. The analyses focused on the epidemiologic surveillance of occupational risks, using data at inclusion from the pilot phase of the Coset-MSA cohort. In this study, in addition to the data collected by questionnaire (the initial survey and a complementary survey among nonrespondents), auxiliary information from health and occupational administrative databases (SNIIR-AM and MSA) was available for both respondents and nonrespondents to the questionnaire survey.
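A minimal sketch, in Python, of the corrective-factor (weighting-class) adjustment evoked above, assuming an auxiliary variable taken from the administrative data and therefore known for every sampled person, respondent or not; the column names and the binary health outcome are hypothetical, not taken from the Coset-MSA study.

    # Hypothetical weighting-class nonresponse adjustment.
    import pandas as pd

    sample = pd.read_csv("drawn_sample.csv")   # full random sample, with a 0/1 'responded' flag
    # 'age_band' comes from the administrative databases (known for everyone);
    # 'chronic_condition' is observed only for respondents.

    # Response rate within each adjustment class
    class_rr = sample.groupby("age_band")["responded"].mean()

    resp = sample[sample["responded"] == 1].copy()
    # Corrective factor: inverse of the class response rate
    resp["nr_weight"] = 1.0 / resp["age_band"].map(class_rr)

    # Naive respondent-only prevalence versus the nonresponse-adjusted estimate
    naive = resp["chronic_condition"].mean()
    adjusted = (resp["chronic_condition"] * resp["nr_weight"]).sum() / resp["nr_weight"].sum()
    print(f"naive = {naive:.3f}, adjusted = {adjusted:.3f}")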
Results show that data from the initial survey (response rate: 24%), corrected for nonresponse using auxiliary information directly related to the survey's subject matter (health and work), yield prevalence estimates generally close to those obtained by combining the initial and complementary surveys (response rate: 63%) after nonresponse adjustment on the same auxiliary information. Seeking a maximal response rate by means of a complementary survey therefore does not appear necessary to reduce nonresponse bias. The study nevertheless highlights potential measurement biases that may be larger in the initial survey than in the complementary survey. The specific study of the trade-off between nonresponse error and measurement error shows that, for the variables that could be examined and after correction for nonresponse, the sum of the nonresponse error and the measurement error is equivalent in the initial survey and in the combined surveys (initial plus complementary). This work illustrates the value of administrative databases for reducing nonresponse error and for studying measurement error in an epidemiologic surveillance survey.
3 |
Developing a New Mixed-Mode Methodology For a Provincial Park Camper Survey in British Columbia. Dyck, Brian Wesley (8 July 2013)
Park and resource management agencies are looking for less costly ways to undertake park visitor surveys. The use of the Internet is often suggested as a way to reduce the costs of these surveys. By itself, however, the use of the Internet for park visitor surveys faces a number of methodological challenges that include the potential for coverage error, sampling difficulties and nonresponse error. A potential way of addressing these challenges is the use of a mixed-mode approach that combines the use of the Internet with another survey mode. The procedures for such a mixed-mode approach, however, have not been fully developed and evaluated.
This study develops and evaluates a new mixed-mode approach, a face-to-face/web response, for a provincial park camper survey in British Columbia. The five key steps of this approach are: (a) selecting a random sample of occupied campsites; (b) undertaking a short interview with potential respondents; (c) obtaining an email address at the end of the interview; (d) distributing a postcard to potential respondents that contains the website and an individual access code; and (e) undertaking email follow-ups with nonrespondents.
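A small hypothetical sketch of steps (d) and (e), generating the individual access codes printed on the postcards and selecting nonrespondents for an email follow-up wave; the data structure and column names are illustrative only, not the study's actual system.

    # Hypothetical illustration of access-code assignment and follow-up selection.
    import secrets
    import pandas as pd

    campers = pd.DataFrame({
        "email": ["a@example.com", "b@example.com", "c@example.com"],
        "completed_web_survey": [False, True, False],
    })

    # One short, unique access code per interviewed camper (printed on the postcard)
    campers["access_code"] = [secrets.token_hex(4) for _ in range(len(campers))]

    # Email follow-ups go only to those who have not yet completed the web survey
    for _, row in campers[~campers["completed_web_survey"]].iterrows():
        print(f"send reminder to {row['email']} with code {row['access_code']}")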
In evaluating this new approach, two experiments were conducted during the summer of 2010. The first experiment was conducted at Goldstream Provincial Park campground and was designed to compare a face-to-face/paper response with a face-to-face/web response on several sources of survey error and on costs. The second experiment was conducted at 12 provincial park campgrounds throughout British Columbia and was designed to examine the potential for coverage error and the effect of a number of email follow-ups on return rates, nonresponse error and the substantive results.
Taken together, these experiments indicate: a low potential for coverage error (a 4% rate of Internet non-use); a high email collection rate for follow-ups (99% at Goldstream; a combined rate of 88% across the 12 campgrounds); similar return rates for the paper mode (60%) and the web mode (59%); that two email follow-ups reduced nonresponse error for a key variable (geographic location of residence) but not for all variables; low item nonresponse for both mixed modes (about 1%); very few differences in the substantive results between follow-ups; and a 9% cost saving for the web mode. This study suggests that a face-to-face/web approach can be a viable way to undertake park visitor surveys when Internet coverage among park visitors is high.