Crowdsourcing has gained favor among social scientists as a data-collection method because it is both time- and resource-efficient. The present study uses a within-subject test-retest design to evaluate the psychometric characteristics of crowdsourced samples for developing and field testing measurement instruments. As evidenced by similar patterns of psychometric characteristics across time, strong test-retest reliability, and low failure rates on attention check items, the results of this study indicate that Amazon Mechanical Turk may represent a fruitful platform for field testing to support the development of a variety of measures. These findings, in turn, have significant implications for resource efficiency in the fields of educational and organizational measurement.
Identifier | oai:union.ndltd.org:uiowa.edu/oai:ir.uiowa.edu:etd-8413 |
Date | 01 May 2019 |
Creators | Wetherell, Emily Michelle |
Contributors | Dunbar, Stephen B. |
Publisher | University of Iowa |
Source Sets | University of Iowa |
Language | English |
Detected Language | English |
Type | thesis |
Format | application/pdf |
Source | Theses and Dissertations |
Rights | Copyright © 2019 Emily Michelle Wetherell |