
Many Hands Make Light Work: Crowdsourced Ratings of Medical Student OSCE Performance

Clinical skills are often measured using objective structured clinical examinations (OSCEs) in healthcare professions education programs. As with assessment centers, providing learners with effective feedback is challenging because of burdensome human capital demands. The aim of this dissertation was to evaluate the viability of using a crowdsourced system to gather OSCE ratings and feedback. Aggregating evaluations of student performance from a crowd of patient proxies has the potential to mitigate biases associated with single-rater evaluations, give the patient a voice as the consumer of physician behavior, improve reliability, reduce costs, reduce feedback latency, and help learners develop a mental model of the diversity of patient preferences. Crowd raters, recruited through Amazon Mechanical Turk, evaluated a set of video-recorded performance episodes designed to measure interpersonal and communication skills (ICS) and physical exam (PE) skills. Compared to standardized patient (SP) and faculty raters, crowd raters were more lenient and less reliable when the number of raters and spending were held constant. However, small groups of crowd raters were able to reach acceptable levels of reliability. Crowd ratings were collected within a matter of hours, whereas SP and faculty ratings were returned in over 10 days. Learner reactions to crowdsourced ratings were also measured. Blind to the rater source, a majority of learners preferred the crowdsourced feedback packages over the SP and faculty packages. After learning about the potential value of crowdsourced ratings, learners were positive about crowd ratings as a complement to SP and faculty ratings, but only for evaluations of ICS (not PE) and only for formative (not summative) applications. In particular, students valued the volume and diversity of the crowdsourced feedback and the opportunity to better understand the patient perspective. Students also expressed concerns about privacy and about the accuracy and quality of crowd ratings. A discussion of practical implications considers future best practices for a crowdsourced OSCE rating and feedback system.
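
The finding that small groups of crowd raters can reach acceptable reliability reflects the general psychometric result that averaging across raters increases score reliability. The dissertation's exact reliability model is not described in this abstract; purely as an illustration, the short Python sketch below applies the Spearman-Brown prophecy formula with a hypothetical single-rater reliability to show how the projected reliability of an averaged crowd rating grows as raters are added.

    # Illustrative sketch only: the single-rater reliability below is a
    # hypothetical placeholder, not a figure reported in the dissertation.
    def spearman_brown(single_rater_reliability: float, n_raters: int) -> float:
        """Projected reliability of the mean of n_raters parallel ratings
        (Spearman-Brown prophecy formula)."""
        r = single_rater_reliability
        return (n_raters * r) / (1 + (n_raters - 1) * r)

    if __name__ == "__main__":
        r_single = 0.30  # assumed reliability of a single crowd rater
        for k in (1, 3, 5, 10, 20):
            print(f"{k:>2} raters -> projected reliability "
                  f"{spearman_brown(r_single, k):.2f}")

With an assumed single-rater reliability of 0.30, for example, the formula projects roughly 0.56 for 3 raters and 0.81 for 10, which is why even modest crowd sizes can reach conventional reliability thresholds.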

Identifier: oai:union.ndltd.org:USF/oai:scholarcommons.usf.edu:etd-7903
Date: 04 April 2017
Creators: Grichanik, Mark
Publisher: Scholar Commons
Source Sets: University of South Florida
Detected Language: English
Type: text
Format: application/pdf
Source: Graduate Theses and Dissertations
Rights: default
