Although peer assessment is commonly used in MOOCs, there has been little empirical research on peer assessment in MOOCs, especially composition MOOCs. This study aimed to address issues in peer assessment in a MOOC-based composition course, specifically student perceptions of peer assessment, peer-grading scores versus instructor-grading scores, and peer commentary versus instructor commentary.

The findings provided evidence that peer assessment was well received by the majority of student participants, both in their role as peer evaluators of other students' papers and in their role as students being evaluated by their peers. However, many student participants also expressed negative feelings about certain aspects of peer assessment, for example peers' lack of qualifications, peers' negative and critical comments, and the unfairness of peer grading.

Statistical analysis of the grades given by student peers and by instructors revealed consistency among the grades given by peers but low consistency between the grades given by peers and those given by instructors, with peer grades tending to be higher than those assigned by instructors.

In addition, analysis of peer and instructor commentary revealed that peers' commentary differed from instructors' in the specific categories of writing issues addressed (idea development, organization, or sentence-level issues). For instance, on average peers devoted a greater percentage of their comments (70%) to sentence-level issues than did instructors (64.7%), though both groups devoted more comments to sentence-level issues than to the other two categories. Peers' commentary also differed from instructors' in the approaches their comments took to communicating the writing issue (through explanation, question, or correction). For example, in commenting on sentence-level errors, on average 85% of peers' comments included a correction, as compared to 96% of instructors' comments. In every comment category (idea development, organization, sentence-level), the percentage of peers' comments that included explanation was at least 10% lower than the corresponding percentage for instructors.

Overall, the findings and conclusions of the study are limited by (1) the small size of the composition MOOC studied and the small sample of graded papers used for the analysis, (2) the lack of prior research and the scarcity of archived documents on the issues the study discussed, (3) the lack of examination of factors (i.e., level of education, cultural background, and English language proficiency) that might affect student participants' perceptions of peer assessment, and (4) the lack of analysis of head notes, end notes, and comment length.

Nevertheless, the study contributes to the existing literature, especially regarding student perceptions of peer assessment in the composition MOOC studied. Analysis of the grades given by peers and instructors provides evidence-based information about whether online peer assessment should be used in MOOCs, especially composition MOOCs, and about what factors might affect the applicability and consistency of peer grading in MOOCs. In addition, analysis of the data provides insights into the types of comments students in a composition MOOC made as compared to those instructors made. Taken as a whole, the findings can inform the design of future research on peer assessment in composition MOOCs and suggest questions that designers of peer assessment training and practice in such MOOCs may find helpful to consider.
Identifier | oai:union.ndltd.org:siu.edu/oai:opensiuc.lib.siu.edu:dissertations-2398
Date | 01 May 2017
Creators | Vu, Lan Thi
Publisher | OpenSIUC
Source Sets | Southern Illinois University Carbondale
Detected Language | English
Type | text
Format | application/pdf
Source | Dissertations