
Validating the Rating Process of an English as a Second Language Writing Portfolio Exam

A validity study can be used to investigate the effectiveness of an exam and to reveal both its strengths and its weaknesses. This study investigates the writing portfolio Level Achievement Test (LAT) at the English Language Center (ELC) of Brigham Young University (BYU). The writing portfolios of 251 students at five proficiency levels were rated by 11 raters. Each portfolio consisted of two coursework essays, a self-reflection assignment, and a 30-minute timed essay. Quantitative methods included a Many-Facet Rasch Model (MFRM) analysis, conducted with the FACETS software, that looked for anomalies across levels, classes, examinees, raters, writing criteria, and rating scale categories. Qualitative methods involved a rater survey, rater Think Aloud Protocols (TAPs), and rater interviews. The MFRM analysis indicated that the exam has a high degree of validity. The survey and TAPs revealed that although raters followed a similar pattern when rating portfolios, they differed both in the time they took to rate and in the degree to which they favored particular rating criteria, which may explain some of the discrepancies in the MFRM rater analysis. Conclusions from the MFRM analysis, surveys, TAPs, and interviews were used to make recommendations for improving the rating process of the LAT and for strengthening the relationship between LAT rating and classroom teaching and grading.
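For context, the many-facet Rasch model that FACETS implements is commonly written as follows; this is a standard formulation from the Rasch measurement literature, not an equation quoted from the thesis itself:

\[ \ln\!\left(\frac{P_{nijk}}{P_{nij(k-1)}}\right) = B_n - D_i - C_j - F_k \]

where \(P_{nijk}\) is the probability that examinee \(n\) receives scale category \(k\) on writing criterion \(i\) from rater \(j\), \(B_n\) is the examinee's ability, \(D_i\) is the difficulty of the criterion, \(C_j\) is the severity of the rater, and \(F_k\) is the difficulty of scale category \(k\) relative to category \(k-1\). Under this model, anomalies of the kind described above, such as unusually severe or lenient raters, surface as extreme or misfitting \(C_j\) estimates.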

Identifier: oai:union.ndltd.org:BGMYU2/oai:scholarsarchive.byu.edu:etd-1454
Date: 29 June 2006
Creators: McCollum, Robb Mark
Publisher: BYU ScholarsArchive
Source Sets: Brigham Young University
Detected Language: English
Type: text
Format: application/pdf
Source: Theses and Dissertations
Rights: http://lib.byu.edu/about/copyright/
