
Estimating the Reliability of Concept Map Ratings Using a Scoring Rubric Based on Three Attributes

Jimenez, Laura (16 July 2010)
Concept maps provide a way to assess how well students have developed an organized understanding of how the concepts taught in a unit are interrelated and fit together. However, concept maps are challenging to score because of the idiosyncratic ways in which students organize their knowledge (McClure, Sonak, & Suen, 1999). The "construct a map," or C-mapping, task has been shown to capture students' organized understanding. This C-mapping task involves giving students a list of concepts and asking them to produce a map showing how these concepts are interrelated. The purpose of this study was twofold: (a) to determine to what extent the restricted C-mapping technique, coupled with the threefold scoring rubric, produced reliable ratings of students' conceptual understanding on two examinations, and (b) to project how the reliability of the mean ratings for individual students would likely vary as a function of the average number of raters and rating occasions for the two examinations. Nearly three-fourths (73%) of the variability in the ratings for one exam, and 43% of the variability for the other, was due to dependable differences in the students' understanding detected by the raters. Rater inconsistencies were higher for one exam and somewhat lower for the other. The person-by-rater interaction was relatively small for one exam and somewhat higher for the other. The rater-by-occasion variance components were zero for both exams. Unexplained variance accounted for 19% of the variability on one exam and 14% on the other. The size of the reliability coefficient of student concept map scores varied across the two examinations. Reliabilities of .95 and .93 for relative and absolute decisions, respectively, were obtained for one exam; reliabilities of .88 and .78 for relative and absolute decisions were obtained for the other. Increasing the number of raters from one to two on a single rating occasion would yield a greater increase in the reliability of the ratings, at a lower cost, than increasing the number of rating occasions. The same pattern holds for both exams.
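The projection described in (b) reads as a generalizability-theory decision study (D study). The sketch below, assuming the two-facet person × rater × occasion design the abstract's language suggests, shows how relative and absolute reliability coefficients are computed from variance components and why adding raters raises reliability; the `variance` values are illustrative placeholders, not the estimates reported in the thesis.

```python
# Minimal D-study sketch for a person x rater x occasion design.
# The variance components are hypothetical; they only mirror the pattern
# the abstract reports (large person variance, zero rater x occasion).

def g_coefficients(var, n_r, n_o):
    """Return (relative, absolute) reliability for n_r raters and n_o occasions."""
    p = var["p"]
    # Error entering relative (norm-referenced) decisions: person-facet
    # interactions, each divided by the number of conditions sampled.
    rel_error = (var["pr"] / n_r
                 + var["po"] / n_o
                 + var["pro,e"] / (n_r * n_o))
    # Absolute (criterion-referenced) decisions also count the facet main
    # effects and the rater x occasion interaction as error.
    abs_error = (rel_error
                 + var["r"] / n_r
                 + var["o"] / n_o
                 + var["ro"] / (n_r * n_o))
    return p / (p + rel_error), p / (p + abs_error)

variance = {          # hypothetical variance components
    "p": 0.73,        # persons (dependable differences among students)
    "r": 0.03,        # raters
    "o": 0.01,        # occasions
    "pr": 0.04,       # person x rater
    "po": 0.02,       # person x occasion
    "ro": 0.00,       # rater x occasion (zero for both exams in the study)
    "pro,e": 0.17,    # residual / unexplained
}

for n_r in (1, 2):
    rel, abs_ = g_coefficients(variance, n_r=n_r, n_o=1)
    print(f"{n_r} rater(s), 1 occasion: relative={rel:.2f}, absolute={abs_:.2f}")
```

Because every rater-related error term is divided by the number of raters, moving from one rater to two on a single occasion shrinks the error variance directly, which is consistent with the abstract's conclusion that adding a rater is a cheaper route to higher reliability than adding a rating occasion.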
