To improve the feedback an intelligent tutoring system provides, the grading engine must do more than indicate whether a student's answer is correct. Good feedback provides actionable information with diagnostic value, which means the grading system must be able to determine what knowledge gap or misconception may have caused the student to answer a question incorrectly. This research evaluated the quality of a rules-based grading engine in an automated online homework system by comparing grading engine scores with manually graded scores. The research then sought to improve the grading engine by assessing student understanding using knowledge component research. Comparing both the original and the revised engine scores with the manually graded scores suggested that the grading engine rules were improved. Better aligning grading engine rules with the requisite knowledge components and revising task instructions would likely enhance the quality of the feedback provided.
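The evaluation step described above amounts to measuring inter-rater agreement between the engine and a human grader. A minimal sketch of that comparison is below; the variable names and sample labels are illustrative assumptions, not data from the thesis, and the thesis does not specify which agreement statistic was used.

```python
from collections import Counter

def percent_agreement(engine_scores, manual_scores):
    """Fraction of items where the grading engine and the human grader agree."""
    assert len(engine_scores) == len(manual_scores)
    matches = sum(e == m for e, m in zip(engine_scores, manual_scores))
    return matches / len(engine_scores)

def cohens_kappa(engine_scores, manual_scores):
    """Chance-corrected agreement between engine labels and manual labels."""
    n = len(engine_scores)
    observed = percent_agreement(engine_scores, manual_scores)
    engine_counts = Counter(engine_scores)
    manual_counts = Counter(manual_scores)
    labels = set(engine_counts) | set(manual_counts)
    # Expected agreement if both graders labeled items independently at
    # their observed marginal rates.
    expected = sum((engine_counts[l] / n) * (manual_counts[l] / n) for l in labels)
    return (observed - expected) / (1 - expected)

# Hypothetical per-item labels: 1 = marked correct, 0 = marked incorrect.
engine = [1, 0, 1, 1, 0, 1, 0, 0]
manual = [1, 0, 1, 0, 0, 1, 1, 0]
print(f"agreement: {percent_agreement(engine, manual):.2f}")  # 0.75
print(f"kappa:     {cohens_kappa(engine, manual):.2f}")       # 0.50
```

Running the same comparison for the original rules and the revised rules, and checking whether kappa rises, is one plausible way to operationalize the claim that the rule revisions improved the engine.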
Identifier: oai:union.ndltd.org:BGMYU2/oai:scholarsarchive.byu.edu:etd-7605
Date: 01 December 2016
Creators: Chapman, John Shadrack
Publisher: BYU ScholarsArchive
Source Sets: Brigham Young University
Detected Language: English
Type: text
Format: application/pdf
Source: All Theses and Dissertations
Rights: http://lib.byu.edu/about/copyright/