Evaluating student understanding of learning material is critical to effective teaching. Computer-aided evaluation tools such as Computer Adaptive Testing (CAT) exist; however, they require expert knowledge to implement and update. We propose a novel task: to create an evaluation tool that predicts student knowledge from general performance on test questions, without expert curation of the questions or expert understanding of the evaluation tool. We implement two methods for creating such a tool, find both methods lacking, and urge further investigation.
Identifier | oai:union.ndltd.org:BGMYU2/oai:scholarsarchive.byu.edu:etd-10646 |
Date | 11 August 2022 |
Creators | Armstrong, Piper |
Publisher | BYU ScholarsArchive |
Source Sets | Brigham Young University |
Detected Language | English |
Type | text |
Format | application/pdf |
Source | Theses and Dissertations |
Rights | https://lib.byu.edu/about/copyright/ |