Measuring student learning is fundamental to any educational endeavor. A primary goal of many computer science education projects is to determine the extent to which a given instructional intervention has had an impact on student learning. However, the field of computing lacks valid and reliable assessment instruments for pedagogical or research purposes. Without such assessments, it is difficult to accurately measure student learning or to establish a relationship between the instructional setting and learning outcomes. The goal of assessment research in computer science is to develop valid ways of measuring student conceptions of fundamental topics, which will enable both research into how understanding develops in the domain and curricular innovation and reform grounded in that knowledge.
My dissertation work focused on three questions regarding the assessment of introductory concepts in computer science. How can existing test development methods be applied and adapted to create a valid assessment instrument for CS1 conceptual knowledge? To what extent can pseudo-code be used as the mechanism for achieving programming language independence in an assessment instrument? And to what extent does the language-independent instrument provide a valid measure of CS1 conceptual knowledge?
I developed the Foundational CS1 (FCS1) Assessment instrument, the first assessment instrument for introductory computer science concepts that is applicable across a variety of current pedagogies and programming languages. I applied methods from educational and psychological test development, adapting them as necessary to fit the disciplinary context. I conducted think-aloud interviews and a large-scale empirical study to demonstrate that pseudo-code was an appropriate mechanism for achieving programming language independence. Student participants were able to read and reason in the pseudo-code syntax without difficulty and to transfer conceptual knowledge from their CS1 programming language to pseudo-code. Finally, I established the validity of the assessment using a multi-faceted argument that combined interview data, statistical analysis of results on the assessment, and exam scores.
The contributions of this research are: (1) An example of how to bootstrap the process of developing the first assessment instrument for a discipline-specific, design-based field. (2) Identification that although it may not be possible to correlate scores between computer science exams created with different measurement goals, the validity claims of the individual assessments are not diminished. (3) A demonstration that novice computing students, at an appropriate level of development, can transfer their understanding of fundamental concepts to pseudo-code notation. (4) A valid assessment of introductory computing concepts for procedurally-based introductory computing courses taught in Java, MATLAB, or Python at the university level.
Identifier | oai:union.ndltd.org:GATECH/oai:smartech.gatech.edu:1853/37090 |
Date | 26 August 2010 |
Creators | Tew, Allison Elliott |
Publisher | Georgia Institute of Technology |
Source Sets | Georgia Tech Electronic Thesis and Dissertation Archive |
Detected Language | English |
Type | Dissertation |