Automated grading systems provide feedback to computer science students in a variety of ways, but often focus on incorrect program behaviors. These tools provide indications of test case failures or runtime errors, but without debugging skills, students often become frustrated when they don't know where to start. They know their code has defects, but finding the problem may be beyond their experience, especially for beginners. An additional concern is balancing the need to provide enough direction to be useful without giving the student so much direction that they are effectively given the answer. Drawing on the experiences of the software engineering community, in this work we apply a technique called statistical fault location (SFL) to student program assignments. Using the GZoltar software tool, we applied this technique to a set of previously submitted student assignments gathered from students in our introductory CS course, CS 1114: Introduction to Software Design. After a manual inspection of the student code, this exercise demonstrated that the SFL technique identifies the defective method among the three most suspicious methods in the student's code 90% of the time. We then developed a plug-in for Web-CAT to allow new student submissions to be evaluated with the GZoltar SFL system. Additionally, we developed a tool that creates a heat map visualization showing the results of the SFL evaluation overlaid on the student's source code. We deployed this toolset in CS 1114 in Fall 2017. We then surveyed the students about their perceptions of the utility of the visualization for helping them understand how to find and correct the defects in their code, versus not having access to the heat map. Their responses led to refinements in our presentation of the feedback.
We also evaluated the performance of CS 1114 classes from two semesters and discovered that having the heat maps led to more frequent incremental improvements in their code, as well as reaching their highest correctness score on instructor-provided tests more quickly than students who did not have access to the heat maps. Finally, we suggest several directions for future enhancements to the feedback interface.
Degree | Doctor of Philosophy |
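SFL tools such as GZoltar rank program elements by a suspiciousness score derived from test coverage: an element executed mostly by failing tests and rarely by passing tests ranks near the top, which is what the heat map visualizes. As a minimal illustrative sketch (not the dissertation's implementation), the widely used Ochiai metric can rank methods from per-method coverage counts; the method names and counts below are hypothetical:

```python
import math

def ochiai(failed_cover, passed_cover, total_failed):
    """Ochiai suspiciousness for one program element.

    failed_cover: number of failing tests that execute the element
    passed_cover: number of passing tests that execute the element
    total_failed: total number of failing tests in the suite
    """
    denom = math.sqrt(total_failed * (failed_cover + passed_cover))
    return failed_cover / denom if denom else 0.0

# Hypothetical coverage data: method -> (failing tests covering it,
# passing tests covering it), with 3 failing tests overall.
coverage = {
    "insert": (3, 1),  # run by every failing test -> most suspicious
    "search": (1, 4),
    "resize": (0, 5),  # never run by a failing test -> score 0
}
total_failed = 3

# Rank methods from most to least suspicious.
ranking = sorted(coverage,
                 key=lambda m: ochiai(*coverage[m], total_failed),
                 reverse=True)
```

A heat map then simply maps each method's score onto a color scale over the source listing, so the student's attention is drawn to the highest-ranked methods first.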
Identifier | oai:union.ndltd.org:VTETD/oai:vtechworks.lib.vt.edu:10919/96008 |
Date | 16 December 2019 |
Creators | Edmison, Kenneth Robert, Jr. |
Contributors | Computer Science, Edwards, Stephen H., Pérez-Quiñones, Manuel A., Maranhão, Rui, Servant Cortes, Francisco Javier, Meng, Na |
Publisher | Virginia Tech |
Source Sets | Virginia Tech Theses and Dissertations |
Detected Language | English |
Type | Dissertation |
Format | ETD, application/pdf |
Rights | In Copyright, http://rightsstatements.org/vocab/InC/1.0/ |