
Measuring the Software Development Process to Enable Formative Feedback

Graduating CS students face well-documented difficulties upon entering the workforce, with reports of a gap between what they learn and what is expected of them in industry. Project management, software testing, and debugging have been repeatedly listed as common "knowledge deficiencies" among newly hired CS graduates. Similar difficulties manifest themselves on a smaller scale in upper-level CS courses, like the Data Structures and Algorithms course at Virginia Tech: students are required to develop large and complex projects over a three- to four-week lifecycle, and it is common to see close to a quarter of the students drop or fail the course, largely due to the difficult and time-consuming nature of the projects. My research is driven by the hypothesis that regular feedback about the software development process, delivered during development, will help ameliorate these difficulties. Assessment of software currently tends to focus on qualities like correctness, code coverage from test suites, and code style. Little attention or tooling has been devoted to the assessment of the software development process. I use empirical software engineering methods like IDE-log analysis, software repository mining, and semi-structured interviews with students to identify effective and ineffective software practices. Using the results of these analyses, I have worked on assessing students' development in terms of time management, test writing, test quality, and other "self-checking" behaviours like running the program locally or submitting to an oracle of instructor-written test cases. The goal is to use this information to formulate formative feedback about the software development process. In addition to educators, this research is relevant to software engineering researchers and practitioners, since the results from these experiments are based on the work of upper-level students who grapple with issues of design and workflow that are not far removed from those faced by professionals in industry.

Doctor of Philosophy

Graduating CS students face well-documented difficulties upon entering the workforce, with reports of a gap between what they learn and what is expected of them as professional software developers. Project management, software testing, and debugging have been repeatedly listed as common "knowledge deficiencies" among newly hired CS graduates. Similar difficulties manifest themselves on a smaller scale in upper-level CS courses, like the Data Structures and Algorithms course at Virginia Tech: students are required to develop large and complex software projects over a three- to four-week lifecycle, and it is common to see close to a quarter of the students drop or fail the course, largely due to the difficult and time-consuming nature of the projects. The development of these projects necessitates adherence to a disciplined software process, i.e., incremental development, testing, and debugging of small pieces of functionality. My research is driven by the hypothesis that regular feedback about the software development process, delivered during development, will help ameliorate these difficulties. However, in educational contexts, assessment of software currently tends to focus on properties of the final product, like correctness, the quality of automated software tests, and adherence to code style requirements. Little attention or tooling has been devoted to the assessment of the software development process.
In this dissertation, I quantitatively characterise students' software development habits, using data from numerous sources: usage logs from students' software development environments, detailed sequences of snapshots showing the project's evolution over time, and interviews with the students themselves. I analyse the relationships between students' development behaviours and their project outcomes, and use the results of these analyses to determine the effectiveness or ineffectiveness of students' software development processes. I have worked on assessing students' development in terms of time management, test writing, test quality, and other "self-checking" behaviours like running their programs locally or submitting them to an online system that uses instructor-written tests to generate a correctness score. The goal is to use this information to assess the quality of a student's software development process in a way that is formative instead of summative, i.e., assessment that can be done while students work toward project completion as opposed to after they are finished. For example, if we can identify procrastinating students early in the project timeline, we could intervene as needed and possibly help them avoid the consequences of bad project management (e.g., unfinished or late project submissions). In addition to educators, this research is relevant to software engineering researchers and practitioners, since the results from these experiments are based on the work of upper-level students who grapple with issues of design and workflow that are not far removed from those faced by professionals in industry.
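
To make "identifying procrastinating students early" concrete, the sketch below shows one way such a time-management signal could be computed from snapshot timestamps. It is a minimal illustration under assumed inputs, not the dissertation's actual method; the data format and the 20% "late window" threshold are assumptions chosen for the example.

```python
# Hypothetical sketch: flag potential procrastination from snapshot timestamps.
# Assumes a student's activity arrives as a list of datetimes (one per saved
# snapshot or commit); the 20% late-window cutoff is illustrative, not validated.
from datetime import datetime
from typing import List

def fraction_of_work_late(snapshots: List[datetime], start: datetime,
                          deadline: datetime, late_window: float = 0.2) -> float:
    """Fraction of snapshots falling in the final `late_window` share of the
    project timeline (e.g., the last 20% before the deadline)."""
    if not snapshots:
        return 0.0
    total = (deadline - start).total_seconds()
    cutoff = deadline.timestamp() - late_window * total
    late = sum(1 for s in snapshots if s.timestamp() >= cutoff)
    return late / len(snapshots)

# A student whose activity clusters two days before a three-week deadline:
start, deadline = datetime(2020, 2, 1), datetime(2020, 2, 22)
snaps = [datetime(2020, 2, 20, h) for h in (9, 13, 18, 22)]
print(fraction_of_work_late(snaps, start, deadline))  # 1.0
```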

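A similar hedged sketch for the test-writing dimension: estimating how much of a student's effort goes into test code, from a mined sequence of file-level edit events. Again hypothetical; the `Edit` event schema and the file-naming heuristic are assumptions for illustration, not the instrumented systems' actual output.

```python
# Hypothetical sketch: estimate the share of a student's changes that land in
# test files, from a mined sequence of file-level edit events. The `Edit`
# schema and the "test in filename" heuristic are assumptions.
from typing import Iterable, NamedTuple

class Edit(NamedTuple):
    path: str           # file touched in a snapshot diff
    lines_changed: int  # lines added plus lines removed

def test_effort_ratio(edits: Iterable[Edit]) -> float:
    """Fraction of changed lines occurring in files whose names contain 'test'."""
    test_lines = total_lines = 0
    for e in edits:
        total_lines += e.lines_changed
        if "test" in e.path.lower():
            test_lines += e.lines_changed
    return test_lines / total_lines if total_lines else 0.0

edits = [Edit("src/LinkedList.java", 120),
         Edit("test/LinkedListTest.java", 45),
         Edit("src/LinkedList.java", 30)]
print(round(test_effort_ratio(edits), 2))  # 0.23
```
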
Identifier: oai:union.ndltd.org:VTETD/oai:vtechworks.lib.vt.edu:10919/97723
Date: 16 April 2020
Creators: Kazerouni, Ayaan Mehdi
Contributors: Computer Science, Shaffer, Clifford A., Edwards, Stephen H., Kafura, Dennis G., Servant Cortes, Francisco Javier, Spacco, Jaime
Publisher: Virginia Tech
Source Sets: Virginia Tech Theses and Dissertation
Detected Language: English
Type: Dissertation
Format: ETD, application/pdf, application/pdf
Rights: In Copyright, http://rightsstatements.org/vocab/InC/1.0/
