The demand for competent programmers continues to grow with society's ever-increasing dependency on technology. As class sizes grow, so does the teachers' workload, creating a need for automated tools for feedback and grading. While some tools exist that alleviate this to some degree, machine learning presents a promising avenue for doing so more efficiently. Logical errors are common in novice code, and a model that could detect them would both reduce the teachers' workload and benefit students. This study explores the performance of the machine learning model code2seq in detecting logical errors. It does so through an empirical experiment in which a dataset of real-world Java code, modified to contain one specific logical error, is used to train, validate and test the code2seq model. The performance of the model is measured using four metrics: accuracy, precision, recall and F1-score. The results show promise for applying code2seq to logical-error detection and suggest potential for real-world use in classrooms.
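For readers unfamiliar with the four evaluation metrics named in the abstract, the sketch below shows how they are typically computed for a binary "logical error present / absent" classifier. This is a minimal illustration, not the authors' evaluation code; the function name and the 0/1 label encoding are assumptions.

```python
# Illustrative sketch (not the thesis code): accuracy, precision, recall and
# F1-score for binary predictions, where 1 = "logical error present".

def evaluate(y_true, y_pred):
    # Confusion-matrix counts from paired true/predicted labels.
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)

    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}

# Example: four code snippets, two of which contain the seeded logical error.
print(evaluate([1, 1, 0, 0], [1, 0, 0, 0]))
# -> accuracy 0.75, precision 1.0, recall 0.5, F1 ~0.67
```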
Identifier | oai:union.ndltd.org:UPSALLA1/oai:DiVA.org:su-219645
Date | January 2023 |
Creators | Lückner, Anton, Chapman, Kevin |
Publisher | Stockholms universitet, Institutionen för data- och systemvetenskap |
Source Sets | DiVA Archive at Uppsala University
Language | English |
Detected Language | English |
Type | Student thesis, info:eu-repo/semantics/bachelorThesis, text |
Format | application/pdf |
Rights | info:eu-repo/semantics/openAccess |