1.
Collecting Student Data for Accreditation Assessment
Ringenbach, Michael, 03 March 2011 (has links)
This paper identifies one of the key problems academic institutions face when pursuing accreditation. The accreditation process requires institutions to conduct a self-study analyzing how well a given program meets the learning outcomes the accreditation board uses in its assessment. These self-studies often contain qualitative or subjective data and do not directly correlate the learning outcomes being measured with student performance. Without quantitative measurements at a granular level, it is difficult for an institution to demonstrate that it was effective in meeting a particular outcome.
In this paper I propose a tool that is both efficient and effective in capturing quantitative data at the student level. The tool maps specific coursework to learning outcomes and shows how students performed against each outcome over the duration of a course or program. The data the tool collects can also be used to assess course and program design. / Master of Science
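The coursework-to-outcome mapping the abstract describes could be sketched as follows. This is a minimal illustration of the idea, not the thesis's actual implementation; all names and scores are hypothetical:

```python
# Hypothetical sketch: map assignments to accreditation outcomes,
# then aggregate each student's performance per outcome.

# Each assignment contributes evidence toward one or more learning outcomes.
coursework_to_outcomes = {
    "project1": ["outcome_a"],
    "midterm": ["outcome_a", "outcome_b"],
    "final": ["outcome_b"],
}

# Per-student scores on each assignment, normalized to 0.0-1.0.
scores = {
    "alice": {"project1": 0.9, "midterm": 0.8, "final": 0.7},
    "bob": {"project1": 0.6, "midterm": 0.5, "final": 0.8},
}

def outcome_performance(scores, mapping):
    """Average each student's scores over the assignments mapped to each outcome."""
    result = {}
    for student, marks in scores.items():
        per_outcome = {}
        for assignment, outcomes in mapping.items():
            for outcome in outcomes:
                per_outcome.setdefault(outcome, []).append(marks[assignment])
        result[student] = {o: sum(v) / len(v) for o, v in per_outcome.items()}
    return result

print(outcome_performance(scores, coursework_to_outcomes))
```

Aggregating per student and per outcome is what makes the data quantitative at the granular level the abstract calls for: the institution can report how each cohort performed against each outcome rather than relying on subjective summaries.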
2.
Intelligent Goal-Oriented Feedback for Java Programming Assignments
Kandru, Nischel, 12 July 2018 (has links)
Within computer science education, goal-oriented feedback motivates beginners to stay engaged in learning programming. As the number of students increases, it is challenging for teaching assistants to address every student's questions and help them set goals. This problem is addressed by intelligent visual feedback that guides beginners in formulating effective goals to resolve all the errors they encounter while solving a programming assignment.
Most current automated feedback mechanisms are designed without categorization, prioritization, or goal formulation in mind. Students may overlook important issues, and high-priority issues can be hidden among others. Beginners are also not well equipped to formulate goals to resolve the issues reported in the feedback.
In this research, we address the problem of providing effective, intelligent goal-oriented feedback on students' code to resolve all the issues in it while ensuring the code is well tested. The goal-oriented feedback implicitly guides students toward a logically correct solution. The code feedback is summarized into four categories, in descending order of priority: Coding, Student's Testing, Behavior, and Style. Each category is further divided into subcategories, and a simple visual summary of the student's code is also provided.
Each of these categories includes detailed feedback on every error it contains, along with enhanced error messages and error diagnoses, to give students a better understanding of their mistakes.
This intelligent feedback has been integrated into Web-CAT, an open-source automated grading tool developed at Virginia Tech and widely used by many universities. A user survey was conducted after students had used this feedback for a couple of programming assignments, and the results suggest that our intelligent feedback is effective. / Master of Science / Within computer science education, goal-oriented feedback motivates beginners to stay engaged in learning programming. As the number of students increases, it is challenging for teaching assistants to address every student's questions and help them set goals. This problem is addressed by intelligent visual feedback that guides beginners in formulating effective goals to resolve the issues they encounter while programming.
Most current automated feedback mechanisms are designed without categorization, prioritization, or goal formulation in mind. Students may overlook important issues, and high-priority issues can be hidden among others. Beginners are also not well equipped to formulate goals to resolve the issues reported in the feedback.
In this research, we address the problem of providing effective, intelligent goal-oriented feedback on students' code to resolve all the issues in it. The goal-oriented feedback implicitly guides students toward a logically correct solution. The feedback is organized into modules that help students understand the issues easily.
A simple visual summary of the student's code is also provided to give students an overview of the issues in their code. We also offer detailed feedback on each error, along with enhanced error messages and error diagnoses, to make the feedback more effective.
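The priority ordering described in the abstract (Coding first, Style last) can be sketched as a simple sort over categorized issues. The category names come from the abstract; the issue messages and function names below are hypothetical, not Web-CAT's actual API:

```python
# Hypothetical sketch of priority-ordered feedback: issues are grouped into
# the four categories named in the abstract, highest priority first.

PRIORITY = ["Coding", "Student's Testing", "Behavior", "Style"]

def prioritize(issues):
    """Sort (category, message) pairs so higher-priority categories come first."""
    order = {cat: i for i, cat in enumerate(PRIORITY)}
    return sorted(issues, key=lambda issue: order[issue[0]])

issues = [
    ("Style", "Line exceeds 80 characters"),
    ("Coding", "Possible NullPointerException on line 12"),
    ("Behavior", "Reference test testPush failed"),
]

for category, message in prioritize(issues):
    print(f"[{category}] {message}")
```

Presenting issues in this fixed order is what keeps a high-priority compilation or correctness problem from being buried under a long list of style complaints.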
3.
From Intuition to Evidence: A Data-Driven Approach to Transforming CS Education
Allevato, Anthony James, 13 August 2012 (has links)
Educators in many disciplines are too often forced to rely on intuition about how students learn and the effectiveness of teaching to guide changes and improvements to their curricula. In computer science, systems that perform automated collection and assessment of programming assignments are seeing increased adoption, and these systems generate a great deal of meaningful intermediate data and statistics during the grading process. Continuous collection of these data and long-term retention of collected data present educators with a new resource to assess both learning (how well students understand a topic or how they behave on assignments) and teaching (how effective a response, intervention, or assessment instrument was in evaluating knowledge or changing behavior), by basing their decisions on evidence rather than intuition. It is only possible to achieve these goals, however, if such data are easily accessible.
I present an infrastructure that has been added to one such automated grading system, Web-CAT, in order to facilitate routine data collection and access while requiring very little added effort by instructors. Using this infrastructure, I present three case studies that serve as representative examples of educational questions that can be explored thoroughly using pre-existing data from required student work. The first case study examines student time management habits and finds that students perform better when they start earlier but that offering extra credit for finishing earlier did not encourage them to do so. The second case study evaluates a tool used to improve student understanding of manual memory management and finds that students made fewer errors when using the tool. The third case study evaluates the reference tests used to grade student code on a selected assignment and confirms that the tests are a suitable instrument for assessing student ability. In each case study, I use a data-driven, evidence-based approach spanning multiple semesters and students, allowing me to answer each question in greater detail than was possible using previous methods and giving me significantly increased confidence in my conclusions. / Ph. D.
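The first case study relates how early students start an assignment to how well they perform. That kind of analysis could be sketched as a correlation over submission records; the field names and data below are illustrative placeholders, not Web-CAT's actual schema:

```python
# Hypothetical sketch: correlate how many days before the deadline a student
# first submitted with the score they earned on the assignment.

submissions = [
    {"days_before_deadline": 5, "score": 95},
    {"days_before_deadline": 3, "score": 88},
    {"days_before_deadline": 1, "score": 72},
    {"days_before_deadline": 0, "score": 60},
]

def mean(xs):
    return sum(xs) / len(xs)

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

starts = [s["days_before_deadline"] for s in submissions]
scores = [s["score"] for s in submissions]
print(round(pearson(starts, scores), 2))
```

A strong positive coefficient on data like this would support the case study's finding that earlier starts go with better scores; the evidence-based approach the abstract describes is exactly this kind of computation run over many semesters of retained grading data rather than over a toy sample.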