1. Supporting Direct Markup and Evaluation of Students' Projects On-line
Vastani, Hussein Kamaluddin, 23 August 2004
Automated grading systems have been researched at various universities for several years. Numerous systems have been developed that automate the grading process by compiling, executing, and testing students' submitted source code. However, such systems are mostly written as UNIX scripts and are restricted to performing a single kind of activity, so instructors and teaching assistants must resort to other methods to provide their feedback to students.
The core of this thesis is a TA feedback mechanism that streamlines the grading process for professors and teaching assistants. A web-based grading tool has been developed that allows course staff to enter comments on students' programs directly through a web browser. The interface provides full direct-manipulation editing of comments, which become immediately viewable by students when they look up assignment results. Such an interface also has the potential to support peer grading of assignments.
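To make the idea concrete, below is a minimal sketch of how such an interface might represent an inline comment anchored to a student's source file. Everything here, from the class name to the fields, is an illustrative assumption rather than the thesis tool's actual data model.

    # Hypothetical data model for an inline grading comment; field names
    # are illustrative assumptions, not the thesis tool's actual schema.
    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class InlineComment:
        submission_id: str          # which student submission is being graded
        file_name: str              # source file being annotated
        line_number: int            # 1-based line the comment is anchored to
        author: str                 # TA or instructor user name
        text: str                   # the feedback itself
        points_delta: float = 0.0   # optional score adjustment for the comment
        created_at: datetime = field(
            default_factory=lambda: datetime.now(timezone.utc)
        )

    # A TA marking a style issue on line 42 of a student's file:
    comment = InlineComment(
        submission_id="hw3-student07",
        file_name="LinkedList.java",
        line_number=42,
        author="ta_jsmith",
        text="Prefer a for-each loop here; the index variable is unused.",
        points_delta=-1.0,
    )

Because each comment is anchored to a specific file and line, a student-facing results page can render feedback inline with the code rather than in a separate report.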
Teaching assistants of introductory-level programming courses were interviewed to learn about the different grading methods they use and were asked their opinions of our new grading interface. For comparison, TAs also graded assignments both by the traditional paper method and on the computer using our new grading tool. Finally, an anonymous survey was sent to computer science faculty at various universities to gather their expectations for TA grading of programming assignments and the learning outcomes these professors desire for their students.
Master of Science
2. The Programming Exercise Markup Language: A Teacher-Oriented Format for Describing Auto-graded Assignments
Mishra, Divyansh Shankar, 28 June 2023
Automated programming assignment grading tools have become integral to CS courses at introductory as well as advanced levels. However, many of these tools have their own custom approaches to setting up assignments and describing how solutions should be tested, requiring instructors to make a significant learning investment before they can begin using a new tool.
In addition, differences between tools mean that this initial investment must be repeated when switching tools or adding a new one. Worse still, tool-specific strategies further reduce educators' ability to share and reuse their assignments.
As a solution to this problem, we describe our experiences working with PEML, the Programming Exercise Markup Language, which provides an easy-to-use, instructor-friendly approach to writing programming assignments. Unlike tool-oriented data interchange formats, PEML is a human-friendly authoring format designed to be intuitive and expressive, and to pose no technological or notational barrier to instructors.
We describe the design of PEML and discuss its implementation as a programming library, a web application, and a microservice that provides full parsing and rendering capabilities for easy integration into any tool or scripting library. We also describe the integration of PEML into two automated testing and grading tools used by the CS department at Virginia Tech: CodeWorkout and Web-CAT. We then describe our experiences using PEML to describe a full range of programming assignments, laboratory exercises, and small coding questions of varying complexity, demonstrating the practicality of the notation.
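A parsing microservice of this kind would normally be consumed over HTTP. The sketch below posts PEML source to a hypothetical endpoint and reads back a structured result; the URL, request format, and response shape are all assumptions for illustration, not the actual service's API.

    # Hypothetical client for a PEML parsing microservice.
    # The endpoint URL and JSON response shape are illustrative assumptions.
    import json
    import urllib.request

    PEML_SOURCE = "exercise_id: edu.example.hello-world\ntitle: Hello, World\n"

    req = urllib.request.Request(
        "https://peml.example.edu/api/parse",   # hypothetical endpoint
        data=PEML_SOURCE.encode("utf-8"),
        headers={"Content-Type": "text/plain"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        parsed = json.load(resp)                # assume the service returns JSON
    print(parsed.get("title"))

Exposing the parser behind a service boundary like this is what lets grading tools written in any language adopt the format without embedding a parser of their own.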
We evaluate the feasibility of PEML through this encoding exercise, as well as the effect of its integration into the aforementioned automated grading tools. Finally, we present a framework for integrating PEML into existing grading tools, draw our conclusions, and list avenues along which PEML could be expanded in the future.
Master of Science
General audience abstract: Automated grading tools have become ubiquitous in CS courses focused on programming concepts at both the undergraduate and graduate levels. These tools allow instructors to provide near-instant feedback to students and to spend more time on the curriculum rather than on grading.
However, these tools use a variety of formats for representing programming assignments, and without a standardized representation, instructors and educators may struggle to share and reuse assignments across different tools and platforms.
To address this need, we have developed the Programming Exercise Markup Language (PEML), a standardized format for representing programming exercises that is designed to be human-friendly and easy to learn and use. A PEML description includes the problem statement, input and output formats, constraints, and sample test cases, and can express a wide range of exercise types and programming languages.
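As a rough illustration of the format's flavor, here is a hedged sketch of a small PEML description. It follows the plain key/value style with dash-delimited multi-line text blocks that PEML is described as using, but the specific keys shown are assumptions for illustration; the PEML documentation remains the authoritative reference for the syntax.

    exercise_id: edu.example.cs1.sum-of-squares
    title: Sum of Squares

    tags.topics: loops, accumulators
    tags.prerequisites: variables, functions

    instructions:----------
    Write a function sum_of_squares(n) that returns
    1^2 + 2^2 + ... + n^2 for a positive integer n.
    ----------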
As part of this master's thesis project, we encoded 50 assignments of varying size and difficulty into PEML and integrated support for PEML into Web-CAT and CodeWorkout, two automated grading tools commonly used at Virginia Tech. Building on our experience with this task, we also designed a framework that can be used when integrating PEML into other automated grading tools; see the sketch below.
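Such an integration framework can be pictured as a thin adapter layer: a grading tool hands PEML text to a parser and maps the parsed result onto its own internal assignment objects. The sketch below uses a crude parse_peml stand-in and invented key names; the real PEML library's API may differ.

    # Hypothetical adapter from PEML text to a grading tool's internal model.
    # parse_peml is a crude stand-in: it collects top-level key/value pairs
    # and skips multi-line blocks entirely.
    def parse_peml(text: str) -> dict:
        result = {}
        for line in text.splitlines():
            if ":" in line and not line.startswith((" ", "-")):
                key, _, value = line.partition(":")
                if value.startswith("---"):
                    continue  # multi-line block header; ignored in this sketch
                result[key.strip()] = value.strip()
        return result

    class Assignment:
        """Minimal internal representation a grading tool might use."""
        def __init__(self, exercise_id: str, title: str, topics: list[str]):
            self.exercise_id = exercise_id
            self.title = title
            self.topics = topics

    def import_peml(text: str) -> Assignment:
        data = parse_peml(text)
        return Assignment(
            exercise_id=data.get("exercise_id", ""),
            title=data.get("title", ""),
            topics=[t.strip() for t in data.get("tags.topics", "").split(",")
                    if t.strip()],
        )

    assignment = import_peml(
        "exercise_id: edu.example.cs1.sum-of-squares\n"
        "title: Sum of Squares\n"
        "tags.topics: loops, accumulators\n"
    )
    print(assignment.title)   # Sum of Squares

With an adapter like this, tool-specific code is confined to one translation step, which is what lets a single PEML file be reused across different graders.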
By providing a standardized way of representing programming assignments, PEML can help to streamline programming education and make it easier for instructors and educators to create and share assignments across different tools and platforms.
3. From Intuition to Evidence: A Data-Driven Approach to Transforming CS Education
Allevato, Anthony James, 13 August 2012
Educators in many disciplines are too often forced to rely on intuition about how students learn and the effectiveness of teaching to guide changes and improvements to their curricula. In computer science, systems that perform automated collection and assessment of programming assignments are seeing increased adoption, and these systems generate a great deal of meaningful intermediate data and statistics during the grading process. Continuous collection of these data and long-term retention of collected data present educators with a new resource to assess both learning (how well students understand a topic or how they behave on assignments) and teaching (how effective a response, intervention, or assessment instrument was in evaluating knowledge or changing behavior), by basing their decisions on evidence rather than intuition. It is only possible to achieve these goals, however, if such data are easily accessible.
I present an infrastructure that has been added to one such automated grading system, Web-CAT, to facilitate routine data collection and access while requiring very little added effort from instructors. Using this infrastructure, I present three case studies that serve as representative examples of educational questions that can be explored thoroughly using pre-existing data from required student work. The first case study examines students' time-management habits and finds that students perform better when they start earlier, but that offering extra credit for finishing earlier did not encourage them to do so. The second case study evaluates a tool used to improve student understanding of manual memory management and finds that students made fewer errors when using the tool. The third case study evaluates the reference tests used to grade student code on a selected assignment and confirms that the tests are a suitable instrument for assessing student ability. In each case study, I use a data-driven, evidence-based approach spanning multiple semesters and students, allowing me to answer each question in greater detail than was possible using previous methods and giving me significantly increased confidence in my conclusions.
Ph.D.
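As a flavor of the kind of analysis such routinely collected data enables, the sketch below relates when students start an assignment to the score they earn, echoing the first case study's question. The records and column names are invented for illustration only; they are not Web-CAT's actual schema or the thesis's data.

    # Hypothetical analysis over an export of submission records:
    # do earlier starts correlate with higher scores?
    # Requires Python 3.10+ for statistics.correlation.
    import statistics

    # Each record: hours before the deadline work began, and the final score.
    records = [
        {"hours_before_deadline": 72.0, "score": 95.0},
        {"hours_before_deadline": 48.0, "score": 88.0},
        {"hours_before_deadline": 24.0, "score": 81.0},
        {"hours_before_deadline": 6.0, "score": 64.0},
        {"hours_before_deadline": 2.0, "score": 58.0},
    ]

    starts = [r["hours_before_deadline"] for r in records]
    scores = [r["score"] for r in records]

    # Pearson correlation between start earliness and score.
    r = statistics.correlation(starts, scores)
    print(f"correlation(start earliness, score) = {r:.2f}")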