1. Evaluating the Integration of Online, Interactive Tutorials into a Data Structures and Algorithms Course. Breakiron, Daniel Aubrey, 28 May 2013.
OpenDSA is a collection of open source tutorials for teaching data structures and algorithms. It was created with the goals of visualizing complex, abstract topics; increasing the amount of practice material available to students; and providing immediate feedback and incremental assessment. In this thesis, I first describe aspects of the OpenDSA architecture relevant to collecting user interaction data. I then present an analysis of the interaction log data gathered from three classes during Spring 2013. The analysis focuses on determining the time distribution of student activity, determining the time required for assignment completion, and exploring "credit-seeking" behaviors and behavior related to non-required exercises.
We identified clusters of students based on when they completed exercises, verified the reliability of estimated time requirements for exercises, provided evidence that a majority of students do not read the text, discovered a measurement that could be used to identify exercises that require additional development, and found evidence that students complete exercises after obtaining credit. Furthermore, we determined that slideshow usage was fairly high (even when credit was not offered), and that skipping to the end of slideshows was more common when credit was offered but also occurred when it was not. / Master of Science
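As an illustration of the kind of clustering described above, the following is a minimal sketch, not taken from the thesis: it assumes each student's exercise completions are summarized by the mean and standard deviation of how many hours before the deadline they finished, and it groups students with k-means. All feature choices and numbers are hypothetical.

```python
# Hypothetical sketch: cluster students by when they completed exercises
# relative to the assignment deadline. Feature values are invented.
import numpy as np
from sklearn.cluster import KMeans

# Per-student features: [mean hours before deadline, std of hours before deadline]
completion_features = np.array([
    [60.0, 10.0],   # starts early and works steadily
    [55.0, 12.0],
    [5.0,   2.0],   # finishes just before the deadline
    [3.0,   1.5],
    [30.0, 20.0],   # mixed behavior
])

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(completion_features)
for student, label in enumerate(kmeans.labels_):
    print(f"student {student}: cluster {label}")
```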
2. Interactive Visualization for Novice Learners. Chon, Jieun, 09 July 2019.
Iteration, the repetition of computational steps, is a core concept in programming. Students usually learn about iteration in an entry-level Computer Science class. Virginia Tech's Computational Thinking (CT) course is designed to teach non-CS majors computing skills and new ways of thinking. The course covers iteration on Day 8 of the class. We conducted a pre-test before, and three post-tests after, Day 8 of the Computational Thinking class in Spring 2018 with 137 students. The pre-test was intended to measure knowledge of iteration before the material was covered. We found from the post-tests that students' knowledge of iteration did not satisfy the course objectives in Spring 2018, because the knowledge gain between the pre-test and post-tests was not significant. We developed interactive visualizations and exercises for Fall 2018 and Spring 2019. Across the three semesters we conducted tests and compared the data from Fall 2018 and Spring 2019 (the treatment) against Spring 2018 (the control). We found that Spring 2019 students had greater knowledge gains than Spring 2018 students. We also surveyed students in Fall 2018 and Spring 2019 to learn more about their recall of the interactive visualizations, and about the visualizations' helpfulness and reuse. Finally, we analyzed data from the interactive exercises and page use to investigate students' usage behavior. / Master of Science / Iteration is a process of repeating a set of instructions or structures. An iterative process repeats until a condition is met or a specified number of repetitions is completed. Students usually learn about iteration in an entry-level Computer Science class. Virginia Tech's Computational Thinking (CT) course is designed to teach non-CS majors computing skills and new ways of thinking. The course covers iteration on Day 8 of the class. We conducted a pre-test before, and three post-tests after, Day 8 of the Computational Thinking class in Spring 2018 with 137 students. The pre-test was intended to measure knowledge of iteration before the material was covered. We found from the post-tests that students' knowledge of iteration did not satisfy the course objectives in Spring 2018. In particular, the knowledge gain between the pre-test and post-tests was not significant. We developed interactive visualizations and exercises that were used during Fall 2018 and Spring 2019. We conducted tests and compared the data from Fall 2018 and Spring 2019 (the treatment) against Spring 2018 (the control). To see whether there was a statistically significant difference between the absolute score means of the three groups, we used independent-sample t-tests. We also used paired-sample t-tests to see whether there was a greater knowledge gain after using our intervention. From the results of these t-tests, we found that Spring 2019 students had greater knowledge gains than Spring 2018 students. We also conducted student surveys in Fall 2018 and Spring 2019 to learn more about their opinions on the recall, helpfulness, and reuse of the interactive visualizations. Finally, we analyzed data from the interactive exercises and page use to investigate students' usage behavior.
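The comparison methodology in the abstract above (independent-sample t-tests across semesters, paired-sample t-tests for pre/post gains) can be sketched as follows. This is a hedged illustration with invented scores, not the thesis's data or code.

```python
# Hypothetical sketch of the two statistical comparisons described above.
from scipy import stats

spring_2018_post = [55, 60, 58, 62, 57, 61]   # invented control post-test scores
spring_2019_post = [70, 72, 68, 75, 71, 69]   # invented treatment post-test scores

# Independent-samples t-test: do the two semesters differ in mean post-test score?
t_ind, p_ind = stats.ttest_ind(spring_2019_post, spring_2018_post)

pre_scores  = [40, 45, 38, 50, 42, 44]         # invented pre-test scores, same students
post_scores = [70, 72, 68, 75, 71, 69]

# Paired t-test: is the within-student gain from pre- to post-test significant?
t_rel, p_rel = stats.ttest_rel(post_scores, pre_scores)

print(f"independent: t={t_ind:.2f}, p={p_ind:.3f}")
print(f"paired:      t={t_rel:.2f}, p={p_rel:.3f}")
```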
3. Analyzing Student Session Data in an eTextbook. Heo, Samnyeong, 18 July 2022.
As more students interact with online learning platforms and eTextbooks, they generate massive amounts of data. For example, the OpenDSA eTextbook system collects clickstream data as users interact with prose, visualizations, and interactive auto-graded exercises. Ideally, instructors and system developers can harness this information to create better instructional experiences. But in its raw event-level form, it is difficult for developers or instructors to understand student behaviors, or to make testable hypotheses about relationships between behavior and performance.
In this study, we describe our efforts to break raw event-level data first into sessions (a continuous series of work by a student) and then to meaningfully abstract the events into higher-level descriptions of that session. The goal of this abstraction is to help instructors and researchers gain insights into the students' learning behaviors. For example, we can distinguish when students read material and then attempt the associated exercise, versus going straight to the exercise and then hunting for the answers in the associated material.
We first bundle events into related activities, such as the events associated with stepping through a given visualization, or with working a given exercise. Each such group of events defines a state. A state is a basic unit that characterizes the interaction log data, and there are multiple state types, including reading prose, interacting with visual content, and solving exercises. We harnessed the abstracted data to analyze studying behavior and compared it with course performance based on GPA. We analyzed data from the Fall 2020 and Spring 2021 sections of a senior-level Formal Languages course, and also from the Fall 2020 and Spring 2021 sections of a data structures course. / Master of Science / OpenDSA is an online learning platform used at multiple academic institutions, including in Virginia Tech's Computer Science courses. These courses use OpenDSA as the main instructional method, and students in them generate massive amounts of clickstream data while interacting with the OpenDSA content. The system collects various event logs, such as when students opened or closed a certain page, how long they stayed on the page, and how many times they clicked an interface element for visualizations and exercises. However, in its raw event-level form, it is difficult for instructors or developers to understand student behaviors, or to make testable hypotheses about relationships between behavior and performance. We describe our efforts to break raw event-level clickstreams into sessions (each a continuous series of work by a student) and then to abstract the events into meaningful higher-level descriptions of students' behavior. We grouped raw events into related activities, such as the events associated with stepping through a given visualization, or with working a given exercise.
We defined such a group of activities as a state, a basic unit that characterizes the interaction log data, with state types such as reading, slideshow, and exercise states. We harnessed the abstracted data to analyze students' studying behavior and compared it with their course performance based on their GPA. We analyzed data from two offerings of each of two CS courses at Virginia Tech to gain insights into students' learning behaviors.
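To make the session and state abstraction concrete, here is a minimal sketch under assumptions not drawn from the thesis: sessions are split on a fixed inactivity gap, and runs of similar events are collapsed into states. The event kinds, threshold, and mapping are hypothetical.

```python
# Hypothetical sketch: split a clickstream into sessions and abstract them into states.
from itertools import groupby

SESSION_GAP_SECONDS = 30 * 60  # assumed inactivity threshold between sessions

# Invented clickstream: (timestamp in seconds, event kind)
events = [
    (0,    "page_load"), (40,  "prose_scroll"), (90,  "prose_scroll"),
    (200,  "slideshow_step"), (230, "slideshow_step"),
    (300,  "exercise_attempt"), (320, "exercise_attempt"),
    (5000, "page_load"), (5060, "exercise_attempt"),
]

def split_sessions(events, gap=SESSION_GAP_SECONDS):
    """Start a new session whenever the gap between consecutive events exceeds `gap`."""
    sessions, current = [], [events[0]]
    for prev, cur in zip(events, events[1:]):
        if cur[0] - prev[0] > gap:
            sessions.append(current)
            current = []
        current.append(cur)
    sessions.append(current)
    return sessions

def abstract_states(session):
    """Collapse runs of similar events into higher-level states."""
    kind_to_state = {"prose_scroll": "reading", "slideshow_step": "slideshow",
                     "exercise_attempt": "exercise", "page_load": "navigation"}
    labels = [kind_to_state.get(kind, "other") for _, kind in session]
    return [state for state, _ in groupby(labels)]

for i, session in enumerate(split_sessions(events)):
    print(f"session {i}: {abstract_states(session)}")
```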
4. Detecting Credit-Seeking Behavior on Programmed Instruction Framesets. Elnady, Yusuf Fawzy, 02 June 2022.
When students use an online eTextbook with content and interactive graded exercises, they often display aspects of two types of behavior: credit-seeking and knowledge-seeking. Any given student might behave to some degree in either way on a given assignment. In this work, we look at multiple aspects of detecting the degree to which either behavior is taking place, and we investigate relationships to student performance. In particular, we focus on an eTextbook used for teaching Formal Languages, an advanced computer science course. This eTextbook uses Programmed Instruction (PI) framesets to deliver the material. We take two approaches to analyzing session interactions in order to detect credit-seeking incidents.
We first start with a coarse-grained approach, presenting an unsupervised model that clusters the behavior in work sessions based on the sequence of interactions that occurs during them. Then we perform a fine-grained analysis in which we consider the type of each question in the frameset, which can be a multiple-choice, single-choice, or true/false question. We show that credit-seeking behavior negatively affects students' learning outcomes. We also find that the type of PI frame is a key factor in drawing students into credit-seeking behavior so that they can finish the PI framesets quickly. We implement three machine learning models that predict students' midterm and overall semester grades based on the amount of credit-seeking behavior they show on the PI framesets. Finally, we provide a semi-supervised learning model to aid in the work-session labeling process. / Master of Science / Students frequently exhibit features of two types of behavior when using an online eTextbook with content and interactive graded exercises: credit-seeking and knowledge-seeking.
When solving homework or studying the material, students can behave in either manner to some extent. In this research, we study how to recognize the degree to which either behavior is occurring and how it relates to student performance. We concentrate on an eTextbook used to teach an advanced computer science course, Formal Languages and Automata, through a teaching paradigm called Programmed Instruction (PI). To detect credit-seeking instances, we take two approaches to studying students' behavior in the Programmed Instruction sessions. We begin with a coarse-grained approach, building a model that categorizes work sessions into two groups based on the interactions that occur throughout them. Then we perform a fine-grained analysis in which we examine the question types in the framesets and their effect on students' behavior. We show that credit-seeking behavior has a negative effect on students' learning outcomes. We discovered that the PI frame type is an important factor in enticing students to engage in credit-seeking behavior in an attempt to finish PI framesets quickly. Finally, we present three predictive models that can forecast students' midterm and overall semester grades based on their credit-seeking behavior on the Programmed Instruction framesets.
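The coarse-grained clustering step described above can be illustrated with a short sketch. The session features, values, and choice of k-means below are assumptions for illustration only; they are not the models or features used in the thesis.

```python
# Hypothetical sketch: cluster PI work sessions into two groups from simple
# interaction features, so one cluster can be inspected for credit-seeking behavior.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Assumed per-session features:
# [seconds spent on prose, fraction of frames answered by rapid guessing,
#  wrong attempts before the correct answer]
sessions = np.array([
    [900.0, 0.05, 1],   # reads the frames, few guesses: looks knowledge-seeking
    [850.0, 0.10, 2],
    [60.0,  0.80, 9],   # skips prose, guesses rapidly: looks credit-seeking
    [45.0,  0.75, 8],
    [400.0, 0.30, 4],
])

X = StandardScaler().fit_transform(sessions)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)  # two behavioral groups to inspect further
```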