1.
Towards Contextualized Programming Education by Developing a Learnersourcing Workflow
Yuzhe Zhou (18398130), 18 April 2024
In response to the escalating demand for proficient programming skills in today's technological landscape, innovative educational strategies have emerged to mitigate the challenges inherent in mastering programming concepts. Contextualization, a pedagogical approach that embeds learning within real-world contexts, has demonstrated efficacy in enhancing student engagement and understanding. However, its implementation in programming education encounters hurdles related to diverse student backgrounds and resource-intensive material preparation. To address these challenges, this paper proposes leveraging learnersourcing, a collaborative approach in which students actively contribute to the creation of contextualized learning materials. Specifically, we investigate the viability of a learnersourcing workflow in an advanced database programming class with 23 enrolled students during the Spring 2022 semester, in which students were tasked with generating contextualized worked-out examples (WEs). The results reveal that students successfully incorporated diverse contexts into their WEs, demonstrating the potential of learnersourcing to enrich educational content. However, challenges such as vague problem descriptions and formatting errors were identified, emphasizing the need for structured support and guidance. Self-assessment ratings tended to overestimate clarity and educational value, while peer assessments exhibited variability among assessors. Ambiguities in evaluation criteria and the limited granularity of rating scales contributed to inconsistencies in assessments. These findings underscore the importance of addressing challenges in learnersourcing implementation, including providing explicit guidance, scaffolding support, and integrating real-time feedback mechanisms. Additionally, efforts to enhance the reliability of self and peer assessments should consider standardization measures and clear evaluation criteria. Future research should explore alternative approaches to improve the validity and consistency of assessments in learnersourcing contexts.
2.
Creating Systems and Applying Large-Scale Methods to Improve Student Remediation in Online Tutoring Systems in Real-time and at Scale
Selent, Douglas A, 08 June 2017
"A common problem shared amongst online tutoring systems is the time-consuming nature of content creation. It has been estimated that an hour of online instruction can take 100-300 hours to create. Several systems have created tools to expedite content creation, such as the Cognitive Tutors Authoring Tool (CTAT) and the ASSISTments builder. Although these tools make content creation more efficient, they all still depend on the efforts of a content creator and/or past historical data. These tools do not take full advantage of the power of the crowd. These issues and challenges faced by online tutoring systems provide an ideal environment for a solution based on crowdsourcing. I created the PeerASSIST system to address the challenges of tutoring content creation. PeerASSIST crowdsources the work students have done on problems inside the ASSISTments online tutoring system and redistributes that work as a form of tutoring to peers who need assistance. Multi-objective multi-armed bandit algorithms are used to distribute student work, balancing exploration of which work is good against exploitation of the best currently known work. These policies are customized to run in a real-world environment with multiple asynchronous reward functions and an infinite number of actions. Inspired by major companies such as Google, Facebook, and Bing, PeerASSIST is also designed as a platform for simultaneous online experimentation in real-time and at scale. Currently, over 600 teachers (grades K-12) are requiring students to show their work. Over 300,000 instances of student work have been collected from over 18,000 students across 28,000 problems. From the student work collected, 2,000 instances have been redistributed to over 550 students who needed help over the past few months. I conducted a randomized controlled experiment to evaluate the effectiveness of PeerASSIST on student performance.
Other contributions include representing learning maps as Bayesian networks to model student performance, creating a machine-learning algorithm to derive students' incorrect processes from their incorrect answers and the inputs of the problem, and applying Bayesian hypothesis testing to A/B experiments. We showed that learning maps can be simplified without practical loss of accuracy, and that time series data is necessary to simplify learning maps when the static data is highly correlated. I also created several interventions to evaluate the effectiveness of the buggy messages generated from the machine-learned incorrect processes. The null results of these experiments demonstrate the difficulty of creating successful tutoring and suggest that other methods of tutoring content creation (i.e., PeerASSIST) should be explored."
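The exploration-exploitation tradeoff the PeerASSIST abstract describes can be sketched with a simple epsilon-greedy bandit. This is an illustrative simplification, not the dissertation's actual policy: the real system is multi-objective, handles asynchronous rewards, and faces an unbounded action set, whereas this sketch assumes a fixed set of work instances and a single scalar reward (e.g., whether the recipient solved the next problem).

```python
import random


class EpsilonGreedyBandit:
    """Choose which peer-submitted work instance to show a struggling student.

    Each arm is one instance of student work; the reward signal is assumed
    to be a 0/1 indicator of whether the recipient subsequently succeeded.
    """

    def __init__(self, arms, epsilon=0.1, seed=None):
        self.epsilon = epsilon
        self.rng = random.Random(seed)
        self.counts = {arm: 0 for arm in arms}   # times each arm was rewarded
        self.values = {arm: 0.0 for arm in arms}  # running mean reward per arm

    def select(self):
        # Explore a random arm with probability epsilon; otherwise
        # exploit the arm with the highest estimated reward.
        if self.rng.random() < self.epsilon:
            return self.rng.choice(list(self.counts))
        return max(self.values, key=self.values.get)

    def update(self, arm, reward):
        # Incremental mean update; rewards can arrive long after selection,
        # which is why selection and update are decoupled.
        self.counts[arm] += 1
        n = self.counts[arm]
        self.values[arm] += (reward - self.values[arm]) / n
```

With epsilon set to zero the policy becomes purely greedy, which makes the exploitation half of the tradeoff easy to see in isolation; any nonzero epsilon reintroduces exploration of work whose quality is still uncertain.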
|