1

A Debugging Supported Automated Assessment System for Novice Programming

Fong, Chao-Chun 29 August 2010 (has links)
Novice programmers find it difficult to debug on their own because they lack prior knowledge. To help them, we first need to be able to check the correctness of a novice's program, and whenever an error is found, we can provide suggestions to assist them in debugging. We use a concolic testing algorithm to automatically generate test inputs: input generation is directed by negating path conditions and is carried out by solving the resulting path constraints. By using concolic testing, we are able to explore as many branches as possible, and once an error is found, we try to locate it for the novice programmer. We propose a new method called concolic debugging, whose idea comes from concolic testing. The concolic debugging algorithm starts from a given failing test and tries to locate the faulty block by negating and backtracking the path conditions of that failing test. We use concolic testing to improve the assessment style of the automated assessment system: 86.67% of our sample programs are successfully assessed by the concolic testing algorithm on our new automated assessment system. We also found that concolic debugging is more stable and more accurate at fault localization than spectrum-based fault localization.
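The branch-negation idea at the core of this approach can be sketched in a few lines. The following is a minimal illustration, assuming the z3-solver Python package: the path conditions of a toy program are encoded by hand, whereas the system described above would collect them by instrumenting concrete executions, and all names in the example are invented.

```python
# Minimal sketch of concolic-style input generation: negate the last path
# condition of an explored path and solve the constraints for a new input.
from z3 import Int, Solver, Not, sat

x, y = Int("x"), Int("y")          # symbolic counterparts of the program inputs

def program_under_test(cx, cy):
    if cx > 10:
        if cy < 5:
            return "bug"           # hypothetical fault the generated inputs should reach
        return "ok"
    return "ok"

def solve(path_conditions):
    """Return concrete (x, y) values satisfying the path constraints, or None."""
    s = Solver()
    s.add(*path_conditions)
    if s.check() != sat:
        return None
    m = s.model()
    return (m.eval(x, model_completion=True).as_long(),
            m.eval(y, model_completion=True).as_long())

# The seed input (0, 0) follows the path where the outer condition is false.
path = [Not(x > 10)]
# Negating the last condition steers execution toward the unexplored branch.
nx, ny = solve(path[:-1] + [Not(path[-1])])
print((nx, ny), "->", program_under_test(nx, ny))
```

Repeating "negate the last unexplored condition and solve" on each newly observed path is what lets the technique cover as many branches as possible; the concolic debugging described above reuses the same machinery but starts from the path conditions of a failing test and backtracks over them to narrow down the faulty block.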
2

iProgram: uma ferramenta de apoio à avaliação de exercícios de programação

SÁ NETO, Eliaquim Lima 12 August 2015 (has links)
Introductory programming is taught at the beginning of Computing degree programs and plays an important role in the student's development throughout the rest of the course. It is a discipline that presents many challenges for students as well as for teachers: students typically struggle with abstract reasoning and problem solving, while teachers have to deal with issues ranging from how to motivate students to how to assess them. In this context, to help overcome these difficulties, many tools for the automatic evaluation of exercises have been proposed in the literature. However, it is not always possible for the teacher to track students' learning progress with respect to the knowledge expected in a programming course. Therefore, we propose iProgram, a tool built to provide teachers with an environment where they can manage classes and exercises, prepare exams with questions from a question bank, evaluate exams semi-automatically, provide feedback to students, and monitor their students' progress through graphical reports.
The tool was evaluated by teachers of introductory programming courses through open interviews and questionnaires. This assessment highlighted the contribution of iProgram in providing an environment that helps teachers evaluate programming exams. In general, the tool was assessed positively, especially the feedback provided to students, the association of questions with learning objectives, and the graphical reports available to the teacher. Some of the interviewed teachers even expressed interest in using the tool in their own courses.
3

Enhancing Learning of Recursion

Hamouda, Sally Mohamed Fathy Mo 24 November 2015 (has links)
Recursion is one of the most important and hardest topics in lower-division computer science courses. As it is an advanced programming skill, the best way to learn it is through targeted practice exercises, but good practice problems are hard to grade. As a consequence, students experience only a small number of problems, and the dearth of feedback to students regarding whether they understand the material compounds the difficulty of teaching and learning CS2 topics. We present a new way of teaching such programming skills: students view examples and visualizations, then practice a wide variety of automatically assessed, small-scale programming exercises that address the sub-skills required to learn recursion. The basic recursion tutorial (RecurTutor) teaches material typically encountered in CS2 courses. The advanced recursion in binary trees tutorial (BTRecurTutor) covers advanced recursion techniques most often encountered after CS2, and it provides detailed feedback on students' programming exercise answers by performing semantic analysis of their code. Experiments showed that RecurTutor supports recursion learning for CS2-level students. Students who used RecurTutor had statistically significantly better grades on recursion exam questions than students who received typical instruction, spent statistically significantly more time solving programming exercises, and came out with statistically significantly higher confidence levels. As part of our effort to enhance recursion learning, we analyzed about 8000 CS2 exam responses to basic recursion questions. From those we discovered a collection of frequently repeated misconceptions, which allowed us to create a draft concept inventory that can be used to measure students' learning of basic recursion skills. We also analyzed about 600 binary tree recursion programming exercises from CS3 exam responses and found frequently recurring misconceptions there as well. The main goal of this work is to enhance the learning of recursion: on one side, the recursion tutorials aim to improve student learning by addressing the main misconceptions and allowing students enough practice; on the other side, the recursion concept inventory assesses student learning of recursion independently of the instructional method. / Ph. D.
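To make concrete what an automatically assessed, small-scale recursion exercise might look like, here is a rough Python sketch of a grader that checks a submission against reference outputs and requires it to actually recurse. This is not RecurTutor's grader; the loop ban, the self-call check, and the digit-sum exercise are assumptions chosen purely for illustration.

```python
# Sketch of a tiny auto-grader for a recursion exercise: run reference tests
# and reject loop-based (non-recursive) submissions via a simple AST check.
import ast
import inspect

def is_recursive_and_loop_free(func):
    """Require at least one self-call and forbid for/while loops."""
    tree = ast.parse(inspect.getsource(func))
    has_loop = any(isinstance(n, (ast.For, ast.While)) for n in ast.walk(tree))
    calls_self = any(isinstance(n, ast.Call)
                     and isinstance(n.func, ast.Name)
                     and n.func.id == func.__name__
                     for n in ast.walk(tree))
    return calls_self and not has_loop

def grade(submission, cases):
    if not is_recursive_and_loop_free(submission):
        return "Rejected: the solution must be recursive and must not use loops."
    failed = [(arg, expected) for arg, expected in cases if submission(arg) != expected]
    return "All tests passed." if not failed else f"Failed cases: {failed}"

# A student's answer to "recursively sum the digits of a non-negative integer".
def digit_sum(n):
    return n if n < 10 else n % 10 + digit_sum(n // 10)

print(grade(digit_sum, [(0, 0), (7, 7), (999, 27), (1234, 10)]))
```

A real grader of this kind would run submissions in a sandbox and give more targeted feedback, but the structure is the same: reference tests plus a check that the solution is genuinely recursive.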
4

Reinforcement of Variability and Implications for Creativity

Bayliss, Harvey Ray 23 March 2016 (has links)
One of the defining characteristics of Autism Spectrum Disorder (ASD) is repetitive, rigid, or stereotyped patterns of behavior. A proposed approach to treating such patterns is to provide reinforcement for response variability. Though research demonstrates that the variability of responses can be influenced by contingencies of reinforcement, no studies have examined the effects of placing contingencies on different units of behavior. The purpose of this study was to examine the effects of two modified percentile schedules on the variety of completed drawings and of individual lines drawn by students with ASD who had been referred for engaging in rigid patterns of behavior. For all three participants who completed drawing sessions, results indicated that drawing variability increased the most when reinforcement was contingent on the variability of the completed drawing, as opposed to a random ratio schedule of reinforcement or reinforcement contingent on individual lines being varied.
5

Enhancements in Volumetric Surgical Simulation

Kerwin, Thomas 22 July 2011 (has links)
No description available.
6

Automatic feedback on programming assignments: An investigation and development of how automatic feedback can be used to promote learning

Dalianis, Hera January 2022 (has links)
This study examines the value of automated assessment of programming assignments in higher education from the students' perspective. Building on earlier literature and studies on learning programming and on pedagogy, it proposes a more developed pedagogy for the use of automated assessment. To make automated assessment concrete, the study explored how the automated assessment tool Kattis is used within a computer science course at KTH, Royal Institute of Technology. To understand students' experience of Kattis, qualitative interviews were conducted with students who are taking or have taken the course. The interviews were analyzed thematically to identify the central aspects of how students use and experience Kattis. These aspects were then analyzed against earlier studies to identify what should be changed, from a pedagogical perspective, in how the tool is used; among other things, the need to inform students and teachers about how to use Kattis and how to interpret its feedback was identified. Based on this analysis, and on discussions with teachers who use Kattis in their teaching, a work guide with methods and information for teachers and students was designed.
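As a rough illustration of the output-comparison judging that Kattis performs, the sketch below runs a submitted program on sample test data and reports a verdict. The verdict strings echo common judge terminology, and the file name solution.py, the test data, and the time limit are assumptions for the example, not Kattis internals.

```python
# Minimal sketch of an output-comparison judge: run the submission on each
# test case, compare trimmed stdout with the expected output, report a verdict.
import subprocess

def judge(solution_cmd, test_cases, time_limit=2.0):
    for stdin_text, expected in test_cases:
        try:
            result = subprocess.run(solution_cmd, input=stdin_text,
                                    capture_output=True, text=True,
                                    timeout=time_limit)
        except subprocess.TimeoutExpired:
            return "Time Limit Exceeded"
        if result.returncode != 0:
            return "Run Time Error"
        if result.stdout.strip() != expected.strip():
            return "Wrong Answer"
    return "Accepted"

# Judge a hypothetical solution.py that reads two integers and prints their sum.
cases = [("1 2\n", "3\n"), ("10 -4\n", "6\n")]
print(judge(["python3", "solution.py"], cases))
```

The study's pedagogical point sits on top of this mechanism: a bare verdict such as "Wrong Answer" by itself tells students little about where their reasoning failed, which is why the work guide emphasizes teaching students and teachers how to read and complement the feedback.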
7

Building and Evaluating a Learning Environment for Data Structures and Algorithms Courses

Fouh Mbindi, Eric Noel 29 April 2015 (has links)
Learning technologies in computer science education have been most closely associated with the teaching of programming, including automatic assessment of programming exercises. However, when it comes to teaching computer science content and concepts, learning technologies have not been heavily used. Perhaps the best-known application today is Algorithm Visualization (AV), of which there are hundreds of examples. AVs tend to focus on presenting the procedural aspects of how a given algorithm works rather than more conceptual content. There are also new electronic textbooks (eTextbooks) that incorporate the ability to edit and execute program examples. For many traditional courses, a longstanding problem is the lack of sufficient practice exercises with feedback to the student. Automated assessment provides a way to increase the number of exercises on which students can receive feedback, and interactive eTextbooks have the potential to make it easy for instructors to introduce both visualizations and practice exercises into their courses. OpenDSA is an interactive eTextbook for data structures and algorithms (DSA) courses. It integrates tutorial content with AVs and automatically assessed interactive exercises. Since Spring 2013, OpenDSA has been regularly used to teach a fundamental data structures and algorithms course (CS2), as well as a more advanced data structures, algorithms, and analysis course (CS3), at various institutions of higher education. In this thesis, I report on findings from early adoption of the OpenDSA system. I describe how OpenDSA's design addresses obstacles in the use of AV systems, and I identify a wide variety of uses of OpenDSA in the classroom. I found that instructors used OpenDSA exercises as graded assignments in all the courses where it was used. Some instructors assigned an OpenDSA assignment before lectures and started spending more time teaching higher-level concepts, and OpenDSA also supported some instructors in implementing a "flipped classroom". I found that students are enthusiastic about OpenDSA and voluntarily used the AVs embedded within it. Students found OpenDSA beneficial and expressed a preference for a class format that included OpenDSA as part of the assigned graded work. The relationship between OpenDSA and students' performance was inconclusive, but students with higher grades tended to complete more exercises. / Ph. D.
8

Automated Assessment of Student-written Tests Based on Defect-detection Capability

Shams, Zalia 05 May 2015 (has links)
Software testing is important, but judging whether a set of software tests is effective is difficult. This problem also appears in the classroom as educators more frequently include software testing activities in programming assignments. The most common measures used to assess student-written software tests are coverage criteria, which track how much of the student's code (in terms of statements or branches) is exercised by the corresponding tests. However, coverage criteria have limitations and sometimes overestimate the true quality of the tests. This dissertation investigates alternative measures of test quality based on how many defects the tests can detect, either from code written by other students (all-pairs execution) or from artificially injected changes (mutation analysis). We also investigate a new potential measure called checked code coverage, which calculates coverage from the dynamic backward slices of test oracles, i.e., all statements that contribute to the checked result of any test. Adopting these alternative approaches in automated classroom grading systems requires overcoming a number of technical challenges. This research addresses those challenges and experimentally compares the different methods in terms of how well they predict the defect-detection capabilities of student-written tests when run against over 36,500 known, authentic, human-written errors. For data collection, we use CS2 assignments and evaluate students' tests with 10 different measures: all-pairs execution, mutation testing with four different sets of mutation operators, checked code coverage, and four coverage criteria. Experimental results encompassing 1,971,073 test runs show that all-pairs execution is the most accurate predictor of the underlying defect-detection capability of a test suite, and the second best predictor is mutation analysis with the statement deletion operator. Further, no strong correlation was found between defect-detection capability and coverage measures. / Ph. D.
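As a simplified illustration of mutation analysis with the statement-deletion operator, the sketch below generates mutants of a small function by deleting statements and counts how many of them a weak test suite kills. The example function, the tests, and the use of Python's ast module are invented for illustration; the dissertation's infrastructure for real student submissions is necessarily far more involved.

```python
# Sketch of statement-deletion mutation analysis: delete one statement at a
# time, rebuild the function, and check whether the tests notice the change.
import ast
import copy

SOURCE = """
def clamp(x, lo, hi):
    if x < lo:
        x = lo
    if x > hi:
        x = hi
    return x
"""

def statement_deletion_mutants(source):
    """Yield one mutant per deletable statement in the function body."""
    tree = ast.parse(source)
    for i in range(len(tree.body[0].body) - 1):   # keep the final return statement
        mutant = copy.deepcopy(tree)
        del mutant.body[0].body[i]
        yield ast.unparse(mutant)                 # requires Python 3.9+

def student_tests(clamp):
    assert clamp(5, 0, 10) == 5
    assert clamp(-3, 0, 10) == 0
    clamp(20, 0, 10)                              # executed, but the result is never asserted

killed = total = 0
for mutant_src in statement_deletion_mutants(SOURCE):
    total += 1
    namespace = {}
    exec(mutant_src, namespace)                   # compile and load the mutant
    try:
        student_tests(namespace["clamp"])         # mutant survives if no assertion fails
    except AssertionError:
        killed += 1
print(f"mutation score: {killed}/{total}")        # 1/2: the upper-bound mutant survives
```

In this toy example the tests execute every statement of clamp, so statement coverage looks perfect, yet the mutant that deletes the upper-bound check survives because its effect is never asserted. That gap between what is executed and what is actually checked is the kind of overestimation by coverage criteria that motivates measuring defect detection directly.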
9

An Interactive Tutorial for NP-Completeness

Maji, Nabanita 18 June 2015 (has links)
A Theory of Algorithms course is essential to any Computer Science curriculum at both the undergraduate and graduate levels, and it is also considered difficult material to teach or to learn. In particular, the topics of Computational Complexity Theory, reductions, and the NP-Complete class of problems are considered difficult by students. Numerous algorithm visualizations (AVs) have been developed over the years to portray the dynamic nature of known algorithms commonly taught in undergraduate classes. However, to the best of our knowledge, the instructional material available for NP-Completeness is mostly static and textual, which does little to alleviate the complexity of the topic. Our aim is to improve the pedagogy of NP-Completeness by providing intuitive, interactive, and easy-to-understand visualizations for standard NP-Complete problems, reductions, and proofs. In this thesis, we present a set of visualizations that we developed using the OpenDSA framework for certain NP-Complete problems. Our paradigm is a three-step process: we first use an AV to illustrate a particular NP-Complete problem, then present an exercise that provides first-hand experience with attempting to solve a problem instance, and finally present a visualization of a reduction as part of the proof of NP-Completeness. Our work has been delivered as a collection of modules in OpenDSA, an interactive eTextbook system developed at Virginia Tech. The tutorial has been introduced as a teaching supplement in both a senior undergraduate and a graduate class. We present an analysis of system use based on records of online interactions by students who used the tutorial, along with results from a survey of the students. / Master of Science
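As a code-level counterpart to the visualized reductions, the sketch below expresses one classic textbook reduction, from Independent Set to Vertex Cover, as a plain instance mapping and confirms the equivalence on a small graph by brute force. The choice of this particular reduction and the helper names are illustrative assumptions, not necessarily one of the tutorial's actual modules.

```python
# Sketch of a decision-problem reduction: (G, k) has an independent set of
# size k exactly when (G, n - k) has a vertex cover, since S is independent
# iff V \ S covers every edge.
from itertools import combinations

def is_vertex_cover(edges, cover):
    return all(u in cover or v in cover for u, v in edges)

def is_independent_set(edges, subset):
    return all(not (u in subset and v in subset) for u, v in edges)

def has_subset_of_size(vertices, edges, k, predicate):
    """Brute-force check, only viable for the tiny graphs used in examples."""
    return any(predicate(edges, set(s)) for s in combinations(vertices, k))

def reduce_is_to_vc(vertices, edges, k):
    """Map an Independent Set instance to the equivalent Vertex Cover instance."""
    return vertices, edges, len(vertices) - k

# A 4-cycle: it has an independent set of size 2, e.g. {1, 3}.
V = [1, 2, 3, 4]
E = [(1, 2), (2, 3), (3, 4), (4, 1)]
k = 2
V2, E2, k2 = reduce_is_to_vc(V, E, k)
print(has_subset_of_size(V, E, k, is_independent_set))   # True
print(has_subset_of_size(V2, E2, k2, is_vertex_cover))   # True, as the reduction guarantees
```

The instance mapping itself is trivial to compute in polynomial time, which is the property a reduction proof needs; the pedagogical difficulty the tutorial targets lies in seeing why the yes/no answers must agree, which is the part the visualizations aim to make intuitive.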
10

Teaching Formal Languages through Visualizations, Machine Simulations, Auto-Graded Exercises, and Programmed Instruction

Mohammed, Mostafa Kamel Osman 14 July 2021 (has links)
The material taught in a Formal Languages course is mathematical in nature and requires students to practice proofs and algorithms to understand the content. Traditional Formal Languages textbooks are heavy on prose, and homework typically consists of solving many paper exercises. Some instructors make use of finite state machine simulators like the JFLAP package. JFLAP helps students by allowing them to build models and apply various algorithms to these models, which improves student interaction with the studied material. However, students still need to read a significant amount of text and practice problems by hand to achieve understanding. Inspired by the principles of the Programmed Instruction (PI) teaching method, we seek to develop a new Formal Languages eTextbook capable of conveying these concepts more intuitively. The PI approach has students read a little, ideally a sentence or a paragraph, and then answer a question or complete an exercise related to that information. Based on the response, students either continue to the next information frame or retry the exercise. Our goal is to present all algorithms using algorithm visualizations and to produce proficiency exercises that let students demonstrate understanding. To evaluate the pedagogical effectiveness of the new eTextbook, we conduct time and performance evaluations across two offerings of the course CS4114 Formal Languages and Automata. In the time evaluation, the time students spend on instructional content presented as text and visualizations versus as PI frames is compared to determine levels of student engagement. In the performance evaluation, students' grades are compared to assess learning gains with text and paper exercises only; with text, visualizations, and exercises; and with PI frames. / Doctor of Philosophy / Theory textbooks in computer science are hard to read and understand. Traditionally, instructors use books that are heavy on mathematical prose and paper exercises. Sometimes instructors use simulators that allow students to create, simulate, and test models. Previously, we found that students tend to skip reading the text presented in the books, which leads to less understanding of the topics taught in the course. To increase student engagement, we developed a new eTextbook for the Formal Languages course, using a pedagogy based on Programmed Instruction that presents the content as short bits of prose, each followed by a related question. If students solve the question correctly, they have understood the content and are ready to move forward. To help both instructors and students, we developed a new Formal Languages simulator named OpenFLAP, which allows instructors to create many exercises and grades those exercises automatically.
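For readers unfamiliar with the machine simulations mentioned above, the sketch below shows the basic operation a simulator such as JFLAP (or, presumably, OpenFLAP) performs when stepping a deterministic finite automaton over an input string. The example DFA, which accepts binary strings with an even number of 1s, and all names are assumptions chosen for illustration.

```python
# Minimal DFA simulation: follow the transition function symbol by symbol and
# accept if the run ends in an accepting state.
def dfa_accepts(alphabet, delta, start, accepting, word):
    state = start
    for symbol in word:
        if symbol not in alphabet:
            raise ValueError(f"symbol {symbol!r} is not in the alphabet")
        state = delta[(state, symbol)]
    return state in accepting

# DFA over {0, 1} accepting strings with an even number of 1s.
alphabet = {"0", "1"}
delta = {("q0", "0"): "q0", ("q0", "1"): "q1",
         ("q1", "0"): "q1", ("q1", "1"): "q0"}

for w in ["", "0110", "111"]:
    print(repr(w), dfa_accepts(alphabet, delta, "q0", {"q0"}, w))
# '' -> True, '0110' -> True (two 1s), '111' -> False (three 1s)
```

An auto-grader built on such a simulator can check a student-built machine by running it on strings from the target language and from its complement; that is one natural way the automatic grading described above could be realized, though OpenFLAP's actual grading approach may differ.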
