
Computer managed learning assessment in higher education: the effect of a practice test.

Sly, Janet L. January 2000
This thesis reports the results of studies set up to investigate formative assessment in the context of a computer managed learning (CML) practice test. The studies sought to determine whether taking the practice test affects first-year university students' performance on later CML assessed tests, and to identify the characteristics of the most effective CML practice test. The study was carried out in the context of CML testing at Curtin University of Technology. Because data were collected in a real testing environment, the research questions were addressed through a series of small studies, each focusing on a one-semester unit for first-year students. Students who sat a practice test improved their performance from the practice to the assessed test. Further, they outperformed the non-practice-test group on the assessed test; the effect was statistically significant in eleven of the twelve studies in which CML test results were investigated. Student ability, anxiety level, and sex did not affect test performance or the choice to sit the practice test. Students preferred to be given the correct answer for an incorrect response and to have a practice test of the same length as the assessed test, but they continued to show improved performance regardless of these conditions. They reported using the feedback in a variety of ways, including identifying important areas of content, identifying their own error areas, and as a motivator for further study. The findings suggest that using the CML system as a formative assessment tool improves student performance on summative assessment. The practice test contributes to improved performance; however, this improvement cannot be attributed to a single factor.
Where the practice test only partially covered the content of the assessed test, the improvement was seen on that common part; however, where there was no overlap of content, the group who did the practice test still performed better on the assessed test than the group who did not. This suggests that a contributing factor may be familiarity, either with the CML system, the items, or the test type. It is also possible that the beneficial effect was due to prior exposure to the CML system and that only one test is required for this purpose. This research has implications for current teaching practices because a practice test provides feedback to both students and lecturers prior to the assessed test. The optimal practice test covers the same content as the assessed test, with the same number of items, and provides the correct answer for any item answered incorrectly. The key recommendation for use of the CML system is the provision of a practice test for formative purposes, for the use of both lecturers and students. Lecturers need to encourage student participation not just on an initial practice test but on all practice tests provided. Students need to be encouraged to review their error summaries, as is the current practice in the CML Laboratory. Lecturers need to make more use of the feedback provided by the tests in terms of content coverage, revision and consolidation of work, and quality of test items.

Improving learning and teaching through automated short-answer marking

Siddiqi, Raheel January 2010
Automated short-answer marking cannot guarantee 100% agreement between the marks generated by a software system and the marks produced separately by a human. This problem has prevented automated marking systems from being used in high-stakes short-answer marking. Given this limitation, can an automated short-answer marking system have any practical application? This thesis describes how an automated short-answer marking system, called IndusMarker, can be used effectively to improve learning and teaching. The design and evaluation of IndusMarker are also presented in the thesis. IndusMarker is designed for factual answers where there is a clear criterion for an answer being right or wrong. The system is based on structure matching, i.e. matching a pre-specified structure, developed via a purpose-built structure editor, against the content of the student's answer text. An examiner specifies the required structure of an answer in a simple purpose-designed language called Question Answer Markup Language (QAML). The structure editor ensures that users construct correct required structures (with respect to QAML's syntax and informal semantics) in a form suitable for accurate automated marking.
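The idea of structure matching described in this abstract can be illustrated with a minimal sketch. This is a hypothetical toy, not IndusMarker's actual algorithm or QAML syntax: here a "required structure" is simply a list of slots, each slot a set of acceptable phrasings, and an answer earns the mark only if every slot is matched somewhere in its text.

```python
import re

# Toy "required structure": a list of slots; each slot is a set of
# acceptable phrasings. All slots must be found in the answer text.
# (Hypothetical illustration only; QAML structures are richer than this.)
def matches_structure(answer: str, required: list) -> bool:
    text = answer.lower()
    return all(
        any(re.search(r"\b" + re.escape(alt) + r"\b", text) for alt in slot)
        for slot in required
    )

def mark(answer: str, required: list, marks: int) -> int:
    """Award full marks for a structural match, zero otherwise."""
    return marks if matches_structure(answer, required) else 0

# Example: "Name the process by which plants make food."
required = [{"photosynthesis"}]
print(mark("The process is photosynthesis.", required, 2))  # 2
print(mark("It is respiration.", required, 2))              # 0
```

A real system of this kind must also handle synonyms, word order, and partial credit, which is why the abstract restricts the approach to factual answers with a clear right/wrong criterion.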

The Effects of Individualized Test Coaching on Teacher Certification Test Scores.

Hall, Kathryn Cowart 08 1900
While student populations are growing, teacher certification examinations act as gatekeeping devices that prevent many who want, and are trained, to teach from entering the profession. If failing these exams predicted failure to teach well, blocking students who do not pass certification exams from entering the profession might be a well-reasoned policy. However, many studies indicate that there is little correlation between certification test scores and quality of teaching. The present study investigated the effectiveness of a program to improve the scores of Texas elementary preservice teachers on a required certification exam. The program consisted of one-on-one coaching of preservice teachers upon the completion of coursework and prior to their taking the state's certification exam. Subjects' scores on a representative form of the certification test were used as pre-treatment measures. The content of the treatment program was individualized for each subject and determined by the specific items that subject missed on the representative form. The post-treatment measure was the subject's score on the certification exam. Scores on the representative form and on the certification examination were compared to determine whether there were significant differences between the scores of preservice teachers who had been coached and those who had not. Since subjects voluntarily enrolled in the treatment, initial differences between coached and uncoached groups were controlled through analysis of covariance and pairwise matching. Descriptive statistics, t-tests for dependent samples, repeated measures analysis of variance, and univariate analyses of variance and covariance all indicated statistically significant differences between the certification test scores of coached and uncoached students. Coached students showed greater improvement in scores than uncoached students, with Hispanic subjects showing greater improvement than Caucasian subjects.
Analyses that examined the differences between the coached and uncoached subjects on the domain and competency scores that make up the raw scores failed to indicate the sources of the differences in raw scores.
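The core adjustment described above, comparing coached and uncoached groups on post-test scores while controlling for pretest (representative-form) scores via analysis of covariance, can be sketched as follows. The data here are simulated stand-ins, not the study's actual scores, and the `statsmodels` formula API is used for the ANCOVA-style model.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in data: a pretest score, a coached/uncoached flag,
# and a post-test (certification) score with a built-in coaching gain.
rng = np.random.default_rng(0)
n = 60
pre = rng.normal(70, 8, n)
coached = np.repeat([1, 0], n // 2)
post = pre + 5 * coached + rng.normal(0, 4, n)
df = pd.DataFrame({"pre": pre, "coached": coached, "post": post})

# ANCOVA as a linear model: post-test regressed on group membership,
# adjusting for the pretest covariate. The `coached` coefficient is the
# adjusted group effect; its p-value tests the coaching difference.
model = smf.ols("post ~ coached + pre", data=df).fit()
print(model.params["coached"])
print(model.pvalues["coached"])
```

Because enrollment in the coaching was voluntary, adjusting for the pretest covariate (as the study did, alongside pairwise matching) is what allows the group comparison to be interpreted at all.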
