1

Znalosti žáků ZŠ z oblasti ekologie: tvorba standardizovaného testu / Ecology knowledge of grammar school pupils: development of a standardized test

GROULÍKOVÁ, Eva. January 2016.
The main purpose of this diploma thesis was to develop a standardized test of ecology knowledge for grammar school pupils and to trial it at selected schools. The first section analyzes the current state of ecology knowledge testing in the Czech Republic, followed by the theory of didactic test construction. Responses from 231 pupils (122 for version A, 109 for version B) were collected, analyzed, compared, and discussed.
2

Improved recall for information reread on tests provides support for the test question effect

Barnes, Kevin. 01 May 2020.
Repeated testing produces superior recall (especially at a delay) compared to rereading, a phenomenon known as the testing effect. Three studies present evidence for a test question effect that benefits recall of information participants encounter while reading a test. After reading a two-page passage, participants either reread the passage or took fill-in-the-blank practice tests that contained additional information that was later tested. The same procedure was repeated with a second two-page prose passage. A large and unexpected benefit for information read on practice tests was observed. On the 48-hour delayed final test, recall of information reread on practice tests was superior to recall of information reread in prose passages, a finding not predicted by current theories of the testing effect. Additionally, recall of information reread on practice tests did not differ significantly from recall of tested information.
3

An Evaluation of Multiple Choice Test Questions Deliberately Designed to Include Multiple Correct Answers

Thayn, Kim Scott. 16 December 2010.
The multiple-choice test question is a popular item format used for tests ranging from classroom assessments to professional licensure exams. The popularity of this format stems from its administration and scoring efficiencies. The most common multiple-choice format consists of a stem that presents a problem to be solved, accompanied by a single correct answer and two, three, or four incorrect answers. A well-constructed item using this format can result in a high-quality assessment of an examinee's knowledge, skills, and abilities. However, for some complex, higher-order knowledge, skills, and abilities, a single correct answer is often insufficient. Test developers tend to avoid using multiple correct answers out of concern about the increased difficulty and lower discrimination of such items. However, by avoiding multiple correct answers, test constructors may inadvertently create validity concerns resulting from incomplete content coverage and construct-irrelevant variance. This study explored an alternative way of implementing multiple-choice questions with two or more correct answers: specifying in each question the number of answers examinees should select, instead of using the traditional guideline to select all that apply. The study investigated three operational exams that use this format, where examinees are told how many answers to select. The collective statistical performance of multiple-choice items with more than one keyed-correct answer was compared with that of traditional single-answer multiple-choice (SA) items within each exam. The results indicate that the multiple-answer multiple-choice (MA) items evaluated from these three exams performed at least as well as the single-answer questions within the same exams.
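A minimal illustration of the kind of classical item analysis the abstract alludes to: item difficulty (proportion correct) and item discrimination (correlation of an item score with the rest-of-test score), averaged separately over single-answer (SA) and multiple-answer (MA) items. The simulated response matrix and the SA/MA split below are assumptions for demonstration only, not data or code from the thesis.

import numpy as np

# Simulate dichotomous item scores with a simple one-parameter logistic model
# (illustrative only; the study analyzed operational exam data).
rng = np.random.default_rng(0)
ability = rng.normal(size=(500, 1))        # one latent trait value per examinee
item_location = rng.normal(size=(1, 40))   # one difficulty parameter per item
prob_correct = 1.0 / (1.0 + np.exp(-(ability - item_location)))
scores = (rng.random((500, 40)) < prob_correct).astype(int)

def item_statistics(scores):
    """Return (difficulty, discrimination) for a 0/1 examinee-by-item matrix."""
    difficulty = scores.mean(axis=0)       # p-value: proportion answering correctly
    total = scores.sum(axis=1)
    discrimination = np.array([
        np.corrcoef(scores[:, j], total - scores[:, j])[0, 1]  # item-rest correlation
        for j in range(scores.shape[1])
    ])
    return difficulty, discrimination

p, r = item_statistics(scores)
sa_items = slice(0, 20)    # hypothetical: first 20 items single-answer
ma_items = slice(20, 40)   # hypothetical: last 20 items multiple-answer
print(f"SA items: mean difficulty {p[sa_items].mean():.2f}, mean discrimination {r[sa_items].mean():.2f}")
print(f"MA items: mean difficulty {p[ma_items].mean():.2f}, mean discrimination {r[ma_items].mean():.2f}")

Comparing the two group averages in this way mirrors the abstract's comparison of the collective statistical performance of MA and SA items; the study's reported conclusion is that the MA items performed at least as well as the SA items.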
