151

Exploring a meta-theoretical framework for dynamic assessment and intelligence

Murphy, Raegan. January 2007
Thesis (PhD (Psychology))--University of Pretoria, 2007. / Abstract in English. Includes bibliographical references. Available on the Internet via the World Wide Web.
152

A comparison of traditional test blueprinting and item development to assessment engineering in a licensure context

Masters, James S. January 1900
Dissertation (Ph.D.)--The University of North Carolina at Greensboro, 2010. / Directed by Richard Luecht; submitted to the Dept. of Educational Research Methodology. Title from PDF t.p. (viewed Jul. 12, 2010). Includes bibliographical references (p. 92-103).
153

An evaluation of a new method of IRT scaling

Ragland, Shelley. January 2010
Thesis (Ph.D.)--James Madison University, 2010. / Includes bibliographical references.
154

Effectiveness of the hybrid Levine equipercentile and modified frequency estimation equating methods under the common-item nonequivalent groups design

Hou, Jianlin. Vispoel, Walter P. January 2007
Thesis advisor: Walter P. Vispoel. Includes bibliographical references (p. 194-196).
155

Relationships between examinee pacing and observed item responses: results from a multi-factor simulation study and an operational high-stakes assessment

Klaric, John S. January 1900
Dissertation (Ph.D.)--The University of North Carolina at Greensboro, 2009. / Directed by Richard M. Luecht; submitted to the Dept. of Educational Research Methodology. Title from PDF t.p. (viewed May 17, 2010). Includes bibliographical references (p. 58-62).
156

Multilevel 2PL item response model vertical equating with the presence of differential item functioning

Turhan, Ahmet. Kamata, Akihito. January 2006
Thesis (Ph. D.)--Florida State University, 2006. / Advisor: Akihito Kamata, Florida State University, College of Education, Dept. of Educational Psychology and Learning Systems. Title and description from dissertation home page (viewed June 7, 2006). Document formatted into pages; contains x, 135 pages. Includes bibliographical references.
157

Bayesian analysis of hierarchical IRT models: comparing and combining the unidimensional & multi-unidimensional IRT models

Sheng, Yanyan. January 2005
Thesis (Ph. D.)--University of Missouri-Columbia, 2005. / The entire dissertation/thesis text is included in the research.pdf file; the official abstract appears in the short.pdf file (which also appears in the research.pdf); a non-technical general description, or public abstract, appears in the public.pdf file. Title from title screen of research.pdf file (viewed on July 19, 2006). Vita. Includes bibliographical references.
158

Controlling Type I errors in moderated multiple regression: an application of item response theory for applied psychological research

Morse, Brendan J. January 2009
Thesis (Ph.D.)--Ohio University, August, 2009. / Title from PDF t.p. Includes bibliographical references.
159

Sample Size and Test Length Minima for DIMTEST with Conditional Covariance-Based Subtest Selection

January 2012
The existing minima for sample size and test length recommendations for DIMTEST (750 examinees and 25 items) are tied to features of the procedure that are no longer in use. The current version of DIMTEST uses a bootstrapping procedure to remove bias from the test statistic and is packaged with a conditional covariance-based procedure called ATFIND for partitioning test items. Key factors such as sample size, test length, test structure, the correlation between dimensions, and strength of dependence were manipulated in a Monte Carlo study to assess the effectiveness of the current version of DIMTEST with fewer examinees and items. In addition, the DETECT program was also used to partition test items; a second component of this study compared the structure of test partitions obtained with ATFIND and DETECT in a number of ways. With some exceptions, the performance of DIMTEST was quite conservative in unidimensional conditions. The performance of DIMTEST in multidimensional conditions depended on each of the manipulated factors, and suggested that the sample size and test length minima can be lowered for some conditions. In terms of partitioning test items in unidimensional conditions, DETECT tended to produce longer assessment subtests than ATFIND, in turn yielding different test partitions. In multidimensional conditions, test partitions became more similar and more accurate with increased sample size, factorially simple data, greater strength of dependence, and a decreased correlation between dimensions. Recommendations for sample size and test length minima are provided along with suggestions for future research. / Dissertation/Thesis / M.A. Educational Psychology 2012
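The abstract above describes a standard Monte Carlo design: generate response data with a known dimensional structure, apply the dimensionality test, and tally rejection rates across manipulated conditions. Below is a minimal sketch of that scaffolding, not the author's code: the unidimensional 2PL data generator is standard, but `dimensionality_test` is a hypothetical placeholder for DIMTEST (a standalone program with ATFIND/DETECT subtest selection), and the condition grid is illustrative rather than the study's actual design.

```python
# Minimal Monte Carlo sketch of the study design, under stated assumptions.
import numpy as np

rng = np.random.default_rng(42)

def simulate_2pl(n_examinees, n_items):
    """Generate unidimensional 2PL item responses."""
    theta = rng.normal(0.0, 1.0, size=n_examinees)        # examinee abilities
    a = rng.lognormal(0.0, 0.3, size=n_items)             # discriminations
    b = rng.normal(0.0, 1.0, size=n_items)                # difficulties
    p = 1.0 / (1.0 + np.exp(-a * (theta[:, None] - b)))   # P(correct response)
    return (rng.random((n_examinees, n_items)) < p).astype(int)

def dimensionality_test(responses):
    """Hypothetical stand-in for DIMTEST; returns a p-value.

    The real study calls DIMTEST with ATFIND (or DETECT) subtest selection
    here; a uniform placeholder p-value keeps the sketch self-contained.
    """
    return rng.random()

N_REPS = 500
for n_examinees in (250, 500, 750):       # below / at the old 750 minimum
    for n_items in (15, 20, 25):          # below / at the old 25-item minimum
        rejections = sum(
            dimensionality_test(simulate_2pl(n_examinees, n_items)) < 0.05
            for _ in range(N_REPS)
        )
        # With unidimensional data, the rejection rate estimates Type I error.
        print(f"N={n_examinees:4d}  items={n_items:2d}  "
              f"rejection rate = {rejections / N_REPS:.3f}")
```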
160

Algorithms for assessing the quality and difficulty of multiple choice exam questions

Luger, Sarah Kaitlin Kelly. January 2016
Multiple Choice Questions (MCQs) have long been the backbone of standardized testing in academia and industry. Correspondingly, there is a constant need for the authors of MCQs to write and refine new questions for new versions of standardized tests, as well as to support measuring performance in the emerging massive open online courses (MOOCs). Research that explores what makes a question difficult, or which questions distinguish higher-performing students from lower-performing students, can aid in the creation of the next generation of teaching and evaluation tools. In the automated MCQ answering component of this thesis, algorithms query for definitions of scientific terms, process the returned web results, and compare the returned definitions to the original definition in the MCQ. This automated method for answering questions is then augmented with a model, based on human performance data from crowdsourced question sets, for analysis of question difficulty as well as the discrimination power of the non-answer alternatives. The crowdsourced question sets come from PeerWise, an open-source online college-level question authoring and answering environment. The goal of this research is to create an automated method to both answer multiple choice inverse definition questions in the domain of introductory biology and assess their difficulty. The results of this work suggest that human-authored question banks provide useful data for building gold-standard human performance models. The methodology for building these performance models has value in other domains that test the difficulty of questions and the quality of the exam takers.
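The answering method described above retrieves candidate definitions and compares them to the definition given in the question stem. The following is a minimal sketch of that idea, not the thesis code: the `retrieved_definitions` dictionary stands in for the live web queries the abstract mentions, and Jaccard word overlap is an assumed similarity measure (the thesis's actual comparison method may differ).

```python
# Minimal sketch of answering an "inverse definition" MCQ by lexical overlap.
def jaccard(a: str, b: str) -> float:
    """Jaccard similarity between two texts' word sets."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def answer_mcq(stem: str, options: dict[str, str]) -> str:
    """Pick the option whose retrieved definition best matches the stem."""
    return max(options, key=lambda term: jaccard(stem, options[term]))

# Toy example in the thesis's domain (introductory biology); the definitions
# below are hand-written stand-ins for processed web results.
stem = "The organelle that produces ATP through cellular respiration"
retrieved_definitions = {
    "mitochondrion": "organelle that generates ATP via cellular respiration",
    "ribosome": "molecular machine that synthesizes proteins",
    "chloroplast": "organelle that performs photosynthesis in plants",
    "nucleus": "organelle containing the cell's genetic material",
}
print(answer_mcq(stem, retrieved_definitions))  # -> mitochondrion
```

Per-option similarity scores of this kind could also feed the difficulty side of the abstract: options that score nearly as high as the key are plausible distractors, which is one way to operationalize the discrimination power of non-answer alternatives.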
