31. The application of item response theory in the cross-cultural validation of the physical estimation and attraction scale / Charoenruk, Kongsak. January 1989.
Thesis (Ph.D.)--Oregon State University, 1989. Typescript (photocopy). Includes bibliographical references. Also available on the World Wide Web.

32. IRT parameter estimation: can the jackknife improve accuracy? / Dunn, Jennifer Louise. January 2004.
Thesis (Ph.D.)--University of Toronto, 2004. Adviser: Ruth Childs. Includes bibliographical references.

33. The application of item response theory to measure problem solving proficiencies / Wu, Margaret Li-Min. January 2003.
Thesis (Ph.D.)--University of Melbourne, Dept. of Learning and Educational Development, 2004. Typescript (photocopy). Includes bibliographical references (leaves 247-265).

34. Multidimensionality and item parameter drift: an investigation of linking items in a large-scale certification test / Li, Xin. January 2008.
Thesis (Ph.D.)--Michigan State University, Measurement and Quantitative Methods, 2008. Title from PDF t.p. (viewed July 21, 2009). Includes bibliographical references (p. 111-120). Also issued in print.

35. An automated test assembly for unidimensional IRT tests containing cognitive diagnostic elements / Kim, Soojin; Chang, Hua-Hua. January 2004 (PDF).
Thesis (Ph.D.)--University of Texas at Austin, 2004. Supervisor: Hua-Hua Chang. Vita. Includes bibliographical references.

36. An investigation of vertical scaling with item response theory using a multistage testing framework / Beard, Jonathan; Ansley, Timothy Neri. January 2008.
Thesis supervisor: Timothy Ansley. Includes bibliographical references (p. 175-180).

37. Variability in the estimation of item option characteristic curves for the multiple-category scoring model / Buhr, Dianne C. January 1989.
Thesis (Ph.D.)--University of Florida, 1989. Description based on print version record. Typescript. Vita. Includes bibliographical references (leaves 121-127).

38. The impact of student ability and method for varying the position of correct answers in classroom multiple-choice tests / Joseph, Dane Christian. January 2010 (PDF).
Thesis (Ph.D.)--Washington State University, May 2010. Title from PDF title page (viewed July 15, 2010). Department of Educational Leadership & Counseling Psychology. Includes bibliographical references (p. 67-73).

39. IRT-based automated test assembly: a sampling and stratification perspective / Chen, Pei-hua; Chang, Hua-Hua. January 2005 (PDF).
Thesis (Ph.D.)--University of Texas at Austin, 2005. Supervisor: Hua-Hua Chang. Vita. Includes bibliographical references.

40. Developing a validation process for an adaptive computer-based spoken English language test / Underhill, Nic. January 2000.
This thesis explores the implications for language test validation of developments in language teaching and testing methodology, test validity and computer-based delivery. It identifies a range of features that tests may now exhibit in novel combinations, and concludes that these combinations of factors favour a continuing process of validation for such tests. It proposes a validation model designed around a series of cycles drawing on diverse sources of data.

The research uses the Five Star test, a private commercial test designed for use in a specific cultural context, as an exemplar of a larger class of tests exhibiting some or all of these features. A range of validation activities on the Five Star test is reported and analysed, drawing on two quite different sources: an independent expert panel that scrutinised the test task by task, and an analysis of 460 test results using item response theory (IRT). The validation activities are critically evaluated for the purposes of the model, which is then applied to the Five Star test.

A historical overview of language teaching and testing methodology reveals the communicative approach to be the dominant paradigm, but suggests that there is no clear consensus about the key features of this approach or how they combine. The approach has been applied incompletely to language testing, and important aspects of it that remain problematic are identified, especially for the assessment of spoken language. These include the constructs of authenticity, interaction and topicality, whose status in the literature is reviewed and whose determinability in test events is discussed.

The evolution of validity in the broader field of educational and psychological testing informs the development of validation in language testing, and a transition is identified away from validity as a one-time activity attaching to the test instrument towards validation as a continuing process that informs the interpretation of test results. In test delivery, the research reports on the validation issues raised by computer-based adaptive testing, particularly with respect to test instruments such as the Five Star test that combine direct face-to-face interaction with computer-based delivery. In the light of the theoretical issues raised and the application of the model to the Five Star test, some implications of the model for use in other test environments are presented critically and recommendations are made for its development.