1

Nonword Item Generation: Predicting Item Difficulty in Nonword Repetition

January 2011
Abstract: The current study employs item difficulty modeling procedures to evaluate the feasibility of potential generative item features for nonword repetition. Specifically, the extent to which the manipulated item features affect the theoretical mechanisms that underlie nonword repetition accuracy was estimated. Generative item features were based on the phonological loop component of Baddeley's model of working memory, which addresses phonological short-term memory (Baddeley, 2000, 2003; Baddeley & Hitch, 1974). Using researcher-developed software, nonwords were generated to adhere to the phonological constraints of Spanish. Thirty-six nonwords were chosen based on the set of item features identified by the proposed cognitive processing model. Using a planned missing data design, two hundred fifteen Spanish-English bilingual children were administered 24 of the 36 generated nonwords. Multiple regression and explanatory item response modeling techniques (e.g., the linear logistic test model, LLTM; Fischer, 1973) were used to estimate the impact of item features on item difficulty. The final LLTM included three item radicals and two item incidentals. Results indicated that the LLTM-predicted item difficulties were highly correlated with the Rasch item difficulties (r = .89) and accounted for a substantial amount of the variance in item difficulty (R² = .79). The findings are discussed in terms of validity evidence in support of using the phonological loop component of Baddeley's model (2000) as a cognitive processing model for nonword repetition items, and the feasibility of using the proposed radical structure as an item blueprint for the future generation of nonword repetition items. / Dissertation/Thesis / M.A. Educational Psychology 2011
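A minimal sketch of the models named in the abstract, in generic notation (the symbols below are illustrative and not taken from the thesis): in the Rasch model, the probability that child j repeats nonword i correctly depends on an ability parameter \theta_j and a freely estimated item difficulty \beta_i,

  P(X_{ij} = 1 \mid \theta_j) = \frac{\exp(\theta_j - \beta_i)}{1 + \exp(\theta_j - \beta_i)},

while the LLTM (Fischer, 1973) constrains each difficulty to a weighted sum of K item features (here the radicals and incidentals),

  \beta_i \approx \sum_{k=1}^{K} q_{ik}\,\eta_k + c,

where q_{ik} codes feature k for item i and \eta_k is the estimated effect of that feature. Comparing the feature-constrained difficulties with the freely estimated Rasch difficulties is what yields correspondence statistics such as the reported r = .89 and R² = .79.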
2

Essays zu methodischen Herausforderungen im Large-Scale Assessment

Robitzsch, Alexander. 21 January 2016
Several methodological challenges emerge in large-scale student assessment studies such as PISA and TIMSS. Item response theory (IRT) models are essential for scaling student abilities in these studies. This thesis investigates the consequences of several model violations in unidimensional IRT models (especially the Rasch model), focusing on four methodological challenges. First, position effects and contextual effects imply, in contrast to a unidimensional IRT model, that item difficulties depend on the position of an item in a test booklet as well as on the composition of the booklet, and that student abilities may vary across test positions. Second, the administration of items within testlets causes local dependencies, and it is unclear whether and how these dependencies should be taken into account in scaling. Third, item difficulties can vary among school classes due to different opportunities to learn. Fourth, the proportion of omitted items is generally non-negligible in low-stakes tests. The thesis argues that estimates of item difficulties, student abilities, and reliabilities are not necessarily biased despite such model violations. It further argues that the choice of an IRT model often cannot, and should not, be made solely on psychometric grounds; the same holds for the question of how omitted items should be scored, where only validity considerations can justify a particular scoring procedure. Model violations in IRT models can be conceptually placed within the domain sampling approach (item sampling; generalizability theory), in which the existence of latent variables need not be posited. The thesis shows that the statistical uncertainty in modeling competencies arises not only from the sampling of persons, but also from the sampling of items and from the choice of statistical models.
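A minimal sketch of the kinds of model violations discussed, in generic notation (illustrative, not taken from the thesis): the unidimensional Rasch model assumes

  P(X_{pi} = 1 \mid \theta_p) = \frac{\exp(\theta_p - \beta_i)}{1 + \exp(\theta_p - \beta_i)},

with a single ability \theta_p per person and a difficulty \beta_i that does not depend on where, or with which other items, item i is administered. A position effect can then be written as an effective difficulty \beta_i + \delta_{\mathrm{pos}(i)} that shifts with the booklet position of item i, and a testlet effect as an additional person-specific term \gamma_{p,t(i)} for the testlet t(i) containing item i, which induces the local dependence among items of the same testlet described above.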
