1 |
Nonword Item Generation: Predicting Item Difficulty in Nonword Repetition (January 2011)
abstract: The current study employs item difficulty modeling procedures to evaluate the feasibility of potential generative item features for nonword repetition. Specifically, the study estimated the extent to which the manipulated item features affect the theoretical mechanisms underlying nonword repetition accuracy. Generative item features were based on the phonological loop component of Baddeley's model of working memory, which addresses phonological short-term memory (Baddeley, 2000, 2003; Baddeley & Hitch, 1974). Using researcher-developed software, nonwords were generated to adhere to the phonological constraints of Spanish. Thirty-six nonwords were chosen based on the set of item features identified by the proposed cognitive processing model. Using a planned missing data design, two hundred fifteen Spanish-English bilingual children were administered 24 of the 36 generated nonwords. Multiple regression and explanatory item response modeling techniques (e.g., linear logistic test model, LLTM; Fischer, 1973) were used to estimate the impact of item features on item difficulty. The final LLTM included three item radicals and two item incidentals. Results indicated that the LLTM-predicted item difficulties were highly correlated with the Rasch item difficulties (r = .89) and accounted for a substantial amount of the variance in item difficulty (R² = .79). The findings are discussed in terms of validity evidence in support of using the phonological loop component of Baddeley's model (2000) as a cognitive processing model for nonword repetition items and the feasibility of using the proposed radical structure as an item blueprint for the future generation of nonword repetition items. / Dissertation/Thesis / M.A. Educational Psychology 2011
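For readers unfamiliar with the LLTM, a minimal illustrative sketch of the idea follows: each item's difficulty is modeled as a weighted sum of its generative features (radicals and incidentals), and the feature-based predictions can be compared with Rasch item difficulties. The design matrix, feature names, weights, and difficulty values below are invented placeholders rather than the study's data, and the least-squares fit is a simplified stand-in for the conditional maximum likelihood estimation normally used for the LLTM.

```python
import numpy as np

# Hypothetical sketch of an LLTM-style item difficulty decomposition.
# All names and numbers are invented placeholders, not values from the study.
rng = np.random.default_rng(0)

n_items = 36
feature_names = ["radical_1", "radical_2", "radical_3", "incidental_1", "incidental_2"]
Q = rng.integers(0, 2, size=(n_items, len(feature_names)))   # item-by-feature design matrix

# Simulate Rasch item difficulties as a weighted sum of features plus noise;
# in practice these would be estimated from response data.
true_weights = np.array([0.8, 0.5, -0.4, 0.2, -0.1])
rasch_difficulty = Q @ true_weights + rng.normal(0, 0.3, size=n_items)

# Simplified least-squares fit: regress Rasch difficulties on the feature matrix.
# (The LLTM proper is usually estimated by conditional maximum likelihood.)
X = np.column_stack([np.ones(n_items), Q])                    # intercept + features
coef, *_ = np.linalg.lstsq(X, rasch_difficulty, rcond=None)
predicted_difficulty = X @ coef

r = np.corrcoef(predicted_difficulty, rasch_difficulty)[0, 1]
print("estimated feature weights:", dict(zip(feature_names, np.round(coef[1:], 2))))
print(f"correlation with Rasch difficulties: r = {r:.2f}")
```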
2 |
Development of a working memory test for the German Bundeswehr’s online assessment. Nagler-Nitzschner, Ursa (09 March 2021)
Like most Western armed forces, the Bundeswehr operates between high personnel requirements and a shortage of skilled personnel. An online assessment can optimize the application process so that capable applicants are secured more quickly. Online assessment has various advantages, but it also brings challenges. Probably the biggest of these is minimizing cheating, since online assessment takes place in a largely unsupervised environment. Various approaches are used to counter this problem, such as large item pools, which work against the spread of solutions on the Internet; this approach, however, is associated with high costs. Automatic item generation, by contrast, makes it possible to create psychometrically high-quality items in a cost-effective and time-efficient manner. For this reason, the present work developed and evaluated two working memory tests with automatic item generation for the Bundeswehr’s online assessment, with the aim of achieving high predictive validity with respect to the on-site diagnostic assessment.
The first study (N = 330) demonstrated that automatic item generation can be used for the developed working memory tests. Two different temporal variants were also investigated, with the longer stimulus presentation time proving to be more beneficial.
The second study (N = 621) provided evidence of reliability and validity. The tests showed good convergent and discriminant validity. In addition, one of the two tests demonstrated very good predictive validity. Taking all test quality criteria into account, this test was ultimately proposed for use in the German Armed Forces’ online assessment. Thus, the Bundeswehr now has a scientifically grounded working memory test available for its online assessment.
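As a purely illustrative sketch of template-based automatic item generation for a simple memory-span task (not the Bundeswehr's actual item model), items can be instantiated from a template whose parameters, such as sequence length, drive difficulty. The letter pool, exclusion rule, and length range below are assumptions made for the example.

```python
import random
import string

# Illustrative template-based item generation for a simple letter-span task.
# The letter pool, the exclusion of acoustically confusable letters, and the
# length range are assumptions for this sketch, not an operational item model.
CONFUSABLE = set("BCDEGPTVZ")          # acoustically similar letters excluded (illustrative choice)
LETTER_POOL = [c for c in string.ascii_uppercase if c not in CONFUSABLE]

def generate_item(length, rng):
    """Instantiate one span item: a letter sequence without repetitions."""
    return rng.sample(LETTER_POOL, length)

def generate_item_pool(lengths=range(4, 9), items_per_length=10, seed=42):
    """Build a large item pool by varying sequence length (the difficulty driver)."""
    rng = random.Random(seed)
    return [
        {"length": length, "sequence": generate_item(length, rng)}
        for length in lengths
        for _ in range(items_per_length)
    ]

if __name__ == "__main__":
    for item in generate_item_pool(items_per_length=2):
        print(item["length"], " ".join(item["sequence"]))
```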