1.
The Performance of the Linear Logistic Test Model When the Q-Matrix is Misspecified: A Simulation Study. Macdonald, George T., 14 November 2013.
A simulation study was conducted to explore the performance of the linear logistic test model (LLTM) when the relationships between items and cognitive components were misspecified. Factors manipulated included percent of misspecification (0%, 1%, 5%, 10%, and 15%), form of misspecification (under-specification, balanced misspecification, and over-specification), sample size (20, 40, 80, 160, 320, 640, and 1280), Q-matrix density (60% and 46%), number of items (20, 40, and 60), and skewness of the person ability distribution (-0.5, 0, and 0.5). Statistical bias, root mean squared error, confidence interval coverage, confidence interval width, and pairwise cognitive component correlations were computed. The impact of the design factors was interpreted for cognitive component, item difficulty, and person ability parameter estimates.
The simulation provided rich results; selected key conclusions include (a) SAS performs well when estimating the LLTM using a marginal maximum likelihood approach for cognitive components and empirical Bayes estimation for person ability, (b) parameter estimates are sensitive to misspecification, (c) under-specification of the Q-matrix is preferable to over-specification, (d) when the Q-matrix is properly specified, the cognitive component parameter estimates often have tolerable root mean squared error when the sample size is greater than 80, (e) the LLTM is robust to the density of the Q-matrix specification, (f) the LLTM works well when the number of items is 40 or greater, and (g) the LLTM is robust to slight skewness of the person ability distribution. In sum, the LLTM is capable of identifying conceptual knowledge when the Q-matrix is properly specified, which is a rich area for applied empirical research.
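A minimal sketch of the idea under study: in the LLTM, each item's Rasch difficulty is a weighted sum of cognitive component difficulties specified by the Q-matrix, so misspecifying the Q-matrix distorts the recovered parameters. The Q-matrix, component weights, and misspecification rate below are illustrative, and ordinary least squares stands in for the study's marginal maximum likelihood estimation.

```python
# Sketch: LLTM difficulty decomposition and the effect of an
# under-specified Q-matrix. All values are invented for illustration.
import numpy as np

rng = np.random.default_rng(42)

n_items, n_components = 20, 4
Q_true = (rng.random((n_items, n_components)) < 0.6).astype(float)  # ~60% dense
eta = np.array([0.8, -0.3, 0.5, 1.1])      # cognitive component difficulties

beta_true = Q_true @ eta                   # LLTM: item difficulty = Q @ eta

# Under-specify ~10% of the loaded entries (flip some 1s to 0s).
Q_mis = Q_true.copy()
ones = np.argwhere(Q_mis == 1)
flip = ones[rng.choice(len(ones), size=max(1, int(0.10 * len(ones))), replace=False)]
Q_mis[flip[:, 0], flip[:, 1]] = 0

# Least-squares recovery of eta from the wrong Q (stand-in for MML).
eta_hat, *_ = np.linalg.lstsq(Q_mis, beta_true, rcond=None)
beta_hat = Q_mis @ eta_hat

bias = np.mean(beta_hat - beta_true)
rmse = np.sqrt(np.mean((beta_hat - beta_true) ** 2))
print(f"item-difficulty bias: {bias:+.3f}, RMSE: {rmse:.3f}")
```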
2.
Alignment of Faculty Expectations and Course Preparation between First-Year Mathematics and Physics Courses and a Statics and Dynamics Course. Shryock, Kristi, May 2011.
The alignment between the expectations of engineering faculty and the preparation engineering students receive in first-year mathematics and physics mechanics courses motivated the work in this study. While many aspects of student preparation, including intangibles such as motivation, time management, and study skills, affect classroom performance, the goal of this study was to assess the alignment of the mathematics and physics mechanics knowledge and skills addressed in first-year courses with those needed for a sophomore-level statics and dynamics course.
Objectives of this study included: (1) development of a set of metrics for measuring alignment appropriate for an engineering program by adapting and refining common notions of alignment used in K-12 studies; (2) study of the degree of alignment between the first-year mathematics and physics mechanics courses and the follow-on sophomore-level statics and dynamics course; (3) identification of first-year mathematics and physics mechanics skills needed for a sophomore-level statics and dynamics course through the development of mathematics and physics instruments based on input from faculty teaching the statics and dynamics courses; (4) analysis of tasks given to the students (in the form of homework and exam problems) and identification of the mathematics and physics skills required; (5) comparison of the required skills to the skills reported by faculty members to be necessary for a statics and dynamics course; and (6) comparison of student preparation, in the form of grades and credits received in prerequisite courses, to performance in statics and dynamics.
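For objective (1), a widely used K-12 alignment measure is the Porter alignment index, which compares the proportional emphasis two sources place on each content category. Whether the study adapted this exact index is an assumption, and the skill categories and counts below are invented for illustration.

```python
# Hedged sketch of a Porter-style alignment index over skill-emphasis
# proportions; categories and counts are hypothetical.
from collections import Counter

def porter_alignment(emphasis_a: Counter, emphasis_b: Counter) -> float:
    """Return 1 - sum(|p_a - p_b|)/2 over categories (1 = perfect alignment)."""
    cats = set(emphasis_a) | set(emphasis_b)
    total_a, total_b = sum(emphasis_a.values()), sum(emphasis_b.values())
    diff = sum(abs(emphasis_a[c] / total_a - emphasis_b[c] / total_b) for c in cats)
    return 1.0 - diff / 2.0

# How often each skill appears in first-year course tasks vs. in
# statics/dynamics homework and exam problems (hypothetical counts).
first_year = Counter({"trig": 30, "vectors": 10, "calculus": 40, "free_body": 5})
statics = Counter({"trig": 25, "vectors": 35, "calculus": 15, "free_body": 30})
print(f"alignment index: {porter_alignment(first_year, statics):.2f}")
```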
Differences were identified between the content and skills developed in first-year mathematics and physics mechanics courses and those expected by engineering faculty members in the sophomore year. Furthermore, skills that engineering faculty members stated were required were not necessarily utilized in homework and exam problems in a sophomore engineering mechanics course. Finally, success in first-year physics mechanics courses was a better indicator of success in a sophomore-level statics and dynamics course than success in first-year mathematics courses. The processes used in the study could be applied to any course where proper alignment of material is desired.
3.
Recommendations Regarding Q-Matrix Design and Missing Data Treatment in the Main Effect Log-Linear Cognitive Diagnosis Model. Ma, Rui, 11 December 2019.
Diagnostic classification models, used in conjunction with diagnostic assessments, are used to classify individual respondents as masters or nonmasters of each attribute. Previous researchers (Madison & Bradshaw, 2015) recommended that items on the assessment measure all patterns of attribute combinations to ensure classification accuracy, but in practice certain attributes may not be measurable in isolation. Moreover, model estimation requires a large sample size, yet in reality some items may be left unanswered. Therefore, the current study sought to provide suggestions on selecting between two alternative Q-matrix designs when an attribute cannot be measured in isolation, and on using maximum likelihood estimation in the presence of missing responses. The factorial ANOVA results of this simulation study indicate that adding items measuring some attributes, rather than all attributes, is preferable, and that other missing data treatments should be sought when the percentage of missing responses exceeds 5%.
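For context, a minimal sketch of the main-effect log-linear cognitive diagnosis model's item response function, and of profile classification that simply skips missing responses (treating them as ignorable, as maximum likelihood estimation does under MAR). The Q-matrix and lambda parameters are illustrative, not the study's.

```python
# Sketch: main-effect LCDM response probabilities and profile likelihoods
# with a missing response skipped. All parameter values are invented.
import itertools
import numpy as np

Q = np.array([[1, 0], [0, 1], [1, 1]])      # 3 items, 2 attributes
lam0 = np.array([-1.5, -1.0, -2.0])         # item intercepts
lam_main = np.array([[2.5, 0.0],            # attribute main effects
                     [0.0, 2.0],
                     [1.5, 1.5]])

def p_correct(alpha):
    """P(X_i = 1 | attribute profile alpha) under the main-effect LCDM."""
    logit = lam0 + (lam_main * Q * alpha).sum(axis=1)
    return 1.0 / (1.0 + np.exp(-logit))

profiles = list(itertools.product([0, 1], repeat=2))  # (0,0) ... (1,1)
x = np.array([1.0, np.nan, 1.0])                      # item 2 unanswered

obs = ~np.isnan(x)  # likelihood uses observed responses only
for alpha in profiles:
    p = p_correct(np.array(alpha))
    lik = np.prod(np.where(x[obs] == 1, p[obs], 1 - p[obs]))
    print(alpha, f"likelihood = {lik:.3f}")
```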
4.
Modèles de tests adaptatifs pour le diagnostic de connaissances dans un cadre d'apprentissage à grande échelle / Cognitive Diagnostic Computerized Adaptive Testing Models for Large-Scale Learning. Vie, Jill-Jênn, 5 December 2016.
This thesis studies adaptive tests within learning environments. It falls within educational data mining and learning analytics, which use the data learners leave behind in educational environments to optimize learning in the broad sense. Computerized assessments make it easy to store and analyze learner responses so as to improve future assessments. This thesis focuses on one kind of computerized assessment, adaptive testing, in which the system asks the learner a question, processes the answer on the fly, and chooses the next question accordingly. This process reduces the number of questions asked of a learner while keeping an accurate measurement of their level. Adaptive tests are implemented today in standardized tests such as the GMAT and the GRE, administered to hundreds of thousands of students. Traditionally, however, adaptive testing models have been mostly summative: they score or rank examinees effectively, but provide no feedback that serves the learner's own progress.
Recent advances have therefore focused on formative assessments, which give richer feedback at the end of the test so that learners can understand and remedy their gaps; this is known as adaptive diagnosis. In this thesis, we review adaptive testing models from several strands of the literature and compare them qualitatively and quantitatively. We propose an experimental protocol, which we implemented to compare the most popular adaptive testing models on several real datasets. This led us to propose a hybrid model for adaptive knowledge diagnosis that outperforms existing formative testing models on all datasets tried. Finally, we develop a strategy for asking several questions at the very beginning of the test in order to obtain a better first estimate of the learner's knowledge. This system can be applied to the automatic generation of worksheets, for example on a massive open online course (MOOC).
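To make the traditional summative loop described above concrete, here is a minimal sketch of a Rasch-model adaptive test: after each answer, the learner's ability is re-estimated by grid-search maximum likelihood and the next item is the unasked one with the highest Fisher information. The item bank, simulated learner, and test length are all invented; this is the baseline the thesis compares against, not its hybrid diagnostic model.

```python
# Sketch: adaptive testing loop under the Rasch model with maximum
# Fisher information item selection. All values are illustrative.
import numpy as np

rng = np.random.default_rng(0)
difficulties = np.linspace(-2, 2, 30)      # hypothetical item bank
true_theta = 0.7                           # simulated learner ability
grid = np.linspace(-4, 4, 161)             # candidate ability values

def p_correct(theta, b):
    return 1.0 / (1.0 + np.exp(-(theta - b)))

asked, responses = [], []
theta_hat = 0.0
for step in range(10):
    # Fisher information I(theta) = p(1-p) for each remaining item.
    p_bank = p_correct(theta_hat, difficulties)
    info = p_bank * (1 - p_bank)
    info[asked] = -np.inf                  # never repeat an item
    item = int(np.argmax(info))

    x = rng.random() < p_correct(true_theta, difficulties[item])  # simulated answer
    asked.append(item)
    responses.append(x)

    # Grid-search MLE of theta given all answers so far.
    p = p_correct(grid[:, None], difficulties[asked][None, :])
    loglik = np.where(np.array(responses), np.log(p), np.log(1 - p)).sum(axis=1)
    theta_hat = float(grid[np.argmax(loglik)])

print(f"true theta = {true_theta}, estimate after 10 items = {theta_hat:.2f}")
```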