  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Statistical Methods for Small Sample Cognitive Diagnosis

David B Arthur (10165121) 19 April 2024 (has links)
<p dir="ltr">It has been shown that formative assessments can lead to improvements in the learning process. Cognitive Diagnostic Models (CDMs) are a powerful formative assessment tool that can provide individuals with valuable information about skill mastery in educational settings. These models give each student a "skill mastery profile" that shows the level of mastery they have attained on a specific set of skills. These profiles can help both students and educators make more informed decisions about the educational process, which can in turn accelerate learning. Despite their utility, however, these models are rarely used with small sample sizes. One reason is that these models are often complex, containing many parameters that are difficult to estimate accurately from a small number of observations. This work aims to contribute to and expand upon previous work to make CDMs accessible to a wider range of educators and students.</p><p dir="ltr">We address three main small-sample statistical problems in this work: 1) accurate estimation of the population distribution of skill mastery profiles, 2) accurate estimation of additional CDM parameters together with improved classification of individual skill mastery profiles, and 3) improved selection of an appropriate CDM for each item on the assessment. Each of these problems concerns a different aspect of educational measurement, and the solutions provided here can ultimately improve the educational process for both students and teachers.
By finding solutions that work well with small sample sizes, we make it possible to improve learning in everyday classroom settings, not just in large-scale assessment settings.</p><p dir="ltr">In the first part of this work, we propose novel algorithms for estimating the population distribution of skill mastery profiles for a popular CDM, the Deterministic Inputs, Noisy "And" gate (DINA) model. These algorithms draw inspiration from the concepts behind popular machine learning algorithms. In contrast to those methods, which are often used solely for prediction, we illustrate how the ideas behind them can be adapted to estimate specific model parameters. Through studies involving simulated and real-life data, we show how the proposed algorithms can give a better picture of the distribution of skill mastery profiles for an entire population of students while using only a small sample of students from that population.</p><p dir="ltr">In the second part of this work, we introduce a new method for regularizing high-dimensional CDMs using a class of Bayesian shrinkage priors known as catalytic priors. We show how a simpler model can first be fit to the observed data and then used to generate additional pseudo-observations that, when combined with the original observations, make it easier to estimate the parameters of a complex model of interest accurately. We propose an alternative, simpler model that can be used in place of the DINA model and show how the information from this model can be used to formulate an intuitive shrinkage prior that effectively regularizes model parameters. This improves the accuracy of parameter estimates for the more complex model, which in turn leads to better classification of skill mastery.
We demonstrate the utility of this method in studies involving simulated and real-life data and show that the proposed approach is superior to other common approaches for small-sample estimation of CDMs.</p><p dir="ltr">Finally, we address the important problem of selecting the most appropriate model for each item on an assessment. In practice it is common to use the same CDM for every item, but this can lead to suboptimal parameter estimation and overall model fit. Current methods for item-level model selection rely on large-sample asymptotic theory and are thus inappropriate when the sample size is small. We propose a Bayesian approach to item-level model selection using Reversible Jump Markov chain Monte Carlo. This approach allows simultaneous estimation of posterior probabilities and model parameters for each candidate model and does not require a large sample size to be valid. We again demonstrate, through studies involving simulated and real-life data, that the proposed approach leads to a much higher chance of selecting the best model for each item. This in turn yields better estimates of item and other model parameters, which ultimately provides more accurate information about skill mastery.</p>
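The DINA model named in this abstract has a simple item response function: a student answers item j correctly with probability 1 − s_j (one minus the slip parameter) if they master every skill the item requires, and with probability g_j (the guessing parameter) otherwise. A minimal simulation sketch of that response function; the Q-matrix, slip, and guessing values here are illustrative assumptions, not taken from the thesis:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: J items, K skills, Q-matrix mapping items to required skills.
J, K = 6, 3
Q = rng.integers(0, 2, size=(J, K))   # Q[j, k] = 1 if item j requires skill k
Q[Q.sum(axis=1) == 0, 0] = 1          # ensure every item requires at least one skill
slip = np.full(J, 0.1)                # P(incorrect | all required skills mastered)
guess = np.full(J, 0.2)               # P(correct | some required skill missing)

def dina_correct_prob(alpha, Q, slip, guess):
    """P(correct response) on each item for one skill-mastery profile alpha."""
    # eta_j = 1 iff the student masters every skill that item j requires
    eta = np.all(alpha >= Q, axis=1).astype(float)
    return (1 - slip) ** eta * guess ** (1 - eta)

alpha = np.array([1, 0, 1])           # example skill-mastery profile
p = dina_correct_prob(alpha, Q, slip, guess)
responses = rng.binomial(1, p)        # simulated 0/1 item responses
```

Small-sample estimation is hard precisely because the slip and guessing parameters of every item, plus the distribution over the 2^K profiles, must all be recovered from few response vectors like these.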
2

Diagnosing examinees' attributes-mastery using the Bayesian inference for binomial proportion: a new method for cognitive diagnostic assessment

Kim, Hyun Seok (John) 05 July 2011 (has links)
The purpose of this study was to propose a simple and effective method for cognitive diagnostic assessment (CDA) without heavy computational demand, using Bayesian inference for binomial proportion (BIBP). In real-data studies, BIBP was applied to test data using two different item designs: four attributes and ten attributes. The BIBP method was also compared with the DINA model and the LCDM on the same four-attribute data set. There were slight differences in the attribute mastery probability estimates among the three models (DINA, LCDM, BIBP), which could result in different attribute mastery patterns. In simulation studies, the general accuracy of the BIBP method in recovering the true parameters was relatively high. The DINA estimation showed a slightly higher overall correct classification rate but larger overall biases and estimation errors than the BIBP estimation. The three simulation variables (attribute correlation, attribute difficulty, and sample size) affected the parameter estimates of both models, but in different ways: harder attributes yielded higher accuracy of attribute mastery classification in the BIBP estimation, while easier attributes were associated with higher accuracy in the DINA estimation. In conclusion, BIBP appears to be an effective method for CDA, with the advantages of easy, fast computation and relatively high accuracy of parameter estimation.
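The computational simplicity claimed for BIBP comes from conjugate Beta-binomial updating: with a Beta(a, b) prior on an attribute-mastery proportion and x correct answers out of n items measuring that attribute, the posterior is Beta(a + x, b + n − x) in closed form, with no iterative estimation. A minimal sketch of that update; the uniform prior and the 0.5 mastery cutoff are illustrative assumptions, not the thesis's exact choices:

```python
def attribute_mastery_posterior(x, n, a=1.0, b=1.0):
    """Beta(a + x, b + n - x) posterior over an attribute-mastery proportion,
    after observing x correct answers on n items measuring the attribute."""
    post_a = a + x
    post_b = b + (n - x)
    mean = post_a / (post_a + post_b)  # posterior mean of the proportion
    return post_a, post_b, mean

# Example: 7 of 8 attribute-relevant items answered correctly, uniform Beta(1, 1) prior.
a_post, b_post, p_mastery = attribute_mastery_posterior(7, 8)
mastered = p_mastery >= 0.5  # classify as "mastered" above an assumed 0.5 cutoff
```

Repeating this update once per attribute gives a full mastery pattern in a handful of arithmetic operations, which is the sense in which BIBP avoids the heavy computation of DINA or LCDM estimation.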
3

認知診斷模式在英語簡單句之驗證與應用 / The Verification and Application of Cognitive Diagnosis Models on English Simple Sentences

趙珮晴 Unknown Date (has links)
A cognitive diagnosis model test of English simple sentences has real educational value: its information can help elementary school students understand themselves, help elementary and junior high schools conduct remedial instruction, and smooth the transition to the junior high school English curriculum. The participants were 429 sixth-grade elementary school students in Keelung City, who took a self-developed test covering six cognitive attributes of English simple sentences together with a questionnaire measuring related influencing factors. The findings are as follows. First, item analysis under classical test theory: (1) the English simple-sentence items showed internal-consistency reliability and effective distractors; (2) items involving two or three cognitive attributes showed better difficulty and discrimination than items involving only one attribute; (3) the self-efficacy and intrinsic-motivation items showed good construct validity and internal-consistency reliability. Second, cognitive diagnosis analysis of the test: (1) for poorly functioning items, the cognitive diagnosis results agreed with the classical-test-theory item analysis; (2) the G-DINA model was the more appropriate model for these items; (3) the DINA and G-DINA analyses yielded largely consistent results; (4) items measuring only one cognitive attribute had higher guessing parameters, suggesting that the Q-matrix structure should be re-examined or the items revised; (5) among the cognitive attributes, judging the number (singular versus plural) of personal pronouns showed the highest mastery, while using present-tense ordinary verbs with singular or plural persons showed the lowest; (6) almost half of the students mastered all cognitive attributes, but about twenty percent mastered none. Third, female students, students with after-school English courses, and students with high self-efficacy and high intrinsic motivation showed mastery patterns containing more mastered attributes. Finally, based on these results, the study offers teaching and research suggestions for education authorities and practitioners.
4

利用貝氏網路建構綜合觀念學習模型之初步研究 / An Exploration of Applying Bayesian Networks for Mapping the Learning Processes of Composite Concepts

王鈺婷, Wang, Yu-Ting Unknown Date (has links)
In this thesis, I employ Bayesian networks to represent relations between concepts in pedagogical domains. We consider basic concepts, and composite concepts that are integrated from the basic ones. The learning processes of composite concepts are the ways in which students integrate the basic concepts to form the composite ones. Information about these learning processes can help teachers understand students' learning paths and revise their teaching methods, so that they can provide adaptive course contents and assessments. To uncover the latent learning processes from students' item response patterns, I propose two methods, one based on mutual information and one on chi-square tests, and examine them in a simulated environment. Preliminary experiments showed that the proposed methods offered satisfactory performance under some particular conditions. Hence, I went a step further and proposed a search method that tries to find the learning processes of larger structures more efficiently. Although the experimental results for the search method were not very satisfactory, we found that both the uncertainty in the students' item response patterns and the relations between student groups and concept abilities substantially influenced the performance of the proposed methods. Although the proposed methods did not recover the learning processes perfectly, the experimental processes and results do provide clues about the latent structure of the true processes. Finally, I attempted to classify students' competence according to their item response patterns, using the nearest neighbor algorithm, the k-means algorithm, and variations of these two algorithms. Experimental results showed that when each concept is measured by multiple test items, classifying students' ability levels from their item response patterns with these methods achieves good accuracy. I hope these discussions and results can contribute to adaptive learning.
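The competence-classification step at the end of this abstract can be illustrated with a nearest-neighbor sketch: a new student's item response pattern is assigned the ability label of the closest labeled pattern under Hamming distance. The patterns and labels below are invented for illustration and are not taken from the thesis:

```python
import numpy as np

def nearest_neighbor_classify(patterns, labels, new_pattern):
    """Assign the ability label of the labeled response pattern closest in
    Hamming distance to the new student's pattern."""
    dists = np.sum(patterns != new_pattern, axis=1)  # Hamming distance per row
    return labels[int(np.argmin(dists))]

# Hypothetical labeled response patterns (rows = students, columns = items).
patterns = np.array([[1, 1, 1, 1, 0],
                     [1, 1, 1, 0, 1],
                     [0, 1, 0, 0, 0],
                     [0, 0, 1, 0, 0]])
labels = np.array(["high", "high", "low", "low"])

label = nearest_neighbor_classify(patterns, labels, np.array([1, 1, 1, 1, 1]))
```

With multiple items per concept, response patterns for different ability groups separate more cleanly in Hamming space, which is consistent with the abstract's finding that accuracy improves when each concept has several test items.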
5

Modèles de tests adaptatifs pour le diagnostic de connaissances dans un cadre d'apprentissage à grande échelle / Cognitive diagnostic computerized adaptive testing models for large-scale learning

Vie, Jill-Jênn 05 December 2016 (has links)
This thesis studies adaptive tests within learning environments. It falls within educational data mining and learning analytics, where the data left by learners in educational environments is processed so as to optimize learning in a broad sense. Computerized assessments allow us to store and analyze learner responses easily, in order to improve future assessments. In this thesis, we focus on a particular kind of computerized test: the adaptive test. An adaptive test asks the learner a question, processes their answer on the fly, and chooses the next question to ask based on their previous answers. This process reduces the number of questions asked of a learner while keeping an accurate measurement of their level. Adaptive tests are massively used in practice today, for example in the GMAT and GRE standardized tests, which are administered to hundreds of thousands of students. Traditionally, however, adaptive testing models merely score learners, which is useful for the assessing institution but not for the learners themselves. More formative models have therefore been proposed, giving the learner richer feedback at the end of the test so that they can understand and remedy their gaps; this is known as adaptive diagnosis. In this thesis, we survey adaptive testing models from several strands of the literature and compare them qualitatively and quantitatively. We propose an experimental protocol, which we implemented to compare the main adaptive testing models on several real datasets. This led us to propose a hybrid model for adaptive knowledge diagnosis that outperforms existing formative testing models on every dataset tried. Finally, we developed a strategy for asking several questions at the very beginning of the test in order to obtain a better first estimate of the learner's knowledge. This system can be applied to the automatic generation of worksheets, for example in a massive open online course (MOOC).
