  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
81

TAARAC : test d'anglais adaptatif par raisonnement à base de cas / TAARAC: an adaptive English test based on case-based reasoning

Lakhlili, Zakia January 2007 (has links)
Master’s thesis digitized by the Division de la gestion de documents et des archives (Records Management and Archives Division) of the Université de Montréal
82

The construction and evaluation of a dynamic computerised adaptive test for the measurement of learning potential

De Beer, Marie 03 1900 (has links)
Recent political and social changes in South Africa have created the need for culture-fair tests for cross-cultural measurement of cognitive ability. This need has been highlighted by the professional, legal and research communities. For cognitive assessment, dynamic assessment is more equitable because it involves a test-train-retest procedure, which shows what performance levels individuals can attain when relevant training is provided. Following Binet’s thinking, dynamic assessment aims to identify those individuals who are likely to benefit from additional training. The theoretical basis for learning potential assessment is Vygotsky’s concept of the zone of proximal development. This thesis describes the development, standardisation and evaluation of the Learning Potential Computerised Adaptive Test (LPCAT) for measuring learning potential in the culturally diverse South African population by means of nonverbal figural items. In accordance with Vygotsky’s view, learning potential is defined as a combination of present performance and the extent to which performance increases after relevant training. This definition allows individuals to be compared at different levels of initial performance and with different measures of improvement. Computerised adaptive testing based on item response theory, as used in the LPCAT, is uniquely suited to increasing both the measurement accuracy and the testing efficiency of dynamic testing, two aspects that have been identified as problematic. The LPCAT pre-test and post-test are two separate adaptive tests, thereby eliminating the role of memory in post-test performance. Several multicultural groups were used for item analysis and test validation. The results support the LPCAT as a culture-fair measure of learning potential in the nonverbal general reasoning domain. For examinees with a wide range of ability levels, LPCAT scores correlate strongly with academic performance.
For African examinees, poor proficiency in English (the language of teaching) hampers academic performance. The LPCAT ensures the equitable measurement of learning potential, independent of language proficiency and prior scholastic learning, and can be used to help select candidates for further training or developmental opportunities. / Psychology / D. Litt. et Phil. (Psychology)
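The abstract defines learning potential as a combination of present performance and the gain achieved after training. A hypothetical Python sketch of such a composite score follows; the thesis does not publish the LPCAT’s actual scoring formula, and the `gain_weight` parameter and function names here are assumptions for illustration only.

```python
def learning_potential_score(pre_theta, post_theta, gain_weight=0.5):
    """Hypothetical composite: present performance plus weighted gain.

    pre_theta/post_theta are ability estimates from the pre- and post-test;
    gain_weight is an assumed, not published, weighting.
    """
    gain = post_theta - pre_theta
    return pre_theta + gain_weight * gain

def compare_examinees(examinees, gain_weight=0.5):
    """Rank (name, pre, post) triples by the composite score, so examinees
    with different starting levels and different gains can be compared."""
    scored = [(name, learning_potential_score(pre, post, gain_weight))
              for name, pre, post in examinees]
    return sorted(scored, key=lambda item: item[1], reverse=True)
```

Such a composite lets a low-starting examinee with a large training gain be ranked against a high-starting examinee with no gain, which is the comparison the abstract’s definition is designed to support.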
83

Adaptivní testování pro odhad znalostí / Computerized adaptive testing in knowledge assessment

Tělupil, Dominik January 2018 (has links)
In this thesis, we describe and analyze computerized adaptive tests (CAT), a class of psychometric tests in which items are selected based on the current estimate of the respondent's ability. We focus on tests based on dichotomous IRT (item response theory) models. We present criteria for item selection, methods for ability estimation and termination criteria, as well as methods for exposure-rate control and content balancing. In the analytical part, the effect of CAT settings on the average length of the test and on the absolute bias of ability estimates is investigated using linear regression models. We provide a post hoc analysis of data from a real admission test with unknown true ability values, as well as a simulation study based on simulated answers of respondents with known true ability values. In the last chapter of the thesis we investigate the possibilities of analysing adaptive tests in R software and of creating a real CAT.
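The adaptive loop this abstract describes — select an item by some criterion, administer it, re-estimate ability, check a termination rule — can be sketched in Python for a dichotomous 2PL model with maximum-Fisher-information selection. This is not code from the thesis; the grid-based estimator, the standard-error cutoff, and the `(a, b)` pool format are illustrative assumptions.

```python
import math

def p_correct(theta, a, b):
    """2PL item response function: probability of a correct answer."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of a 2PL item at ability theta."""
    p = p_correct(theta, a, b)
    return a * a * p * (1.0 - p)

def estimate_theta(responses):
    """Maximum-likelihood ability estimate over a coarse grid (-4..4)."""
    grid = [g / 10.0 for g in range(-40, 41)]
    def loglik(theta):
        ll = 0.0
        for (a, b), u in responses:
            p = p_correct(theta, a, b)
            ll += math.log(p) if u else math.log(1.0 - p)
        return ll
    return max(grid, key=loglik)

def run_cat(pool, answer, max_items=20, se_cutoff=0.35):
    """Adaptive loop: administer the most informative unused item,
    re-estimate ability, stop once the standard error is small enough."""
    theta = 0.0
    responses = []
    used = set()
    for _ in range(max_items):
        candidates = [i for i in range(len(pool)) if i not in used]
        if not candidates:
            break                       # item pool exhausted
        nxt = max(candidates, key=lambda i: item_information(theta, *pool[i]))
        used.add(nxt)
        u = answer(nxt)                 # 1 = correct, 0 = incorrect
        responses.append((pool[nxt], u))
        theta = estimate_theta(responses)
        info = sum(item_information(theta, a, b) for (a, b), _ in responses)
        if info > 0 and 1.0 / math.sqrt(info) < se_cutoff:
            break                       # standard error below cutoff
    return theta, len(responses)
```

The `answer` callback stands in for the respondent; in a simulation study like the one described, it would draw responses from the same IRT model with a known true ability.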
84

Modèles de tests adaptatifs pour le diagnostic de connaissances dans un cadre d'apprentissage à grande échelle / Cognitive diagnostic computerized adaptive testing models for large-scale learning

Vie, Jill-Jênn 05 December 2016 (has links)
This thesis studies adaptive tests within learning environments. It falls within educational data mining and learning analytics, where the data that learners leave behind in educational environments are used to optimize learning in the broad sense. Computerized assessment makes it easy to store learners' answers so that they can be analyzed and future assessments improved. In this thesis, we focus on one kind of computerized test: adaptive tests. These ask the learner a question, process the answer on the fly, and choose the next question to ask based on the previous answers. This process reduces the number of questions asked of a learner while keeping an accurate measurement of their level. Adaptive tests are implemented today in standardized tests such as the GMAT and the GRE, administered to hundreds of thousands of students. Traditional adaptive testing models, however, merely score learners, which is useful for the assessing institution but not for the learners' own learning. More formative models have therefore been proposed, which give the learner richer feedback at the end of the test so that they can understand and remedy their gaps; this is known as adaptive diagnosis.

In this thesis, we have surveyed adaptive testing models from different strands of the literature and compared them qualitatively and quantitatively. We have proposed an experimental protocol, which we implemented to compare the main adaptive testing models on several real datasets. This led us to propose a hybrid model for adaptive knowledge diagnosis that outperforms existing formative testing models on all datasets tested.
Finally, we have developed a strategy of asking several questions at the very beginning of the test in order to obtain a better first estimate of the learner's knowledge. This system can be applied to the automatic generation of exercise worksheets, for example in a massive open online course (MOOC).
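The closing idea — asking several questions at the very beginning of a test to get a better first estimate — can be sketched as picking a burn-in set of items spread across the difficulty range, then scoring it with a Bayesian (EAP) estimate, which stays stable even when every early answer is correct. This is a hedged sketch under 2PL-style assumptions, not the thesis's actual strategy; all function names and the prior are illustrative.

```python
import math

def p_correct(theta, a, b):
    """2PL response probability."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def eap_estimate(responses, prior_sd=1.0):
    """Expected-a-posteriori ability estimate with a normal prior.

    Unlike a pure MLE, the posterior mean is finite even for an
    all-correct (or all-incorrect) response pattern.
    """
    grid = [i * 0.1 for i in range(-40, 41)]
    post = []
    for theta in grid:
        w = math.exp(-0.5 * (theta / prior_sd) ** 2)   # normal prior weight
        for (a, b), u in responses:
            p = p_correct(theta, a, b)
            w *= p if u else (1.0 - p)
        post.append(w)
    z = sum(post)
    return sum(t * w for t, w in zip(grid, post)) / z

def burn_in_items(pool, k=5):
    """Pick k items evenly spread across the difficulty range, so the very
    first ability estimate is not driven by a single response."""
    ranked = sorted(range(len(pool)), key=lambda i: pool[i][1])
    if k <= 1 or k >= len(ranked):
        return ranked[:k]
    pos = [round(j * (len(ranked) - 1) / (k - 1)) for j in range(k)]
    return [ranked[p] for p in pos]
```

After the burn-in set is answered, an adaptive loop would take over with a much less noisy starting estimate than one seeded from a single item.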
85

自變數有測量誤差的羅吉斯迴歸模型之序貫設計探討及其在教育測驗上的應用 / Sequential Designs with Measurement Errors in Logistic Models with Applications to Educational Testing

盧宏益, Lu, Hung-Yi Unknown Date (has links)
In this dissertation, we focus on estimation in logistic regression models when the independent variables are subject to measurement errors. The problem is motivated by online calibration in Computerized Adaptive Testing (CAT). We apply measurement error model techniques and adaptive sequential design methodology to the online calibration problem of CAT, and we prove that the estimates of item parameters are strongly consistent under the variable-length CAT setup. In an adaptive testing scheme, examinees are presented with different sets of items chosen from a pre-calibrated item pool. The attrition of items is therefore very fast, and replenishing the item pool is essential for CAT. Online calibration in CAT refers to estimating the item parameters of new, uncalibrated items by presenting them to examinees during the course of their ability testing, together with previously calibrated items. The estimated latent trait levels of the examinees serve as the design points for estimating the parameters of the new items, and these design points are naturally subject to estimation errors. Thus the online calibration problem under the CAT setup can be formulated as a sequential estimation problem with measurement errors in the independent variables, which are themselves chosen sequentially.
Item Response Theory (IRT) is the most commonly used psychometric model in CAT, and logistic-type models are the most popular models in IRT-based tests, which is why a nonlinear design problem with nonlinear measurement error models is involved. The sequential design procedures proposed here provide more accurate estimates of the parameters and are more efficient in terms of sample size (the number of examinees used in calibration). In the traditional calibration process for paper-and-pencil tests, we usually have to pay examinees to take part in the pre-test calibration process. Online calibration costs less, since new items are assigned to examinees during the operational test. The proposed procedures are therefore cost-effective as well as time-effective.
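The online-calibration setting the abstract describes — estimating a new item's parameters from examinees' estimated, hence error-prone, ability levels — can be illustrated with a naive maximum-likelihood calibration under a Rasch model. This sketch deliberately omits the measurement-error correction and the sequential design that are the thesis's actual contribution; it only shows the baseline estimator whose bias those methods are meant to control. Names and tolerances are illustrative.

```python
import math

def rasch_p(theta, b):
    """Rasch model: probability of a correct response to an item of difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def calibrate_difficulty(theta_hats, responses, iters=50):
    """Newton-Raphson MLE of a new item's difficulty from (ability estimate,
    response) pairs collected during operational testing.

    theta_hats are estimated abilities and therefore carry measurement
    error, so this naive MLE is biased in general; the thesis's sequential
    designs aim to reduce that error.
    """
    b = 0.0
    for _ in range(iters):
        probs = [rasch_p(t, b) for t in theta_hats]
        grad = sum(p - u for p, u in zip(probs, responses))   # score for -loglik
        hess = sum(p * (1.0 - p) for p in probs)              # observed information
        if hess < 1e-12 or abs(grad) < 1e-10:
            break
        b += grad / hess            # Newton step toward the MLE
    return b
```

With all design points at a single ability level, the MLE reduces to matching the observed proportion correct, which makes the estimator easy to sanity-check by hand.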
