  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
331

The Development and Validation of All Four TRAILS (Tool for Real-Time Assessment of Information Literacy Skills) Tests for K-12 Students

Salem, Joseph A., Jr. 10 December 2014 (has links)
No description available.
332

The Relationship Between Student Evaluations and Teacher Quality in High School in Saudi Arabia: Item Response Theory Analysis and Multilevel Modeling

Alqarni, Abdulelah M., Dr 04 May 2015 (has links)
No description available.
333

Faking is a FACT: Examining the Susceptibility of Intermediate Items to Misrepresentation

Foster, Garett C. 22 March 2017 (has links)
No description available.
334

Implicit Theories and Beta Change in Longitudinal Evaluations of Training Effectiveness: An Investigation Using Item Response Theory

Craig, S. Bartholomew 21 May 2002 (has links)
Golembiewski, Billingsley, and Yeager (1976) conceptualized three distinct types of change that might result from development interventions, called alpha, beta, and gamma change. Recent research has found that beta and gamma change do occur as hypothesized, but the phenomena are somewhat infrequent and the precise conditions under which they occur have not been established. This study used confirmatory factor analysis and item response theory to identify gamma and beta change on a multidimensional, multisource managerial performance appraisal instrument and to examine relations among the change types, training program content, and raters' implicit theories of performance. Results suggested that coverage in training was a necessary but not sufficient condition for beta and gamma change to occur. Further, although gamma change was detected only in the trainee group, beta change was detected in self-ratings from trainees and in ratings collected from their superiors. Because trainees' superiors were involved in post-training follow-up, this finding was interpreted as a possible diffusion of treatments effect (Campbell & Stanley, 1963). Contrary to expectations, there were no interpretable relations between raters' implicit theories of performance and either of the change types. Perhaps relatedly, more implicit theory change was detected among individuals providing observer ratings than in the trainees themselves. The implications of these findings for future research on plural change were discussed. / Ph. D.
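The alpha/beta/gamma distinction is easiest to see in a small simulation. The sketch below uses hypothetical numbers (not data from this study) to show why raw score gains cannot separate the change types: a genuine improvement (alpha change) and a recalibration of the rater's internal scale (beta change) both produce a positive mean gain, which is why analyses such as CFA and IRT are needed to tell them apart; gamma change, a reconceptualization of the construct itself, would instead show up as a change in factor structure.

import numpy as np

rng = np.random.default_rng(0)
n = 500
theta = rng.normal(0.0, 1.0, n)          # latent performance at pre-test

def rate(theta, intercept=3.0, slope=1.0, noise_sd=0.5):
    # Map latent performance onto a 1-5 rating scale with rater noise.
    raw = intercept + slope * theta + rng.normal(0.0, noise_sd, theta.shape)
    return np.clip(np.round(raw), 1, 5)

pre = rate(theta)

# Alpha change: real improvement, rater's use of the scale unchanged.
post_alpha = rate(theta + 0.5)

# Beta change: no real improvement, but the rater recalibrates the scale
# (more lenient intercept, compressed slope), so ratings still rise.
post_beta = rate(theta, intercept=3.5, slope=0.7)

print("mean gain under alpha change:", (post_alpha - pre).mean())
print("mean gain under beta change: ", (post_beta - pre).mean())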
335

Exploring item response theory in forced choice psychometrics for construct and trait interpretation in cross-cultural context

Huang, Teng-Wei 03 1900 (has links)
This thesis explores item response theory (IRT) in the Personal Profile Analysis (PPA) from Thomas International. The study consists of two parts (Part I and Part II), for which two sample groups were collected. For Part I, a sample of 650 participants was collected with the old form (CPPA25/C7) through the Beijing office of Thomas International in China (male = 323, female = 267, missing = 60). Part II used an amended form in the same area, with a sample of 307 (male = 185, female = 119, missing = 3). The study postulates that IRT methods are applicable to forced-choice psychometrics. The results of Part I showed that the current CPPA form functions, to some extent, according to the PPA's original constructs, and identified 16 items that needed to be amended (called Amend A in this research). The amended form was returned to China for the collection of the Part II sample, and the results were deemed acceptable. The study concludes with a research protocol for PPA-IRT research generated from the current research. The protocol suggests four levels of analysis for forced-choice (FC) psychometrics, namely: 1. textual analysis, 2. functional analysis, 3. dynamic analysis, and 4. construct analysis. / Psychology / M.A. (Psychology)
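The abstract does not state which IRT model was fitted to the PPA's forced-choice blocks, so the snippet below is only a generic illustration of how IRT can be applied to forced-choice data: it computes the probability of preferring one statement over another under a Thurstonian-IRT-style pairwise model (Brown and Maydeu-Olivares), in which the preference depends on the respondent's standing on the two traits the statements measure. All parameter values are hypothetical and are not taken from the PPA.

from math import sqrt
from scipy.stats import norm

def pair_preference_prob(eta_a, eta_b, lam_i, lam_k, gamma, psi2_i, psi2_k):
    # P(statement i preferred over statement k) under a Thurstonian pairwise model.
    # eta_a, eta_b   : respondent's scores on the traits measured by i and k
    # lam_i, lam_k   : factor loadings of the two statements
    # gamma          : threshold (difference of the statement means, sign reversed)
    # psi2_i, psi2_k : unique variances of the two statements
    z = (-gamma + lam_i * eta_a - lam_k * eta_b) / sqrt(psi2_i + psi2_k)
    return norm.cdf(z)

# A respondent high on trait A and average on trait B (made-up values):
print(pair_preference_prob(eta_a=1.0, eta_b=0.0,
                           lam_i=0.8, lam_k=0.7,
                           gamma=0.2, psi2_i=0.4, psi2_k=0.5))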
337

Zdraví a jeho socioekonomické ukazatele - testování reliability a validity na PSAS / Health and Its Socioeconomic Indicators - Reliability and Validity Testing of Scales

Juráčková, Veronika January 2018 (has links)
The diploma thesis "Health and Socio-economic Indicators - Reliability and Validity Testing of the PSAS" deals with the theoretical concept of health and its socio-economic indicators. A substantial part of the work concentrates on applying the PSAS instrument to the Czech population and determining whether the scale is reliable and valid for Czech respondents. Reliability is assessed for the scale as a whole with Cronbach's alpha and then examined further with item response theory (IRT), applied to the Likert-type responses to the 18 items of which the PSAS is composed. Validity is tested with confirmatory factor analysis for construct validity and with analysis of cognitive interviews for face validity. The secondary data analysis is carried out in the SPSS, MPLUS, R, and IRTPRO programs; the last two are used for the less widely known item response theory analyses.
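For readers unfamiliar with the reliability statistic mentioned above, Cronbach's alpha can be computed directly from a respondents-by-items score matrix as alpha = k/(k-1) * (1 - sum of item variances / variance of the total score). The sketch below applies that formula to synthetic Likert-type data with 18 items, mirroring the item count assumed above; it is not the Czech PSAS data set.

import numpy as np

def cronbach_alpha(scores):
    # scores: 2-D array, rows = respondents, columns = items.
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)       # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of the sum score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Synthetic responses: 200 respondents x 18 items scored 1-5 around a common trait.
rng = np.random.default_rng(1)
trait = rng.normal(0.0, 1.0, 200)
items = np.clip(np.round(3 + trait[:, None] + rng.normal(0.0, 1.0, (200, 18))), 1, 5)
print(round(cronbach_alpha(items), 3))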
338

The construction and evaluation of a dynamic computerised adaptive test for the measurement of learning potential

De Beer, Marie 03 1900 (has links)
Recent political and social changes in South Africa have created the need for culture-fair tests for cross-cultural measurement of cognitive ability. This need has been highlighted by the professional, legal and research communities. For cognitive assessment, dynamic assessment is more equitable because it involves a test-train-retest procedure, which shows what performance levels individuals are able to attain when relevant training is provided. Following Binet’s thinking, dynamic assessment aims to identify those individuals who are likely to benefit from additional training. The theoretical basis for learning potential assessment is Vygotsky’s concept of the zone of proximal development. This thesis describes the development, standardisation and evaluation of the Learning Potential Computerised Adaptive Test (LPCAT), for measuring learning potential in the culturally diverse South African population by means of nonverbal figural items. In accordance with Vygotsky’s view, learning potential is defined as a combination of present performance and the extent to which performance is increased after relevant training. This definition allows for comparison of individuals at different levels of initial performance and with different measures of improvement. Computerised adaptive testing based on item response theory, as used in the LPCAT, is uniquely suitable for increasing both measurement accuracy and testing efficiency of dynamic testing, two aspects that have been identified as problematic. The LPCAT pretest and the post-test are two separate adaptive tests, hence eliminating the role of memory in post-test performance. Several multicultural groups were used for item analysis and test validation. The results support the LPCAT as a culture-fair measure of learning potential in the nonverbal general reasoning domain. For examinees with a wide range of ability levels, LPCAT scores correlate strongly with academic performance. For African examinees, poor proficiency in English (the language of teaching) hampers academic performance. The LPCAT ensures the equitable measurement of learning potential, independent of language proficiency and prior scholastic learning and can be used to help select candidates for further training or developmental opportunities. / Psychology / D. Litt. et Phil. (Psychology)
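The abstract does not detail the LPCAT's item-selection algorithm, so the following is only a generic sketch of how an IRT-based adaptive test typically proceeds: at each step it administers the unanswered item with the greatest Fisher information at the current ability estimate under a two-parameter logistic model, then updates the estimate from the response. The item bank, the examinee, and the ten-item test length are all hypothetical.

import numpy as np

rng = np.random.default_rng(2)
a = rng.uniform(0.8, 2.0, 50)      # discrimination parameters of a hypothetical bank
b = rng.normal(0.0, 1.0, 50)       # difficulty parameters

def p_correct(theta, a, b):
    # Two-parameter logistic probability of a correct response.
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

grid = np.linspace(-4, 4, 161)      # ability grid for EAP estimation
posterior = np.exp(-0.5 * grid**2)  # standard normal prior (unnormalised)

true_theta = 1.2                    # simulated examinee
administered, theta_hat = [], 0.0

for step in range(10):
    # Pick the remaining item with maximum information at the current estimate.
    p = p_correct(theta_hat, a, b)
    info = a**2 * p * (1 - p)
    info[administered] = -np.inf
    item = int(np.argmax(info))
    administered.append(item)

    # Simulate the response and update the posterior over ability.
    correct = rng.random() < p_correct(true_theta, a[item], b[item])
    like = p_correct(grid, a[item], b[item])
    posterior *= like if correct else (1 - like)
    theta_hat = float(np.sum(grid * posterior) / np.sum(posterior))

print("items administered:", administered)
print("EAP ability estimate:", round(theta_hat, 2))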
339

The application and empirical comparison of item parameters of Classical Test Theory and Partial Credit Model of Rasch in performance assessments

Mokilane, Paul Moloantoa 05 1900 (has links)
This study empirically compares the Classical Test Theory (CTT) and the Partial Credit Model (PCM) of Rasch, focusing on the invariance of item parameters. The invariance concept, a consequence of the principle of specific objectivity, was tested in both CTT and PCM using the results of learners who wrote the National Senior Certificate (NSC) Mathematics examinations in 2010. The difficulty levels of the test items were estimated from independent samples of learners, and the same samples used to calibrate item difficulties under the PCM were also used for the CTT calibration. Item difficulties were estimated with RUMM2030 for the PCM and with SAS for CTT; both are statistical software packages. Analysis of variance (ANOVA) was used to compare the four design groups of test takers, and where ANOVA showed a significant difference between group means, Tukey's groupings were used to establish where the difference came from. The findings were that item difficulty estimates based on the CTT framework were not invariant across the independent samples; overall, CTT was unable to produce invariant item difficulty estimates. The PCM estimates were very stable in the sense that, for most items, there was no significant difference between the means of at least three design groups, and the group that deviated from the rest did not deviate much. The item parameters of the group that was representative of the population (proportional allocation) and of the group in which the same number of learners (50) was taken from the different performance categories did not differ significantly for any item except item 6.6 in examination question paper 2. It is apparent that, for PCM item parameters to be invariant of the group of test takers, the group must be heterogeneous and each performance category must be large enough for proper calibration of the item parameters. The higher CTT item parameter estimates were consistently found in the sample dominated by learners highly proficient in Mathematics ("bad"), and the lowest values in the design group dominated by less proficient learners. This phenomenon was not apparent in the Rasch model. / Mathematical Sciences / M.Sc. (Statistics)
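The sample dependence of CTT difficulty values reported here is easy to demonstrate: the classical difficulty index is just the mean observed item score (the proportion correct for dichotomous items), so it necessarily shifts with the ability distribution of whichever group takes the test, even when the items themselves do not change. The sketch below simulates dichotomous Rasch items (a simplification of the polytomous NSC items) taken by a high-proficiency and a low-proficiency group; the generating item difficulties are identical, yet the CTT values differ sharply between groups.

import numpy as np

rng = np.random.default_rng(3)
b = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])   # true Rasch item difficulties

def simulate(theta, b):
    # Dichotomous Rasch responses for abilities theta and difficulties b.
    p = 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))
    return (rng.random(p.shape) < p).astype(int)

high = simulate(rng.normal(1.0, 1.0, 1000), b)    # high-proficiency group
low = simulate(rng.normal(-1.0, 1.0, 1000), b)    # low-proficiency group

# CTT difficulty (proportion correct) tracks the group, not just the item.
print("CTT p-values, high group:", high.mean(axis=0).round(2))
print("CTT p-values, low group: ", low.mean(axis=0).round(2))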
340

線上題庫與適性測驗整合系統之發展研究 / A research in the development of an integrated on-line item bank and computerized adaptive testing system

陳新豐 Unknown Date (has links)
Thesis title: A Research in the Development of an Integrated On-Line Item Bank and Computerized Adaptive Testing System. Length: 337 pages. Institution: Department of Education, National Chengchi University. Doctoral dissertation abstract, second semester of academic year 90 (2001-2002). Advisors: Dr. 林邦傑 and Dr. 余民寧. Graduate student: 陳新豐. / This research develops an integrated Internet-based system of an on-line item bank and computerized adaptive testing (the "System"), combining tool development, theory verification, and efficiency evaluation. Besides allowing new items to be added to the bank dynamically, the System serves teachers as an auxiliary assessment system and provides tailor-made adaptive tests for students. Developing the integrated on-line item bank and computerized adaptive testing system and verifying the theory underlying on-line item bank construction therefore constitute the two core threads of this research. Following these threads, the main purposes of the research are to: (A) develop an integrated on-line item bank and computerized adaptive testing system; (B) verify the related theories concerning the construction of the on-line item bank; and (C) evaluate the operating efficiency of the System and the degree of user satisfaction. The systems development life cycle (Shelly, Cashman, and Rosenblatt, 2001), a structured analysis method, was adopted to conduct the research; the development process was divided into five successive stages, from system planning, system analysis, system design, and system development to system operation and support. In terms of sampling, the first pilot test used 115 ninth-grade students of Chiung-Ming High School in Tainan City, Taiwan; the second pilot test used 191 ninth-grade students of Cheng-Sing High School in Tainan City, Taiwan; and the official sample comprised 2,567 ninth-grade students from nine schools across northern, central, southern, and eastern Taiwan and its offshore islands. On the requirements side, fifteen teaching experts were consulted for their professional opinions on the System's development. As far as research tools are concerned, apart from the main research tool, the integrated on-line item bank and computerized adaptive testing system itself, the study also employed functional-requirements questionnaires, hardware, software tools, and scales for system evaluation.
For data processing, the ITEMAN, BILOG, MatLab, and SPSS packages were used. The statistical analyses drew on classical test theory and item response theory and included item analysis, IRT three-parameter estimation, and factor structure analysis. The results of the research lead to the following conclusions: 1. The on-line item bank system and the adaptive testing system, two separate systems, can be integrated into one. 2. The integrated on-line item bank and adaptive testing system can serve multiple functions. 3. The systems development life cycle of structured analysis is an ideal way to develop an integrated system. 4. The Mean/Mean and Haebara methods are the better methods for computing the item-bank equating constants. 5. On-line testing and traditional paper-and-pencil testing provide similar amounts of item information, but the on-line items are more difficult. 6. The linking of on-line tests works adequately. 7. The operating efficiency of the integrated system is satisfactory. 8. Users are satisfied with the functions of the integrated system. Based on these conclusions, suggestions for tool development, item bank construction, and efficiency evaluation are also provided. Keywords: item response theory, item bank, equating, computerized adaptive testing, systems development life cycle
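As an illustration of the Mean/Mean linking named in conclusion 4, the sketch below computes the scale-transformation constants A and B from the common items' parameter estimates on a newly calibrated form and on the base (item-bank) scale, using theta_base = A * theta_new + B, b_base = A * b_new + B and a_base = a_new / A. The parameter values are made up for illustration; they are not estimates from the thesis.

import numpy as np

def mean_mean_constants(a_new, b_new, a_base, b_base):
    # Mean/Mean linking constants from common-item estimates on two scales.
    A = np.mean(a_new) / np.mean(a_base)
    B = np.mean(b_base) - A * np.mean(b_new)
    return A, B

# Hypothetical common-item estimates from the new calibration and the bank.
a_new, b_new = np.array([1.1, 0.9, 1.4]), np.array([-0.2, 0.5, 1.0])
a_base, b_base = np.array([1.0, 0.8, 1.3]), np.array([0.0, 0.8, 1.4])

A, B = mean_mean_constants(a_new, b_new, a_base, b_base)
print("A =", round(A, 3), "B =", round(B, 3))
print("difficulties on the bank scale:    ", (A * b_new + B).round(3))
print("discriminations on the bank scale: ", (a_new / A).round(3))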
