1

Comparison of MIRT observed score equating methods under the common-item nonequivalent groups design

Choi, Jiwon 01 May 2019 (has links)
For equating tests that measure several distinct proficiencies, procedures that reflect the multidimensional structure of the data are needed. Although a few equating procedures have been developed under the multidimensional item response theory (MIRT) framework, there is a need for further research in this area. Therefore, the primary objectives of this dissertation are to consolidate and expand MIRT observed score equating research with a specific focus on the common-item nonequivalent groups (CINEG) design, which requires scale linking. Content areas and item types are the two focal aspects of dimensionality. This dissertation uses two studies with different data types and comparison criteria to address the research objectives. In general, a comparison between unidimensional item response theory (UIRT) and MIRT methods suggested that the MIRT methods outperformed UIRT: the simple structure (SS) and full MIRT methods produced more accurate equating results. In terms of calibration methods, concurrent calibration outperformed separate calibration for all equating methods under most of the studied conditions.
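[Editor's note: the observed score equating methods compared in this abstract all rest on the conditional number-correct score distribution, typically computed with the Lord-Wingersky recursion. The sketch below illustrates that recursion for a unidimensional 2PL model; the item parameters and function names are illustrative, not taken from the study.]

```python
import math

def p_2pl(theta, a, b):
    """Probability of a correct response under a 2PL model (D = 1.7)."""
    return 1.0 / (1.0 + math.exp(-1.7 * a * (theta - b)))

def lord_wingersky(probs):
    """Conditional number-correct score distribution at a fixed ability,
    given per-item correct-response probabilities (Lord-Wingersky)."""
    dist = [1.0 - probs[0], probs[0]]  # distribution after the first item
    for p in probs[1:]:
        new = [0.0] * (len(dist) + 1)
        for score, mass in enumerate(dist):
            new[score] += mass * (1.0 - p)   # item incorrect: score unchanged
            new[score + 1] += mass * p       # item correct: score increases
        dist = new
    return dist

# Hypothetical 3-item form; (a, b) parameters are illustrative only.
items = [(1.0, -0.5), (1.2, 0.0), (0.8, 0.5)]
probs = [p_2pl(0.0, a, b) for a, b in items]
dist = lord_wingersky(probs)  # P(score = 0..3) at theta = 0
```

In observed score equating, this conditional distribution is accumulated over the ability distribution of a (synthetic) population for each form before equipercentile methods are applied.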
2

A Bifactor Model of Burnout? An Item Response Theory Analysis of the Maslach Burnout Inventory – Human Services Survey.

Periard, David Andrew 05 August 2016 (has links)
No description available.
3

Simple structure MIRT equating for multidimensional tests

Kim, Stella Yun 01 May 2018 (has links)
Equating is a statistical process used to achieve score comparability so that scores from different test forms can be used interchangeably. One of the most widely used equating procedures is unidimensional item response theory (UIRT) equating, which requires a set of assumptions about the data structure. In particular, the essence of UIRT rests on the unidimensionality assumption, which requires that a test measure only a single ability. However, this assumption is not likely to be fulfilled for much real data, such as mixed-format tests or tests composed of several content subdomains: failure to satisfy the assumption threatens the accuracy of the estimated equating relationships. The main purpose of this dissertation was to contribute to the literature on multidimensional item response theory (MIRT) equating by developing a theoretical and conceptual framework for true-score equating using a simple-structure MIRT model (SS-MIRT). SS-MIRT has several advantages over more complex MIRT models, such as improved efficiency in estimation and straightforward interpretability. In this dissertation, the performance of the SS-MIRT true-score equating procedure (SMT) was examined and evaluated through four studies using different data types: (1) real data, (2) simulated data, (3) pseudo-forms data, and (4) intact single-form data with identity equating. Besides SMT, four competitors were included in the analyses in order to assess the relative benefits of SMT over the other procedures: (a) equipercentile equating with presmoothing, (b) UIRT true-score equating, (c) UIRT observed-score equating, and (d) SS-MIRT observed-score equating. In general, the proposed SMT procedure behaved similarly to the existing procedures. Also, SMT showed more accurate equating results compared to traditional UIRT equating.
Better performance of SMT over UIRT true-score equating was consistently observed across the three studies that employed different criterion relationships with different datasets, which strongly supports the benefit of a multidimensional approach to equating with multidimensional data.
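[Editor's note: the UIRT true-score equating baseline used in this abstract has a simple computational core: solve the test characteristic curve of Form X for the ability that yields a given true score, then evaluate Form Y's curve at that ability. A minimal bisection sketch under a 2PL model, with illustrative parameters:]

```python
import math

def p_2pl(theta, a, b):
    """Probability of a correct response under a 2PL model (D = 1.7)."""
    return 1.0 / (1.0 + math.exp(-1.7 * a * (theta - b)))

def true_score(theta, items):
    """Test characteristic curve: expected number-correct score at theta."""
    return sum(p_2pl(theta, a, b) for a, b in items)

def equate_true_score(x, form_x, form_y, lo=-8.0, hi=8.0):
    """Map true score x on Form X to its Form Y equivalent: solve
    tau_X(theta) = x by bisection (tau_X is increasing in theta),
    then evaluate tau_Y at that theta. Valid for x strictly between
    the minimum and maximum attainable true scores on Form X."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if true_score(mid, form_x) < x:
            lo = mid
        else:
            hi = mid
    return true_score(0.5 * (lo + hi), form_y)

# Illustrative item parameters for two hypothetical 3-item forms.
form_x = [(1.0, -1.0), (1.1, 0.0), (0.9, 1.0)]
form_y = [(0.9, -0.8), (1.0, 0.1), (1.2, 0.9)]
eq = equate_true_score(1.5, form_x, form_y)
```

Equating a form to itself returns the input score, a useful sanity check; the SS-MIRT extension studied here applies the same logic with composite scores built from the per-dimension models.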
4

Multidimensional item response theory observed score equating methods for mixed-format tests

Peterson, Jaime Leigh 01 July 2014 (has links)
The purpose of this study was to build upon the existing MIRT equating literature by introducing a full multidimensional item response theory (MIRT) observed score equating method for mixed-format exams, because no such method currently exists. At this time, the MIRT equating literature is limited to full MIRT observed score equating methods for multiple-choice-only exams and bifactor observed score equating methods for mixed-format exams. Given the high frequency with which mixed-format exams are used and the accumulating evidence that some tests are not purely unidimensional, it was important to present a full MIRT equating method for mixed-format tests. The performance of the full MIRT observed score method was compared with the traditional equipercentile method, the unidimensional IRT (UIRT) observed score method, and the bifactor observed score method. With the bifactor methods, group-specific factors were defined according to item format or content subdomain. With the full MIRT methods, two- and four-dimensional models were included, and correlations between latent abilities were freely estimated or set to zero. All equating procedures were carried out using three end-of-course exams: Chemistry, Spanish Language, and English Language and Composition. For all subjects, two separate datasets were created using pseudo-groups in order to have two separate equating criteria. The specific equating criteria that served as baselines for comparisons with all other methods were the theoretical identity and the traditional equipercentile procedures. Several important conclusions were made. In general, the multidimensional methods were found to perform better for datasets that evidenced more multidimensionality, whereas unidimensional methods worked better for unidimensional datasets. In addition, the scale on which scores are reported influenced the comparative conclusions made among the studied methods.
For performance classifications, which are most important to examinees, there typically were not large discrepancies among the UIRT, Bifactor, and full MIRT methods. However, this study was limited by its sole reliance on real data which was not very multidimensional and for which the true equating relationship was not known. Therefore, plans for improvements, including the addition of a simulation study to introduce a variety of dimensional data structures, are also discussed.
5

Observed score and true score equating procedures for multidimensional item response theory

Brossman, Bradley Grant 01 May 2010 (has links)
The purpose of this research was to develop observed score and true score equating procedures to be used in conjunction with the Multidimensional Item Response Theory (MIRT) framework. Currently, MIRT scale linking procedures exist to place item parameter estimates and ability estimates on the same scale after separate calibrations are conducted. These procedures account for indeterminacies in (1) translation, (2) dilation, (3) rotation, and (4) correlation. However, no procedures currently exist to equate number correct scores after parameter estimates are placed on the same scale. This research sought to fill this void in the current psychometric literature. Three equating procedures--two observed score procedures and one true score procedure--were created and described in detail. One observed score procedure was presented as a direct extension of unidimensional IRT observed score equating, and is referred to as the "Full MIRT Observed Score Equating Procedure." The true score procedure and the second observed score procedure incorporated the statistical definition of the "direction of best measurement" in an attempt to equate exams using unidimensional IRT (UIRT) equating principles. These procedures are referred to as the "Unidimensional Approximation of MIRT True Score Equating Procedure" and the "Unidimensional Approximation of MIRT Observed Score Equating Procedure," respectively. Three exams within the Iowa Test of Educational Development (ITED) Form A and Form B batteries were used to conduct UIRT observed score and true score equating, MIRT observed score and true score equating, and equipercentile equating. The equipercentile equating procedure was conducted for the purpose of comparison since this procedure does not explicitly violate the IRT assumption of unidimensionality. 
Results indicated that the MIRT equating procedures performed more similarly to the equipercentile equating procedure than the UIRT equating procedures, presumably due to the violation of the unidimensionality assumption under the UIRT equating procedures. Future studies are expected to address how the MIRT procedures perform under varying levels of multidimensionality (weak, moderate, strong), varying frameworks of dimensionality (simple structure vs. complex structure), and number of dimensions, among other conditions.
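[Editor's note: the "direction of best measurement" used in the unidimensional approximation procedures has a standard formalization in the MIRT literature; one common version under a compensatory two-parameter model is sketched below. Notation is assumed, not taken from the dissertation.]

```latex
% Compensatory MIRT 2PL item response function for item i with
% discrimination vector a_i and intercept d_i:
P(U_i = 1 \mid \boldsymbol{\theta}) =
  \frac{1}{1 + \exp\!\left[-\left(\mathbf{a}_i^{\prime}\boldsymbol{\theta} + d_i\right)\right]}

% Direction cosines of item i's direction of best measurement
% (the direction in the theta space along which the item is most
% discriminating), for k = 1, \dots, m dimensions:
\alpha_{ik} = \frac{a_{ik}}{\sqrt{\sum_{l=1}^{m} a_{il}^{2}}}
```

Projecting the multidimensional abilities onto such a composite direction is what allows unidimensional equating machinery to be applied as an approximation.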
6

Using Multidimensional Item Response Theory Models to Explain Multi-Category Purchases

Schröder, Nadine January 2017 (has links) (PDF)
We apply multidimensional item response theory (MIRT) models to analyse multi-category purchase decisions and compare their performance to benchmark models based on topic models. Estimation is based on two types of data sets: one contains only binary purchase decisions, the other polytomous ones. We show that the MIRT models are superior with respect to our chosen benchmark models. In particular, the MIRT models are able to reveal intuitive latent traits that can be interpreted as characteristics of households relevant for multi-category purchase decisions. With the help of these latent traits, marketers are able to predict future purchase behaviour for various types of households. This information may guide shop managers in cross-selling activities and product recommendations.
7

DIMENSIONALITY ANALYSIS OF THE PALS CLASSROOM GOAL ORIENTATION SCALES

Tombari, Angela K. 01 January 2017 (has links)
Achievement goal theory is one of the most broadly accepted theoretical paradigms in educational psychology, with over 35 years of influence on research and educational practice. The longstanding use of this construct has led to two consequences of importance for this research: 1) many different dimensionality representations have been debated, and 2) the methods used to confirm the dimensionality of the scales have fallen behind best practice. A further issue is that goal orientations are used to inform classroom practice, whereas most measurement studies focus on the structure of the personal goal orientation scales rather than the classroom-level structure. This study aims to provide an updated understanding of one classroom goal orientation scale using the modern psychometric techniques of multidimensional item response theory and bifactor analysis. The most commonly used scale with K-12 students is the Patterns of Adaptive Learning Scales (PALS); thus, the PALS classroom goal orientation scales are the subject of this study.
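[Editor's note: the bifactor structure referred to in this and the following abstract can be written, in one common IRT parameterization, as every item loading on a general factor plus at most one group-specific factor. The notation below is a generic sketch, not the study's own.]

```latex
% Bifactor 2PL: item i belonging to group s loads on the general
% factor theta_g and on its group factor theta_s only; the group
% factors are orthogonal to theta_g and to each other:
P(U_i = 1 \mid \theta_g, \theta_s) =
  \frac{1}{1 + \exp\!\left[-\left(a_{i0}\,\theta_g + a_{is}\,\theta_s + d_i\right)\right]}
```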
8

Decision consistency and accuracy indices for the bifactor and testlet response theory models

LaFond, Lee James 01 July 2014 (has links)
The primary goal of this study was to develop a new procedure for estimating decision consistency and accuracy indices using the bifactor and testlet response theory (TRT) models. This study is the first to investigate decision consistency and accuracy from a multidimensional perspective, and the results have shown that the bifactor model at least behaved in a way that met the author's expectations and represents a potentially useful procedure. The TRT model, on the other hand, did not meet the author's expectations and generally showed poor model performance. The multidimensional decision consistency and accuracy indices proposed in this study appear to provide good performance, at least for the bifactor model, in the case of a substantial testlet effect. For practitioners examining a test containing testlets for decision consistency and accuracy, a recommended first step is to check for dimensionality. If the testlets show a significant degree of multidimensionality, then usage of the proposed multidimensional indices can be recommended, as the simulation study showed an improved level of performance over unidimensional IRT models. However, if there is not a significant degree of multidimensionality, then the unidimensional IRT models and indices would perform as well as, or even better than, the multidimensional models. Another goal of this study was to compare methods for numerical integration used in the calculation of decision consistency and accuracy indices. This study investigated a new method (M method) that sampled ability estimates through a Monte Carlo approach. In summary, the M method seems to be just as accurate as the other commonly used methods for numerical integration. However, it has some practical advantages over the D and P methods. As previously mentioned, it is not nearly as computationally intensive as the D method. Also, the P method requires large sample sizes.
In addition, the P method has a conceptual disadvantage in that the conditioning variable, in theory, should be the true theta, not an estimated theta. The M method avoids both of these issues and seems to provide equally accurate estimates of decision consistency and accuracy indices, which makes it a strong option, particularly in multidimensional cases.
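[Editor's note: the idea behind a Monte Carlo (M-method-style) estimate of decision consistency can be sketched in a simplified unidimensional form: draw an ability, simulate two independent administrations of the same form, and record whether the two pass/fail decisions agree. This is an illustration of the general idea only, not the study's bifactor or TRT indices; all parameters below are hypothetical.]

```python
import math
import random

def classification_consistency(item_params, cut, n_draws=20000, seed=7):
    """Monte-Carlo sketch of decision consistency for a pass/fail cut:
    the proportion of simulated examinees whose two independent
    administrations of the same form yield the same decision."""
    rng = random.Random(seed)
    agree = 0
    for _ in range(n_draws):
        theta = rng.gauss(0.0, 1.0)      # ability drawn from N(0, 1)
        decisions = []
        for _ in range(2):               # two independent administrations
            score = 0
            for a, b in item_params:
                p = 1.0 / (1.0 + math.exp(-1.7 * a * (theta - b)))
                score += rng.random() < p
            decisions.append(score >= cut)
        agree += decisions[0] == decisions[1]
    return agree / n_draws

# Illustrative 5-item form with a pass cut of 3 correct.
items = [(1.0, -1.0), (1.0, -0.5), (1.0, 0.0), (1.0, 0.5), (1.0, 1.0)]
phi = classification_consistency(items, cut=3)
```

Because the two simulated decisions share the same true theta, their agreement rate cannot fall below 0.5 in expectation; sampling abilities this way is what lets the approach sidestep fixed quadrature over the ability distribution.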
9

Bringing Situational Judgement Tests to the 21st Century: Scoring of Situational Judgement Tests Using Item Response Theory

Ron, Tom Haim 19 November 2019 (has links)
No description available.
10

Validation of an Outcome Tracking System for Use in Psychology Training Clinics

Kilmer, Elizabeth Davis 08 1900 (has links)
The ability to monitor client change in psychotherapy over time is vital to quality assurance in psychotherapy as well as the continuing improvement of psychotherapy research. Currently there is not a free and comprehensive outcome measure for psychotherapy that meets current research and treatment goals. This study took further steps to validate a suite of measures to aid in treatment and research, theoretically based in the research domain criteria (RDoC) and phase model of change frameworks. Items previously tested in a community sample were further tested in a clinical population in psychotherapy training clinics and a community clinical sample. Data were analyzed using bifactor confirmatory factor analysis and multidimensional item response theory. Additional exploratory analyses were conducted to explore differential item functioning in these samples.
