1

Impact of Violations of Longitudinal Measurement Invariance in Latent Growth Models and Autoregressive Quasi-simplex Models

January 2013 (has links)
abstract: In order to analyze data from an instrument administered at multiple time points, it is common practice to form composites of the items at each wave and to fit a longitudinal model to the composites. The advantage of using composites of items is that smaller sample sizes are required than for second-order models that include both the measurement and the structural relationships among the variables. However, the use of composites assumes that longitudinal measurement invariance holds; that is, it is assumed that the relationships among the items and the latent variables remain constant over time. Previous studies of latent growth models (LGM) have shown that when longitudinal metric invariance is violated, the parameter estimates are biased and mistaken conclusions about growth can be reached. The purpose of the current study was to examine the impact of non-invariant loadings and non-invariant intercepts on two longitudinal models: the LGM and the autoregressive quasi-simplex model (AR quasi-simplex). A second purpose was to determine whether there are conditions in which researchers can reach adequate conclusions about stability and growth even in the presence of violations of invariance. A Monte Carlo simulation study was conducted to address these purposes. The method consisted of generating items under a linear curve of factors model (COFM) or under the AR quasi-simplex. Composites of the items were formed at each time point and analyzed with a linear LGM or an AR quasi-simplex model. The results showed that the AR quasi-simplex model yielded biased path coefficients only in the conditions with large violations of invariance, and its fit was not affected by violations of invariance. In general, the growth parameter estimates of the LGM were biased under violations of invariance. Further, in the presence of non-invariant loadings, the rejection rates of the hypothesis of linear growth increased as the proportion of non-invariant items and the magnitude of the violations increased. A discussion of the results and limitations of the study is provided, along with general recommendations. / Dissertation/Thesis / Ph.D. Psychology 2013
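
As a rough illustration of the data-generation step described in this abstract (non-invariant loadings, composites formed at each wave), the following NumPy sketch shows how loading drift alone can distort the apparent growth in composite scores. The sample size, loading values, and magnitude of non-invariance are hypothetical and are not the study's actual simulation conditions.

```python
# Illustrative sketch (not the study's actual code): generate item responses
# under a factor model whose loadings drift across waves, then form the
# composites that a downstream LGM would analyze. All values are hypothetical.
import numpy as np

rng = np.random.default_rng(2013)
n, n_items, n_waves = 500, 6, 4

# Latent growth: random intercept and linear slope for each person
intercept = rng.normal(0.0, 1.0, n)
slope = rng.normal(0.2, 0.3, n)

# Baseline loadings; waves 2+ violate metric invariance for the last two items
loadings = np.tile(np.full(n_items, 0.7), (n_waves, 1))
loadings[1:, -2:] += 0.3   # non-invariant loadings (magnitude is arbitrary)

composites = np.empty((n, n_waves))
for t in range(n_waves):
    eta = intercept + slope * t                      # factor score at wave t
    items = (loadings[t] * eta[:, None]
             + rng.normal(0.0, 0.5, (n, n_items)))   # item responses
    composites[:, t] = items.mean(axis=1)            # composite per wave

# Loading drift inflates the apparent growth rate of the composite means
# relative to what the invariant loadings alone would produce.
print(composites.mean(axis=0))
```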
2

Evaluating Model Fit for Longitudinal Measurement Invariance with Ordered Categorical Indicators

Clark, Jonathan Caleb 08 December 2020 (has links)
Currently recommended cutoffs for determining measurement invariance have typically been derived from simulation studies focused on multigroup confirmatory factor analysis, often using continuous data. These cutoffs may be inappropriate for ordered categorical data in a longitudinal setting. This study presents two Monte Carlo simulations that evaluate the performance of four popular model fit indices used to determine measurement invariance. The comparative fit index (CFI), Tucker-Lewis index (TLI), and root mean square error of approximation (RMSEA) were all found to be inconsistent across the simulation conditions as well as across invariance tests, and thus are not recommended for use in longitudinal measurement invariance testing. The standardized root mean square residual (SRMR) was the most consistent and robust fit index across simulation conditions, and a change of ≥ 0.01 was therefore recommended as the cutoff for testing longitudinal measurement invariance with ordered categorical indicators.
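
For reference, common definitional forms of the four indices compared in this study are given below (M denotes the hypothesized model, B the baseline/null model, N the sample size). These are standard textbook versions; software implementations differ slightly and may not match the exact forms used in the simulations.

```latex
% Common forms of the fit indices compared in the study; implementations vary.
\begin{align*}
\mathrm{RMSEA} &= \sqrt{\frac{\max(\chi^2_M - df_M,\, 0)}{df_M\,(N-1)}} \\
\mathrm{CFI}   &= 1 - \frac{\max(\chi^2_M - df_M,\, 0)}{\max(\chi^2_B - df_B,\ \chi^2_M - df_M,\ 0)} \\
\mathrm{TLI}   &= \frac{\chi^2_B/df_B - \chi^2_M/df_M}{\chi^2_B/df_B - 1} \\
\mathrm{SRMR}  &= \sqrt{\tfrac{1}{k}\textstyle\sum_{i \le j} r_{ij}^{\,2}},
\quad r_{ij} = \text{standardized residual for element } (i,j),\ k = \text{number of nonredundant elements}
\end{align*}
```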
3

An Analysis of the Factor Structure and Measurement Invariance of the Performance Assessment and Evaluation System Ratings of Preservice Teachers

Steadman, Anna Kay 14 April 2023 (has links)
The Performance Assessment and Evaluation System (PAES) is used by all major universities in the state of Utah to measure the effective teaching skills of preservice candidates as they progress through their teacher preparation programs. The resulting ratings are used to make high-stakes decisions about course completion as well as recommendation for licensure. This study analyzes the factor structure and tests for measurement invariance of PAES ratings assigned to 663 elementary education candidates at Brigham Young University across two measurement occasions. The candidates were rated by 30 clinical faculty associates. This study also examines the degree to which differential rater effects impact the PAES ratings of these candidates. A bifactor model, with a general factor measuring effective teaching skills assessed through observation and a specific factor measuring effective teaching skills assessed through conversation, best fit the data. Evidence of measurement invariance was found between evaluations completed for Practicum 1 and Practicum 2 candidates. This study also found that differential rater effects impact the PAES ratings of individual candidates, indicating that a candidate's rating may depend on which rater completed the evaluation. Similar studies should be conducted to analyze the quality of PAES ratings of teacher candidates in the various secondary education programs at BYU. In addition, since the PAES is used at other teacher preparation colleges and universities in Utah, similar studies should examine the quality of PAES ratings of teacher candidates at those institutions as well.
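
In generic form, the bifactor structure described in this abstract can be written as follows; this is a general statement of the model, not the exact PAES parameterization or notation used in the thesis.

```latex
% Generic bifactor measurement model: rating x_{ij} of candidate i on item j
% loads on a general effective-teaching factor G_i and on one specific factor
% S_i (here, conversation-based skills), with G and S specified as orthogonal.
\begin{equation*}
x_{ij} = \tau_j + \lambda_{Gj}\, G_i + \lambda_{Sj}\, S_i + \varepsilon_{ij},
\qquad \operatorname{Cov}(G_i, S_i) = 0
\end{equation*}
```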
4

Longitudinal Measurement Invariance of the Outcome Questionnaire-45

Howland, Shiloh Marie 06 August 2021 (has links)
The Outcome Questionnaire-45 (OQ-45) is a 45-item instrument designed to be used by psychotherapists to track their clients' distress over time. The OQ-45 is composed of three factors: symptomatic distress, interpersonal relations, and social role performance. Numerous researchers have attempted to replicate this intended three-factor structure in their own data, only to find poor fit. Attempts to find a factor structure that does show adequate fit have been met with mixed, but generally poor, results. Additionally, very little work has been done to establish that the OQ-45 exhibits sufficient longitudinal measurement invariance to allow comparison of OQ-45 scores over time. Notwithstanding these known issues regarding the fit of the OQ-45, it has been adopted widely in many countries and translated into several dozen languages. This study sought to identify a factor structure of the OQ-45 that does exhibit longitudinal measurement invariance. Using a sample of 7,751 clients who made 56,353 visits to Brigham Young University's Counseling and Psychological Services between 1996 and 2017, three factor structures were analyzed through confirmatory factor analysis in Mplus 8.2: (a) a single-factor model, (b) the intended three-factor model, and (c) a bifactor model. The bifactor model fit the data best, as determined by standard fit statistics (CFI, TLI, RMSEA, SRMR), but its fit was still inadequate. Exploratory structural equation modeling (ESEM) using target rotation was therefore applied to the bifactor model. This ESEM bifactor model had a dominant general factor and showed good fit to the data. The selected ESEM bifactor model was then tested for longitudinal measurement invariance over five time points (the initial OQ-45 score at the intake appointment, followed by four subsequent appointments). The OQ-45 items were treated as categorical and analyzed using the WLSMV estimator. Four time sequences were examined for configural, metric, and scalar longitudinal invariance: Time 1 to Time 2, Time 1 to Time 3 (inclusive of Time 2), Time 1 to Time 4 (inclusive of Times 2 and 3), and Time 1 to Time 5 (inclusive of Times 2, 3, and 4). The OQ-45, when modeled as an ESEM bifactor model, does exhibit scalar longitudinal measurement invariance. Using a new method developed by Clark (2020), the ΔSRMR values between adjacent models (configural to metric, metric to scalar) were all below his recommended guideline of .01. This is the first study to find a well-fitting model of the OQ-45 that can be used to assess changes in clients' psychological functioning over time. Total OQ-45 scores can therefore continue to be used by therapists to monitor their patients, with confidence in the instrument's longitudinal psychometric properties.
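
As a small illustration of the ΔSRMR criterion described here, the sketch below applies the Clark (2020) .01 guideline to a set of placeholder SRMR values. The numbers are hypothetical, and the fitting of the invariance models themselves (done in Mplus in the study) is not shown.

```python
# Hypothetical illustration of the decision rule: compare SRMR across nested
# invariance models and flag a step when the change exceeds the .01 guideline.
# The SRMR values below are made up and are not the study's results.
GUIDELINE = 0.01

srmr = {                      # SRMR from already-fitted models (placeholders)
    "configural": 0.041,
    "metric": 0.044,
    "scalar": 0.047,
}

steps = [("configural", "metric"), ("metric", "scalar")]
for less_restricted, more_restricted in steps:
    delta = srmr[more_restricted] - srmr[less_restricted]
    verdict = "invariance supported" if delta < GUIDELINE else "invariance questionable"
    print(f"{less_restricted} -> {more_restricted}: dSRMR = {delta:.3f} ({verdict})")
```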
5

Sensitivity Analysis of Longitudinal Measurement Non-Invariance: A Second-Order Latent Growth Model Approach with Ordered-Categorical Indicators

January 2016 (has links)
abstract: Researchers who conduct longitudinal studies are inherently interested in studying individual and population changes over time (e.g., mathematics achievement, subjective well-being). To answer such research questions, models of change (e.g., growth models) make the assumption of longitudinal measurement invariance. In many applied situations, key constructs are measured by a collection of ordered-categorical indicators (e.g., Likert scale items). To evaluate longitudinal measurement invariance with ordered-categorical indicators, a set of hierarchical models can be sequentially tested and compared. If the statistical tests of measurement invariance fail to be supported for one of the models, it is useful to have a method with which to gauge the practical significance of the differences in measurement model parameters over time. Drawing on studies of latent growth models and second-order latent growth models with continuous indicators (e.g., Kim & Willson, 2014a; 2014b; Leite, 2007; Wirth, 2008), this study examined the performance of a potential sensitivity analysis to gauge the practical significance of violations of longitudinal measurement invariance for ordered-categorical indicators using second-order latent growth models. The change in the estimate of the second-order growth parameters following the addition of an incorrect level of measurement invariance constraints at the first-order level was used as an effect size for measurement non-invariance. This study investigated how sensitive the proposed sensitivity analysis was to different locations of non-invariance (i.e., non-invariance in the factor loadings, the thresholds, and the unique factor variances) given a sufficient sample size. This study also examined whether the sensitivity of the proposed sensitivity analysis depended on a number of other factors including the magnitude of non-invariance, the number of non-invariant indicators, the number of non-invariant occasions, and the number of response categories in the indicators. / Dissertation/Thesis / Doctoral Dissertation Psychology 2016
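
The effect size proposed in this abstract can be stated compactly as follows; the notation is chosen here for illustration and is not necessarily the thesis's own.

```latex
% Effect size for measurement non-invariance used in the sensitivity analysis:
% the change in a second-order growth parameter estimate when an incorrect
% level of first-order invariance constraints is imposed.
\begin{equation*}
\Delta_{\theta} \;=\; \hat{\theta}_{\text{constrained}} \;-\; \hat{\theta}_{\text{correct}},
\qquad \theta \in \{\text{mean slope},\ \text{slope variance},\ \dots\}
\end{equation*}
```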
6

Evaluating the error of measurement due to categorical scaling with a measurement invariance approach to confirmatory factor analysis

Olson, Brent 05 1900 (has links)
It has previously been determined that using 3 or 4 points on a categorized response scale will fail to produce a continuous distribution of scores. However, there has been no evidence, thus far, revealing the number of scale points needed to yield an approximately or sufficiently continuous distribution. This study provides evidence suggesting the level of categorization at which discrete scales become directly comparable to continuous scales in terms of their measurement properties. To do this, we first introduced a novel procedure for simulating discretely scaled data that was both informed and validated through the principles of the Classical True Score Model. Second, we employed a measurement invariance (MI) approach to confirmatory factor analysis (CFA) in order to directly compare the measurement quality of continuously scaled factor models to that of discretely scaled models. The simulated design conditions of the study varied with respect to item-specific variance (low, moderate, high), random error variance (none, moderate, high), and discrete scale categorization (the number of scale points ranged from 3 to 101). A population analogue approach was taken with respect to sample size (N = 10,000). We concluded that there are conditions under which response scales with 11 to 15 scale points can reproduce the measurement properties of a continuous scale. Using response scales with more than 15 points may be, for the most part, unnecessary. Scales with 3 to 10 points introduce a significant level of measurement error, and caution should be taken when employing such scales. The implications of this research and future directions are discussed.
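
A toy sketch of the discretization idea follows. It uses a far simpler proxy (correlation with the underlying continuous score) than the CFA-based invariance comparison in the study, and the variance values and seed are arbitrary.

```python
# Illustrative sketch (values are hypothetical, not the study's design):
# generate scores under a classical true-score model, discretize them into k
# categories, and check how closely the categorized scores track the
# continuous scores as the number of scale points grows.
import numpy as np

rng = np.random.default_rng(42)
n = 10_000                                         # population-analogue sample
true_score = rng.normal(0.0, 1.0, n)
observed = true_score + rng.normal(0.0, 0.5, n)    # continuous observed score

for k in (3, 5, 7, 11, 15, 101):
    edges = np.quantile(observed, np.linspace(0, 1, k + 1)[1:-1])
    categorized = np.digitize(observed, edges)     # scale points 0 .. k-1
    r = np.corrcoef(categorized, observed)[0, 1]
    print(f"{k:>3} scale points: r with continuous score = {r:.3f}")
```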
