About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
291

A Study of Statistical Power and Type I Errors in Testing a Factor Analytic Model for Group Differences in Regression Intercepts

January 2010
abstract: In the past, it has been assumed that measurement and predictive invariance are consistent, so that if one form of invariance holds the other should also hold. However, studies have shown that both forms of invariance hold simultaneously only under certain conditions, such as factorial invariance and invariance in the common factor variances. The present research examined Type I errors and the statistical power of a method that detects violations of the factorial invariant model in the presence of group differences in regression intercepts, under different sample sizes and different numbers of predictors (one or two). Data were simulated under two models: in model A only differences in the factor means were allowed, while model B violated invariance. A factorial invariant model was fitted to the data. Type I error was defined as the proportion of samples in which the hypothesis of invariance was incorrectly rejected, and statistical power as the proportion of samples in which the hypothesis of factorial invariance was correctly rejected. In the one-predictor case, the results show that the chi-square statistic has low power to detect violations of the model. Unexpected and systematic results were obtained regarding negative unique variance in the predictor; it is proposed that negative unique variance in the predictor can be used as an indication of measurement bias instead of the chi-square fit statistic with sample sizes of 500 or more. The two-predictor case showed larger power. In both cases Type I error rates were as expected. The implications of the results and some suggestions for increasing the power of the method are provided. / Dissertation/Thesis / M.A. Psychology 2010
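As a sketch of how rejection proportions translate into these Type I error and power definitions, the following simulation uses hypothetical p-value distributions (uniform when invariance holds, shifted toward zero under violation) rather than actual factor-model fits:

```python
import numpy as np

rng = np.random.default_rng(0)
n_reps, alpha = 1000, 0.05

# Hypothetical p-value distributions for the invariance test:
# under model A (invariance holds) p-values are uniform on [0, 1];
# under model B (invariance violated) they pile up near zero.
p_model_a = rng.uniform(0, 1, n_reps)
p_model_b = rng.beta(0.3, 1.0, n_reps)  # illustrative stand-in, not a fitted model

type_i_error = (p_model_a < alpha).mean()  # incorrect rejections under model A
power = (p_model_b < alpha).mean()         # correct rejections under model B
print(type_i_error, power)
```

With a well-calibrated test, the Model A proportion should hover near the nominal .05, which matches the abstract's "Type I errors were as expected."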
292

Assessing Dimensionality in Complex Data Structures: A Performance Comparison of DETECT and NOHARM Procedures

January 2011
abstract: The purpose of this study was to investigate the effect of complex structure on dimensionality assessment in compensatory and noncompensatory multidimensional item response theory (MIRT) models of assessment data, using dimensionality assessment procedures based on conditional covariances (i.e., DETECT) and a factor analytic approach (i.e., NOHARM). The DETECT-based methods typically outperformed the NOHARM-based methods in both two- (2D) and three-dimensional (3D) compensatory MIRT conditions. The DETECT-based methods yielded high proportions correct, especially when correlations were .60 or smaller, data exhibited 30% or less complexity, and sample sizes were larger. As the complexity increased and the sample size decreased, performance typically diminished. As the complexity increased, it also became more difficult to label the resulting sets of items from DETECT in terms of the dimensions. DETECT was consistent in the classification of simple items, but less consistent in the classification of complex items. Of the three NOHARM-based methods, χ²G/D and ALR generally outperformed RMSR. χ²G/D was more accurate when N = 500 and complexity levels were 30% or lower. As the number of items increased, ALR performance improved at a correlation of .60 and 30% or less complexity. When the data followed a noncompensatory MIRT model, the NOHARM-based methods, specifically χ²G/D and ALR, were the most accurate of all five methods. The marginal proportions for labeling sets of items as dimension-like were typically low, suggesting that the methods generally failed to label two (three) sets of items as dimension-like in 2D (3D) noncompensatory situations. The DETECT-based methods were more consistent in classifying simple items across complexity levels, sample sizes, and correlations. However, as complexity and correlation levels increased, the classification rates for all methods decreased.
In most conditions, the DETECT-based methods classified complex items as consistently as or more consistently than the NOHARM-based methods. In particular, as complexity, the number of items, and the true dimensionality increased, the DETECT-based methods were notably more consistent than any NOHARM-based method. Despite DETECT's consistency, when data follow a noncompensatory MIRT model, the NOHARM-based methods should be preferred over the DETECT-based methods for assessing dimensionality, due to the poor performance of DETECT in identifying the true dimensionality. / Dissertation/Thesis / Ph.D. Educational Psychology 2011
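DETECT's core quantity is the covariance of an item pair conditional on the rest score: under simple structure, pairs measuring the same dimension show positive conditional covariance while cross-dimension pairs show negative conditional covariance. A minimal illustration with simulated 2D compensatory data (the loadings, correlation, and sample size below are illustrative choices, not the study's conditions):

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 4000, 6  # examinees, items per dimension (illustrative sizes)

# Two correlated latent dimensions; simple structure: items 0..k-1 load on
# dimension 1 and items k..2k-1 on dimension 2.
theta = rng.multivariate_normal([0, 0], [[1, 0.3], [0.3, 1]], n)
logits = np.concatenate([np.tile(theta[:, [0]], k),
                         np.tile(theta[:, [1]], k)], axis=1)
x = (rng.uniform(size=(n, 2 * k)) < 1 / (1 + np.exp(-logits))).astype(int)

def cond_cov(i, j):
    """Average within-group covariance of items i and j, conditioning on
    the rest score (total score on all other items)."""
    rest = x.sum(axis=1) - x[:, i] - x[:, j]
    covs, weights = [], []
    for s in np.unique(rest):
        grp = rest == s
        if grp.sum() > 1:
            covs.append(np.cov(x[grp, i], x[grp, j])[0, 1])
            weights.append(grp.sum())
    return np.average(covs, weights=weights)

same_dim = cond_cov(0, 1)   # both items measure dimension 1
cross_dim = cond_cov(0, k)  # items from different dimensions
print(same_dim, cross_dim)
```

The sign pattern of these conditional covariances is what DETECT exploits to partition items into dimension-like clusters; complex items blur the pattern, which is consistent with the lower classification consistency reported above.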
293

An Investigation of Power Analysis Approaches for Latent Growth Modeling

January 2011
abstract: Designing studies that use latent growth modeling to investigate change over time calls for optimal approaches to conducting power analysis for a priori determination of required sample size. This investigation (1) studied the impacts of variations in specified parameters, design features, and model misspecification on simulation-based power analyses and (2) compared power estimates across three common power analysis techniques: the Monte Carlo method; the Satorra-Saris method; and the method developed by MacCallum, Browne, and Cai (MBC). Choice of sample size, effect size, and slope variance parameters markedly influenced power estimates; in contrast, level-1 error variance and the number of repeated measures (3 vs. 6, with study length held constant) had little impact on resulting power. Under some conditions, having a moderate rather than small effect size or using a sample size of 800 rather than 200 increased power by approximately .40, and a slope variance of 10 versus 20 increased power by up to .24. Decreasing error variance from 100 to 50, however, increased power by no more than .09, and increasing measurement occasions from 3 to 6 increased power by no more than .04. Misspecification of the level-1 error structure had little influence on power, whereas misspecifying the form of the growth model as linear rather than quadratic dramatically reduced power for detecting differences in slopes. Additionally, power estimates based on the Monte Carlo and Satorra-Saris techniques never differed by more than .03, even with small sample sizes, whereas power estimates for the MBC technique were quite discrepant from the other two. Results suggest that the choice between the Satorra-Saris and Monte Carlo techniques in a priori power analyses for slope differences in latent growth models is a matter of preference, although features such as missing data can be accommodated only within the Monte Carlo approach.
Further, researchers conducting power analyses for slope differences in latent growth models should pay greatest attention to estimates of the slope difference, slope variance, and sample size. Arguments are also made for examining model-implied covariance matrices based on estimated parameters and graphic depictions of slope variance to help ensure parameter estimates are reasonable in a priori power analysis. / Dissertation/Thesis / Ph.D. Educational Psychology 2011
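The Monte Carlo technique estimates power as the proportion of simulated replications that reject the null hypothesis of no slope difference. The sketch below approximates this with per-person OLS slopes and a two-sample z test rather than a full latent growth model fit; all parameter values are hypothetical, not conditions from the study:

```python
import numpy as np

rng = np.random.default_rng(2)
n_reps, n_per_group = 500, 100
occasions = np.arange(4.0)                    # equally spaced measurements
slope_diff, slope_sd, err_sd = 0.3, 0.5, 1.0  # hypothetical parameters

def one_replication():
    # Random slopes; the second group's mean slope is shifted by slope_diff.
    slopes = np.concatenate([rng.normal(0.0, slope_sd, n_per_group),
                             rng.normal(slope_diff, slope_sd, n_per_group)])
    y = slopes[:, None] * occasions + rng.normal(
        0, err_sd, (2 * n_per_group, len(occasions)))
    # Per-person OLS slope, then a two-sample z test on the group means.
    t_c = occasions - occasions.mean()
    b = (t_c * (y - y.mean(axis=1, keepdims=True))).sum(axis=1) / (t_c ** 2).sum()
    g1, g2 = b[:n_per_group], b[n_per_group:]
    z = (g2.mean() - g1.mean()) / np.sqrt(g1.var(ddof=1) / n_per_group
                                          + g2.var(ddof=1) / n_per_group)
    return abs(z) > 1.96

power = np.mean([one_replication() for _ in range(n_reps)])
print(power)
```

One advantage noted in the abstract follows directly from this structure: because each replication generates raw data, features such as missing data can be injected into the simulation step, which analytic approximations cannot accommodate.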
294

Nonword Item Generation: Predicting Item Difficulty in Nonword Repetition

January 2011
abstract: The current study employs item difficulty modeling procedures to evaluate the feasibility of potential generative item features for nonword repetition. Specifically, the extent to which the manipulated item features affected the theoretical mechanisms underlying nonword repetition accuracy was estimated. Generative item features were based on the phonological loop component of Baddeley's model of working memory, which addresses phonological short-term memory (Baddeley, 2000, 2003; Baddeley & Hitch, 1974). Using researcher-developed software, nonwords were generated to adhere to the phonological constraints of Spanish. Thirty-six nonwords were chosen based on the set of item features identified by the proposed cognitive processing model. Using a planned missing data design, two hundred fifteen Spanish-English bilingual children were administered 24 of the 36 generated nonwords. Multiple regression and explanatory item response modeling techniques (e.g., the linear logistic test model, LLTM; Fischer, 1973) were used to estimate the impact of item features on item difficulty. The final LLTM included three item radicals and two item incidentals. Results indicated that the LLTM-predicted item difficulties were highly correlated with the Rasch item difficulties (r = .89) and accounted for a substantial amount of the variance in item difficulty (R2 = .79). The findings are discussed in terms of validity evidence in support of using the phonological loop component of Baddeley's model (2000) as a cognitive processing model for nonword repetition items and the feasibility of using the proposed radical structure as an item blueprint for the future generation of nonword repetition items. / Dissertation/Thesis / M.A. Educational Psychology 2011
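The LLTM idea of decomposing item difficulty into feature effects can be sketched as a regression of Rasch difficulties on an item-feature design matrix. The features and weights below are hypothetical stand-ins, not the radicals identified in the study:

```python
import numpy as np

rng = np.random.default_rng(3)
n_items = 36  # matches the number of generated nonwords

# Hypothetical item features ("radicals"): intercept, syllable count, and
# two standardized phonological features. Illustrative only.
X = np.column_stack([np.ones(n_items),
                     rng.integers(2, 6, n_items),  # syllable count
                     rng.normal(0, 1, n_items),    # phoneme frequency (z)
                     rng.normal(0, 1, n_items)])   # wordlikeness (z)
true_w = np.array([-0.5, 0.4, -0.3, -0.2])          # hypothetical feature weights
b_rasch = X @ true_w + rng.normal(0, 0.3, n_items)  # "observed" Rasch difficulties

# LLTM-style decomposition: regress difficulty on the feature design matrix,
# then compare predicted and observed difficulties.
w_hat, *_ = np.linalg.lstsq(X, b_rasch, rcond=None)
b_pred = X @ w_hat
r = np.corrcoef(b_pred, b_rasch)[0, 1]
print(round(r, 2), round(r ** 2, 2))
```

A high predicted-observed correlation, analogous to the r = .89 reported above, is what supports using the feature set as an item-generation blueprint.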
295

Evaluating reliability and use of the Ages and Stages Questionnaires: Thai in northeast Thai early child care settings

Saihong, Prasong, 1974-
xix, 198 p. : ill. / Due to the lack of a screening and early identification system, preschool children who live in rural areas of Northeast Thailand have no opportunity to receive specialized educational services. Most children are identified as having disabilities at school age or older. In this study, the 24-, 30-, and 36-month intervals of the Ages and Stages Questionnaires (ASQ), a parent-completed screening system, were translated and evaluated for reliability and use in Northeast Thai early childcare settings. The purpose was to investigate the reliability and utility of the Ages and Stages Questionnaires: Thai (ASQ: Thai). Reliability studies included an investigation of internal consistency, test-retest reliability, and interobserver reliability, and a comparison of differences between U.S. and Thai scores. Utility studies included satisfaction surveys of parents/caregivers and early childcare staff as well as brief interviews with both groups. Subjects included 267 children who were 2-3 years old; 267 parents/caregivers; 49 early childcare staff; and 5 early childcare professor experts. The subjects were recruited through the Department of Curriculum and Instruction, the Faculty of Education, Mahasarakham University. Results addressing the reliability and use of the ASQ: Thai were promising. Internal consistency was adequate (ρ = .58-.89), as was test-retest agreement (ρ > .90). A comparison between the ASQ: Thai sample data and the U.S. normative sample found some differences in range, mean, median, interquartile range, and cutoff scores. The back translation of the ASQ: Thai appeared to be adequate in comparison to the original version, as well as culturally appropriate.
Early childcare staff and parents/caregivers felt that the ASQ: Thai was easy to use and understand and was culturally appropriate, and they reported gaining knowledge about child development. They suggested that the ASQ: Thai be used in early childcare settings with children when they enter the program. Future research on the ASQ: Thai is needed, with cultural, language, and disability issues as areas for further study. / Committee in charge: Jane Squires, Chairperson, Special Education and Clinical Sciences; Deanne Umuh, Member, Special Education and Clinical Sciences; Erin Barton, Member, Special Education and Clinical Sciences; Kathie Carpenter, Outside Member, International Studies
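The two reliability indices reported above can be computed directly from item-level data. A sketch with simulated scores (the sample size matches the study's 267 children, but the item structure and variances are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
n_children, n_items = 267, 6  # study sample size; item count illustrative

true_score = rng.normal(0, 1, n_children)
# Two administrations of the same hypothetical item set (for test-retest).
time1 = true_score[:, None] + rng.normal(0, 0.7, (n_children, n_items))
time2 = true_score[:, None] + rng.normal(0, 0.7, (n_children, n_items))

def cronbach_alpha(items):
    """Internal consistency: k/(k-1) * (1 - sum of item variances / total variance)."""
    k = items.shape[1]
    return k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum()
                          / items.sum(axis=1).var(ddof=1))

alpha = cronbach_alpha(time1)
test_retest = np.corrcoef(time1.sum(axis=1), time2.sum(axis=1))[0, 1]
print(round(alpha, 2), round(test_retest, 2))
```

Note the study reports ρ (rank-based) coefficients; the Pearson correlation here is a simplified stand-in for the test-retest index.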
296

The development and initial validation of the Environmental Justice Advocacy Scale

Hoffman, Tera L., 1968-
xv, 177 p. / The purpose of this dissertation was to develop and conduct initial validation procedures for the Environmental Justice Advocacy Scale (EJAS). Environmental justice refers to the equitable distribution of environmental risks and benefits across diverse groups in the development, implementation, and enforcement of environmental laws and regulations. Environmental justice advocacy involves efforts to organize communities and collaborate with policymakers to prevent or remediate environmental injustice. The findings of three studies are presented, describing reliability, concurrent and discriminant validity, and internal structural validity analyses. A national sample of graduate students, practitioners, and faculty in the specialties of counseling psychology, counseling, and social work was surveyed (n = 43, n = 294, and n = 295, respectively). Study 1 addresses initial scale development procedures, which resulted in a 47-item measure. In Study 2, an exploratory factor analysis suggested a three-factor structure (Attitudes, Knowledge, and Skills) with excellent reliability and strong concurrent and discriminant validity. The results indicated that two of the subscales were correlated (r = .16 and r = .16, p < .01) with a measure of social desirability. In Study 3, a confirmatory factor analysis failed to replicate the three-factor model. However, four factors (Attitudes, Knowledge-General Environmental Justice, Knowledge-Psychological and Physical Health Environmental Justice, and Skills) explained a statistically significant amount of variance in the items. Suggestions for modification of the measure and recommendations for future research, training, and practice related to environmental justice advocacy for mental health professionals are provided.
/ Committee in charge: Ellen McWhirter, Chairperson, Counseling Psychology and Human Services; Benedict McWhirter, Member, Counseling Psychology and Human Services; Keith Zvoch, Member, Educational Methodology, Policy, and Leadership; Michael Dreiling, Outside Member, Sociology
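Exploratory factor analysis begins by asking how many factors the item correlation matrix supports. A rough sketch using eigenvalues of a simulated three-factor correlation matrix (the Kaiser eigenvalue-greater-than-one rule shown here is only one retention heuristic, and all data-generating values are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(5)
n, per_factor = 500, 5

# Three correlated factors standing in for Attitudes, Knowledge, and Skills;
# loadings and factor correlations are illustrative, not estimates from the EJAS.
f = rng.multivariate_normal(np.zeros(3), np.full((3, 3), 0.3) + 0.7 * np.eye(3), n)
loadings = np.zeros((3, 3 * per_factor))
for j in range(3):
    loadings[j, j * per_factor:(j + 1) * per_factor] = 0.7
x = f @ loadings + rng.normal(0, 0.5, (n, 3 * per_factor))

eigvals = np.linalg.eigvalsh(np.corrcoef(x, rowvar=False))[::-1]
n_factors = int((eigvals > 1).sum())  # Kaiser criterion as a rough retention guide
print(n_factors, eigvals[:4].round(2))
```

With a clean simple structure like this, the eigenvalue pattern recovers the generating dimensionality; the failed CFA replication reported in Study 3 illustrates how real item data rarely behave this cleanly across samples.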
297

Sensitivity Analysis of Longitudinal Measurement Non-Invariance: A Second-Order Latent Growth Model Approach with Ordered-Categorical Indicators

January 2016
abstract: Researchers who conduct longitudinal studies are inherently interested in studying individual and population change over time (e.g., in mathematics achievement or subjective well-being). To answer such research questions, models of change (e.g., growth models) make the assumption of longitudinal measurement invariance. In many applied situations, key constructs are measured by a collection of ordered-categorical indicators (e.g., Likert scale items). To evaluate longitudinal measurement invariance with ordered-categorical indicators, a set of hierarchical models can be sequentially tested and compared. If measurement invariance fails to be supported for one of the models, it is useful to have a method with which to gauge the practical significance of the differences in measurement model parameters over time. Drawing on studies of latent growth models and second-order latent growth models with continuous indicators (e.g., Kim & Willson, 2014a; 2014b; Leite, 2007; Wirth, 2008), this study examined the performance of a potential sensitivity analysis for gauging the practical significance of violations of longitudinal measurement invariance for ordered-categorical indicators using second-order latent growth models. The change in the estimate of the second-order growth parameters following the addition of an incorrect level of measurement invariance constraints at the first-order level was used as an effect size for measurement non-invariance. This study investigated how sensitive the proposed sensitivity analysis was to different locations of non-invariance (i.e., non-invariance in the factor loadings, the thresholds, and the unique factor variances) given a sufficient sample size.
This study also examined whether the sensitivity of the proposed sensitivity analysis depended on a number of other factors including the magnitude of non-invariance, the number of non-invariant indicators, the number of non-invariant occasions, and the number of response categories in the indicators. / Dissertation/Thesis / Doctoral Dissertation Psychology 2016
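The logic of treating changes in growth estimates as an effect size rests on the fact that non-invariance distorts observed change. The sketch below shows the mechanism in miniature: with drifting thresholds and zero true latent change, the observed means of an ordinal indicator still change over time (all values are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(6)
n, occasions = 5000, 3
latent_change = 0.0      # true latent growth per occasion: none
threshold_drift = -0.3   # thresholds shift downward each occasion (non-invariance)

observed_means = []
for t in range(occasions):
    eta = rng.normal(latent_change * t, 1, n)                # latent scores at occasion t
    cuts = np.array([-1.0, 0.0, 1.0]) + threshold_drift * t  # drifting thresholds
    item = np.searchsorted(cuts, eta)                        # 4-category ordinal response
    observed_means.append(item.mean())

observed_change = observed_means[-1] - observed_means[0]
print([round(m, 2) for m in observed_means], round(observed_change, 2))
```

A growth model that wrongly constrains the thresholds to be equal over time would absorb this spurious change into its growth parameters, which is exactly the discrepancy the proposed effect size quantifies.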
298

Mediation Analysis with a Survival Mediator: A Simulation Study of Different Indirect Effect Testing Methods

January 2017
abstract: Time-to-event analysis, or equivalently survival analysis, deals with two variables simultaneously: when an event occurs (time information) and whether the event occurrence is observed during the observation period (censoring information). In behavioral and social sciences, the event of interest usually does not lead to a terminal state such as death, so other outcomes can be collected after the event and the survival variable can serve as a predictor as well as an outcome in a study. One example is a survival-mediator model. In a single survival-mediator model, an independent variable X predicts a survival variable M, which in turn predicts a continuous outcome Y. The survival-mediator model consists of two regression equations: X predicting M (M-regression), and M and X simultaneously predicting Y (Y-regression). To estimate the regression coefficients of the survival-mediator model, Cox regression is used for the M-regression. Ordinary least squares regression is used for the Y-regression with complete case analysis, assuming censored data in M are missing completely at random so that the Y-regression is unbiased. In this dissertation research, different measures of the indirect effect were proposed and a simulation study was conducted to compare the performance of different indirect effect testing methods. Bias-corrected bootstrapping produced high Type I error rates as well as low parameter coverage rates in some conditions. In contrast, the Sobel test produced low Type I error rates as well as high parameter coverage rates in some conditions. The bootstrap of the natural indirect effect produced low Type I error rates and low statistical power when the censoring proportion was nonzero. Percentile bootstrapping, the distribution of the product, and the joint-significance test showed the best performance. Statistical analysis of the survival-mediator model is discussed.
Two indirect effect measures, the ab-product and the natural indirect effect are compared and discussed. Limitations and future directions of the simulation study are discussed. Last, interpretation of the survival-mediator model for a made-up empirical data set is provided to clarify the meaning of the quantities in the survival-mediator model. / Dissertation/Thesis / Doctoral Dissertation Psychology 2017
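Percentile bootstrapping of an indirect effect resamples cases, re-estimates the a and b paths, and checks whether the middle 95% of the a*b estimates excludes zero. The sketch below uses OLS for both paths as a simplified stand-in (the dissertation's M-regression is a Cox regression, which is not reproduced here), with hypothetical path values:

```python
import numpy as np

rng = np.random.default_rng(7)
n, a, b = 300, 0.4, 0.5          # hypothetical sample size and path coefficients
x = rng.normal(0, 1, n)
m = a * x + rng.normal(0, 1, n)  # continuous stand-in for the survival mediator
y = b * m + rng.normal(0, 1, n)

def ab_estimate(x, m, y):
    a_hat = np.polyfit(x, m, 1)[0]                     # a path: slope of m on x
    design = np.column_stack([np.ones_like(x), m, x])  # b path: m coefficient in y ~ m + x
    b_hat = np.linalg.lstsq(design, y, rcond=None)[0][1]
    return a_hat * b_hat

boot = np.empty(2000)
for i in range(boot.size):
    idx = rng.integers(0, n, n)  # resample cases with replacement
    boot[i] = ab_estimate(x[idx], m[idx], y[idx])

lo, hi = np.percentile(boot, [2.5, 97.5])
significant = not (lo <= 0.0 <= hi)  # reject if the interval excludes zero
print(round(lo, 3), round(hi, 3), significant)
```

The percentile interval needs no normality assumption on the a*b sampling distribution, which is one reason it outperformed the Sobel test in skewed-product conditions.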
299

Examining Dose-Response Effects in Randomized Experiments with Partial Adherence

January 2018
abstract: Understanding how adherence affects outcomes is crucial when developing and assigning interventions. However, interventions are often evaluated by conducting randomized experiments and estimating intent-to-treat effects, which ignore actual treatment received. Dose-response effects can supplement intent-to-treat effects when participants are offered the full dose but many only receive a partial dose due to nonadherence. Using these data, we can estimate the magnitude of the treatment effect at different levels of adherence, which serve as a proxy for different levels of treatment. In this dissertation, I conducted Monte Carlo simulations to evaluate when linear dose-response effects can be accurately and precisely estimated in randomized experiments comparing a no-treatment control condition to a treatment condition with partial adherence. Specifically, I evaluated the performance of confounder adjustment and instrumental variable methods when their assumptions were met (Study 1) and when their assumptions were violated (Study 2). In Study 1, the confounder adjustment and instrumental variable methods provided unbiased estimates of the dose-response effect across sample sizes (200, 500, 2,000) and adherence distributions (uniform, right skewed, left skewed). The adherence distribution affected power for the instrumental variable method. In Study 2, the confounder adjustment method provided unbiased or minimally biased estimates of the dose-response effect under no or weak (but not moderate or strong) unobserved confounding. The instrumental variable method provided extremely biased estimates of the dose-response effect under violations of the exclusion restriction (no direct effect of treatment assignment on the outcome), though less severe violations of the exclusion restriction should be investigated. / Dissertation/Thesis / Doctoral Dissertation Psychology 2018
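The instrumental variable method uses random assignment as an instrument for dose received: the ratio of the assignment-outcome covariance to the assignment-dose covariance (the Wald estimator) removes the confounding that biases a naive regression of outcome on dose. A sketch with a hypothetical unobserved confounder (all parameter values are illustrative, not conditions from the dissertation):

```python
import numpy as np

rng = np.random.default_rng(8)
n, effect = 5000, 0.5       # hypothetical true dose-response effect
z = rng.integers(0, 2, n)   # random assignment (the instrument)
u = rng.normal(0, 1, n)     # unobserved confounder of dose and outcome
# Treated participants receive a partial dose in [0, 1]; controls receive none.
dose = np.clip(z * (rng.uniform(0, 1, n) + 0.2 * u), 0, 1)
y = effect * dose + 0.5 * u + rng.normal(0, 1, n)

# Naive regression of outcome on dose is biased by the confounder, while the
# Wald/2SLS estimator uses only the assignment-induced variation in dose.
naive = np.cov(dose, y)[0, 1] / dose.var(ddof=1)
iv = np.cov(z, y)[0, 1] / np.cov(z, dose)[0, 1]
print(round(naive, 2), round(iv, 2))
```

The exclusion restriction appears here as the absence of any direct z term in the outcome equation; adding one, as in Study 2's violations, would make the numerator pick up variation unrelated to dose and bias the IV estimate.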
300

Assessing Measurement Invariance and Latent Mean Differences with Bifactor Multidimensional Data in Structural Equation Modeling

January 2018
abstract: Investigations of measurement invariance (MI) commonly assume correct specification of dimensionality across multiple groups. Although research shows that violation of the dimensionality assumption can bias model parameter estimation in single-group analyses, little research on this issue has been conducted for multiple-group analyses. This study explored the effects of mismatch in dimensionality between data and analysis models in multiple-group analyses at the population and sample levels. Datasets were generated using a bifactor model with different factor structures and were analyzed with bifactor and single-factor models to assess the effects of misspecification on assessments of MI and latent mean differences. As baseline models, the bifactor models fit the data well and showed minimal bias in latent mean estimation. However, the low convergence rates when fitting bifactor models to data with complex structures and small sample sizes were a concern. The effects of fitting misspecified single-factor models on the assessments of MI and latent means differed by the bifactor structure underlying the data. For data following one general factor and one group factor affecting a small set of indicators, the effects of ignoring the group factor in the analysis model on tests of MI and latent mean differences were mild. In contrast, for data following one general factor and several group factors, oversimplification of the analysis model can lead to inaccurate conclusions regarding MI assessment and latent mean estimation. / Dissertation/Thesis / Doctoral Dissertation Educational Psychology 2018
