  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
851

Questionnaire d'attitudes et de préférences éducatives des intervenants (QAPÉI) : structure factorielle et relations avec les traits de personnalité / Counselors' Attitudes and Preferences Questionnaire (QAPÉI): factor structure and relations with personality traits

Poitras, Mélanie 11 1900 (has links)
La psychoéducation de même que plusieurs approches théoriques en psychologie clinique suggèrent que l’intervenant constitue un élément actif fondamental des interventions auprès des individus en difficulté. Parmi l’ensemble des caractéristiques des intervenants qu’il est utile de considérer, les attitudes et préférences éducatives des intervenants apparaissent importantes puisqu’elles peuvent être reliées à un bon appariement avec un milieu d’intervention donné, au sentiment d’efficacité professionnelle et, ultimement, à l’efficacité d’une intervention. Or, très peu d’instruments psychométriques d’évaluation validés existent pour évaluer ces construits importants. Cette étude visait principalement à effectuer un examen préliminaire des propriétés psychométriques de la version française du Questionnaire d’attitudes et de préférences des intervenants (QAPÉI; Jesness & Wedge, 1983; Le Blanc, Trudeau-Le Blanc, & Lanctôt, 1999). Le premier objectif de la présente étude était d’évaluer si la structure théorique originale était reproductible empiriquement ou si une structure factorielle alternative était nécessaire. Le deuxième objectif était d’évaluer si les attitudes et préférences éducatives des intervenants étaient reliées à leurs traits de personnalité. L’échantillon utilisé était composé d’intervenants faisant partie de Boscoville2000, un projet d’intervention cognitive-comportementale en milieu résidentiel pour les adolescents en difficulté. Des analyses factorielles exploratoires ont démontré que la structure théorique originale n’était pas reproduite empiriquement. Une structure alternative en cinq facteurs a été recouvrée. Cette structure alternative était plus cohérente sur le plan conceptuel et démontrait une bonne adéquation aux données. Les facteurs identifiés ont été nommés Distance affective, Évitement thérapeutique, Exaspération, Permissivité et Coercition. Des analyses corrélationnelles ont démontré que ces échelles d’attitudes et de préférences éducatives étaient reliées de façon conceptuellement cohérente aux traits de personnalité des intervenants, ce qui appuie la validité de critère de la nouvelle structure de l’instrument. / Psychoeducation and several theoretical approaches in clinical psychology suggest that the interventionist constitutes a fundamental active ingredient of psychosocial interventions for individuals with adjustment problems. Among the various characteristics of interventionists that are useful to consider, attitudes and preferences in interventions are important because they can be related to an adequate match with a given intervention milieu, to professional self-efficacy and, ultimately, to intervention efficacy. However, very few empirically validated psychometric instruments exist to assess these important constructs. The main aim of this study was to make a preliminary evaluation of the psychometric properties of the French-Canadian version of the Counselors’ Attitudes and Preferences Questionnaire (“Questionnaire d’attitudes et de préférences éducatives des intervenants”, QAPÉI; Jesness & Wedge, 1983; Le Blanc, Trudeau-Le Blanc, & Lanctôt, 1999). The first objective was to assess whether the original theoretical structure could be reproduced empirically, or whether an alternative factor structure was necessary. The second objective was to assess whether interventionists’ attitudes and preferences were related to their personality traits.
The sample was composed of interventionists from Boscoville2000, a residential cognitive-behavioral intervention program for adolescents with serious adjustment problems. Exploratory factor analyses demonstrated that the original theoretical structure was not reproduced empirically. An alternative five-factor structure was recovered. This alternative structure was more conceptually coherent and provided a better fit to the data. The identified factors were labeled Affective Distance, Therapeutic Avoidance, Exasperation, Permissiveness, and Coercion. Correlational analyses demonstrated that the attitudes and preferences scales were related in a conceptually coherent way to interventionists’ personality traits, which supports the criterion-related validity of the instrument’s new structure.
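For readers who want to reproduce this kind of analysis, the sketch below shows a minimal exploratory factor analysis workflow in Python; the item matrix, the five-factor solution, and the personality scores are illustrative placeholders, not the QAPÉI data or the authors' exact procedure.

```python
# Minimal sketch of the analysis strategy described above (simulated placeholder data):
# (1) extract five factors from questionnaire items, (2) correlate factor scores with personality traits.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_respondents, n_items = 300, 40
items = rng.normal(size=(n_respondents, n_items))          # placeholder item responses
personality = rng.normal(size=(n_respondents, 5))          # placeholder personality trait scores

efa = FactorAnalysis(n_components=5, rotation="varimax")   # five-factor solution, rotated
factor_scores = efa.fit_transform(items)                   # respondents x 5 factor scores

# Criterion-related evidence: correlations between factor scores and personality traits
corr = np.corrcoef(factor_scores.T, personality.T)[:5, 5:]
print(np.round(corr, 2))
```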
852

Bien-être subjectif et indécision vocationnelle : une comparaison interculturelle / Subjective well-being and career indecision: a cross-cultural comparison

Sovet, Laurent 19 November 2014 (has links)
Le bien-être subjectif se définit comme une approche hédonique du bonheur renvoyant à une évaluation générale de sa propre vie dans ses dimensions cognitives et affectives. Bien que ce concept ait fait l’objet d’une littérature scientifique abondante, peu de travaux ont porté explicitement sur ses relations avec la construction du choix d’orientation scolaire et professionnelle. Notre revue de la littérature met également en évidence l’opposition des approches ascendantes et descendantes dans l’étude des liens entre ces variables rendant le sens de la causalité particulièrement complexe à appréhender. Ainsi, l’objectif de cette thèse est d’apporter une meilleure compréhension aux relations entre le bien-être subjectif et l’indécision vocationnelle. De même, afin de tester le rôle des caractéristiques individuelles et contextuelles, nous avons inscrit notre étude dans une comparaison interculturelle interrogeant des étudiants sud-coréens, étatsuniens et français. La première partie de nos résultats est largement consacrée à l’étude de la validité psychométrique des outils utilisés dans les différents contextes cibles tandis que la deuxième partie s’intéresse davantage à l’analyse des relations entre le bien-être subjectif et l’indécision vocationnelle. Une série de trois études fut conduite dans chaque pays cible dans le but d’examiner successivement le rôle modérateur des caractéristiques individuelles, des traits de la personnalité et du bien-être psychologique. Les résultats indiquent globalement que le bien-être subjectif est significativement associé à l’indécision vocationnelle quel que soit l’échantillon considéré bien que plusieurs effets modérateurs soient observés. À partir de notre revue de la littérature et des résultats obtenus, un modèle théorique des relations entre le bien-être subjectif et l’indécision vocationnelle est proposé différenciant les modèles ascendants et descendants par des mécanismes distincts. L’implication de ces résultats autour d’une vision holistique de l’individu est discutée dans des perspectives théoriques et pratiques. / Subjective well-being may be defined as a hedonic approach to happiness, referring to an overall evaluation of one’s life that integrates both cognitive and affective components. Although this concept has been the focus of a considerable scientific literature, little research has explicitly explored its relationships with the career decision-making process. Our literature review highlighted the opposition between bottom-up and top-down approaches in the study of the relationships between these two variables, making it particularly complex to determine the causal direction. Thus, the purpose of this thesis was to bring a better understanding of the relationships between subjective well-being and career indecision. Also, in order to test the role of individual and contextual characteristics, we conducted a cross-cultural comparison that included Korean, US, and French college students. The first part of our results was devoted to exploring the psychometric properties of several instruments used in the different countries, while the second part focused on the analysis of the relationships between subjective well-being and career indecision. A series of three studies was conducted in each target country to successively examine the moderating role of individual characteristics, personality traits, and psychological well-being.
Overall, results showed that subjective well-being was significantly associated with career indecision across samples, although several moderating effects were observed. Based on our literature review and results, we developed a theoretical model integrating subjective well-being and career indecision while positing distinct mechanisms for the bottom-up and top-down approaches. The implications of these results for a holistic approach to individual counseling are discussed from both research and practical perspectives.
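A moderating effect of the kind tested here is usually examined with an interaction term; the sketch below illustrates that generic approach in Python with simulated scores. The variable names and data are placeholders, not the Korean, US, or French samples or the models actually fitted in the thesis.

```python
# Generic moderation test: does a trait (e.g., a personality score) moderate the
# association between subjective well-being and career indecision?
# Simulated placeholder data only.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 400
swb = rng.normal(size=n)                  # subjective well-being (standardized)
trait = rng.normal(size=n)                # hypothetical moderator score
indecision = -0.4 * swb + 0.2 * trait + 0.15 * swb * trait + rng.normal(scale=1.0, size=n)

X = sm.add_constant(np.column_stack([swb, trait, swb * trait]))
model = sm.OLS(indecision, X).fit()
print(model.summary())                    # a significant interaction term indicates moderation
```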
853

Relationships between Missing Response and Skill Mastery Profiles of Cognitive Diagnostic Assessment

Zhang, Jingshun 13 August 2013 (has links)
This study explores the relationship between students’ missing responses on a large-scale assessment and their cognitive skill profiles and characteristics. Data from the 48 multiple-choice items on the 2006 Ontario Secondary School Literacy Test (OSSLT), a high school graduation requirement, were analyzed using the item response theory (IRT) three-parameter logistic model and the Reduced Reparameterized Unified Model, a cognitive diagnostic model. Missing responses were analyzed by item and by student. Item-level analyses examined the relationships among item difficulty, item order, the literacy skills targeted by the item, the cognitive skills required by the item, the percent of students not answering the item, and other features of the item. Student-level analyses examined the relationships among students’ missing responses, overall performance, cognitive skill mastery profiles, and characteristics such as gender and home language. Most students answered most items: no item was answered by fewer than 98.8% of the students; 95.5% of students had 0 missing responses, 3.2% had 1 missing response, and only 1.3% had more than 1 missing response. However, whether students responded to items was related to the student’s characteristics, including gender, whether the student had an individual education plan, and the language spoken at home, and to the item’s characteristics, such as item difficulty and the cognitive skills required to answer the item. Unlike in previous studies of large-scale assessments, the missing response rates were not higher for multiple-choice items appearing later in the timed sections. Instead, the first two items in some sections had higher missing response rates. Examination of the student-level missing response rates, however, showed that when students had high numbers of missing responses, these often represented failures to complete a section of the test. Also, if nonresponse was concentrated in items that required particular skills, the accuracy of the estimates for those skills was lower than for other skills. The results of this study have implications for test designers who seek to improve provincial large-scale assessments, and for teachers who seek to help students improve their cognitive skills and develop test-taking strategies.
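The three-parameter logistic model named above is not restated in the abstract; as a reminder, in standard textbook notation (not taken from the thesis itself) the probability of a correct response is

```latex
P(X_{ij} = 1 \mid \theta_i) \;=\; c_j + (1 - c_j)\,
  \frac{1}{1 + \exp\!\bigl[-a_j(\theta_i - b_j)\bigr]}
```

where θ_i is the student's latent proficiency and a_j, b_j, c_j are the discrimination, difficulty, and pseudo-guessing parameters of item j.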
855

An IRT model to estimate differential latent change trajectories in a multi-stage, longitudinal assessment

Shim, Hi Shin 08 April 2009 (has links)
Repeated measures designs are widely used in educational and psychological research to compare the changes exhibited in response to a treatment. Traditionally, measures of change are found by calculating difference scores (subtracting the observed initial score from the final score) for each person. However, problems such as the reliability paradox and the ambiguous meaning of change scores arise from using simple difference scores to study change. A new item response theory model is presented that estimates latent change scores instead of difference scores, addresses some of the limitations of using difference scores, and provides a direct comparison of the mean latent changes exhibited by different groups (e.g., females versus males). A simulation-based test was conducted to ascertain the viability of the model, and the results indicate that the parameters of the newly developed model can be estimated accurately. Two sets of analyses were performed on the Early Childhood Longitudinal Study-Kindergarten cohort (ECLS-K) to examine differential growth in math ability between (1) male and female students and (2) Caucasian and African American students from kindergarten through fifth grade.
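The contrast between observed difference scores and latent change can be made concrete with a simple Rasch-type formulation; the parameterization below is an illustrative sketch consistent with the description above, not necessarily the exact model developed in the thesis.

```latex
% Observed difference score for person i
d_i = y_{i,\text{final}} - y_{i,\text{initial}}

% Latent-change alternative: item responses at time t depend on a latent level,
% and the time-2 level is the time-1 level plus a person-specific latent change
P(X_{ijt} = 1 \mid \theta_{it}) = \frac{\exp(\theta_{it} - b_j)}{1 + \exp(\theta_{it} - b_j)},
\qquad \theta_{i2} = \theta_{i1} + \Delta_i

% Group comparison is then carried out on the mean latent change,
% e.g. \mu_{\Delta}^{(\text{group 1})} - \mu_{\Delta}^{(\text{group 2})}.
```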
856

A generalized partial credit FACETS model for investigating order effects in self-report personality data

Hayes, Heather 05 July 2012 (has links)
Despite its convenience, the process of self-report in personality testing can be impacted by a variety of cognitive and perceptual biases. One bias that violates local independence, a core assumption of modern test theory, is the order effect. In this bias, characteristics of an item response are impacted not only by the content of the current item but also by the accumulated exposure to previous, similar-content items. This bias is manifested as increasingly stable item responses for items that appear later in a test. Previous investigations of this effect have been rooted in classical test theory (CTT) and have consistently found that item reliabilities, or corrected item-total score correlations, increase with the item’s serial position in the test. The purpose of the current study was to more rigorously examine order effects via item response theory (IRT). To this end, the FACETS modeling approach (Linacre, 1989) was combined with the Generalized Partial Credit model (GPCM; Muraki, 1992) to produce a new model, the Generalized Partial Credit FACETS model (GPCFM). The serial position of an item serves as a facet that contributes to the item response, not only via its impact on an item’s location on the latent trait continuum but also via its impact on the item’s discrimination. Thus, the GPCFM differs from previous generalizations of the FACETS model (Wang & Liu, 2007) in that the item discrimination parameter is modified to include a serial position effect. This parameter is important because it reflects the extent to which the purported underlying trait is represented in an item score. Two sets of analyses were conducted. First, a simulation study demonstrated effective parameter recovery, though measures of error were affected by sample size for all parameters, by test length and the size of the order effect for trait-level estimates, and by an interaction between sample size and test length for item discrimination. Second, with respect to real self-report personality data, the GPCFM demonstrated good fit, as well as superior fit relative to competing nested models, while also identifying order effects in some traits, particularly Neuroticism, Openness, and Agreeableness.
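For reference, the standard GPCM writes the probability of responding in category k of item i as below; the serial-position terms shown afterward are one plausible way to let both location and discrimination depend on the facet, offered as an illustration of the idea rather than the GPCFM's exact parameterization.

```latex
% Standard GPCM (Muraki, 1992), with the convention that the empty sum (k = 0) is zero
P(X_i = k \mid \theta) =
  \frac{\exp\sum_{v=1}^{k} a_i(\theta - b_{iv})}
       {\sum_{c=0}^{m_i} \exp\sum_{v=1}^{c} a_i(\theta - b_{iv})}

% Illustrative facet extension (an assumption): let the serial position s_i of the item
% shift both the discrimination and the step locations
a_i^{*} = a_i\,(1 + \lambda\, s_i), \qquad b_{iv}^{*} = b_{iv} + \gamma\, s_i
```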
857

A psychometric investigation into the use of an adaptation of the Ghiselli predictability index in personnel selection

Twigge, Liesle 03 1900 (has links)
Thesis (MCom)--University of Stellenbosch, 2003. / ENGLISH ABSTRACT: The field of human resources involves continuous decision-making regarding the matching of the workforce with the workplace, since this match determines individuals' motivation to perform the actions associated with the workplace. If, at the time of the decision, the decision maker could obtain information on end performance, the chances of achieving the desired results would be increased. However, personnel selection is complicated by the obvious fact that information on end performance is not available at the time of the selection decision. All such decisions thus involve predictions about people's performance. The classic validity model forms the foundation of all prediction in so far as the strength of the relationship between the predictor of performance and the actual performance determines the accuracy of the predictor. Over time, numerous possibilities have been considered for increasing the magnitude of this relationship, as expressed through the validity coefficient, mostly involving modifications and/or extensions to the standard regression model. An interesting and challenging alternative to the usual multiple-regression-based attempts may be found in the work of Ghiselli (1956, 1960a, 1960b). He chose to improve prediction directly through the development of a composite predictability index that explains variance in the prediction errors resulting from an existing prediction model. It would, however, appear that the procedure has found very little, if any, practical acceptance, partly attributable to the fact that the predictability index failed to significantly explain unique variance in the criterion when added to a model already containing one or more predictors. Consequently, based on the Ghiselli idea, this research investigates the possibility of modifying such a predictability index so that it does significantly explain unique variance in the criterion when added to a model already containing one or more predictors. In addition, the study investigates whether the expansion of the prediction model is warranted by examining the effect the increase in subject predictability has on the predictive validity of the selection procedure, as well as the monetary effect it has on the utility of the procedure. Hypotheses are tested to determine the possibility of developing an index from a personality measurement that shows a strong and significant correlation with the residuals computed from the regression of the criterion on an ability predictor; to determine whether the addition of the index to an ability predictor significantly explains variance in the criterion measurement that is not yet explained by the ability predictor; and to determine whether this ability is affected by the direction in which the index has been developed. Furthermore, hypotheses are tested to determine the increment in validity and selection utility. The data for the analysis were obtained from Psytech (SA), where a validation study was performed at the Gordon Institute of Business Science using the Apil-B ability test, the Critical Reasoning Test Battery and the Organisational Personality Profile measurements to predict the performance of 100 MBA students. The results of the analysis confirmed Ghiselli's earlier findings that the traditional predictability index does not significantly explain variance in the criterion residual when added to the selection battery.
However, by modifying the Ghiselli procedure, the study found that the index was able to significantly explain variance when added to a battery already containing the predictor. When the index is based on the real values of the residuals, the addition of the predictability index to the model significantly explains unique variance in the criterion, but not when it is based on the absolute values of the residuals. It also indicated that the inclusion of the predictability index in the prediction model created a substantial increase in the validity of the selection procedure and that the increase in validity translated into a noteworthy improvement in utility. Conclusions are drawn from the obtained results and recommendations are made for future research. / AFRIKAANSE OPSOMMING: 'n Psigometriese Ondersoek na die Gebruik van 'n Aanpassing van die Ghiselli Voorspellingsindeks in Personeelkeuring: Die veld van menslike hulpbronne sluit 'n aaneenlopende besluitnemingsproses aangaande die passing van die arbeidsmag met die werkplek in, aangesien hierdie passing die individu se motivering met betrekking tot optredes wat met die werkplek geassosieer word, bepaal. Indien die besluitnemer ten tye van die besluitneming alreeds oor inligting rakende die eindprestasie van die individu beskik, sal die moontlikheid verhoog word om die gewenste resultate uit die besluitneming te verkry. Personeelkeuring word egter gekompliseer deur die voor die hand liggende feit dat inligting rakende die eindprestasie nie beskikbaar is ten tye van die keuringsbesluit nie. Alle besluite van hierdie aard sluit dus voorspellings oor individue se prestasie in. Die klassieke geldigheidsmodel vorm die basis van alle voorspellings gebaseer op die sterkte van die verwantskap tussen die voorspeller van prestasie en die werklike prestasie van die individu. Oor die jare is verskeie moontlikhede oorweeg om die sterkte van hierdie verwantskap soos uitgedruk deur die geldigheidskoëffisiënt te verhoog, hoofsaaklik deur middel van aanpassings en/of verlengings van die standaardregressiemodel. 'n Interessante en uitdagende alternatief vir die pogings gebaseer op meervoudige regressie kan gevind word in die werk van Ghiselli (1956, 1960a, 1960b). Hy poog om voorspelling direk te verbeter deur die ontwikkeling van 'n saamgestelde voorspellingsindeks wat variansie verklaar in die voorspellingsfoute verkry uit 'n bestaande voorspellingsmodel. Dit wil egter voorkom asof die voorspellingsindeks gefaal het om unieke variansie in die kriterium te verklaar wanneer dit toegevoeg word tot 'n model wat alreeds een of meer voorspellers bevat. Gebaseer op die Ghiselli-idee, ondersoek hierdie navorsing dus die moontlikheid om die voorspellingsindeks aan te pas sodat dit beduidend unieke variansie in die kriterium verklaar wanneer dit toegevoeg word tot 'n model wat alreeds een of meer voorspellers bevat. Die studie ondersoek enersyds ook die regverdiging van die uitbreiding van die voorspellingsmodel deur die impak van die verbetering in voorspelling op die voorspellingsgeldigheid van die keuringsprosedure, en andersyds bestudeer dit ook die monetêre effek op die nutwaarde van die prosedure. Hipoteses word getoets om die moontlikheid van 'n indeks, wat uit 'n persoonlikheidsmeting ontwikkel is en wat sterk en beduidend met die residue wat uit die regressie van die kriterium op die vermoënsvoorspeller bereken is, te bepaal.
Daar word ook getoets of die toevoeging van die indeks tot 'n vermoënsvoorspeller beduidende variansie in die kriteriummeting verklaar wat nie alreeds deur die vermoënsvoorspeller verklaar word nie. Daar word verder bepaal of hierdie vermoë geaffekteer word deur die rigting waarin die indeks ontwikkel is. Verder word hipoteses getoets aangaande die impak op beide die geldigheid en die nutwaarde van die keuringsprosedure. Die data vir die analises is verkry by Psytech SA, waar 'n valideringstudie uitgevoer is by die Gordon Institute of Business Science deur die gebruik van die Apil-B vermoënstoets, die Critical Reasoning Test Battery en die Organisational Personality Profile metings om die prestasie van 100 MBA studente te voorspel. Die resultate van die analise bevestig Ghiselli se vroeëre bevindings dat die tradisioneel ontwikkelde indeks nie beduidend variansie in die kriteriumresidue verklaar wanneer dit toegevoeg word tot die keuringsbattery nie. Deur egter die oorspronklike Ghiselli prosedure aan te pas word gevind dat die toevoeging van die indeks tot die regressiemodel wel beduidend unieke variansie verklaar. Die vermoë van die indeks om variansie te verklaar wanneer dit tot die battery toegevoeg word, is beduidend wanneer die indeks gebaseer word op die werklike waardes van die residue, maar toon geen beduidendheid wanneer dit gebaseer word op die absolute waardes van die residue nie. Die resultate dui ook daarop dat die insluiting van die voorspellingsindeks in die model 'n betekenisvolle toename in die voorspellingsgeldigheid van die keuringsprosedure teweegbring, en dat die toename in voorspellingsgeldigheid vertaal na 'n substantiewe styging in nut. Gevolgtrekkings word uit die verkreë resultate afgelei, en aanbevelings vir toekomstige navorsing word gemaak.
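As a concrete illustration of the procedure the abstract describes, the sketch below builds a predictability index from personality scores by regressing the residuals of an ability-based prediction on them (using the raw, signed residuals rather than their absolute values), and then checks whether adding the index increases explained criterion variance. The data and variable names are simulated placeholders, not the Psytech/GIBS validation sample or the thesis's exact procedure.

```python
# Sketch of a Ghiselli-style predictability index (assumed, simulated data).
# Step 1: predict the criterion from ability; Step 2: model the signed residuals
# from personality scores to form an index; Step 3: check the index's incremental R².
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 100
ability = rng.normal(size=n)
personality = rng.normal(size=(n, 3))                 # placeholder personality scales
criterion = 0.5 * ability + 0.3 * personality[:, 0] + rng.normal(scale=0.8, size=n)

# Step 1: baseline prediction model (criterion on ability)
base = sm.OLS(criterion, sm.add_constant(ability)).fit()
residuals = base.resid                                # signed prediction errors

# Step 2: predictability index = personality-based prediction of the signed residuals
res_model = sm.OLS(residuals, sm.add_constant(personality)).fit()
index = res_model.fittedvalues

# Step 3: does adding the index explain criterion variance beyond ability?
full = sm.OLS(criterion, sm.add_constant(np.column_stack([ability, index]))).fit()
print(f"R² ability only: {base.rsquared:.3f}  |  R² ability + index: {full.rsquared:.3f}")
```

In practice the index would need to be derived and evaluated in a way that avoids capitalizing on chance (for example, in separate derivation and validation samples); the sketch only shows the mechanics of the residual-based index.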
858

A Study of Statistical Power and Type I Errors in Testing a Factor Analytic Model for Group Differences in Regression Intercepts

January 2010 (has links)
abstract: In the past, it has been assumed that measurement and predictive invariance are consistent, so that if one form of invariance holds the other form should also hold. However, some studies have shown that the two forms of invariance hold together only under certain conditions, such as factorial invariance and invariance in the common factor variances. The present research examined Type I errors and the statistical power of a method that detects violations of the factorial invariance model in the presence of group differences in regression intercepts, under different sample sizes and different numbers of predictors (one or two). Data were simulated under two models: in model A only differences in the factor means were allowed, while model B violated invariance. A factorial invariant model was fitted to the data. Type I errors were defined as the proportion of samples in which the hypothesis of invariance was incorrectly rejected, and statistical power was defined as the proportion of samples in which the hypothesis of factorial invariance was correctly rejected. In the case of one predictor, the results show that the chi-square statistic has low power to detect violations of the model. Unexpected and systematic results were obtained regarding negative unique variance in the predictor. It is proposed that negative unique variance in the predictor can be used as an indication of measurement bias instead of the chi-square fit statistic with sample sizes of 500 or more. The results of the two-predictor case show larger power. In both cases Type I errors were as expected. The implications of the results and some suggestions for increasing the power of the method are provided. / Dissertation/Thesis / M.A. Psychology 2010
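The simulation logic described here (generate data under model A or B, fit the invariant model, count rejections) follows the generic Monte Carlo pattern sketched below. `generate_data` and `fit_invariant_model` are hypothetical placeholders standing in for the study's data-generating models and its SEM fitting step; they are not real library functions.

```python
# Generic Monte Carlo estimation of Type I error (model A) and power (model B).
# generate_data() and fit_invariant_model() are hypothetical stand-ins for the
# study's data-generating models and the factorial-invariance model fit.
import numpy as np

def monte_carlo_rejection_rate(generate_data, fit_invariant_model,
                               n_reps=1000, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(n_reps):
        data = generate_data(rng)              # one simulated sample
        p_value = fit_invariant_model(data)    # p-value of the invariance test
        rejections += p_value < alpha
    return rejections / n_reps

# type_i_error = monte_carlo_rejection_rate(gen_model_A, fit_invariant_model)  # invariance holds
# power        = monte_carlo_rejection_rate(gen_model_B, fit_invariant_model)  # invariance violated
```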
859

Assessing Dimensionality in Complex Data Structures: A Performance Comparison of DETECT and NOHARM Procedures

January 2011 (has links)
abstract: The purpose of this study was to investigate the effect of complex structure on dimensionality assessment in compensatory and noncompensatory multidimensional item response theory (MIRT) models of assessment data, using dimensionality assessment procedures based on conditional covariances (i.e., DETECT) and a factor analytic approach (i.e., NOHARM). The DETECT-based methods typically outperformed the NOHARM-based methods in both two- (2D) and three-dimensional (3D) compensatory MIRT conditions. The DETECT-based methods yielded a high proportion correct, especially when correlations were .60 or smaller, data exhibited 30% or less complexity, and sample sizes were larger. As the complexity increased and the sample size decreased, the performance typically diminished. As the complexity increased, it also became more difficult to label the resulting sets of items from DETECT in terms of the dimensions. DETECT was consistent in the classification of simple items, but less consistent in the classification of complex items. Of the three NOHARM-based methods, χ²G/D and ALR generally outperformed RMSR. χ²G/D was more accurate when N = 500 and complexity levels were 30% or lower. As the number of items increased, ALR performance improved at a correlation of .60 and 30% or less complexity. When the data followed a noncompensatory MIRT model, the NOHARM-based methods, specifically χ²G/D and ALR, were the most accurate of all five methods. The marginal proportions for labeling sets of items as dimension-like were typically low, suggesting that the methods generally failed to label two (three) sets of items as dimension-like in 2D (3D) noncompensatory situations. The DETECT-based methods were more consistent in classifying simple items across complexity levels, sample sizes, and correlations. However, as complexity and correlation levels increased, the classification rates for all methods decreased. In most conditions, the DETECT-based methods classified complex items as consistently as or more consistently than the NOHARM-based methods. In particular, as complexity, the number of items, and the true dimensionality increased, the DETECT-based methods were notably more consistent than any NOHARM-based method. Despite DETECT's consistency, when data follow a noncompensatory MIRT model, the NOHARM-based methods should be preferred over the DETECT-based methods for assessing dimensionality, due to the poor performance of DETECT in identifying the true dimensionality. / Dissertation/Thesis / Ph.D. Educational Psychology 2011
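The conditional-covariance idea behind DETECT can be illustrated with a small sketch: for each item pair, estimate the covariance of the two items conditional on the rest score (the total score excluding that pair), then average across score strata. The code below is a simplified illustration of that statistic on simulated binary responses, not the full DETECT procedure or its cluster search.

```python
# Simplified illustration of DETECT-style conditional covariances (simulated data).
# For each item pair (i, j): condition on the rest score (total minus items i and j),
# compute the within-stratum covariance of the pair, and average across strata.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(7)
n_persons, n_items = 1000, 10
responses = (rng.random((n_persons, n_items)) < 0.6).astype(int)   # placeholder 0/1 data

def conditional_covariance(x, i, j):
    rest = x.sum(axis=1) - x[:, i] - x[:, j]          # rest score for the pair
    covs, weights = [], []
    for s in np.unique(rest):
        mask = rest == s
        if mask.sum() > 1:
            covs.append(np.cov(x[mask, i], x[mask, j])[0, 1])
            weights.append(mask.sum())
    return np.average(covs, weights=weights)

pair_ccov = {(i, j): conditional_covariance(responses, i, j)
             for i, j in combinations(range(n_items), 2)}
# DETECT then weights these pairwise values by +1/-1 depending on whether a candidate
# partition places the two items in the same cluster, and searches for the maximizing partition.
print(np.round(np.mean(list(pair_ccov.values())), 4))
```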
860

An Investigation of Power Analysis Approaches for Latent Growth Modeling

January 2011 (has links)
abstract: Designing studies that use latent growth modeling to investigate change over time calls for optimal approaches for conducting power analysis for a priori determination of required sample size. This investigation (1) studied the impacts of variations in specified parameters, design features, and model misspecification in simulation-based power analyses and (2) compared power estimates across three common power analysis techniques: the Monte Carlo method; the Satorra-Saris method; and the method developed by MacCallum, Browne, and Cai (MBC). Choice of sample size, effect size, and slope variance parameters markedly influenced power estimates; however, level-1 error variance and number of repeated measures (3 vs. 6) when study length was held constant had little impact on resulting power. Under some conditions, having a moderate versus small effect size or using a sample size of 800 versus 200 increased power by approximately .40, and a slope variance of 10 versus 20 increased power by up to .24. Decreasing error variance from 100 to 50, however, increased power by no more than .09 and increasing measurement occasions from 3 to 6 increased power by no more than .04. Misspecification in level-1 error structure had little influence on power, whereas misspecifying the form of the growth model as linear rather than quadratic dramatically reduced power for detecting differences in slopes. Additionally, power estimates based on the Monte Carlo and Satorra-Saris techniques never differed by more than .03, even with small sample sizes, whereas power estimates for the MBC technique appeared quite discrepant from the other two techniques. Results suggest the choice between using the Satorra-Saris or Monte Carlo technique in a priori power analyses for slope differences in latent growth models is a matter of preference, although features such as missing data can only be considered within the Monte Carlo approach. Further, researchers conducting power analyses for slope differences in latent growth models should pay greatest attention to estimating slope difference, slope variance, and sample size. Arguments are also made for examining model-implied covariance matrices based on estimated parameters and graphic depictions of slope variance to help ensure parameter estimates are reasonable in a priori power analysis. / Dissertation/Thesis / Ph.D. Educational Psychology 2011
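The Monte Carlo approach compared above amounts to repeatedly simulating growth data with a known slope difference, fitting the model, and recording how often the difference is detected. The sketch below illustrates that loop with a deliberately simplified analysis (an OLS time-by-group interaction on long-format data rather than a full latent growth SEM); sample sizes, effect sizes, and variances are arbitrary placeholders, not the study's conditions.

```python
# Simplified Monte Carlo power estimate for a group difference in growth slopes.
# A real application would fit a latent growth model with SEM software; here an
# OLS time-by-group interaction stands in to show the structure of the power loop.
import numpy as np
import statsmodels.api as sm

def simulate_power(n_per_group=100, n_waves=3, slope_diff=0.3,
                   slope_sd=3.0, error_sd=10.0, n_reps=500, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    n_total = 2 * n_per_group
    person = np.repeat(np.arange(n_total), n_waves)                 # person id per row
    time = np.tile(np.arange(n_waves), n_total)                     # measurement occasion
    group = np.repeat((np.arange(n_total) >= n_per_group).astype(float), n_waves)
    X = sm.add_constant(np.column_stack([time, group, time * group]))
    hits = 0
    for _ in range(n_reps):
        slopes = rng.normal(0.0, slope_sd, n_total)                 # person-specific random slopes
        y = (slopes[person] + slope_diff * group) * time + rng.normal(0.0, error_sd, time.size)
        fit = sm.OLS(y, X).fit()
        hits += fit.pvalues[3] < alpha                              # time x group interaction detected?
    return hits / n_reps

# print(simulate_power())  # proportion of significant replications = estimated power
```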
