1

Score Reliability of Adolescent Alcohol Screening Measures: A Meta-Analytic Inquiry

Shields, Alan, Campfield, Delia C., Miller, Christopher S., Howell, Ryan T., Wallace, Kimberly, Weiss, Roger D. 20 August 2008
This study describes the reliability reporting practices in empirical studies using eight adolescent alcohol screening tools and characterizes and explores variability in internal consistency estimates across samples. Of 119 observed administrations of these instruments, 40 (34%) reported usable reliability information. The Personal Experience Screening Questionnaire-Problem Severity scale generated average reliability estimates exceeding 0.90 (95% CI = 0.90-0.96), and the Adolescent Alcohol Involvement Scale generated average score reliability estimates below 0.80 (95% CI = 0.67-0.85). Average reliability estimates of the remaining instruments were distributed between these extremes. Sample characteristics were identified as potentially important predictors of variability in the reliability estimates of all the instruments, and all instruments under evaluation generated more reliable scores in clinical settings (M = 0.89) than in nonclinical settings (M = 0.82; r effect size(38) = 0.29, p < .10). Clinicians facing instrument selection decisions can use these data to guide their choices, and researchers evaluating the performance of these instruments can use these data to inform their future studies.
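As a sketch of the aggregation step such a reliability generalization performs, the following Python fragment pools alpha coefficients weighted by sample size and contrasts clinical with nonclinical administrations. The data values, the weighting scheme, and the function name are illustrative assumptions, not the study's actual data or procedure.

```python
# Minimal illustrative sketch: sample-size-weighted pooling of alpha
# estimates. Data and weighting scheme are invented for illustration.

def weighted_mean_alpha(records):
    """records: list of (alpha, n) pairs; returns the n-weighted mean alpha."""
    total_n = sum(n for _, n in records)
    return sum(alpha * n for alpha, n in records) / total_n

# Hypothetical (alpha, n) pairs for administrations of one screening tool.
clinical = [(0.91, 250), (0.88, 120), (0.90, 310)]
nonclinical = [(0.83, 400), (0.80, 150), (0.84, 90)]

print(f"clinical mean alpha:    {weighted_mean_alpha(clinical):.2f}")
print(f"nonclinical mean alpha: {weighted_mean_alpha(nonclinical):.2f}")
```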
2

The Michigan Alcoholism Screening Test and Its Shortened Form: A Meta-Analytic Inquiry Into Score Reliability

Shields, Alan L., Howell, Ryan T., Potter, Jennifer Sharpe, Weiss, Roger D. 01 September 2007
Meta-analytic methods provide a framework around which an inquiry into MAST and SMAST score reliability was completed. Of the 470 measurement opportunities observed between 1971 and 2005, 62 (13.2%) were coupled with accurate reliability information. Weighted reliability estimates centered on .80, suggesting that the MAST and SMAST generally produce scores of similar and adequate reliability for most research purposes. However, the variability of internal consistency estimates shows that at times these tools will not produce reliable scores, particularly among female and nonclinical respondents. Multiple regression equations provide practical guidelines to improve reliability estimates for the future use of these instruments.
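The "multiple regression equations" mentioned above predict expected score reliability from sample and methodological characteristics. Below is a minimal sketch of that idea with invented predictors, coefficients, and data; it is not the authors' actual model.

```python
import numpy as np

# Hypothetical moderator regression: predict observed alpha estimates from
# sample characteristics. All predictors and values are invented.
alphas = np.array([0.84, 0.79, 0.88, 0.75, 0.82, 0.86])
prop_female = np.array([0.40, 0.70, 0.30, 0.80, 0.55, 0.35])
clinical = np.array([1.0, 0.0, 1.0, 0.0, 1.0, 1.0])  # 1 = clinical sample

# Ordinary least squares via the normal-equations solver in NumPy.
X = np.column_stack([np.ones_like(alphas), prop_female, clinical])
coef, *_ = np.linalg.lstsq(X, alphas, rcond=None)
print("intercept, b_female, b_clinical:", np.round(coef, 3))

# Expected reliability for a new nonclinical, 60%-female sample.
print("predicted alpha:", round(float(coef @ [1.0, 0.60, 0.0]), 3))
```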
3

The File Drawer Problem in Reliability Generalization: A Strategy to Compute a Fail-Safe N With Reliability Coefficients

Howell, Ryan, Shields, Alan L. 01 January 2008
Meta-analytic reliability generalizations (RGs) are limited by the scarcity of reliability reporting in primary articles, and currently, RG investigators lack a method to quantify the impact of such nonreporting. This article introduces a stepwise procedure to address this challenge. First, the authors introduce a formula that allows researchers to estimate the lower-bound population average reliability for a desired instrument. Second, they present an equation to determine the Fail-Safe N for RG. This equation estimates the number of "file drawer" studies required to drop the aggregate score reliability of an instrument below a specified criterion value. Finally, the authors demonstrate the utility of these equations using published RG studies. Comments on the conclusions drawn from each RG application are provided.
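The abstract does not reproduce the equations themselves; the following is one plausible form of a Fail-Safe N for reliability, reconstructed by analogy with Rosenthal's fail-safe N, and should be read as an assumption rather than the authors' published formula. With k reporting studies averaging reliability r-bar, a criterion value r_c, and each unreported "file drawer" study assumed to contribute reliability r_fd:

```latex
% Assumed reconstruction, not necessarily the authors' published equation:
\frac{k\,\bar{r} + N\,r_{fd}}{k + N} = r_c
\quad\Longrightarrow\quad
N = \frac{k\,(\bar{r} - r_c)}{r_c - r_{fd}}, \qquad r_{fd} < r_c < \bar{r}.
```

For example, if k = 40 reporting studies average r-bar = .85, the criterion is r_c = .80, and file-drawer studies are assumed to contribute r_fd = .60, then N = 40(.05)/(.20) = 10 unreported studies would suffice to drop the aggregate below the criterion.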
4

Review and Evaluation of Reliability Generalization Research

Henchy, Alexandra Marie 01 January 2013
Reliability Generalization (RG) is a meta-analytic method that examines the sources of measurement error variance in scores across multiple studies that use a certain instrument or group of instruments measuring the same construct (Vacha-Haase, Henson, & Caruso, 2002). Researchers have been conducting RG studies for over 10 years, since the method was first discussed by Vacha-Haase (1998). Henson and Thompson (2002) noted that, because RG is not a monolithic technique, researchers can conduct RG studies in a variety of ways and include diverse variables in their analyses. Differing recommendations exist regarding how researchers should retrieve, code, and analyze information when conducting RG studies, and these differences can affect the conclusions drawn from meta-analytic studies such as RG (Schmidt, Oh, & Hayes, 2009). The present study is the first comprehensive review of both current RG practices and RG recommendations. Based upon the prior research findings of other meta-analytic review papers (e.g., Dieckmann, Malle, & Bodner, 2009), the overarching hypothesis was that there would be differences between current RG practices and best-practice recommendations for RG studies. Data consisted of 64 applied RG studies and recommendation papers, book chapters, and unpublished papers/conference papers. The characteristics examined included how RG researchers: (a) collected studies, (b) organized studies, (c) coded studies, (d) analyzed their data, and (e) reported their results. The results showed that although applied RG researchers followed some of the recommendations (e.g., RG researchers examined sample characteristics that influenced reliability estimates), there were some recommendations that RG researchers did not follow (e.g., the majority of researchers did not conduct an a priori power analysis). The results can draw RG researchers' attention to areas where there is a disconnect between practice and recommendations, as well as provide a benchmark for assessing future improvement in RG implementation.
5

Assessment of the Therapeutic Alliance Scales: A Reliability and Validity Meta-Analytic Evaluation

Bouchard, Danielle 29 June 2018
Extensive research has been conducted on the construct of therapeutic alliance. With the growing emphasis on evidence-based practice in psychology, it is vital that measures used in both clinical and research settings are empirically well-suited for the population under investigation. However, many measurement issues related to the reliability and validity of the alliance construct remain unaddressed or unresolved. Two studies were designed to add to the scientific evidence on the therapeutic alliance by establishing empirical evidence of the psychometric properties of this construct's most commonly used measures, with the intention of identifying the most psychometrically sound alliance measures. This was done first by systematically reviewing the literature to identify studies that used the most commonly used alliance measures. Next, key psychometric properties of each measure (internal reliability and predictive validity) were reviewed where the alliance was assessed in the context of individual adult psychotherapy. In the first study, I conducted a reliability generalization analysis to (a) estimate the average reliability coefficient of each alliance measure identified in the systematic review and (b) examine the potential influence that study characteristics may have had on the reliability estimates. Six different alliance measures were included (Agnew Relationship Measure, California Psychotherapy Alliance Scales, Counselor Rating Form, Penn Alliance Scales, Therapeutic Bond Scale, Working Alliance Inventory), in various formats and rater versions, resulting in a total of 17 alliance measure variants for this first analysis. In the second study, I conducted a validity generalization analysis using only those studies from the first study that were identified as containing outcome data. The purpose of this study was to synthesize the alliance-outcome effect sizes reported for the most commonly used therapeutic alliance measures and to assess the potential impact study characteristics may have on those effect sizes. Six different alliance measures (California Psychotherapy Alliance Scales, Counselor Rating Form, Penn Alliance Scales, Therapeutic Bond Scale, Working Alliance Inventory, Vanderbilt Therapeutic Alliance Scale) were included, in various formats and rater versions, resulting in a total of 15 alliance measure variants for this analysis. This second study differed from previous alliance-outcome meta-analyses in that I included only studies that (a) could be identified as providing psychotherapy, as opposed to other mental health services, (b) assessed the alliance in individual adult psychotherapy, (c) were identified as using the most commonly used alliance measures, and (d) measured the alliance at the midpoint of treatment or earlier. It also differed from previous meta-analyses in that I conducted separate analyses for correlational data and partial correlational data. The reliability generalization study found that the majority of the alliance measures were good choices for assessing the alliance based on their mean reliability coefficients. The validity generalization study found essentially no difference in the early alliance's ability to predict treatment outcomes in individual adult psychotherapy between full correlation data (r = .24) and partial correlation data (r = .23). There was also no difference among the different alliance measures, or their variants, in their ability to predict treatment outcomes, suggesting that no one alliance measure is statistically better at predicting outcomes. The results from both studies suggest that, based on their overall level of reliability and their ability to predict treatment outcomes, both researchers and clinicians should consider these measures, with few exceptions, comparably good choices for assessing the alliance in individual adult psychotherapy.
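Pooling correlational effect sizes such as the r = .24 reported here is commonly done on Fisher's z scale with (n − 3) weights; the sketch below assumes that standard approach, since the abstract does not state the dissertation's exact weighting scheme, and uses invented study values.

```python
import math

def pool_correlations(records):
    """records: list of (r, n) pairs; inverse-variance-weighted mean on
    Fisher's z scale, back-transformed to r."""
    num = sum((n - 3) * math.atanh(r) for r, n in records)
    den = sum(n - 3 for _, n in records)
    return math.tanh(num / den)

# Hypothetical study-level alliance-outcome correlations and sample sizes.
studies = [(0.21, 80), (0.30, 45), (0.24, 120), (0.19, 60)]
print(f"pooled r = {pool_correlations(studies):.2f}")
```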
6

Assessing the Reliability of Scores Produced by the Substance Abuse Subtle Screening Inventory

Miller, Christopher S., Woodson, Joshua, Howell, Ryan T., Shields, Alan L. 04 December 2009
The Substance Abuse Subtle Screening Inventory (SASSI) is a 10-scale indirect screening instrument used to detect substance use disorders. The current meta-analytic study described reliability reporting practices across 48 studies involving the SASSI. Reliability generalization methods were then employed to evaluate typical score reliability for the screening measure. Results showed that approximately 73% of studies did not report reliability estimates. Analysis of data from the remaining studies revealed adequate reliability for the total scale (α = .87) and face-valid scales (FVA α = .88 and FVOD α = .92), but substantially lower reliability estimates for the indirect scales (range of α = .23-.65). The study's findings underscore the need for improved reliability reporting for the SASSI and suggest cautious use of the measure, especially its indirect scales, as an indicator of problematic substance use/abuse in clinical settings.
7

Substance Use Scales of the Minnesota Multiphasic Personality Inventory: An Exploration of Score Reliability via Meta-Analysis

Miller, Christopher S., Shields, Alan L., Campfield, Delia, Wallace, Kim A., Weiss, Roger D. 01 January 2007
Three drug and alcohol use screening scales are embedded within the Minnesota Multiphasic Personality Inventory-2: the MacAndrew Alcoholism Scale (MAC) and its revised version (MAC-R), the Addiction Acknowledgement Scale (AAS), and the Addiction Potential Scale (APS). The current study evaluated the reliability reporting practices among 210 studies administering the MAC/MAC-R, APS, and/or AAS. Furthermore, reliability generalization methods were used to characterize the previously reported reliability estimates associated with each instrument. The vast majority of studies (90.6%) did not provide measurement reliability data, suggesting a need for improved psychometric reporting. Data from the remaining studies yielded mean and median score reliability estimates below .70 for each of the identified measures. Although limited in some instances by sample size constraints, results suggest that these instruments tend not to produce scores with acceptable levels of reliability for most research or clinical situations.
8

Assessing the Reliability of Scores Produced by the Substance Abuse Subtle Screening Inventory (SASSI)

Woodson, Joshua A. 03 May 2008
The fundamental principle that reliability is a property of scores and not of instruments provides the foundation of a meta-analytic technique called reliability generalization (RG). RG studies characterize the reliability of scores generated by a given instrument and identify methodological and sample characteristics that contribute to the variability in the reliability of those scores. The present study is an RG of the Substance Abuse Subtle Screening Inventory (SASSI). Reliability estimates were obtained from 19.8% of studies using the SASSI. Bivariate correlations revealed strong, positive correlations between SASSI score reliability and score variability of the Subtle Attributes (r = .877, p < .05) and Family History (r = .892, p < .05) subscales and between score reliability and ethnicity for both the Family History (r = .683, p < .05) and Tendency to Involvement in Correctional Setting (r = .76, p < .05) subscales.
9

Reliability Generalization: A Systematic Review and Evaluation of Meta-Analytic Methodology and Reporting Practice

Holland, David F. (Educational consultant)
Reliability generalization (RG) is a method for meta-analysis of reliability coefficients to estimate average score reliability across studies, determine variation in reliability, and identify study-level moderator variables influencing score reliability. A total of 107 peer-reviewed RG studies published from 1998 to 2013 were systematically reviewed to characterize the meta-analytic methods employed and to evaluate quality of reporting practice against standards for transparency in meta-analysis reporting. Most commonly, RG studies meta-analyzed alpha coefficients, which were synthesized using an unweighted, fixed-effects model applied to untransformed coefficients. Moderator analyses most frequently included multiple regression and bivariate correlations employing a fixed-effects model on untransformed, unweighted coefficients. Based on a unit-weighted scoring system, mean reporting quality for RG studies was statistically significantly lower than that for a comparison study of 198 meta-analyses in the organizational sciences across 42 indicators; however, means were not statistically significantly different between the two studies when evaluating reporting quality on 18 indicators deemed essential to ethical reporting practice in meta-analyses. Since its inception, a wide variety of statistical methods have been applied to RG, and meta-analysis of reliability coefficients has extended to fields outside of psychological measurement, such as medicine and business. A set of guidelines for conducting and reporting RG studies is provided.
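One of the methodological choices this review distinguishes is whether alphas are pooled untransformed or first transformed, for example onto Bonett's ln(1 − α) scale. The sketch below contrasts the two using simple unweighted means (a simplifying assumption for brevity; weighted variants exist) and invented coefficients.

```python
import math

# Contrast two pooling choices for alpha coefficients: an unweighted mean
# of raw alphas vs. pooling on the ln(1 - alpha) scale and back-transforming.
# Coefficients are invented; unweighted means are a simplifying assumption.
alphas = [0.70, 0.85, 0.92, 0.78, 0.88]

raw_mean = sum(alphas) / len(alphas)

mean_log = sum(math.log(1 - a) for a in alphas) / len(alphas)
transformed_mean = 1 - math.exp(mean_log)

print(f"unweighted mean of raw alphas: {raw_mean:.3f}")
print(f"pooled on ln(1 - alpha) scale: {transformed_mean:.3f}")
```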
10

Reliability Generalization of the Alcohol Use Disorders Identification Test

Patel, Chandni 12 August 2008
The Alcohol Use Disorders Identification Test (AUDIT) is a brief screening instrument for assessing alcohol use problems among adults. This instrument is widely used, and continued evaluation of its psychometric performance is needed. Reliability and validity are the primary psychometric characteristics of interest when evaluating psychological instruments. The focus of the present study is on reliability, which reflects the consistency or repeatability of the scores produced by a given instrument. Using meta-analytic methods, results showed that approximately 65% of previously published studies using the AUDIT did not appropriately report reliability estimates. Among the remaining studies, the weighted reliability estimate centered on .81 (SD = .07), suggesting that the AUDIT generally produces scores of adequate reliability for most research purposes. Multiple regression equations showed that, among a variety of sample and methodological characteristics, the standard deviation of scores was the only statistically significant predictor of the variability in AUDIT score reliability estimates.
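The finding that score standard deviation predicts reliability is consistent with classical test theory, in which reliability is the ratio of true-score variance to observed-score variance; this is a standard result rather than anything specific to this thesis:

```latex
% Classical test theory: reliability as a variance ratio.
\rho_{XX'} = \frac{\sigma^2_T}{\sigma^2_X} = 1 - \frac{\sigma^2_E}{\sigma^2_X}
```

Holding error variance roughly constant, samples with greater observed-score spread yield higher reliability coefficients, which is why score SD can dominate moderator analyses like this one.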
