1

Sensitivity of Value Added School Effect Estimates to Different Model Specifications and Outcome Measures

Pride, Bryce L. 01 January 2012
The Adequate Yearly Progress (AYP) Model has been used to make many high-stakes decisions concerning schools, though it does not provide a complete assessment of student academic achievement and school effectiveness. To provide a clearer perspective, many states have implemented various Growth and Value Added Models in addition to AYP. The purpose of this study was to examine two Value Added Model specifications, the Gain Score Model and the Layered Effects Model, to understand similarities and differences in school effect results. Specifically, this study correlated value added school effect estimates derived from two model specifications and two outcome measures (mathematics and reading test scores). Existing data were obtained from a moderately large, rural school district in Florida. The outcome measures of 7,899 unique students were examined using the Gain Score Model and the Layered Effects Model to estimate school effects, and those estimates were then used to calculate and examine the relationship between school rankings. Overall, the findings indicated that school effect estimates and school rankings were more sensitive to outcome measures than to model specifications. The mathematics and reading correlations from the Gain Score Model for school effects and school rankings were low (indicating high sensitivity) when advancing from Grade 4 to 5, and moderate in the other grades. The mathematics and reading correlations from the Layered Effects Model were low at Grade 5 for school effects and school rankings, as were the correlations at Grade 7 for school rankings; in the other grades, correlations were moderate to high (indicating lower sensitivity). Correlations between the Gain Score Model and the Layered Effects Model for mathematics were high in each grade for both school effects and school rankings, and reading correlations were also high in each grade. These results were similar to the findings of previous school effects research and add to the limited body of literature. Depending upon the outcome measure used, school effects and rankings can vary significantly under Value Added Models. These models have become a popular component of educational accountability systems, yet no single model is perfect; if used, they should be applied cautiously and alongside other accountability approaches.
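
The comparison described in this abstract can be illustrated with a minimal, hypothetical sketch (Python; not the study's code or data). Under a Gain Score Model, a school's effect is taken here as its students' average score gain, and sensitivity to the outcome measure is checked by correlating the school rankings produced by mathematics and by reading. All column names, the helper function, and the toy data are illustrative assumptions.

```python
import pandas as pd
from scipy.stats import spearmanr

def school_effects_gain(df: pd.DataFrame, outcome: str) -> pd.Series:
    """Gain Score Model idea: a school's effect is its students' average score gain."""
    gains = df[f"{outcome}_current"] - df[f"{outcome}_prior"]
    return gains.groupby(df["school_id"]).mean()

# Hypothetical student-level records for a single grade (illustrative values only).
students = pd.DataFrame({
    "school_id":    ["A", "A", "B", "B", "C", "C"],
    "math_prior":   [300, 310, 295, 305, 320, 315],
    "math_current": [330, 335, 310, 318, 355, 350],
    "read_prior":   [280, 290, 285, 275, 300, 310],
    "read_current": [305, 312, 295, 288, 335, 340],
})

math_effects = school_effects_gain(students, "math")
read_effects = school_effects_gain(students, "read")

# Rank schools by each estimated effect (1 = largest) and correlate the two rankings;
# a low correlation would indicate sensitivity to the outcome measure.
math_ranks = math_effects.rank(ascending=False)
read_ranks = read_effects.rank(ascending=False)
rho, _ = spearmanr(math_ranks, read_ranks)
print(f"Spearman correlation of math vs. reading school rankings: {rho:.2f}")
```

The same ranking-and-correlation step would apply to effects estimated by a Layered Effects Model; only the estimation of the per-school effects differs.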
2

Comparison of value-added models for school ranking and classification: a Monte Carlo study

Wang, Zhongmiao 15 May 2009
A “Value-Added” definition of school effectiveness calls for evaluating schools based on their unique contribution to individual student academic growth. Estimates of value-added school effectiveness are commonly used to rank and classify schools. This simulation study examined and compared the validity of school effectiveness estimates from four statistical models for school ranking and classification. The simulation was conducted under two sample size conditions and under situations typical of school effectiveness research. The Conditional Cross-Classified Model (CCCM) was used to simulate data. The findings indicated that the gain score model adjusting for students’ test scores at the end of kindergarten (i.e., prior to entering elementary school) (Gain_kindergarten) could validly rank and classify schools. The other models, including the gain score model adjusting for students’ test scores at the end of Grade 4 (i.e., one year before estimating school effectiveness in Grade 5) (Gain_grade4), the Unconditional Cross-Classified Model (UCCM), and the Layered Mixed Effect Model (LMEM), could not validly rank or classify schools. The failure of the UCCM indicated that ignoring covariates distorts school rankings and classifications if no other analytical remedies are applied. The failure of the LMEM indicated that estimating correlations among repeated measures could not offset the damage caused by the omitted covariates. The failure of the Gain_grade4 model cautioned against adjusting for the previous year’s test scores. The success of the Gain_kindergarten model indicated that, under some circumstances, valid school rankings and classifications can be achieved with only two time points of data.
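
The Monte Carlo logic can likewise be sketched in a highly simplified form. The snippet below is not the study's Conditional Cross-Classified Model; it only illustrates the evaluation idea of generating data with known school effects, estimating effects under competing specifications (here, an unadjusted mean loosely resembling the UCCM and a gain from a pre-school baseline loosely resembling Gain_kindergarten), and correlating each set of estimated rankings with the true ranking. The data-generating process and all parameter values are illustrative assumptions.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_schools, n_per_school = 50, 40

# Known (simulated) school effects, plus school-level differences in student ability
# that act as an omitted covariate for any model that ignores them.
true_effect = rng.normal(0.0, 1.0, n_schools)
school_mean_ability = rng.normal(0.0, 1.0, n_schools)

school = np.repeat(np.arange(n_schools), n_per_school)
ability = school_mean_ability[school] + rng.normal(0.0, 1.0, school.size)

# Kindergarten score reflects ability only; the grade-5 score adds the cumulative school effect.
score_k = 50 + 5 * ability + rng.normal(0.0, 2.0, school.size)
score_g5 = score_k + 3 * true_effect[school] + rng.normal(0.0, 2.0, school.size)

def school_means(values: np.ndarray) -> np.ndarray:
    """Per-school mean of a student-level quantity."""
    return np.array([values[school == s].mean() for s in range(n_schools)])

estimates = {
    # Ignores the ability covariate entirely (an unadjusted, UCCM-like mean).
    "Unadjusted grade-5 mean": school_means(score_g5),
    # Adjusts for the pre-school baseline (a Gain_kindergarten-like gain score).
    "Gain from kindergarten": school_means(score_g5 - score_k),
}

# A validity check in this setting: how well does each estimator's ranking
# recover the ranking implied by the true, simulated school effects?
for name, est in estimates.items():
    rho, _ = spearmanr(est, true_effect)
    print(f"{name}: rank correlation with true school effects = {rho:.2f}")
```

In a full replication of the study's design, the Gain_grade4 and LMEM specifications would be estimated on the same simulated data and judged by the same comparison against the known school effects.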