1 | Modeling cross-classified data with and without the crossed factors' random effects' interaction
Wallace, Myriam Lopez, 08 September 2015
The present study investigated estimation of the variance of the cross-classified factors' random effects' interaction for cross-classified data structures. Results for two different three-level cross-classified random effects models (CCREM) were compared: Model 1 estimated this variance component, while Model 2 assumed its value was zero and did not estimate it. The second model is the one most commonly assumed by researchers using a CCREM to analyze cross-classified data structures.

These two models were first applied to a real-world data set, and the parameter estimates from the two estimating models were compared. The results of this analysis served as a guide for choosing generating parameter values for the Monte Carlo simulation that followed. The simulation compared the two estimating models under several manipulated conditions and assessed their impact on parameter recovery. The manipulated conditions included classroom sample size, the structure of the cross-classification, the intra-unit correlation coefficient (IUCC), and the cross-classified factors' variance component values. Relative parameter and standard error bias were calculated for the fixed effect coefficient estimates, the random effects' variance components, and the standard errors associated with both.

When Model 1 was used to estimate the simulated data, no substantial bias was found for any of the parameter estimates or their associated standard errors, including under the conditions with the smallest average within-cell sample size (4 students). When Model 2 was used, substantial bias occurred for the level-1 and level-2 variance components, and several of the manipulated conditions affected the magnitude of the bias in these variance estimates.
Given that level-1 and level-2 variance components can often inform researchers' decisions about factors of interest, such as classroom effects, assessing possible bias in these estimates is important. The results are discussed, followed by implications and recommendations for applied researchers who use a CCREM to estimate cross-classified data structures.
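The relative bias measures described in the abstract can be sketched as follows. This is a minimal illustration of the standard definitions (relative parameter bias against the generating value; relative standard error bias against the empirical standard deviation of the replicate estimates); the generating value and the simulated replicate arrays are illustrative placeholders, not values from the dissertation.

```python
import numpy as np

def relative_bias(estimates, generating_value):
    """Relative parameter bias: (mean estimate - generating value) / generating value."""
    return (np.mean(estimates) - generating_value) / generating_value

def relative_se_bias(estimated_ses, estimates):
    """Relative SE bias: mean estimated SE compared with the empirical SD
    of the estimates across Monte Carlo replications."""
    empirical_sd = np.std(estimates, ddof=1)
    return (np.mean(estimated_ses) - empirical_sd) / empirical_sd

# Hypothetical Monte Carlo output: 1,000 replicate estimates of a fixed
# effect with generating value 0.5, and their estimated standard errors.
rng = np.random.default_rng(0)
true_value = 0.5
estimates = rng.normal(true_value, 0.05, size=1000)
estimated_ses = rng.normal(0.05, 0.005, size=1000)

print(relative_bias(estimates, true_value))
print(relative_se_bias(estimated_ses, estimates))
```

A common rule of thumb treats relative bias within roughly ±5% (parameters) or ±10% (standard errors) as "no substantial bias," which is the kind of criterion the abstract's conclusions rest on.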
2 | The Variation of a Teacher's Classroom Observation Ratings across Multiple Classrooms
Lei, Xiaoxuan, 06 January 2017
Classroom observations have been increasingly used for teacher evaluation, so it is important to examine the measurement quality and the use of observation ratings. When a teacher is observed in multiple classrooms, his or her ratings may vary across classrooms; in that case, ratings from a single classroom per teacher may not adequately represent that teacher's quality of instruction. However, the fact that classrooms are nested within teachers is usually not considered when classroom observation data are analyzed.

Drawing on the Measures of Effective Teaching dataset, this dissertation examined the variation of a teacher's classroom observation ratings across his or her multiple classrooms. To account for teacher-level, school-level, and rater-level variation, a cross-classified random effects model was used for the analysis. Two research questions were addressed: (1) What is the variation of a teacher's classroom observation ratings across multiple classrooms? (2) To what extent is the classroom-level variation within teachers explained by observable classroom characteristics?

The results suggested that classrooms accounted for 4.9% to 14.7% of the variance in observation ratings for math classrooms and 6.7% to 15.5% for English Language Arts classrooms. The results also showed that the classroom characteristics examined (class size, percent of minority students, percent of male students, percent of English language learners, percent of students eligible for free or reduced-price lunch, and percent of students with disabilities) contributed little to explaining the classroom-level variation in the ratings. These findings indicate that teachers' multiple classrooms should be taken into consideration when classroom observation ratings are used to evaluate teachers in high-stakes settings.
In addition, other classroom-level factors that could contribute to explaining the classroom-level variation in classroom observation ratings should be investigated in future research.
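The classroom-level variance shares reported above come from partitioning the total variance across the model's crossed and nested levels. A minimal sketch of that calculation follows; the variance component values and level names are illustrative placeholders, not estimates from the dissertation.

```python
# Hypothetical variance components from a fitted cross-classified random
# effects model of observation ratings. The classroom component captures
# variation among a teacher's classrooms, net of teacher, school, and rater.
components = {
    "teacher": 0.30,
    "classroom": 0.06,  # classrooms within teachers
    "school": 0.10,
    "rater": 0.08,
    "residual": 0.46,
}

total = sum(components.values())
shares = {level: var / total for level, var in components.items()}

for level, share in shares.items():
    print(f"{level}: {share:.1%}")
# With these placeholder values, the classroom share is 6.0%,
# in the range of the 4.9%-15.5% shares reported in the abstract.
```

Each share is simply one component divided by the sum of all components, so the shares always sum to 100%; a larger classroom share signals that a single observed classroom is a noisier proxy for the teacher.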