1

Extension of the cross-classified multiple membership growth curve model for longitudinal data

Li, Jie, 05 December 2013
Student mobility is a common phenomenon in longitudinal educational data, and its characteristics create problems for the conventional multilevel model. Grady and Beretvas (2010) introduced the cross-classified multiple membership growth curve (CCMM-GCM) model to handle student mobility over time by capturing the complex higher-level clustering structure in the data. The CCMM-GCM model has some limitations, however. By creating dummy-coded indicators for each measurement occasion, the new model improves estimation accuracy and provides a simpler, more flexible structure at the higher levels. This study provides some support that the new model fits a dataset better than the CCMM-GCM model.
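The multiple-membership idea in the abstract above can be sketched numerically: each measurement occasion receives a weighted combination of the random effects of every school the student has attended, with the weights on each occasion summing to one. This is a minimal illustration; the school effects and weight pattern below are made-up values, not estimates from the dissertation.

```python
import numpy as np

rng = np.random.default_rng(0)

n_schools = 3
# School random effects u_j ~ N(0, tau^2); illustrative draws only.
school_effects = rng.normal(0.0, 1.0, size=n_schools)

# Rows = measurement occasions, columns = schools attended.
# A mobile student splits weight across schools after moving.
membership_weights = np.array([
    [1.0, 0.0, 0.0],   # occasion 1: enrolled only in school 0
    [1.0, 0.0, 0.0],   # occasion 2: still in school 0
    [0.5, 0.5, 0.0],   # occasion 3: time split between schools 0 and 1
    [0.0, 0.5, 0.5],   # occasion 4: time split between schools 1 and 2
])

# Multiple-membership contribution to each occasion's predicted outcome:
# a weighted sum of the effects of all schools attended so far.
mm_contribution = membership_weights @ school_effects
```

Each row of `membership_weights` plays the role of the multiple-membership weight vector for one occasion; the dummy-coded occasion indicators described in the abstract would enter the fixed part of the model alongside this term.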
2

Modeling cross-classified data with and without the crossed factors' random effects' interaction

Wallace, Myriam Lopez, 08 September 2015
The present study investigated estimation of the variance of the cross-classified factors’ random effects’ interaction for cross-classified data structures. Results for two different three-level cross-classified random effects models (CCREMs) were compared: Model 1 estimated this variance component, while Model 2 assumed its value was zero and did not estimate it. The second model is the one most commonly assumed by researchers using a CCREM to analyze cross-classified data structures. These two models were first applied to a real-world data set, and the parameter estimates from the two models were compared. The results of this analysis guided the choice of generating parameter values for the Monte Carlo simulation that followed. The Monte Carlo simulation compared the two estimating models under several manipulated conditions and assessed their impact on parameter recovery. The manipulated conditions included classroom sample size, the structure of the cross-classification, the intra-unit correlation coefficient (IUCC), and the cross-classified factors’ variance component values. Relative parameter and standard error bias were calculated for the fixed effect coefficient estimates, the random effects’ variance components, and the standard errors associated with both. When Model 1 was used to estimate the simulated data, no substantial bias was found for any of the parameter estimates or their associated standard errors, even for conditions with the smallest average within-cell sample size (4 students). When Model 2 was used, substantial bias occurred for the level-1 and level-2 variance components, and several of the manipulated conditions affected the magnitude of this bias.
Given that level-1 and level-2 variance components are often used to inform researchers’ decisions about factors of interest, such as classroom effects, assessing possible bias in these estimates is important. The results are discussed, followed by implications and recommendations for applied researchers who are using a CCREM to analyze cross-classified data structures.
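The relative-bias summary used in simulation studies like the one above is a simple ratio: the difference between the average replicate estimate and the generating value, divided by the generating value. A minimal sketch follows; the "replicate estimates" are synthetic stand-ins that merely mimic a near-unbiased estimator (Model 1) and a downward-biased one (Model 2), not output from the actual study.

```python
import numpy as np

def relative_bias(estimates, true_value):
    """Relative bias of replicate estimates against the generating value:
    (mean estimate - true value) / true value."""
    return (np.mean(estimates) - true_value) / true_value

rng = np.random.default_rng(1)
true_tau2 = 0.25  # hypothetical generating variance component

# Fabricated replicate estimates for illustration only:
# Model 1 centered on the truth, Model 2 shrunk toward zero.
est_model1 = true_tau2 + rng.normal(0.0, 0.01, size=500)
est_model2 = true_tau2 * 0.6 + rng.normal(0.0, 0.01, size=500)

bias1 = relative_bias(est_model1, true_tau2)  # near zero
bias2 = relative_bias(est_model2, true_tau2)  # around -0.4
```

A common screening rule (e.g., the Hoogland and Boomsma criterion cited later in these results) flags parameter-estimate bias exceeding 5% in magnitude as substantial; `bias2` would fail that check while `bias1` would pass.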
3

The Variation of a Teacher's Classroom Observation Ratings across Multiple Classrooms

Lei, Xiaoxuan, 06 January 2017
Classroom observations are increasingly used for teacher evaluations, so it is important to examine the measurement quality and the use of observation ratings. When a teacher is observed in multiple classrooms, his or her observation ratings may vary across classrooms. In that case, using ratings from one classroom per teacher may not adequately represent a teacher’s quality of instruction. However, the nesting of classrooms within teachers is usually not considered when classroom observation data are analyzed. Drawing on the Measures of Effective Teaching dataset, this dissertation examined the variation of a teacher’s classroom observation ratings across his or her multiple classrooms. To account for teacher-level, school-level, and rater-level variation, a cross-classified random effects model was used for the analysis. Two research questions were addressed: (1) What is the variation of a teacher’s classroom observation ratings across multiple classrooms? (2) To what extent is the classroom-level variation within teachers explained by observable classroom characteristics? The results suggested that classrooms accounted for 4.9% to 14.7% of the variance in observation ratings for math classrooms and 6.7% to 15.5% of the variance for English Language Arts classrooms. The results also showed that classroom characteristics (i.e., class size, percent of minority students, percent of male students, percent of English language learners, percent of students eligible for free or reduced-price lunch, and percent of students with disabilities) contributed little to explaining the classroom-level variation in the ratings. These findings indicate that teachers’ multiple classrooms should be taken into consideration when classroom observation ratings are used to evaluate teachers in high-stakes settings. In addition, other classroom-level factors that could explain the classroom-level variation in observation ratings should be investigated in future research.
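The classroom-level variance shares reported above are intraclass-correlation-style ratios: the classroom-within-teacher variance component divided by the total variance. A minimal sketch, using placeholder variance component values rather than the dissertation's estimates:

```python
def classroom_variance_share(var_teacher, var_classroom, var_residual):
    """Share of total rating variance attributable to classrooms within
    teachers: an ICC-style ratio of variance components."""
    total = var_teacher + var_classroom + var_residual
    return var_classroom / total

# Hypothetical variance components, chosen only so the share is easy to read.
share = classroom_variance_share(var_teacher=0.30,
                                 var_classroom=0.10,
                                 var_residual=0.60)
# share is 0.10, i.e., classrooms account for 10% of rating variance,
# within the 4.9%-15.5% range the dissertation reports.
```

In the dissertation's cross-classified model, additional components (e.g., school and rater effects) would enter the denominator in the same way.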
4

A Monte Carlo Study: The Impact of Missing Data in Cross-Classification Random Effects Models

Alemdar, Meltem, 12 August 2009
Unlike multilevel data with a purely nested structure, cross-classified data not only may be clustered into hierarchically ordered units but also may belong to more than one unit at a given level of the hierarchy. In a cross-classified design, students at a given school might come from several different neighborhoods, and one neighborhood might have students who attend a number of different schools. In this scenario, schools and neighborhoods are considered cross-classified factors, and cross-classified random effects modeling (CCREM) should be used to analyze these data appropriately. A common problem in any type of multilevel analysis is the presence of missing data at any given level. Little research has been conducted in the multilevel literature on the impact of missing data, and none in the area of cross-classified models. The purpose of this study was to examine the effect of data that are missing completely at random (MCAR), missing at random (MAR), and missing not at random (MNAR) on CCREM estimates, while exploring multiple imputation to handle the missing data. In addition, this study examined the impact of including an auxiliary variable that is correlated with the variable with missingness (the level-1 predictor) in the imputation model for multiple imputation. This study expanded on the CCREM Monte Carlo simulation work of Meyers (2004) by studying the effect of missing data, and methods for handling it, within CCREM. The results demonstrated that, in general, multiple imputation met Hoogland and Boomsma’s (1998) relative bias criterion (less than 5% in magnitude) for parameter estimates under the different types of missing data patterns. For the standard error estimates, substantial relative bias (defined by Hoogland and Boomsma as greater than 10%) was found in some conditions. When multiple imputation was used to handle the missing data, substantial bias was found in the standard errors in most cells where data were MNAR, and this bias increased as a function of the percentage of missing data.
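The multiple-imputation machinery described above pools estimates across the m completed data sets using Rubin's rules: the pooled point estimate is the mean of the per-imputation estimates, and the total variance combines within- and between-imputation variance. A minimal sketch with made-up estimates, not values from the study:

```python
import numpy as np

def pool_rubin(estimates, variances):
    """Pool m completed-data estimates and their sampling variances
    using Rubin's (1987) rules."""
    m = len(estimates)
    q_bar = np.mean(estimates)         # pooled point estimate
    w = np.mean(variances)             # within-imputation variance
    b = np.var(estimates, ddof=1)      # between-imputation variance
    t = w + (1 + 1 / m) * b            # total variance of the pooled estimate
    return q_bar, t

# Five hypothetical per-imputation estimates of a fixed effect and their
# estimated sampling variances (illustrative numbers only).
q, t = pool_rubin(estimates=[1.9, 2.1, 2.0, 2.2, 1.8],
                  variances=[0.04, 0.05, 0.04, 0.06, 0.05])
```

The standard error biases the study reports under MNAR would show up here as a total variance `t` that misstates the true sampling variability of `q`.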
5

A Longitudinal Study of School Practices and Students’ Characteristics that Influence Students' Mathematics and Reading Performance of Arizona Charter Middle Schools

Giovannone, Carrie Lynn, January 2010
No description available.
