591

What Shapes Teachers’ Attitudes, Assumptions, Beliefs And Perceptions About Students With Disabilities And Their Ability To Be Successful In The General Education Classroom?

Mox, Jennifer Noelle 14 July 2022 (has links)
No description available.
592

Factors That Contribute To Implementation Fidelity Of A School-Based Substance Abuse Prevention Program: From Research To “Real World” Setting

Volk, Deborah 12 May 2008 (has links)
No description available.
593

Learning to Talk to One Another: A Study to Implement Collaboration in Healthcare Studies

Evans, Jenny 16 August 2022 (has links)
No description available.
594

A procedure for developing a common metric in item response theory when parameter posterior distributions are known

Baldwin, Peter 01 January 2008 (has links)
Because item response theory (IRT) models are arbitrarily identified, independently estimated parameters must be transformed to a common metric before they can be compared. To accomplish this, the transformation constants must be estimated, and because these estimates are imperfect, there is a propagation-of-error effect when transforming parameter estimates. However, this error propagation is typically ignored, and estimates of the transformation constants are treated as true when transforming parameter estimates to a common metric. To address this shortcoming, a procedure is proposed and evaluated that accounts for the uncertainty in the transformation constants when adjusting for differences in metric. This procedure utilizes random draws from model parameter posterior distributions, which are available when IRT models are estimated using Markov chain Monte Carlo (MCMC) methods. Given two test forms with model parameter vectors Λ_Y and Λ_X, the proposed procedure works by sampling the posteriors of Λ_Y and Λ_X, estimating the transformation constants using these two samples, and transforming sample X to the scale of sample Y. This process is repeated N times, where N is the desired number of transformed posterior draws. A simulation study was conducted to evaluate the feasibility and success of the proposed strategy compared to the traditional strategy of treating scaling constant estimates as error-free. Results were evaluated by comparing the observed coverage probabilities of the transformed posteriors to their expectation. The proposed strategy yielded equal or superior coverage probabilities compared to the traditional strategy for 140 of the 144 comparisons made in this study (97%). Conditions included four methods of estimating the scaling constants and three anchor lengths.
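A minimal sketch of the procedure this abstract describes, assuming mean/sigma estimation of the transformation constants and difficulty-parameter draws only; the mean_sigma_constants helper, the array shapes, and the simulated draws standing in for MCMC output are illustrative, not the dissertation's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_sigma_constants(b_ref, b_new):
    """Mean/sigma estimates of the slope A and intercept B mapping the
    new-form metric onto the reference metric: b* = A*b + B."""
    A = np.std(b_ref) / np.std(b_new)
    B = np.mean(b_ref) - A * np.mean(b_new)
    return A, B

# Stand-ins for MCMC output: N posterior draws of anchor-item difficulties
# on forms Y (reference) and X (to be transformed). Shapes: (N, n_anchor).
N, n_anchor = 1000, 10
draws_Y = rng.normal(0.0, 1.0, size=(N, n_anchor))
draws_X = rng.normal(0.5, 1.2, size=(N, n_anchor))

transformed = np.empty_like(draws_X)
for i in range(N):
    # One draw from each posterior -> one estimate of (A, B) -> one
    # transformed draw; re-estimating per draw lets linking error propagate
    # into the transformed posterior instead of treating (A, B) as known.
    A, B = mean_sigma_constants(draws_Y[i], draws_X[i])
    transformed[i] = A * draws_X[i] + B
```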
595

A Bayesian testlet response model with covariates: A simulation study and two applications

Baldwin, Su G 01 January 2008 (has links)
Understanding the relationship between person, item, and testlet covariates and person, item, and testlet parameters may offer considerable benefits to both test development and test validation efforts. The Bayesian TRT models proposed by Wainer, Bradlow, and Wang (2007) offer a unified structure within which model parameters may be estimated simultaneously with model parameter covariates. This unified approach represents an important advantage of these models: theoretically correct modeling of the relationship between covariates and their respective model parameters. Analogous analyses can be performed via conventional post-hoc regression methods; however, the fully Bayesian framework offers an important advantage over post-hoc methods by reflecting the uncertainty of the model parameters when estimating their relationship to covariates. The purpose of this study was twofold. The first aim was to conduct a basic simulation study investigating the accuracy and effectiveness of the Bayesian TRT approach in estimating the relationship of covariates to their respective model parameters; the Bayesian TRT results were also compared to post-hoc regression results, where the dependent variable was the point estimate of the model parameter of interest. The second aim was an empirical study applying the Bayesian TRT model to two real data sets: the Step 3 component of the United States Medical Licensing Examination (USMLE) and the Posttraumatic Growth Inventory (PTGI) by Tedeschi and Calhoun (1996). The findings of both the simulation and empirical studies suggest that the Bayesian TRT performs very similarly to the post-hoc approach. Detailed discussion is provided and potential future studies are suggested in Chapter 5.
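A hedged sketch of the post-hoc baseline the abstract compares against: ordinary least squares of item-difficulty point estimates on an item covariate. The covariate word_count and all data are hypothetical; the fully Bayesian TRT approach would instead estimate the covariate slope jointly with the item parameters, carrying their uncertainty through.

```python
import numpy as np

rng = np.random.default_rng(1)
n_items = 60
word_count = rng.integers(20, 120, size=n_items)       # hypothetical item covariate
true_b = 0.01 * word_count + rng.normal(0, 0.3, n_items)
b_hat = true_b + rng.normal(0, 0.2, n_items)           # noisy point estimates

# OLS slope of difficulty on the covariate; because b_hat is treated as
# error-free, the uncertainty in the point estimates is ignored here.
X = np.column_stack([np.ones(n_items), word_count])
beta, *_ = np.linalg.lstsq(X, b_hat, rcond=None)
print(f"post-hoc covariate slope: {beta[1]:.4f}")
```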
596

A multi-level investigation of teacher instructional practices and the use of responsive classroom

Solomon, Benjamin G 01 January 2011 (has links)
A year-long longitudinal study was conducted to quantify different types of teaching at the beginning of the year and the effect of those choices on end-of-year instructional practices and student outcomes. Teacher practices were organized around fidelity of implementation to the Responsive Classroom (RC) program (Northeast Foundation for Children, 2009). Most notably, a central RC tenet known as "the first six weeks" was examined. RC is a universal prevention program that has previously been categorized as a Tier I social-behavioral program for students when considered within an RTI model (Elliott, 1999). Twenty-seven teachers from the New England region and 179 students participated. The Academic Competence Evaluation Scales (ACES), teacher form (DiPerna & Elliott, 2000), was used to measure student outcomes. The Classroom Practice Measure (CPM; Rimm-Kaufman et al., 2007) was used to measure level of RC implementation. Finally, to quantify teaching behavior, a momentary time-sampling observation, the Teaching Observation Tool (TOT; Marcotte, Klein, & Solomon, 2010), was implemented. Results from a series of multilevel models with students nested within teachers indicated that both a constant, high level of instructional time and investment in environmental management time in the fall result in higher levels of student reading (significant) and math (non-significant) achievement in the spring, and lower levels of time spent correcting behavior. Teachers with large discrepancies in instructional time from fall to spring, and teachers who failed to release environmental control to students over time, had students with lower levels of reading and math growth. Relationships between the CPM, ACES, and TOT indicate that RC is significantly correlated with increases in student reading achievement and motivation beyond what would be expected of a teacher who does not implement RC. However, in contrast to past research, RC in this study was not correlated with teacher-reported improvements in social skills. Implications for practice and directions for future research are discussed.
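For readers unfamiliar with the nesting structure described here, a minimal random-intercept sketch with students nested within teachers; the column names, effect sizes, and simulated data are assumptions for illustration, not the study's variables or results.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
# 27 teachers with 7 students each (189 students, close to the study's 179).
teachers = np.repeat(np.arange(27), 7)
fall_instruction = np.repeat(rng.normal(0.6, 0.1, 27), 7)  # teacher-level predictor
teacher_effect = np.repeat(rng.normal(0, 0.5, 27), 7)      # random intercepts
reading = 2.0 * fall_instruction + teacher_effect + rng.normal(0, 1, teachers.size)

df = pd.DataFrame({"teacher": teachers,
                   "fall_instruction": fall_instruction,
                   "reading": reading})

# Random-intercept multilevel model: students (rows) nested within teachers.
model = smf.mixedlm("reading ~ fall_instruction", df, groups=df["teacher"])
print(model.fit().summary())
```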
597

Measuring teacher effectiveness using student test scores

Soto, Amanda Corby 01 January 2013 (has links)
Comparisons within states of school performance, student growth, and teacher effectiveness have become commonplace. Since the advent of the Growth Model Pilot Program in 2005, many states have adopted growth models for both evaluative (measuring teacher performance or supporting accountability) and formative (guiding instructional, curricular, or programmatic choices) purposes. Growth model data, as applied to school accountability and teacher evaluation, are generally used to determine whether teachers and schools are moving students toward curricular proficiency and mastery. Teacher evaluation based on growth data is an increasingly popular practice in the states, and the arrival of cross-state assessment consortia in 2014 will yield data that could support this approach to teacher evaluation on a larger scale. For the first time, students in consortium member states will take shared assessments and be held accountable for shared curricular standards, setting the stage to quantify and compare teacher effectiveness based on student test scores across states. States' voluntary adoption of the Common Core State Standards and participation in assessment consortia speak to a new level of support for collaboration in the interest of improved student achievement. The possibility of using these data to build effectiveness and growth models that cross state lines is appealing, as states and schools might be interested in demonstrating their progress toward full student proficiency based on the CCSS. By utilizing consortium assessment data in place of within-state assessment data for teacher evaluation, it would be possible to describe the performance of one state's teachers in reference to the performance of their own students, teachers in other states, and the consortium as a whole. To examine what might happen if states adopt a cross-state evaluation model, the consistency of teacher effectiveness rankings based on the Student Growth Percentile (SGP) model and a value-added model is compared for teachers in Massachusetts and Washington, D.C., both members of the Partnership for Assessment of Readiness for College and Careers (PARCC) assessment consortium. Teachers were first evaluated based on their students within their state, and again when that state was situated within a sample representing students in the other member states. The purpose of the current study is to explore the reliability of teacher effectiveness classifications, as well as the validity of inferences made from student test scores to guide teacher evaluation. The results indicate that two of the models currently in use, SGPs and a covariate-adjusted value-added model, do not provide particularly reliable estimates of teacher effectiveness, with more than half of the teachers inconsistently classified in the consortium setting. The validity of the model inferences is also called into question, as neither model demonstrates a strong correlation with student test score change as estimated by a value table. The results are discussed in relation to each model's reliability and validity, along with the implications for using these models in high-stakes decisions about teacher performance.
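A simplified sketch of one of the two models compared, a covariate-adjusted value-added estimate: residualize current scores on prior achievement, then average residuals within teacher. All data and the two-step OLS formulation are illustrative; operational models add demographic covariates, multiple prior scores, and shrinkage, and re-ranking these estimates in a larger consortium sample is what can reshuffle effectiveness classifications.

```python
import numpy as np

rng = np.random.default_rng(3)
n_teachers, n_per = 50, 25
teacher = np.repeat(np.arange(n_teachers), n_per)
prior = rng.normal(0, 1, teacher.size)                  # prior-year scores
effect = rng.normal(0, 0.3, n_teachers)                 # true teacher effects
current = 0.8 * prior + effect[teacher] + rng.normal(0, 0.6, teacher.size)

# Covariate adjustment: residualize current scores on prior achievement.
X = np.column_stack([np.ones(teacher.size), prior])
beta, *_ = np.linalg.lstsq(X, current, rcond=None)
resid = current - X @ beta

# Teacher value-added estimate = mean residual of that teacher's students.
va = np.array([resid[teacher == t].mean() for t in range(n_teachers)])
ranks = np.argsort(np.argsort(-va)) + 1                 # rank 1 = highest VA
print(f"teacher 0: value-added {va[0]:+.3f}, rank {ranks[0]} of {n_teachers}")
```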
598

Are Final Residency Milestones Predictive of Early Fellowship Performance in Pediatrics?

Reed, Suzanne 10 November 2022 (has links)
No description available.
599

The effects of dimensionality and item selection methods on the validity of criterion-referenced test scores and decisions

Dirir, Mohamed Awil 01 January 1993 (has links)
Many of the measurement models currently used in testing require that the items making up a test span a unidimensional space. The assumption of unidimensionality is difficult to satisfy in practice, since item pools are arguably multidimensional. Among the causes of test multidimensionality is the presence of minor dimensions (such as test motivation, speed of performance, and reading ability) beyond the dominant ability the test is supposed to measure. The consequences of violating the assumption of unidimensionality may be serious: different item selection procedures, when used to construct tests, will have unknown and differential effects on the reliability and validity of those tests. The purposes of this research were (1) to review research on test dimensionality, (2) to investigate the impact of test dimensionality on the ability estimation and decision accuracy of criterion-referenced tests, and (3) to examine how the interaction of item selection methods with test dimensionality and content categories affects ability estimation and decision accuracy. The empirical research consisted of two parts. In Part A, three item pools with different dimensionality structures were generated for each of two tests; four item selection methods were used to construct tests from each item pool, and the ability estimates and decision accuracies of the resulting 12 tests were compared for each test. In Part B, real data were used as an item bank, and four item selection methods were used to construct short tests from the item bank; the measurement precision and decision accuracies of the resulting tests were compared. It was found that the strength of minor dimensions affects the precision of ability estimation and the decision accuracy of mastery tests, and that optimal item selection methods perform better than other item selection methods, especially when test data are not unidimensional. The differences in measurement precision and decision accuracy among data with different degrees of multidimensionality, and among the different item selection methods, were statistically and practically significant. An important implication of the results for practitioners is that the presence of minor dimensions in a test may lead to the misclassification of examinees, and hence limit the usefulness of the test.
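A toy illustration of the Part A logic, assuming a two-dimensional compensatory generating model scored with a simple unidimensional total; the weight w stands in for the strength of a minor dimension, and nothing here reproduces the dissertation's actual item pools or selection methods.

```python
import numpy as np

rng = np.random.default_rng(4)
n_examinees, n_items, cut = 2000, 40, 0.0
theta1 = rng.normal(0, 1, n_examinees)   # dominant dimension
theta2 = rng.normal(0, 1, n_examinees)   # minor dimension (e.g., speed)
w = 0.3                                   # strength of the minor dimension
b = rng.normal(0, 1, n_items)            # item difficulties

# 2D compensatory logistic responses, scored by proportion correct.
logits = (theta1[:, None] + w * theta2[:, None]) - b[None, :]
resp = rng.random((n_examinees, n_items)) < 1 / (1 + np.exp(-logits))
observed_master = resp.mean(axis=1) > 0.5
true_master = theta1 > cut               # mastery on the dominant ability only

# Misclassification induced partly by the ignored minor dimension.
misclass = np.mean(observed_master != true_master)
print(f"misclassification rate with minor-dimension weight {w}: {misclass:.3f}")
```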
600

Predicting the educational achievement of preschool and kindergarten children from the cognitive subtests of Early Screening Profiles

Cohn, Mary-Elizabeth 01 January 1990 (has links)
The purpose of the study was to collect predictive validity data on the cognitive subtests and composite of Early Screening Profiles, a screening instrument that will be published in 1990. Data collection involved 135 children, ages 3-6 through 6-11 (years-months). The scores on Early Screening Profiles were compared to scores on the Achievement Scale of the Kaufman Assessment Battery for Children (K-ABC), the Peabody Picture Vocabulary Test-Revised (PPVT-R), and, for the 85 children in kindergarten or grade one at the time of follow-up testing, a teacher rating scale, the Teacher Rating of Academic Performance (TRAP). Time between testings ranged from 5½ to 8 months. For the population studied, statistically significant, strong correlations of .75, .73, and .70 were found between the composite of Early Screening Profiles and K-ABC Achievement, PPVT-R, and TRAP, respectively (p < .01). Strong or moderate correlations, all significant at the .01 level, resulted when Early Screening Profiles cognitive subtests were compared to criterion subtests. High agreement rates were found for standard scores of one standard deviation above the mean (82%) and one standard deviation below the mean (84%). Comparison of the Early Screening Profiles cognitive composite score with the total scores of all three criterion measures yielded average specificity and sensitivity rates of .80 and .74, respectively, for scores of 115 or higher. For scores of 85 or lower, the average specificity was high (.97) and the average sensitivity rate was modest (.32). No significant differences emerged based on sex. The older group of children scored higher than the younger on the K-ABC Achievement Scale. Research results indicate that the cognitive subtests and composite of Early Screening Profiles show promise of becoming useful and valid additions to the field of early childhood screening.
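A short sketch of the sensitivity/specificity computations reported in this abstract, using simulated screener and criterion scores; the cut of 85 mirrors the one-standard-deviation-below-the-mean rule, and all data here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 135
criterion = rng.normal(100, 15, n)                               # criterion total
screener = 0.75 * (criterion - 100) + 100 + rng.normal(0, 10, n)  # correlated screener

cut = 85                                  # one SD below the mean (SD = 15)
at_risk_true = criterion <= cut           # "truly" at risk per criterion
at_risk_flagged = screener <= cut         # flagged by the screener

sensitivity = np.mean(at_risk_flagged[at_risk_true])      # at-risk cases caught
specificity = np.mean(~at_risk_flagged[~at_risk_true])    # non-cases passed through
print(f"sensitivity {sensitivity:.2f}, specificity {specificity:.2f}")
```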
