131

A longitudinal study to determine the stanine stability of a group's test-score performance in the elementary school

Corcoran, John E January 1958 (has links)
Thesis (Ed.D.)--Boston University
132

Impact of Violations of Measurement Invariance in Longitudinal Mediation Modeling

Unknown Date (has links)
Research has shown that cross-sectional mediation analysis cannot accurately reflect a true longitudinal mediated effect. To investigate longitudinal mediated effects, different longitudinal mediation models have been proposed, and these models focus on different research questions related to longitudinal mediation. When fitting mediation models to longitudinal data, the assumption of longitudinal measurement invariance is usually made. However, the consequences of violating this assumption have not been thoroughly studied in mediation analysis, and no studies have examined issues of measurement non-invariance in a latent cross-lagged panel mediation (LCPM) model with three or more measurement occasions. The goal of the current study was to investigate the impact of violations of measurement invariance on longitudinal mediation analysis.
The focal model in the study is the LCPM model suggested by Cole and Maxwell (2003). This model can be used to examine mediated effects among the latent predictor, mediator, and outcome variables across time. In addition, it can account for measurement error and allows for the evaluation of longitudinal measurement invariance. Simulation methods were used, and the investigation was performed using population covariance matrices and sample data generated under various conditions. Eight design factors were considered for data generation: sample size, proportion of non-invariant items, position of latent factors with non-invariant items, type of non-invariant parameters, magnitude of non-invariance, pattern of non-invariance, size of the direct effect, and size of the mediated effect. Results from the population investigation were evaluated based on overall model fit and the calculated direct and mediated effects; results from the finite sample analysis were evaluated in terms of convergence and inadmissible solutions, overall model fit, bias/relative bias, coverage rates, and statistical power/Type I error rates.
In general, results obtained from the finite sample analysis were consistent with those from the population investigation, with respect to both model fit and parameter estimation. The Type I error rate of the mediated effects was inflated under the non-invariant conditions with small sample size (200); power of the direct and mediated effects was excellent (1.0 or close to 1.0) across all investigated conditions. Type I error rates based on the chi-square test statistic were seriously inflated under the invariant conditions, especially when the sample size was relatively small. Power for detecting model misspecifications due to longitudinal non-invariance was excellent across all investigated conditions. Fit indices (CFI, TLI, RMSEA, and SRMR) were not sensitive in detecting misspecifications caused by violations of measurement invariance in the investigated LCPM model.
Study results also showed that as the magnitude of non-invariance, the proportion of non-invariant items, and the number of positions of latent variables with non-invariant items increased, estimation of the direct and mediated effects tended to be less accurate. The decreasing pattern of change in item parameters over measurement occasions resulted in the least accurate estimates of the direct and mediated effects, while parameter estimates were fairly accurate under the decreasing-then-increasing and mixed patterns of change in item parameters. 
Findings from this study can help empirical researchers better understand the potential impact of violating measurement invariance on longitudinal mediation analysis using the LCPM model. / A Dissertation submitted to the Department of Educational Psychology and Learning Systems in partial fulfillment of the requirements for the degree of Doctor of Philosophy. / Spring Semester 2019. / March 6, 2019. / invariance, longitudinal, measurement, modeling, statistics / Includes bibliographical references. / Yanyun Yang, Professor Co-Directing Dissertation; Qian Zhang, Professor Co-Directing Dissertation; Fred W. Huffer, University Representative; Betsy J. Becker, Committee Member.
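
The focal LCPM model above is a latent-variable model fit to multi-wave data, and a full replication is beyond a short example. The half-longitudinal logic behind a cross-lagged mediated effect, the product of the X1 -> M2 and M2 -> Y3 paths (after Cole and Maxwell, 2003), can nevertheless be sketched with observed variables. The path values, wave structure, and sample size below are illustrative assumptions, not parameters taken from the study.

```python
import numpy as np

rng = np.random.default_rng(2019)
n = 5000  # large sample so estimates sit close to the generating values

# Assumed population paths for a 3-wave design: autoregressive paths = .5,
# cross-lagged X1 -> M2 path (a) = .3, cross-lagged M2 -> Y3 path (b) = .3
ar, a, b = 0.5, 0.3, 0.3

x1, m1, y1 = rng.normal(size=(3, n))          # wave-1 predictor, mediator, outcome
m2 = ar * m1 + a * x1 + rng.normal(size=n)    # wave-2 mediator
y2 = ar * y1 + rng.normal(size=n)             # wave-2 outcome
y3 = ar * y2 + b * m2 + rng.normal(size=n)    # wave-3 outcome

def slopes(y, *preds):
    """OLS slopes of y on the given predictors (intercept fit, then dropped)."""
    X = np.column_stack([np.ones(len(y)), *preds])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1:]

a_hat = slopes(m2, m1, x1)[1]   # X1 -> M2, controlling for M1
b_hat = slopes(y3, y2, m2)[1]   # M2 -> Y3, controlling for Y2
print(f"a = {a_hat:.3f}, b = {b_hat:.3f}, mediated effect a*b = {a_hat * b_hat:.3f}")
```

In the dissertation the predictor, mediator, and outcome are latent factors measured by multiple items at each occasion, which is precisely why longitudinal measurement invariance of those items becomes an issue.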
133

THE CLASSIFICATION OF STUDENTS WITH RESPECT TO ACHIEVEMENT, WITH IMPLICATIONS FOR STATE-WIDE ASSESSMENT

Unknown Date (has links)
Source: Dissertation Abstracts International, Volume: 38-05, Section: A, page: 2727. / Thesis (Ph.D.)--The Florida State University, 1977.
134

The effects of alternate testing strategies on student achievement

Unknown Date (has links)
The present study compared the effects of three classroom testing strategies on student achievement. The strategies varied with respect to both the detail of the feedback provided to students after each unit test and the availability of a retest. Within one strategy, students were informed only of their total test score and had no opportunity to take a retest. Within a second strategy, students were provided scores on each skill assessed by the test and allowed to take a retest one week later. Within the third strategy, students were provided detailed feedback concerning the nature of the problems they had experienced with each skill, in addition to the scores on each skill and the option of taking a retest. / The study was conducted in the context of an introductory graduate statistics course. Students were randomly assigned to one of the three testing strategies for the duration of the term. In contrast to previous research on mastery learning, the curriculum and delivery of instruction were held constant across treatment conditions. / The achievement of students in the respective groups was contrasted on two summative exams. The first exam measured the exact skills assessed by the unit tests and retests. The second exam measured a more generic set of skills and was designed to test students' ability to generalize their knowledge. No significant differences in achievement on either exam were observed between treatment conditions. The findings of this study suggest that when instructional time and objectives are held constant, simply providing students with detailed feedback regarding their performance on a test, along with the opportunity to take a retest, is not sufficient to improve student achievement. / Source: Dissertation Abstracts International, Volume: 52-11, Section: A, page: 3898. / Major Professor: Albert C. Oosterhof. / Thesis (Ph.D.)--The Florida State University, 1991.
135

COMPUTER-BASED TESTING: A COMPARISON OF MODES OF ITEM PRESENTATION

Unknown Date (has links)
This study investigated the effects of two modes of computerized test item presentation on student performance, total testing time, and anxiety. The item type used was one that might be created by a teacher as part of a test after a unit of instruction. The test administration was designed to approximate a classroom application of computers to testing. / Sixty students enrolled in the Educational Psychology class at Florida State University were randomly assigned to either the paced or the unpaced mode of item presentation. Both modes entailed presenting 30 test items, covering two instructional objectives, one at a time on a computer screen. A total of 15 minutes was allowed. In the paced mode, an item remained on the screen for 30 seconds and was then removed; examinees could not return to items. In the unpaced mode, an item remained on the screen until the examinee removed it, and examinees could return to any item. Following the timed test, an untimed anxiety questionnaire was administered on the screen. / The statistical design of the study was a two-factor, 2 (treatment) x 2 (computer experience) analysis of covariance. Covariance analysis was used to control for variance among students due to differences in age and general classroom achievement. The dependent variables were scores on the computer-administered test, total testing time, and scores on the anxiety questionnaire. / No significant differences in test scores were found between the paced and unpaced groups, nor between the computer-experienced and inexperienced groups. A significant treatment effect on testing time was found: the paced group took significantly less time to complete the test, regardless of computer experience. No significant treatment or computer experience effects on anxiety were found. It appears that both computer-experienced and inexperienced students can take a paced test with no decrease in scores or increase in anxiety, but with a substantial decrease in testing time. / Source: Dissertation Abstracts International, Volume: 48-07, Section: A, page: 1748. / Thesis (Ph.D.)--The Florida State University, 1987.
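
The reported design, a two-factor analysis of covariance with age and general classroom achievement as covariates, can be sketched with statsmodels. The data frame below is simulated; its column names, covariate values, and effect sizes are assumptions for illustration, not the study's records.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(1987)
n = 60  # matches the study's sample size; everything else is invented

df = pd.DataFrame({
    "pacing": rng.choice(["paced", "unpaced"], size=n),
    "experience": rng.choice(["experienced", "novice"], size=n),
    "age": rng.normal(22, 3, size=n),
    "achievement": rng.normal(3.0, 0.4, size=n),   # stand-in for classroom achievement
})
# Simulated outcome: testing time is shorter for the paced group, echoing the finding above
df["test_time"] = (
    12 - 3 * (df["pacing"] == "paced") + 0.1 * df["age"] + rng.normal(0, 1.5, size=n)
)

# Two-factor ANCOVA: crossed factors plus the two covariates
model = smf.ols("test_time ~ C(pacing) * C(experience) + age + achievement", data=df).fit()
print(anova_lm(model, typ=2))
```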
136

DEVELOPMENT OF AN EVALUATION FEEDBACK PROCESS AND AN EVALUATION UTILIZATION ASSESSMENT INSTRUMENT

Unknown Date (has links)
Evaluation feedback that is not presented appropriately for its prospective users might not be used. This study determined the current and preferred characteristics of evaluation feedback, and the current and desired levels of utilization of that feedback. Subjects were the evaluator and the receivers of evaluation information for the Primary Education Program (PREP) in the Leon County public elementary schools. / This research culminated in three products. The first was an evaluation feedback process for the PREP program; this process fit public school decision-makers' preferences for specific types of evaluation feedback. Second, an instrument was developed to assess utilization of evaluation information, which can also be used for other school programs. Finally, the Leon County Public Schools now have immediately applicable procedures for improving feedback for programs and for assessing utilization of that feedback. In a wider application, the resulting instruments, process, and knowledge can be adapted for use in other school districts where utilization of evaluation information is less than optimal. / Source: Dissertation Abstracts International, Volume: 46-08, Section: A, page: 2275. / Thesis (Ph.D.)--The Florida State University, 1985.
137

Basic skills achievement patterns from kindergarten through tenth grade

Unknown Date (has links)
A descriptive and exploratory longitudinal study was conducted to investigate whether a gap existed between grade-level standards and the academic achievement of students with low school readiness, and to determine whether the gap widened as those students progressed through school. The cohort of interest was students who had taken the KITE readiness test and had attended schools in the Leon County school district from kindergarten through tenth grade during the school years 1974-75 through 1984-85. Students were divided into low, average, and high readiness groups on the basis of their readiness test scores. / Mean differences and variability in communication and mathematics achievement among readiness groups on the norm-referenced CTBS test series and the criterion-referenced SSAT-I test series over time were examined. Because maturation and learning play such a vital role in academic achievement, the achievement patterns of the cohort were examined in the context of the fan spread growth model. It is assumed in this model that as variability within each group increases over time, so does the mean gap between the groups of interest. / Analyses revealed that students in the low readiness groups did fall further from the set academic standard on the CTBS test series; however, the reverse was true for the criterion-referenced SSAT-I test series. Although little fan spread was found on the CTBS tests between the low and average readiness groups, a notable and increasing fan spread was found between the average and high readiness groups. For the SSAT-I tests, overall, fan spread between readiness groups was small and decreasing. When the data were analyzed by race and by sex, the largest differences were found between white students and black students, not between males and females. On the CTBS, black males scored consistently lower than other subgroups; on the SSAT-I, black females performed below other subgroups. / Source: Dissertation Abstracts International, Volume: 49-06, Section: A, page: 1437. / Major Professor: F. Craig Johnson. / Thesis (Ph.D.)--The Florida State University, 1988.
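
The fan spread comparison described above amounts to tracking, grade by grade, both the gap between group means and the within-group spread. The sketch below uses invented scores (the group names, grade levels, and magnitudes are assumptions) to show what such a check looks like.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1988)

# Invented scores for low- and high-readiness groups at four grade levels,
# constructed so that the mean gap and the within-group spread grow together
rows = []
for step, grade in enumerate([2, 4, 7, 10]):
    for group, base in [("low", 40.0), ("high", 55.0)]:
        mean = base + (3.0 * step if group == "high" else 0.0)
        sd = 8.0 + 2.0 * step
        rows.append(pd.DataFrame({"grade": grade, "group": group,
                                  "score": rng.normal(mean, sd, size=200)}))
df = pd.concat(rows, ignore_index=True)

# Fan spread: the between-group mean gap should widen in step with the
# within-group standard deviations across grades
means = df.groupby(["grade", "group"])["score"].mean().unstack("group")
means["gap"] = means["high"] - means["low"]
stds = df.groupby(["grade", "group"])["score"].std().unstack("group")
print(means.round(1))
print(stds.round(1))
```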
138

Sampling effects of writing topics and discourse modes on generalizability of individual student and school writing performance on a standardized fourth-grade writing assessment

Unknown Date (has links)
This study investigated the generalizability of the writing performance of individual students and schools within the context of a large-scale direct writing assessment. The study focused on the sampling effects of the two major components of writing tasks: writing topics and discourse modes. Generalizability studies were conducted to estimate the sampling effects of writing topics and discourse modes using data from the 1994 Florida Writing Assessment for Fourth Grade. / Results at both the individual and the school level indicate that topics requiring the same discourse skills had little effect on writing performance. The study found significant discourse mode effects, which limited the generalization of writing scores beyond the sampled discourse mode. The results suggest that different aims of discourse may place unequal cognitive demands on students; as a result, writing competency in one discourse domain may not generalize well to other aims of discourse. / The study also investigated the effects of school size on the generalizability of school writing scores. Writing scores of large schools showed a higher level of generalizability than those of smaller schools, indicating a positive relationship between generalizability and school size. / The study confirmed the need for a large number of tasks to obtain a reliable estimate of writing competency in both individual- and school-level assessments. The study, however, demonstrated that an assessment can provide a more reliable estimate of writing competency for schools than for individual students. Furthermore, school-level assessments based on a matrix sampling design proved to be a viable solution for overcoming the limited-sampling problem, thus improving the generalizability of school writing scores. / Source: Dissertation Abstracts International, Volume: 56-03, Section: A, page: 0901. / Major Professor: Albert Oosterhof. / Thesis (Ph.D.)--The Florida State University, 1995.
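
As a rough sketch of the generalizability machinery involved, simplified to a one-facet person-by-task design with invented scores (the study's crossed topics and discourse modes design is more elaborate), variance components can be estimated from mean squares and used to project how the generalizability coefficient changes as more writing tasks are sampled.

```python
import numpy as np

rng = np.random.default_rng(1994)

# Invented person-by-task score matrix: 100 students by 4 writing tasks
n_p, n_t = 100, 4
person = rng.normal(0, 1.0, size=(n_p, 1))   # true writing proficiency
task = rng.normal(0, 0.4, size=(1, n_t))     # task (topic) difficulty
scores = 3 + person + task + rng.normal(0, 0.9, size=(n_p, n_t))

# One-facet crossed p x t G study: variance components from mean squares
grand = scores.mean()
ss_p = n_t * ((scores.mean(axis=1) - grand) ** 2).sum()
ss_t = n_p * ((scores.mean(axis=0) - grand) ** 2).sum()
ss_res = ((scores - grand) ** 2).sum() - ss_p - ss_t
ms_p = ss_p / (n_p - 1)
ms_t = ss_t / (n_t - 1)
ms_res = ss_res / ((n_p - 1) * (n_t - 1))

var_res = ms_res
var_p = (ms_p - ms_res) / n_t
var_t = (ms_t - ms_res) / n_p
print(f"variance components: person={var_p:.2f}, task={var_t:.2f}, residual={var_res:.2f}")

# D study: relative generalizability coefficient as the number of tasks grows
for n_tasks in (1, 2, 4, 8):
    g_coef = var_p / (var_p + var_res / n_tasks)
    print(f"{n_tasks} task(s): E(rho^2) = {g_coef:.2f}")
```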
139

Performance assessment: Measurement issues of generalizability, dependability of scoring, and relative information on student performance

Unknown Date (has links)
The purpose of this study was to investigate whether the limited number of observations that might be included in a performance assessment would adequately generalize to potential circumstances that would not be observed. Related studies were also conducted to determine how dependably scores are assigned to measures of students' performance and what different information is provided by a paper-and-pencil test versus a performance assessment. A performance assessment was developed in the context of an introductory graduate statistics course and administered to the graduate students along with a paper-and-pencil test. / A generalizability study was used to estimate the dependability of the performance assessment and to improve the design of the assessment. Dependability of scoring was analyzed through the application of classical test theory and generalizability theory. Correlational and exploratory factor analyses were conducted to determine the relative information provided by the two test formats. / This study found that raters do not introduce substantial error into the measurement of performance. Rather, the major source of error is the inconsistency of student performance across tasks, indicating that the number of tasks could be increased to achieve a reliable score for student performance. The correlation between overall scores assigned by two raters and the results of the G study suggest that raters are able to evaluate student performance consistently and that, eventually, the number of raters could be reduced to one and eliminated as a facet in the design of the generalizability study. A relatively high correlation was found between the two measures, and there was no evidence of a format factor associated with the use of performance assessment. The factor analytic solution suggests a relationship between factor structure and item discrimination. / Source: Dissertation Abstracts International, Volume: 56-04, Section: A, page: 1328. / Major Professor: Albert C. Oosterhof. / Thesis (Ph.D.)--The Florida State University, 1995.
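
The core finding, that raters contribute little error while person-by-task inconsistency contributes a lot, can be illustrated with a small simulation. The numbers of students, tasks, and raters and all variance magnitudes below are assumptions chosen only to mirror that pattern.

```python
import numpy as np

rng = np.random.default_rng(1995)

# Assumed setup: 50 students, 6 performance tasks, 2 raters; rater effects are
# tiny while the person-by-task interaction is the dominant error source
n_p, n_t, n_r = 50, 6, 2
person = rng.normal(0, 1.0, size=(n_p, 1, 1))               # true proficiency
task = rng.normal(0, 0.3, size=(1, n_t, 1))                 # task difficulty
p_by_t = rng.normal(0, 0.9, size=(n_p, n_t, 1))             # person x task inconsistency
rater = rng.normal(0, 0.1, size=(1, 1, n_r))                # rater leniency
noise = rng.normal(0, 0.2, size=(n_p, n_t, n_r))
scores = 3 + person + task + p_by_t + rater + noise

# Raters agree closely on each student's overall score...
rater_totals = scores.mean(axis=1)                          # (students, raters)
print("rater 1 vs rater 2 correlation:", np.corrcoef(rater_totals.T)[0, 1].round(2))

# ...but a student's standing shifts across tasks, so scores based on a few
# tasks generalize poorly (correlation between two 3-task halves)
half_a = scores[:, :3, :].mean(axis=(1, 2))
half_b = scores[:, 3:, :].mean(axis=(1, 2))
print("task half A vs half B correlation:", np.corrcoef(half_a, half_b)[0, 1].round(2))
```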
140

Changing the language of instruction for Mathematics and Science in Malaysia: the PPSMI policy and the washback effect of bilingual high-stakes secondary school exit exams

Tan, Hui May January 2010 (has links)
No description available.
