
The Impact Of Timed Versus Untimed Standardized Tests On Reading Scores Of Third Grade Students In Title I Schools

The purpose of this study was to investigate the extent to which the performance of Title I third-grade students in a central Florida school district differed on tests administered under timed and untimed conditions. The literature on reasons for the achievement gap was also examined and centered on seven themes: (a) standardized testing, (b) achievement gap data and identified factors, (c) deficit theory, (d) cultural mismatch theory, (e) extended time accommodations, (f) test anxiety and stress, and (g) timed versus untimed tests.

Six Title I schools participated in this study by assigning 194 students to take the 2006 Released FCAT Reading Test under either timed or untimed conditions. Although there were no interactions between the covariates and testing conditions, students in the free or reduced lunch program or in exceptional education programs had lower FCAT scores than those who were not. However, when school was included as a moderator, there was a statistically significant interaction between testing conditions and schools on FCAT scores, indicating that the relationship between testing conditions and FCAT scores varied by school. A factorial ANCOVA was conducted, and it found that the mean differences between students who took the timed and untimed 2006 FCAT Reading Test varied from school to school after accounting for the covariates. At two schools, students who took the untimed test scored higher than those who took the timed test; at one school, students who took the untimed test scored lower than those who took the timed test. There was no statistically significant difference at the remaining three schools.

A factorial MANCOVA was used to compare reading performance on the 2006 Reading FCAT between the timed and untimed groups on domain-specific tests. The relationship between testing condition and FCAT scores for each domain-specific test also varied by school. Therefore, it could not be concluded from these analyses that testing conditions would consistently increase or decrease student performance on standardized domain-specific tests.
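The school-by-condition analysis described above can be illustrated with a minimal factorial ANCOVA sketch in Python using statsmodels. The column names, coding, and synthetic data below are hypothetical stand-ins; the dissertation's actual variables and dataset are not reproduced here.

```python
# Minimal sketch of a factorial ANCOVA (condition x school, with covariates).
# All column names and data are hypothetical illustrations, not the study's data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(0)
n = 194  # number of participating students reported in the abstract

df = pd.DataFrame({
    "fcat_score": rng.normal(300, 50, n),              # outcome: FCAT reading score
    "condition": rng.choice(["timed", "untimed"], n),  # testing condition
    "school": rng.choice(list("ABCDEF"), n),           # six Title I schools
    "frl": rng.choice([0, 1], n),                      # free/reduced lunch covariate
    "ese": rng.choice([0, 1], n),                      # exceptional education covariate
})

# Factorial model: testing condition x school interaction, adjusting for covariates.
model = smf.ols(
    "fcat_score ~ C(condition) * C(school) + C(frl) + C(ese)",
    data=df,
).fit()

# Inspect the interaction term to see whether the condition effect varies by school.
print(anova_lm(model, typ=3))
```

A significant C(condition):C(school) term in such a model would correspond to the abstract's finding that the effect of timing differed across schools; follow-up comparisons within each school would then identify where untimed testing helped, hurt, or made no difference.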

Identifier: oai:union.ndltd.org:ucf.edu/oai:stars.library.ucf.edu:etd-3201
Date: 01 January 2012
Creators: Haniff, Ruth Elizabeth
Publisher: STARS
Source Sets: University of Central Florida
Language: English
Detected Language: English
Type: text
Format: application/pdf
Source: Electronic Theses and Dissertations
