
Interest and effort in large-scale assessment: the influence of student motivational variables on the validity of reading achievement outcomes

Butler, Jayne Christine, January 2008
Results from large-scale assessments of academic achievement are key sources of evidence in the development of education policy and reform. The increasing influence of these assessments underscores the need for their results to be valid and reliable. This study investigates possible threats to the validity of reading proficiency assessments by examining the influence of two motivational variables: the interest attributed to the texts students read, and the amount of effort that students invest in undertaking the reading assessment. Using data from Australian pilot assessments and the Programme for International Student Assessment (PISA), this study explores the influence of interest and effort on reading proficiency outcomes and on the conclusions that can be drawn from these assessments.

An investigation of executive function impact on reading assessment performance in online versus in-person administration

Hussey IV, Francis Desmond, 31 October 2024
Purpose. The purpose of this analysis was to determine whether executive function affected reading assessment stability differently across online and in-person administration formats. Method. Participants (n = 67, ages 7-9) completed executive function assessments and then completed reading assessments at two different time points to establish pre-test/post-test assessment validity. Participants completed the reading assessments in one of three conditions: all in-person, in-person and subsequently online, or all online. Executive function performance was categorized as “good” or “poor,” and pre/post reading assessment correlations were calculated. Data were then grouped into four general areas of reading: rate, accuracy, comprehension, and phonological awareness. Analyses were carried out to determine whether pre/post reading component correlations were significantly predicted by executive function performance and assessment administration group. Additional variables included in the analysis were age, gender, and study group. Results. Linear regression models and subsequent one-way analyses of variance (ANOVAs) did not indicate that pre-test/post-test reading component correlations were significantly predicted by executive function within the context of administration groups. Age, gender, and study group also did not significantly predict the correlation values. However, phonological awareness correlations were significantly predicted by administration group: participants in the in-person administration group tended to have higher correlation values than participants in the online administration group. The correlation between reading assessment performance and executive function was confirmed to be in line with the current literature. Conclusion. Executive function categorization across administration groups was not a significant predictor of pre/post reading component correlations.
This suggests that online administration of behavioral assessments exhibits stability similar to in-person administration. It also suggests that individuals with executive function deficits are not significantly more likely to have their performance on these assessments negatively affected. Limitations and future research directions are also discussed.
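The stability analysis described above can be sketched as a comparison of pre/post (test-retest) correlations across administration formats. The sketch below is a minimal illustration: the group names, sample sizes, and all scores are synthetic assumptions, not the study's data, and the Fisher z-transform shown is the standard first step for contrasting correlations, which may differ from the dissertation's exact models.

```python
# Hypothetical sketch: compare test-retest correlations of a reading
# score across administration groups. All data here are synthetic.
import numpy as np

rng = np.random.default_rng(0)

def simulate_group(n: int, true_r: float):
    """Draw n pre/post score pairs with a target test-retest correlation."""
    cov = [[1.0, true_r], [true_r, 1.0]]
    pre, post = rng.multivariate_normal([0.0, 0.0], cov, size=n).T
    return pre, post

# Assumed groups (names are illustrative): in-person vs. online administration.
groups = {"in_person": simulate_group(30, 0.85),
          "online": simulate_group(30, 0.60)}

for name, (pre, post) in groups.items():
    r = np.corrcoef(pre, post)[0, 1]
    # Fisher's z-transform puts correlations on an approximately normal
    # scale, the usual basis for contrasting them across groups.
    z = np.arctanh(r)
    print(f"{name}: r = {r:.2f}, Fisher z = {z:.2f}")
```

With larger samples the observed correlations converge on the simulated test-retest values, so group differences in stability become visible directly in r.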

Evaluating Child-Based Reading Constructs and Assessments with Struggling Adult Readers

Nanda, Alice Owens, 12 August 2009
Due to the paucity of research on struggling adult readers, researchers rely on child-based reading constructs and measures when investigating the reading skills of adults who struggle with reading. The purpose of the two studies in this investigation was to evaluate the appropriateness of using child-based reading constructs and assessments with adults reading between the third- and fifth-grade levels. The first study examined whether the measurement constructs behind reading-related tests for struggling adult readers are similar to what is known about measurement constructs for children. The sample included 371 adults: 218 native English speakers and 153 English speakers of other languages. Using measures of skills and subskills, confirmatory factor analyses were conducted to test three theoretical measurement models of reading: an achievement model of reading skills, a core deficit model of reading subskills, and an integrated model containing achievement and deficit variables. Although the findings identify the best-fitting measurement models, the contribution of this study is its description of the difficulties encountered when applying child-based assumptions to developing measurement models for struggling adult readers. The second study examined the usefulness of the Comprehensive Test of Phonological Processing (CTOPP) Elision and Blending Words subtests (Wagner, Torgesen, & Rashotte, 1999) with struggling adult readers. The sample included 254 adults: 207 native English speakers and 47 native Spanish speakers. Overall performance, subtest reliability, and subtest validity were evaluated for the participants. Analyses included comparisons of struggling adult readers to the CTOPP norm group as well as comparisons within the struggling adult readers by the demographic characteristics of age, gender, special-education status, and native language.
Compared to the norm group, struggling adult readers exhibited lower overall performance as well as lower subtest reliability and validity. Regardless of demographic grouping, subtest validity was low for struggling adult readers. Overall performance and subtest reliability differed for struggling adult readers depending on demographic grouping, particularly age and native language. This study raises concerns about the appropriateness of administering and interpreting the Elision and Blending Words subtests with struggling adult readers. In conclusion, both studies caution against the use of child-based reading constructs and assessments with struggling adult readers.
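A subtest-reliability evaluation like the one described above can be illustrated with an internal-consistency index. The sketch below computes Cronbach's alpha on a synthetic item matrix; this is an assumption for illustration only — the dissertation may use a different reliability estimate, and the data here are invented.

```python
# Minimal sketch of an internal-consistency (Cronbach's alpha) check,
# as one common way to quantify subtest reliability. Synthetic data only.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_respondents, n_items) score matrix."""
    n_items = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of total scores
    return (n_items / (n_items - 1)) * (1.0 - item_vars.sum() / total_var)

rng = np.random.default_rng(1)
ability = rng.normal(size=(200, 1))                 # shared latent trait
items = ability + 0.7 * rng.normal(size=(200, 8))   # 8 correlated items
print(f"alpha = {cronbach_alpha(items):.2f}")
```

Items that share a latent trait yield high alpha, while uncorrelated items yield alpha near zero, which is the pattern a reliability evaluation across demographic subgroups would compare.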

Policy Implications: Replacing the Reading TAKS Cut Scores with the Common Core Curriculum Reading Cut Scores on Three Middle School Campuses

Thaemlitz, Kristi, 16 December 2013
As school accountability intensifies, school districts strive not only to prepare their students to meet the No Child Left Behind (NCLB) mandates, but also to prepare students for college and careers after high school. Understanding the reading rigor necessary to ensure academic success is key for educators. Although Texas opted not to adopt the Common Core Curriculum Standards and the accompanying Stretch Lexile measures for reading, which require higher reading levels at each grade, Texas educators must still prepare students for academic success. This study determined how the use of the more rigorous Lexile standards found in other states and associated with the Common Core Curriculum Standards would affect passing rates on Texas reading assessments in grades 6-8. The population for this study included three middle schools within one large suburban school district during the 2010 school year. State reading assessment data collected from these three schools included students' scores from grades 6, 7, and 8. A chi-square test of independence found statistically significant differences for some groups of students in the accountability system: all students, Hispanic students, and economically disadvantaged students. Each of these groups passed at a significantly lower rate when the Stretch Lexile standard was applied. Results were also examined in terms of political, economic, educational, and social policy implications. The policy implications discussed in this study are far-reaching for Texas educators and students, especially economically disadvantaged and Hispanic students. The higher standards can potentially trigger the school improvement process for campuses and districts failing to make NCLB's required adequate yearly progress.
Additional expenses related to supplemental educational services, school choice, and professional development drain district Title I budgets through mandatory set-aside amounts, leaving fewer funds for other student-centered programs. Implications for practitioners include clearly establishing intervention systems, adhering to a multi-tiered intervention model, and providing teachers with a screening tool so that students' progress can be monitored as they move toward more rigorous reading expectations that lead to college and career preparedness.
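A chi-square test of independence like the one described above asks whether pass/fail status depends on which cut score is applied. The sketch below implements the statistic directly; the contingency-table counts are invented for illustration and do not reflect the study's data.

```python
# Hedged sketch of a chi-square test of independence on a 2x2 table of
# pass/fail counts under two cut scores. Counts are invented.
import numpy as np

def chi_square_independence(observed: np.ndarray):
    """Return the chi-square statistic and degrees of freedom for a table."""
    row = observed.sum(axis=1, keepdims=True)
    col = observed.sum(axis=0, keepdims=True)
    expected = row * col / observed.sum()          # counts expected under independence
    stat = ((observed - expected) ** 2 / expected).sum()
    df = (observed.shape[0] - 1) * (observed.shape[1] - 1)
    return stat, df

# Rows: TAKS cut score vs. Stretch Lexile cut score; columns: pass, fail.
table = np.array([[520, 80],
                  [450, 150]])
stat, df = chi_square_independence(table)
# At df = 1 the .05 critical value is 3.84, so a larger statistic means
# pass rates differ significantly between the two cut scores.
print(f"chi2 = {stat:.1f}, df = {df}")
```

The same computation, run per student group (all, Hispanic, economically disadvantaged), is one way to reproduce the kind of group-by-group significance pattern the study reports.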
