1. Opening Pandora's box: Texas elementary campus administrators' use of educational policy and highly qualified classroom teachers' professional development through data-informed decisions for science education. Brown, Linda Lou, 21 March 2011.
Federal educational policy, the No Child Left Behind Act of 2001, focused attention on America's education with conspicuous results. One aspect, the highly qualified (HQ) classroom teacher and principal requirement, was taxing because each state established its own accountability structure. The impact of HQ policy and the use of data-informed decision-making (DIDM) by campus administrators, Campus Instruction Leaders (CILs), to monitor Texas elementary science education bear crucial relationships to 5th grade students' learning and achievement. Forty years of research have shown that student results improve when sustained, supported, and focused professional development (PD) is available to teachers. Using mixed methods research, this study applied quantitative and qualitative analysis to two electronic, online surveys, the Texas Elementary, Intermediate or Middle School Teacher Survey© and the Texas Elementary Campus Administrator Survey©, with responses from 22.3% of Texas school districts representing 487 elementary campuses. Participants were selected by stratified random sampling of 5th grade teachers who attended local Texas Regional Collaboratives science PD programs between 2003 and 2008. Survey responses were compared statistically to campus-level average passing rates on the 5th grade science TAKS using SPSS (Statistical Package for the Social Sciences). Written comments from both surveys were analyzed with NVivo qualitative analysis software. Because of the uncertainty of variables within a large statewide study, Mauchly's Test of Sphericity was used to validate the repeated-measures factor ANOVAs. Although few individual results were statistically significant, joint analysis revealed striking constructs regarding the impact of HQ policy applications and elementary CILs' use of data-informed decisions on improving 5th grade students' achievement and teachers' PD in learning science content. These constructs included the use of data-warehouse programs, teachers' application of DIDM to modify lessons for differentiated science instruction, the number of years teachers attended science PD, and teachers' influence on CILs' staffing decisions. Yet CILs reported that 14% of Texas elementary campuses had limited or no science education programs due to federal policy requirements for reading and mathematics. Three hypothesis components were supported by the research data, resulting in two models addressing elementary science, science education PD, and CILs' impact on federal policy applications.
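To make the analytic step concrete, here is a minimal sketch (not the author's SPSS workflow) of checking Mauchly's test of sphericity before running a repeated-measures ANOVA, using the Python pingouin library on made-up campus-level passing rates; the campus count, years, and score values are illustrative assumptions only.

```python
# A minimal sketch, assuming synthetic data: Mauchly's test of sphericity
# followed by a repeated-measures ANOVA on hypothetical campus-level passing
# rates observed over three made-up years.
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(42)
n_campuses = 30  # hypothetical number of campuses

# Long-format data: one row per campus per year.
df = pd.DataFrame({
    "campus": np.repeat(np.arange(n_campuses), 3),
    "year": np.tile(["2006", "2007", "2008"], n_campuses),
    "pass_rate": np.clip(rng.normal(loc=[70, 74, 78] * n_campuses, scale=8), 0, 100),
})

# Mauchly's test of sphericity on the within-campus factor "year".
spher = pg.sphericity(df, dv="pass_rate", within="year", subject="campus")
print(f"Mauchly's W = {spher.W:.3f}, p = {spher.pval:.3f}")

# Repeated-measures ANOVA; correction='auto' applies the Greenhouse-Geisser
# correction when sphericity is violated.
aov = pg.rm_anova(data=df, dv="pass_rate", within="year", subject="campus",
                  correction="auto", detailed=True)
print(aov[["Source", "F", "p-unc"]])
```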
2. Cut Once, Measure Everywhere: The Stability of Percentage of Students Above a Cut Score. Hollingshead, Lynne Marguerite, 26 July 2010.
Large-scale assessment results for schools, school boards/districts, and entire provinces or states are commonly reported as the percentage of students achieving a standard, that is, the percentage of students scoring above the cut score that defines the standard on the assessment scale. Recent research has shown that this method of reporting is sensitive to small changes in the cut score, especially when comparing results across years or between groups. This study extends that work by investigating the effects of reporting group size on the stability of results. For each of ten group sizes, 1000 samples with replacement were drawn from the May 2009 Ontario Grade 6 Assessment of Reading, Writing and Mathematics. The results showed that for small group sizes (analogous to small schools) there is little confidence in the reported percentages, and extreme caution must be taken when interpreting differences observed between years or groups.
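The resampling idea can be illustrated with a minimal sketch on synthetic scores rather than the actual May 2009 Ontario assessment data; the score scale, cut score, and group sizes below are assumptions chosen only for demonstration.

```python
# A minimal sketch, assuming synthetic scores: for each group size, draw 1000
# samples with replacement and observe how the percentage of students above a
# fixed cut score varies. Small groups (small schools) show wide variation.
import numpy as np

rng = np.random.default_rng(0)
population = rng.normal(loc=500, scale=100, size=50_000)  # hypothetical score scale
cut_score = 520                                           # hypothetical cut score

group_sizes = [10, 25, 50, 100, 200, 400, 800, 1600, 3200, 6400]
n_replicates = 1000

for n in group_sizes:
    # Percentage above the cut in each of 1000 samples of size n, with replacement.
    pcts = [
        100.0 * np.mean(rng.choice(population, size=n, replace=True) > cut_score)
        for _ in range(n_replicates)
    ]
    # Spread of the reported percentage for this group size.
    print(f"n={n:5d}  mean={np.mean(pcts):5.1f}%  sd={np.std(pcts):4.1f}%")
```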
3. Self-Assessment and Student Improvement in an Introductory Computer Course at the Community College Level. Spicer-Sutton, Jama; Lampley, James; Good, Donald W., 22 May 2013.
Excerpt: The purpose of this study was to determine a student's computer knowledge upon course entry and whether there was a difference in college students' improvement scores, as measured by the difference between pretest and post-test scores of new or novice users, moderate users, and expert users, at the end of a college-level introductory computing class.
4. Self-Assessment and Student Improvement in an Introductory Computer Course at the Community College Level. Spicer-Sutton, Jama; Lampley, James; Good, Donald W., 01 April 2014.
The purpose of this study was to determine a student's computer knowledge upon course entry and whether there was a difference in college students' improvement scores, as measured by the difference between pretest and post-test scores of new or novice users, moderate users, and expert users, at the end of a college-level introductory computing class. This study also determined whether there were differences in improvement scores by gender or age group. The results were further used to determine whether there was a difference in improvement scores among the three campus locations participating in the study.
Four hundred sixty-nine students participated in this study at a community college located in Northeast Tennessee. A survey, pretest, and post-test were administered to students in a college-level introductory computing class. The survey collected demographic data including gender, age category, location, Internet access, educational experience, and self-rated user category, while the pretest and post-test explored the student's knowledge of computer terminology, hardware, the current operating system, Microsoft Word, Microsoft Excel, and Microsoft PowerPoint.
The data analysis revealed significant differences in pretest scores among educational experience categories: in each instance, the pretest mean for first-semester freshmen was lower than that for second-semester freshmen and sophomores. The study also found significant differences among the self-rated user categories in both pretest scores and improvement scores (post-test scores minus pretest scores), with one self-rated user category earning higher improvement scores than the others. Of the three participating campus locations, students at Location 1 earned higher improvement scores than did students at Location 2. The results also indicated a significant difference in improvement scores between types of course delivery: improvement scores for on-ground delivery were 5 points higher than for hybrid delivery. Finally, the study revealed no significant differences by gender or age category.
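As a rough illustration (not the authors' analysis), the following sketch computes improvement scores as post-test minus pretest and compares mean improvement across self-rated user categories with a one-way ANOVA; the column names and data are synthetic assumptions.

```python
# A minimal sketch, assuming hypothetical column names and synthetic data:
# improvement score = post-test minus pretest, then a one-way ANOVA comparing
# mean improvement across self-rated user categories.
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(7)
n = 469  # number of participants reported in the study
df = pd.DataFrame({
    "user_category": rng.choice(["novice", "moderate", "expert"], size=n),
    "pretest": rng.normal(60, 12, size=n).clip(0, 100),
})
# Synthetic post-test scores so the example runs end to end.
df["posttest"] = (df["pretest"] + rng.normal(10, 8, size=n)).clip(0, 100)

# Improvement score: post-test minus pretest.
df["improvement"] = df["posttest"] - df["pretest"]

# One-way ANOVA across the three self-rated user categories.
groups = [g["improvement"].to_numpy() for _, g in df.groupby("user_category")]
f_stat, p_value = stats.f_oneway(*groups)

print(df.groupby("user_category")["improvement"].mean().round(1))
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.3f}")
```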