71. A Mixed-Method Investigation of Common Assessments Within a Suburban Secondary School
Irvin, Matthew. 01 December 2016.

The purpose of this mixed-method case study of the continued implementation of common assessments developed within Professional Learning Communities (PLCs) was to investigate possible relationships between teacher collaboration, common assessments, and End of Course (EOC) assessments. The researcher investigated the perceptions of teachers and administrators in a Midwest secondary setting on how common assessment development and utilization affect the culture of teaching and data-driven decision making.

The information from this study will provide the researched school district, as well as others, with insights into the implementation of PLCs and, specifically, the development and utilization of common assessments. In order to evaluate student learning in a classroom setting, the state of Missouri piloted Student Learning Objectives (SLOs) in public schools in the 2016-2017 school year. Common assessments are a staple of the SLO process, fostering collaborative use of assessment results and data-informed instruction to address student learning outcomes. Data collection spanned each of the EOC-assessed academic departments: the researcher surveyed teachers and interviewed supervising principals and participating teachers. To evaluate common assessments, the researcher collected student achievement data from SLO pre-assessments and EOC scores during the 2015-2016 school year. The study used the Pearson product-moment correlation coefficient to determine the strength of the relationship between the two measures.

Through evaluating common assessment utilization, this study intended to identify modifications needed in common assessments and accompanying practices in the school's PLC setting. Quantitative analysis of common assessment scores and qualitative data from surveys and interviews showed that the Government and English PLCs revealed a relationship between their instruction and the corresponding assessments; the Algebra PLC showed a modest relationship, while the Biology PLC failed to connect classroom instruction to its assessments. Through qualitative data analysis, the researcher determined a need for continual professional development around assessment and data literacy to better support teachers as accountability for SLO implementation increases in future school years. Further, the implications of the study could assist schools in implementing SLOs and in the ancillary areas of assessment, teacher collaboration, and data use for school advancement and improved student outcomes.
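
The abstract names the Pearson product-moment correlation but does not show the computation; as a minimal sketch (assuming paired SLO pre-assessment and EOC scores per student, with hypothetical column and file names), the correlation could be computed like this:

```python
# Hedged sketch: hypothetical data layout, not the study's actual dataset.
import pandas as pd
from scipy import stats

# Assumed columns: one row per student, with an SLO pre-assessment score
# and an End of Course (EOC) score for the same subject.
scores = pd.read_csv("slo_eoc_scores.csv")  # hypothetical file

r, p_value = stats.pearsonr(scores["slo_pre"], scores["eoc"])
print(f"Pearson r = {r:.3f}, p = {p_value:.4f}")

# Rough interpretation guide (Cohen's conventions): |r| around 0.1 is small,
# 0.3 moderate, 0.5 large; the sign indicates the direction of the relationship.
```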

72. Departmentalized Classroom Environments Versus Traditional Classroom Environments in Second Through Fourth Grades: A Quantitative Analysis
Ray, Staci Janelle. 13 April 2017.

Since No Child Left Behind was introduced, kindergarten through 12th-grade educators have seen a dramatic increase in accountability, rigor of standards, and responsibilities in the classroom (New America Foundation, 2015). In order to meet the increased demands of federal education regulations in second through fourth grades, many administrators are looking for alternative methods to ensure student success (Gewertz, 2014). Departmentalization is one of those alternative methods (Jacobs, 2014). Many educators believe departmentalization yields benefits (Chan & Jarman, 2004); however, historical research has contradicted this view (American Association of School Administrators, 1965). Given the demands of today's education standards, the connection, if any, between student success and departmentalization must be determined. This study was designed to determine whether there is a statistically significant difference in student success metrics between students in second through fourth grades in traditional classrooms and students in departmentalized classrooms. In this study, student success metrics included raw scores, percentile scores, and grade-level averages on norm-referenced tests; these metrics are used in Arkansas to determine federal and state funding eligibility (New America Foundation, 2015). The statistical tests used in this study yielded inconsistent results as to whether a statistically significant difference exists between traditional and departmentalized classroom environments in second through fourth grades. Factors other than classroom environment, such as teacher training, principal leadership, technology, and parent involvement, may have affected student achievement (Buabeng-Andoh, 2012; Sebastian & Allensworth, 2012).
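
The abstract does not name the specific statistical tests used; purely as a hedged illustration of one common way to compare two classroom environments on a success metric (with simulated, made-up scores), an independent-samples comparison might look like this:

```python
# Hedged illustration only: the study's actual tests and data are not specified in the abstract.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical norm-referenced raw scores for the two classroom environments.
traditional = rng.normal(loc=200, scale=15, size=60)
departmentalized = rng.normal(loc=203, scale=15, size=60)

# Welch's t-test does not assume equal variances between the two groups.
t_stat, p_value = stats.ttest_ind(traditional, departmentalized, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```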

73. Teacher and Administrator Perceptions of One-to-One Technology Device Implementation
Pratt, Stewart F. 13 April 2017.

The influence of technology on society shows little sign of diminishing (Puybaraud, 2012). Increased capabilities and the affordability of technology devices have brought a resurgence of one-to-one device implementation in schools (Dawson, 2016). This qualitative study was designed to elicit the perceptions of administrators and teachers on one-to-one device implementation. Marc Prensky's (2001) premise that students are digital natives embedded in media- and digital device-rich environments provided the conceptual framework for this study. Furthermore, Prensky (2001) proposed that modern students learn, conceptualize, and respond differently than previous generations. The participants in this study represented six southwest Missouri school districts with student populations of 300-2,500 in grades 6-12 that had undergone one-to-one device implementation within the last five years. Data were gathered from the responses of eight teachers, 11 principals, and five superintendents. Perceptions of principals and teachers in school districts implementing one-to-one devices were gathered during the first phase of data collection. These data were transcribed and analyzed for key words and phrases, as well as common themes. Then, during the second phase of data collection, an electronic questionnaire was used to gather feedback from participating superintendents. The following findings emerged from this study: the need for appropriate time, the importance of key personnel, shifts in teaching, and shifts in learning through the one-to-one implementation process.

74. Conceptualizations and uses of the Pennsylvania Framework for Leadership in the practices of secondary school principals
Nolt, Dwight E. 16 November 2016.

In 2010, the state of Pennsylvania began the work of creating and adopting a statewide principal effectiveness plan that mirrored the framework established for the evaluation of teachers. Backed by a series of assumptions about the power of an assessment or evaluation tool to increase the effectiveness of school leaders, a team of educators at the state level reviewed plans from numerous states and districts, as well as the widely recognized VAL-ED school leadership evaluation plan, to inform the creation of a plan tailored to Pennsylvania school leaders.

The growing focus on evaluation of school leaders was fueled in part by a disconnect between overwhelmingly positive principal evaluations and standardized state assessment scores for student achievement that indicated a disproportionate percentage of "failing" schools. A growing body of research has explored the influence of principal leadership on student performance, as well as theoretical frameworks for effective principal evaluation plans. Less prevalent was research on the influence of an evaluation plan in guiding, changing, or improving the practices of school leaders.

In the 2012-13 school year, over 200 school districts, charter schools, Career and Technology Centers, and intermediate units in Pennsylvania agreed to implement the Principal Effectiveness Plan (PEP), later called the Pennsylvania Framework for Leadership (PFL), for the possible evaluation of up to 1,900 school leaders in over 1,300 individual school sites. This research was designed specifically to study the influence of the pilot year of the Pennsylvania Framework for Leadership on a group of secondary school principals in Pennsylvania by exploring how the principals conceptualized the uses of the plan in their daily practices.

The study explored qualitative data gathered through interviews with 17 secondary principals, a representative sample drawn from the 117 secondary principals who completed the pilot process and were included in the data set of 484 principal reports submitted to the Pennsylvania Department of Education at the end of the pilot year. In addition, survey data from PDE were used to inform the construction of the interview protocol. A researcher journal and memos were also considered (Maxwell, 2005, pp. 96, 110; Miles & Huberman, 1994, p. 72).

75. The role of leadership in using data to inform instruction: A case study
Coaloa, Debra L. 16 February 2017.

Data use is proliferating in schools as a tool to inform instructional improvement. Teacher evaluation is increasingly viewed as an important data source and mechanism in this effort. This qualitative case study sought to examine how data generated from teacher evaluation and other teacher learning experiences worked in conjunction to improve practice. More specifically, this study examined the role of leadership in using data for the purpose of increasing teacher knowledge and skills. Spanning a four-month period, the study focused on eight English teachers, a principal, and two assistant principals in one high school that was implementing a new teacher evaluation process and was immersed in data use for the purpose of improving practice. Findings revealed that the principal was not well equipped to build her staff's capacity to use data to examine their pedagogy in a way that would foster instructional innovation; her efforts resulted in little more than minor tweaks to practice. Likewise, she did not have a clear approach to improving instruction. Her emphasis was on initiating multiple, disconnected learning experiences that were not consistently aligned; they included neither an explanation of why and how these experiences would enhance instruction nor an expectation of follow-through to ensure that new learning would take hold. Professional development was mostly delivered in a top-down fashion that excluded teacher voice. Finally, the principal responded to external accountability demands by buffering her teachers from their cumbersome, unpleasant aspects while simultaneously using them as leverage to pursue instructional improvement. Ultimately, despite good intentions, the principal was not well positioned to promote the use of data as a tool for teacher learning.

76. Education after Expulsion: A Program Evaluation
Stricker, Scott. 17 April 2019.

This program evaluation seeks to determine whether a new expulsion program established in a suburban school district in the Mountain West region of the United States was successful in its goals of reengaging expelled students and preparing them for a successful transition back to a traditional school. The new program was designed in contrast to the computer-based programs of previous years and adopted a social-emotional focus to increase student resiliency. Quantitative student data, as well as qualitative data from student focus groups, were analyzed to gauge program effectiveness. Findings indicate that students earned significantly more credits and had significantly fewer absences than students from the previous year's program. Focus groups suggested that a warm, welcoming environment staffed by caring, supportive adults was critical to increasing student engagement. Additionally, direct instruction and practice of social-emotional and resiliency skills contributed to a sense of preparedness to return to a traditional school environment.

77. Mixed-Method Study Exploring International Students' Career Readiness at a Four-Year Private University in the Midwest
Bonnand, Chloe. 25 April 2019.

This study explored international students' career readiness at a four-year, private university in the Midwest. To measure career readiness, the researcher examined three factors: financial support, academic major, and country of origin. International students graduating in May 2018 received a survey from the researcher on career readiness; a total of 38 students completed it. The final question of the survey was an invitation to participate in an individual interview and/or a focus group. The qualitative data from the survey showed that financial support affected international students' career readiness: students with financial support were more prepared to enter the workforce than students without it. Academic major also affected career readiness. Business majors pointed out that, because of the numerous specializations within the field of Business, it was difficult to demonstrate proficiency in the one area an employer needed. Students pursuing other degrees, such as Science and Education, had a clear idea of the steps to take after graduation and of what employers were looking for in new graduates. Country of origin did not have an impact on career readiness, as all international students pointed out the difficulties and uncertainties they faced after graduation due to immigration restrictions on student and work visas in the United States.

78. Essays on the Economics of Community College Students' Academic and Labor Market Success
Dadgar, Mina. January 2012.

Most students who enter a community college with the stated intention of attaining a credential or transferring to a four-year university leave without accomplishing either of those goals (National Center for Education Statistics, 2011). This dissertation contributes to the growing economic literature that seeks to understand the conditions and policies that can positively influence community college students' academic and labor market success.

In the first essay, I examine the effectiveness of remediation for students identified as having the lowest skills in mathematics. Descriptively, while students assigned to remediation tend to have poor outcomes overall, students assigned to the lowest levels of remedial math have the worst outcomes of all. Using data from Virginia's 2004 cohort of students and a regression discontinuity design, I find that students assigned to the third-lowest level of remedial math would have benefited had they been able to skip that remedial course.

In the second essay, I use administrative data to examine how working while taking classes affects community college students' academic outcomes. I use two different identification strategies: an individual fixed effects strategy that takes advantage of the quarterly nature of the data to control for unobserved, time-invariant differences among students, and an instrumental variable difference-in-differences (IV-DID) framework that exploits the exogenous supply of retail jobs during the winter holidays. Using the IV-DID framework, I compare academic outcomes during the fall versus the winter quarter for students who are more likely to work in retail against students who are less likely to do so, based on pre-enrollment association with retail jobs. I find small negative effects of working on GPA and possibly positive effects of working on credit accumulation.

Finally, in the third essay, Madeline J. Weiss and I examine the returns to community college credentials using administrative data. Using an individual fixed effects identification strategy that compares wage trajectories across individuals, we find positive and substantial wage returns to associate degrees and long-term certificates, and no wage returns to short-term certificates, over and above the wage increases for students who enrolled and earned some credits but never earned a credential or transferred. We also find that associate degrees tend to be awarded in low-return fields, but that in almost any given field the returns to associate degrees are higher than the returns to certificates.
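
The first essay names a regression discontinuity design around remedial-math placement but gives no implementation detail; as a minimal sketch under stated assumptions (hypothetical variable and file names, a local linear specification, and an arbitrary bandwidth), an RD estimate of the effect of falling above the placement cutoff could be set up like this:

```python
# Hedged sketch of a sharp regression discontinuity estimate; the dissertation's
# actual specification, bandwidth, outcome, and cutoff values are not given in the abstract.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("placement_scores.csv")  # hypothetical data: one row per student

CUTOFF = 0        # assumed: placement score centered at the remediation cutoff
BANDWIDTH = 10    # assumed: local window around the cutoff

local = df[df["score"].between(CUTOFF - BANDWIDTH, CUTOFF + BANDWIDTH)].copy()
local["above"] = (local["score"] >= CUTOFF).astype(int)  # 1 = placed above the cutoff
local["centered"] = local["score"] - CUTOFF

# Local linear regression with separate slopes on each side of the cutoff;
# the coefficient on `above` is the estimated jump in credits earned at the cutoff.
model = smf.ols(
    "credits_earned ~ above + centered + above:centered", data=local
).fit(cov_type="HC1")
print(model.params["above"], model.bse["above"])
```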

79. Identifying Effective Education Interventions in Sub-Saharan Africa: A meta-analysis of rigorous impact evaluations
Conn, Katharine M. January 2014.

The aim of this dissertation is to identify effective educational interventions in Sub-Saharan Africa that have an impact on student learning. It is the first meta-analysis in the field of education conducted for Sub-Saharan Africa. The dissertation takes an in-depth look at twelve different types of education interventions or programs and attempts not only to present analytics on their relative effectiveness but also to explore why certain interventions seem to be more effective than others. After a systematic literature review, I combine 56 articles (containing 66 separate experiments, 83 treatment arms, and 420 effect size estimates) and use random-effects meta-analytic techniques to (a) evaluate the relative impact of different types of interventions and (b) explain variation in effect sizes within and across intervention types.

When I examine the relative pooled effect sizes of all twelve intervention areas, I find that interventions in pedagogical methods (changes in instructional techniques) have a higher pooled effect size on achievement outcomes than the other eleven intervention types in the full sample (e.g., school management programs, school supplies interventions, or interventions that change class size or composition). The pooled effect size associated with these pedagogical interventions is 0.918 standard deviations in the full sample (SE = 0.314, df = 15.1, p = 0.01), 0.566 in the sample excluding outliers and including only randomized controlled trials (SE = 0.194, df = 11, p = 0.01), and 0.228 in a sample that includes only the highest-quality studies (SE = 0.078, df = 5.2, p = 0.03). These findings are robust to a number of moderating factors. Using meta-regression, I find that, on average, interventions in pedagogical methods have an effect size more than 0.30 standard deviations greater (significant at the 5% level) than all other intervention areas combined, even after controlling for multiple study-level and intervention-level variables. Beyond this average effect, I show that studies employing adaptive instruction and teacher coaching techniques are particularly effective. Further, while studies that provide health treatments or school meals have, on average, the lowest pooled effect size, I show that when these studies are analyzed using cognitive assessments (tests of memory and attention), health treatments actually produce a relatively large pooled effect size of 0.176 standard deviations (SE = 0.028, df = 2.18); this is particularly true of studies that either prevent or treat malaria.

In addition, this meta-analysis examines the state of current education impact evaluation research in Sub-Saharan Africa and highlights research gaps as well as differences in study design, methodology, and reporting of metrics by academic field. I find that the bulk of the research in this area comes from the field of economics (62%), followed by education (23%) and public health (15%). Further, the majority of this research has been conducted in a set of six countries: Kenya, Nigeria, South Africa, Uganda, Burkina Faso, and Madagascar, while rigorous evaluations of education programs have never taken place in others. Moreover, the topics currently under rigorous study are not necessarily representative of the major issues facing many Sub-Saharan African school systems today. For example, there are no impact evaluations of multi-grade or multi-shift teaching and only one evaluation of a bilingual education program.

This meta-analysis thus recommends a shift in the impact evaluation research agenda to include both a broader geographic and topical focus and an increased emphasis on improvements in pedagogical methods, without which other interventions may not reach their maximum potential impact.
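
The abstract reports pooled effect sizes with standard errors from random-effects models but does not show the estimator; as a minimal sketch of one standard approach (DerSimonian-Laird random-effects pooling, which may differ from the dissertation's exact estimator, with made-up effect sizes), the pooling step could look like this:

```python
# Hedged sketch: DerSimonian-Laird random-effects pooling with illustrative numbers,
# not the dissertation's data and not necessarily its exact estimator.
import numpy as np

# Hypothetical study-level standardized effect sizes and their sampling variances.
effects = np.array([0.45, 0.92, 0.30, 1.10, 0.60])
variances = np.array([0.04, 0.09, 0.02, 0.12, 0.05])

# Fixed-effect weights and the Q statistic for between-study heterogeneity.
w = 1.0 / variances
fe_mean = np.sum(w * effects) / np.sum(w)
q = np.sum(w * (effects - fe_mean) ** 2)
df = len(effects) - 1

# DerSimonian-Laird estimate of the between-study variance (tau^2), floored at zero.
c = np.sum(w) - np.sum(w**2) / np.sum(w)
tau2 = max(0.0, (q - df) / c)

# Random-effects weights, pooled effect size, and its standard error.
w_re = 1.0 / (variances + tau2)
pooled = np.sum(w_re * effects) / np.sum(w_re)
se = np.sqrt(1.0 / np.sum(w_re))
print(f"pooled d = {pooled:.3f}, SE = {se:.3f}, tau^2 = {tau2:.3f}")
```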

80. A Mixed Method Analysis on the Relationship between Engagement, Achievement, Satisfaction, and Syllabus Design in a Private Midwest University
Kohler, Hannah. 31 January 2019.

Background: Online learning is now at the forefront of education, making a college degree more accessible than ever before. With online enrollments at an all-time high, quality instruction is essential to the sustainability of the institution and ultimately affects student enrollment and retention. Research exists on the effectiveness of syllabus design and on the use of inventories, but the gap in the existing literature lies in combining the two.

Purpose: The purpose of this mixed methods study was to analyze possible relationships between syllabus design and student achievement, student engagement, student satisfaction, faculty instruction, and faculty satisfaction.

Research Design: An Online Syllabus Inventory (OSI) was developed as an evaluative and instructional tool and served as the independent variable for syllabus design between the control and experimental courses.

Data Collection and Analysis: This mixed methods study synthesized quantitative and qualitative data gathered from 28 online courses and 379 students. Data sources included student analytics from a learning management system, course evaluations from a student information system, and feedback from study participants.

Findings: In the domain of student achievement, a significant difference between control and experimental courses was found for two courses. In the domain of student engagement, a significant difference was found in six courses. Among the sample, course-level factors were found to be significantly different in the domain of student satisfaction. No significant difference was found among instructor-level factors.