41. Perceived Lack of Teacher Empathy and Remedial Classroom Conflicts: A Phenomenological Study. Young, Henry W., Jr., 19 January 2017.
In light of earlier research pertaining to empathy, it is reasonable to believe that certain teachers feel empathy toward students in remedial classrooms. It is also evident that teacher empathy is something students relish. However, a perceived lack of teacher empathy among students in remedial classes is a concern. The general problem addressed in the study was the effect of teachers’ lack of empathy on remedial college students’ perceptions of teacher–student conflict. The specific problem addressed in the study was the limited research on the impact of teachers’ empathy on remedial students’ perceptions. The purposes of the study were to understand remedial students’ perceptions of teachers’ empathy and to assess the perceived impact of a lack of teacher empathy on teacher–student conflict. Participants were 10 students enrolled in remedial English classes at Cuyahoga Community College in Cleveland, Ohio. The phenomenological study explored the lived experiences and perceptions of these students in developmental/remedial classes. Students participated in face-to-face recorded interviews, and the data were analyzed using NVivo software. Four main themes and several subthemes emerged from the data. Recommendations were offered to help facilitate resolution of teacher–student conflicts that may emerge from a perceived lack of teacher empathy.
42. Evaluating the validity of MCAS scores as an indicator of teacher effectiveness. Copella, Jenna M., 01 January 2013.
The Massachusetts Department of Elementary and Secondary Education (DESE) has implemented an Educator Evaluation Framework that requires that MCAS scores be used as a significant indicator of teacher effectiveness when available. This decision has implications for thousands of Massachusetts public school teachers. To date, DESE has not provided evidence to support the validity of using MCAS scores to make interpretations about teacher effectiveness. A review of the literature reveals much variation in the degree to which teachers use state-adopted content standards to plan instruction, and these findings warrant investigation into teacher practice among Massachusetts public school teachers. The research questions for this study were: 1.) Are there variations in the degree to which Massachusetts public school teachers use the Curriculum Frameworks to plan Math instruction?; and 2.) Is the MCAS, as an instrument, sensitive enough to reflect variations in teacher practice in students' scores? A survey of Massachusetts public school principals and Math teachers, grades three through eight, investigated these questions. Survey results revealed that Massachusetts teachers use the Curriculum Frameworks to plan instruction to varying degrees. Survey results also suggest a lack of relationship between teacher practice related to the use of the Curriculum Frameworks and student MCAS scores. These findings suggest MCAS scores may not be an appropriate indicator of teacher effectiveness; however, limitations of the study call for further investigation into these questions.
43. Measuring teacher effectiveness using student test scores. Soto, Amanda Corby, 01 January 2013.
Comparisons within states of school performance or student growth, as well as teacher effectiveness, have become commonplace. Since the advent of the Growth Model Pilot Program in 2005, many states have adopted growth models for both evaluative (to measure teacher performance or for accountability) and formative (to guide instructional practice and curricular or programmatic choices) purposes. Growth model data, as applied to school accountability and teacher evaluation, are generally used as a mechanism to determine whether teachers and schools are functioning to move students toward curricular proficiency and mastery. Teacher evaluation based on growth data is an increasingly popular practice in the states, and the launch of cross-state assessment consortia in 2014 will provide data that could support this approach to teacher evaluation on a larger scale. For the first time, students in consortium member states will take shared assessments and be held accountable for shared curricular standards, setting the stage to quantify and compare teacher effectiveness based on student test scores across states. States' voluntary adoption of the Common Core State Standards and participation in assessment consortia speak to a new level of support for collaboration in the interest of improved student achievement. The possibility of using these data to build effectiveness and growth models that cross state lines is appealing, as states and schools might be interested in demonstrating their progress toward full student proficiency based on the CCSS. By utilizing consortium assessment data in place of within-state assessment data for teacher evaluation, it would be possible to describe the performance of one state's teachers in reference to the performance of their own students, teachers in other states, and the consortium as a whole. To examine what might happen if states adopt a cross-state evaluation model, the consistency of teacher effectiveness rankings based on the Student Growth Percentile (SGP) model and a value-added model is compared for teachers in two jurisdictions, Massachusetts and Washington, D.C., both members of the Partnership for Assessment of Readiness for College and Careers (PARCC) assessment consortium. The teachers are first evaluated based on their students within their state, and again when that state is situated within a sample representing students in the other member states. The purpose of the current study is to explore the reliability of teacher effectiveness classifications, as well as the validity of inferences made from student test scores to guide teacher evaluation. The results indicate that two of the models currently in use, SGPs and a covariate-adjusted value-added model, do not provide particularly reliable estimates of teacher effectiveness, with more than half of the teachers being inconsistently classified in the consortium setting. The validity of the model inferences is also called into question, as neither model demonstrates a strong correlation with student test score change as estimated by a value table. The results are outlined and discussed in relation to each model's reliability and validity, along with the implications for the use of these models in making high-stakes decisions about teacher performance.
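
[Editor's note: the following is an illustrative Python sketch only, not the dissertation's models or data. It simulates hypothetical scores and compares teacher classifications from a simple covariate-adjusted value-added proxy and a growth-percentile proxy; all parameters and names are assumed for illustration.]

# Toy comparison of two teacher-effectiveness proxies on simulated scores.
import numpy as np

rng = np.random.default_rng(0)
n_teachers, n_students = 50, 30
true_effect = rng.normal(0, 2, n_teachers)                    # hypothetical teacher effects
prior = rng.normal(500, 50, (n_teachers, n_students))         # prior-year scores
post = prior + 20 + true_effect[:, None] + rng.normal(0, 25, (n_teachers, n_students))

# Covariate-adjusted value-added proxy: regress current-year scores on
# prior-year scores, then average each teacher's student residuals.
slope, intercept = np.polyfit(prior.ravel(), post.ravel(), 1)
vam = (post - (intercept + slope * prior)).mean(axis=1)

# Growth-percentile proxy: percentile rank of each student's gain, then
# the median percentile of each teacher's students (a simplification of SGPs).
gains = (post - prior).ravel()
percentiles = np.argsort(np.argsort(gains)) / gains.size
sgp = np.median(percentiles.reshape(n_teachers, n_students), axis=1)

# Classify teachers into thirds under each metric and check agreement.
def thirds(x):
    return np.digitize(x, np.quantile(x, [1/3, 2/3]))

agreement = np.mean(thirds(vam) == thirds(sgp))
print(f"Share of teachers classified consistently by both proxies: {agreement:.2f}")

Under this kind of check, a low agreement rate would mirror the study's finding that a large share of teachers are classified inconsistently across models or samples.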
44. A case study of the manageability and utility of assessment in three New Zealand primary schools 1993-2006: a thesis submitted to the Victoria University of Wellington in fulfilment of the requirements for the degree of Doctor of Philosophy in Education. Young, John Richard, January 2009.
Thesis (Ph.D.), Victoria University of Wellington, 2009. Includes bibliographical references.
45. Further education college quality systems: a framework of design principles for the development of teaching quality improvement processes. Albury, Steven William, January 2014.
This research is a case study of the quality improvement process in an English further education college. It examines the way that staff involved in the design and operation of the quality system shape the process in a part of the education sector that struggles with issues of performance. The case is placed in the context of an unstable policy environment, in which further education colleges have been subjected to frequent bouts of government intervention and a funding regime that is unfavourable compared to secondary schools and universities. The contribution to knowledge of this thesis is that it addresses an under-researched area of further education by viewing the quality process from the perspective of the governors, managers and professional staff responsible for its design and operation. Much attention has been given to the teaching staff who experience the quality process, or to macro studies focused on outputs across the sector, but less has been paid to the governors, senior staff and quality teams who assess teaching and learning in colleges. The data for the case study were gathered over a two-year period between 2010 and 2012 and include interviews with college staff, senior staff from OFSTED and the Department for Business, Innovation and Skills, and staff from a second college, used to help verify the findings. In addition, documentation for the quality system was gathered, including inspection documents and policy documents. The data were analysed to surface traits of social and organisational practice that address the problem of operating a quality system in an environment that is highly resistant to systemisation and predictability. The findings are presented as 'fuzzy' generalisations supplemented by guidance in the form of design principles. The thesis provides an empirically grounded description of key elements of the relationships and the surrounding sociotechnical system found in the case. The design principles augment the case study and provide guidance on how a combination of trust relationships, resilience of processes to disruption and flexibility of application provides a background for the quality improvement process at Stretchford College, which was rated 'Outstanding' at the time of the research.