61

Student Art Assessments, Teacher Evaluations, and Job Satisfaction among Art Teachers

Quiñones, Agar V. 06 April 2018 (has links)
The purpose of this qualitative case study was to explore whether district-created student art assessments and teacher evaluations influenced the job satisfaction of art teachers, in light of increased teacher turnover and teacher shortages. The experiences, beliefs, and perceptions of the art teachers were critical in establishing whether the increased implementation of accountability measures added to the stress art teachers experienced and affected their job satisfaction. The sample for this case study comprised 10 male and female art teachers who had been certified to teach art in the State of Florida for at least five years and were currently or formerly employed in the Central Florida region. Initial participants were invited by email, and subsequent participants were recruited through the snowball method. Data were collected as audio and visual recordings of semi-structured interviews and analyzed using NVivo 11 Pro (QSR International, 2017) software to uncover themes, patterns, and critical phrases shared by participants. The five themes were: (a) teachers experience a greater level of stress from student art assessments and teacher evaluations than ever before; (b) there is considerable confusion and a lack of information about the purpose, procedures, and calculation of student art assessments and VAM scores; (c) large class sizes and overloaded schedules add to the already heightened stress level of art teachers; (d) a supportive, understanding, and appreciative leadership team at each school has a positive impact on an art teacher; and (e) a teacher evaluation system that is applicable and fitting for performing arts courses is a necessity within the district. Findings from this qualitative study expose the experiences, perceptions, and challenges art teachers have encountered in relation to district-created student art assessments and teacher evaluations while teaching in the Central Florida region.
62

Universal Design for Learning: A New Clinical Practice Assessment Tool Toward Creating Access and Equity for ALL Students

Fogarty, Diane 31 October 2017 (has links)
To examine the extent to which general education pre-service teachers in a teacher preparation program at a private institution of higher education know and understand the principles of Universal Design for Learning (UDL), expert focus groups were conducted. General education program syllabi were examined for UDL content and found to be lacking. Pre-service teachers' videotaped lessons were also reviewed and found inadequate in demonstrating knowledge and understanding of UDL principles. Focus groups composed of university fieldwork instructors and teacher education experts were asked to review and give feedback on the clinical observation tool currently in use. Feedback indicated that the current tool was insufficient for measuring pre-service teachers' knowledge and understanding of UDL; further, it was not anchored to the UDL framework or to any other teaching framework. To contribute to the field of teacher preparation, a new clinical practice tool grounded in Universal Design for Learning was created.
63

The Impact of High-Stakes Accountability on Instructional Leadership and Data-Driven Decision-Making

Schuler, Kristina K. 21 July 2017 (has links)
This qualitative, multi-case study was designed to examine the impact that high-stakes accountability and data-driven decision making have had on administrators and teacher leaders. Serving as the conceptual framework, instructional leadership theory is defined as a multitude of relationships, behaviors, and responsibilities that directly impact student achievement (O'Donnell & White, 2005; Bottoms & Fry, 2009). The researcher used instructional leadership theory as a lens to explore participants' thoughts, feelings, and perceptions with respect to the implementation of these tenets (Mertens, 2005). The focus of this study is how administrators and educators, held directly responsible for students' performance under the rigors of accountability imposed by NCLB, have had to turn to new approaches such as data-driven decision making (King, 2002) and quick-paced instruction to meet the needs of students.

A qualitative multi-case study approach allowed the researcher to examine how principals and teachers were affected by the tenets of the No Child Left Behind Act and high-stakes accountability (Creswell, 2007). For this study, four single cases (i.e., individual subjects) and four focus groups (each containing 5-7 participants) were selected to "capture multiple realities" (Hancock & Algozzine, 2006, p. 72) and open-ended, emerging data (Creswell, 2003). Through data analysis, three themes emerged: (1) Changing Culture, with a subtheme of Collaboration; (2) Changing Evidence, with subthemes of Data-Driven Input and Purposeful Goals; and (3) Increased Rigor, with subthemes of Aggressive Pace and Performance and Individualized Instruction. These themes provide an understanding of the impact high-stakes accountability and data-driven decision making have had on public school principals and educators.
64

At-Risk Students: An Analysis of School Improvement Grants in the State of Missouri

Witherspoon, Anissa 05 December 2017 (has links)
The educational system in the United States continues to pose many challenges for law and policy makers. Many of these challenges can be traced back to two landmark cases, Plessy v. Ferguson and Brown v. Board of Education of Topeka. And while the U.S. Department of Education has developed programs to address many of these issues, the costs must be weighed against the benefits. This study examined the impact of federally funded School Improvement Grants (SIGs) awarded to elementary, middle, and high schools across the state of Missouri from 2010 to 2015 on retention rates, graduation rates, and test scores. The state of Missouri identified 56 schools as low-performing and therefore eligible to receive the grants. Specifically, this study examined whether the amount of SIG funds allocated per student was associated with increases in achievement scores (mathematics and English), graduation rates, and dropout rates. Using bivariate regression, the findings showed a statistically significant relationship only between the amount of SIG funds allocated per student and English scores. Surprisingly, as the amount of funds allocated per student increased, English scores decreased. However, in a multivariate regression, mathematics scores significantly increased as the amount of SIG funds per student increased, while the English relationship remained significant in the same direction. This study also analyzed the relationship between the amount of SIG funds allocated per student and median household income during the first year the funds were disseminated. Because special attention was given to the educational achievement gap and race/ethnicity, this study also compared Black and White student populations. The results showed that as the population of Black students increased, mathematics and English scores decreased. Furthermore, as the population of Black students increased, the amount of SIG funds allocated per student decreased. This suggests a need to examine how funds were allocated and what other issues may have confounded the relationships between SIG funds and the major variables presented in this research.
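As a rough illustration of the bivariate and multivariate regression approach this abstract describes, the sketch below fits two ordinary least squares models with statsmodels. The file name and column names (sig_per_student, english_score, math_score, pct_black, median_income) are hypothetical placeholders, not the study's actual data or variables.

```python
# Minimal sketch of the regression design described in the abstract (assumed variable names).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("missouri_sig_schools.csv")  # hypothetical data file

# Bivariate model: SIG dollars per student predicting English scores
bivariate = smf.ols("english_score ~ sig_per_student", data=df).fit()

# Multivariate model: adding demographic and income controls
multivariate = smf.ols(
    "math_score ~ sig_per_student + pct_black + median_income", data=df
).fit()

print(bivariate.summary())
print(multivariate.summary())
```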
65

A Mixed Methods Bounded Case Study: Data-Driven Decision Making within Professional Learning Communities for Response to Intervention

Rodriguez, Gabriel R. 21 December 2017 (has links)
Rodriguez, Gabriel R. Bachelor of Science, University of Southwestern Louisiana, Spring 1999; Bachelor of Arts, University of Louisiana at Lafayette, Spring 2005; Master of Education in Educational Leadership, University of Louisiana at Lafayette, Fall 2007; Doctor of Education, University of Louisiana at Lafayette, Spring 2017. Major: Educational Leadership. Title of Dissertation: A Mixed Methods Bounded Case Study: Data-Driven Decision Making Within Professional Learning Communities for Response to Intervention. Dissertation Director: Dr. Dianne F. Olivier. Pages in Dissertation: 206; Words in Abstract: 196.

A growing number of schools are implementing PLCs to address school improvement; staff engage with data to identify student needs and determine instructional interventions. This is a starting point for the iterative process of learning for the teacher in order to increase student learning (Hord & Sommers, 2008). The iterative process of data-driven decision making within PLCs may reduce true PLCs to simplified data meetings, whereas a professional learning community is more accurately described as a process (Jessie, 2007). The purpose of this study was to examine how data are used within the professional learning community process for Response to Intervention (RTI). Thus, the overarching research question guiding this study is: to what extent do teachers use data-driven decision making in Professional Learning Communities for Response to Intervention? To develop rich descriptions of data-driven decision making, PLCs, and Response to Intervention, one-on-one, face-to-face interviews were conducted with each school principal in the district. Additionally, focus group interviews with teachers at each school provided rich descriptions related to the three key constructs. Perceptions of Professional Learning Communities were also collected through a quantitative survey to describe the district's engagement in PLCs.
66

The use of item response theory in developing a Phonics Diagnostic Inventory

Pirani-McGurl, Cynthia A 01 January 2009 (has links)
This study was conducted to investigate the reliability of the Phonics Diagnostic Inventory (PDI), a curriculum-based, specific-skill mastery measurement tool for diagnosing and informing the treatment of decoding weaknesses. First, a modified one-parameter item response theory model was employed to identify the properties of potential items for inclusion in each subtest, which then informed the construction of subtests from the most reliable items. Second, the properties of each subtest were estimated and examined; the test information and test characteristic curves (TCCs) for the newly developed forms are reported. Finally, the accuracy and sensitivity of PDI cut scores for each subtest were examined. Specifically, based upon established cut scores, the accuracy with which students would be identified as in need of support, or not in need of support, was investigated. The PDI generated from this research was found to diagnose specific decoding deficits in mid-year second-grade students more reliably than the initially constructed forms. The research also indicates that further examination of cut scores is warranted to maximize decision consistency. Implications for future studies are discussed.
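For readers unfamiliar with the quantities named here, the sketch below illustrates, under a plain one-parameter (Rasch) model with invented item difficulties, how a test characteristic curve and test information function are computed; it is a generic illustration, not the PDI's actual items or the study's modified model.

```python
import numpy as np

def rasch_prob(theta, b):
    """One-parameter (Rasch) probability of a correct response."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

# Illustrative item difficulties for one subtest (not the PDI's actual values)
difficulties = np.array([-1.2, -0.4, 0.0, 0.6, 1.3])
theta_grid = np.linspace(-3, 3, 61)

# Test characteristic curve: expected raw score at each ability level
p = rasch_prob(theta_grid[:, None], difficulties[None, :])
tcc = p.sum(axis=1)

# Test information under the Rasch model: sum over items of p * (1 - p)
info = (p * (1 - p)).sum(axis=1)

mid = len(theta_grid) // 2  # grid point at theta ~ 0
print(f"Expected score at theta=0: {tcc[mid]:.2f}; information: {info[mid]:.2f}")
```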
67

Score reporting in teacher certification testing: A review, design, and interview/focus group study

Klesch, Heather S 01 January 2010 (has links)
The reporting of scores on educational tests is at times misunderstood, misinterpreted, and potentially confusing to examinees and other stakeholders who must interpret them. In reporting test results to examinees, there is a need for clarity in the message communicated. As pressure rises for students to demonstrate performance at a certain level, the communication of scores to the public needs to be examined. Although public school student testing is often in the spotlight, this study examines score reporting in teacher certification, which may not have all the complexities of student score reporting but has an equally critical need to communicate scoring information effectively. The purpose of this study was to create multiple teacher certification score reports based on findings in the literature on educational test score reporting, as well as marketing and design principles, and to conduct interviews and focus groups to gather feedback on comprehension of, and preferences for, the designed score reports and results. Different approaches to reporting test scores were used to design score reporting materials for a hypothetical teacher certification examinee who had not passed. Educators and educational testing professionals were convened and interviewed to review the score reports and offer feedback, suggestions, and discussion; the findings are reported in detail. Using the findings, a final model score report was designed and then reviewed with doctoral students in educational measurement. Through this process, some clear patterns and differences arose. Overall, educators and doctoral students wanted as much information as possible, where supported by sound measurement principles: the reporting of raw performance information, and the accommodation of different comprehension styles by presenting performance information in contextual, statistical, and visual forms. After these requests were addressed, two areas still lacked full clarity and direction: the process of converting raw performance to a scaled score (participants wanted more information on this process), and information addressing candidates' weak areas that would direct examinees to materials that could improve their study, understanding, and examination performance.
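The raw-to-scaled-score conversion participants asked about can take many forms; the sketch below shows only the simplest, a linear transformation with invented score ranges, as a generic illustration rather than the conversion used by any actual certification test.

```python
def linear_scale(raw, raw_min=0, raw_max=80, scale_min=100, scale_max=200):
    """Map a raw score onto a reporting scale with a simple linear transformation.
    The ranges here are illustrative, not those of any actual certification test."""
    return scale_min + (raw - raw_min) * (scale_max - scale_min) / (raw_max - raw_min)

print(linear_scale(52))  # 52 of 80 raw points -> scaled score 165.0
```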
68

The development of a phonics diagnostic inventory: Assessment of instrument validity via concurrent and predictive validity techniques and a path model of literacy development

Farrell-Meier, Colleen 01 January 2010 (has links)
Data from instruments that are technically adequate and that inform instruction are not only considered best practice but are legally mandated by laws such as the Individuals with Disabilities Education Improvement Act and No Child Left Behind. The Phonics Diagnostic Inventory (PDI) was designed to assess students' proficiency with specific phonics skills to guide intervention for those at risk for reading difficulties. The purpose of the current study was to extend a prior reliability study (Pirani-McGurl, 2009) and to assess the validity of a subset of PDI items. Concurrent validity between the PDI and the TOWRE and DIBELS oral reading fluency (ORF) measures, and predictive validity between the PDI and the DIBELS ORF measures, were examined. Validity of the PDI was also examined through its inclusion in a path model of literacy development, and an exploratory examination of potential cut scores for the estimated PDI score was conducted to maximize classification accuracy between measures. Results indicated that the modified PDI demonstrated good concurrent and predictive validity with the ORF and TOWRE measures. While the originally proposed path model did not demonstrate good fit, the alternative models proposed showed improved fit indices. Potential cut scores are identified, and the rates of true positives and true negatives are reported. Limitations and future directions are discussed.
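To make the validity and classification quantities concrete, the sketch below uses simulated scores standing in for the PDI and an ORF measure; the data, cut score, and benchmark are invented, and the computation simply shows how a concurrent validity correlation and true-positive/true-negative rates at a candidate cut score might be obtained.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated scores standing in for PDI and ORF (illustrative only)
pdi = rng.normal(50, 10, 200)
orf = 0.8 * pdi + rng.normal(0, 6, 200)

# Concurrent validity: correlation between the two measures
r = np.corrcoef(pdi, orf)[0, 1]

# Classification accuracy at a candidate PDI cut score against an ORF benchmark
pdi_cut, orf_benchmark = 45, 38
at_risk_pdi = pdi < pdi_cut
at_risk_orf = orf < orf_benchmark
true_pos = np.mean(at_risk_pdi & at_risk_orf)
true_neg = np.mean(~at_risk_pdi & ~at_risk_orf)
print(f"r = {r:.2f}, true positives = {true_pos:.2f}, true negatives = {true_neg:.2f}")
```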
69

Using item mapping to evaluate alignment between curriculum and assessment

Kaira, Leah T 01 January 2010 (has links)
There is growing interest in alignment between states' standards and test content, due in part to the accountability requirements of the No Child Left Behind (NCLB) Act of 2001. Among other problems, current alignment methods rely almost entirely on subjective judgment to assess curriculum-assessment alignment. In addition, none of the current alignment models accounts for students' actual performance on the assessment, and there are no consistent criteria for assessing alignment across the various models. Because of these problems, alignment results from different models cannot be compared. This study applied item mapping to student response data from the Massachusetts Adult Proficiency Test (MAPT) for Math and Reading to assess alignment. Item response theory (IRT) was used to locate items on a proficiency scale, and two criterion response probability (RP) values were then applied to map each item to a proficiency category. Item mapping results were compared to item writers' classifications of the items. Chi-square tests, correlations, and logistic regression were used to assess the degree of agreement between the two sets of data. Seven teachers were convened for a one-day meeting to review items that did not map to the intended level and to explain the misalignment. Results show that, in general, there was higher agreement between the SMEs' classifications and the item mapping results at RP50 than at RP67. Higher agreement was also observed for items assessing lower-level cognitive abilities. Item difficulty, cognitive demand, clarity of the item, the vocabulary level of the item relative to the examinees' reading level, and the mathematical concept being assessed were among the suggested reasons for misalignment.
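The role of the RP criterion can be illustrated with a small sketch: under a Rasch model, the ability at which an item of difficulty b reaches response probability RP is theta = b + ln(RP / (1 - RP)), and the item maps to whichever proficiency category contains that theta. The cut points and item difficulty below are invented, not the MAPT's, but the example shows why RP67 tends to map items to higher categories than RP50.

```python
import math

def rp_location(b, rp):
    """Ability at which a Rasch item with difficulty b reaches response probability rp:
    theta = b + ln(rp / (1 - rp))."""
    return b + math.log(rp / (1 - rp))

# Illustrative proficiency-category boundaries on the theta scale (not the MAPT's)
cuts = [-1.0, 0.0, 1.0]
labels = ["Level 1", "Level 2", "Level 3", "Level 4"]

def map_item(b, rp):
    theta = rp_location(b, rp)
    category = sum(theta >= c for c in cuts)  # count boundaries at or below theta
    return labels[category]

print(map_item(b=0.3, rp=0.50))  # theta = 0.30 -> Level 3
print(map_item(b=0.3, rp=0.67))  # theta ~ 1.01 -> Level 4
```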
70

Evaluating IRT- and CTT-based methods of estimating classification consistency and accuracy indices from single administrations

Deng, Nina 01 January 2011 (has links)
Three decision consistency and accuracy (DC/DA) methods were evaluated: the Livingston and Lewis (LL) method, the Lee method, and the Hambleton and Han (HH) method. The purposes of the study were (1) to evaluate the accuracy and robustness of these methods, especially when their assumptions were not well satisfied; (2) to investigate the “true” DC/DA indices under various conditions; and (3) to assess the impact of the choice of reliability estimate on the LL method. Four simulation studies were conducted: Study 1 examined various test lengths; Study 2 focused on local item dependence (LID); Study 3 examined the consequences of IRT model-data misfit; and Study 4 examined the impact of using different scoring metrics. Finally, a real-data study was conducted in which no advantages were given to any models or assumptions. The results showed that LID and model misfit had a negative impact on the “true” DA index and led all selected methods to overestimate it. By contrast, the DC estimates were minimally affected by these factors, although the LL method produced poorer estimates for short tests, and the Lee and HH methods were less robust for tests with a high level of LID. Comparing the selected methods, the Lee and HH methods produced nearly identical results across all conditions, while the HH method offered more flexibility with complex scoring metrics. The LL method was found to be sensitive to the choice of test reliability estimate: with Cronbach's alpha it consistently underestimated DC, while with stratified alpha it functioned noticeably better, with smaller bias and greater robustness across conditions. Lastly, it is hoped that software will soon be available to permit wider use of the HH method; the other methods in the study are already well supported by easy-to-use software.
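The "true" DC/DA indices referred to above can be made concrete with a small simulation: generate item responses from a known model, administer two parallel forms, and compare the pass/fail decisions with each other (consistency) and with the classification implied by the true scores (accuracy). The sketch below does this under a plain Rasch model with invented parameters; it illustrates the target quantities only, not the single-administration LL, Lee, or HH estimators themselves.

```python
import numpy as np

rng = np.random.default_rng(42)
n_examinees, n_items = 2000, 40
theta = rng.normal(0, 1, n_examinees)   # illustrative ability distribution
b = rng.normal(0, 1, n_items)           # illustrative Rasch item difficulties
cut = 24                                # illustrative raw passing score

p = 1 / (1 + np.exp(-(theta[:, None] - b[None, :])))  # response probabilities

def administer():
    """Simulate one administration; return observed raw scores."""
    return (rng.uniform(size=p.shape) < p).sum(axis=1)

form1, form2 = administer(), administer()
true_pass = p.sum(axis=1) >= cut        # classification based on true (expected) scores

dc = np.mean((form1 >= cut) == (form2 >= cut))  # decision consistency
da = np.mean((form1 >= cut) == true_pass)       # decision accuracy
print(f"DC = {dc:.3f}, DA = {da:.3f}")
```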
