241 |
Incremental effects of ESEA Title I resources on student achievement / Thomas, Wayne Powell. January 1980
Studies to date of Title I of the Elementary and Secondary Education Act have not yielded definitive evidence of increased student achievement or of the effects of instructional resource allocations on student reading achievement. This evidence has been obscured by methodological flaws, instrumentation problems, a continued focus on national rather than local-level evaluation, and the investigation of resource variables over which the local schools have little or no control.
The degree of Title I impact on the students it is designed to serve is determined by decisions made at the local school and classroom levels. Previous investigations have found that most of the variation in student achievement lies within rather than between the local schools. For these reasons, this study identifies locally controlled and easily changed types of instructional resource allocations which are expected to influence the achievement of students in Title I instructional groups within the schools. Each resource allocation is defined by several variables which jointly represent its several aspects.
The relationships between the levels of allocation of these resources and the levels of student achievement in Title I classes are investigated using a hierarchical multiple regression model. The effects of three sets of resource variables on the reading achievement of 4,332 second-, third-, and fourth-graders in fifteen Virginia public school systems are investigated by determining the incremental amounts of achievement variance which each type of resource allocation contributes to the total explained variance, using classrooms as the unit of analysis. The results indicate that variations in the amounts of instructional time, the proportion of total teacher time spent in instruction, and the student-to-teacher ratio are not associated with significant achievement increments. In addition, the degree of administrative support of Title I instruction and the activities of parent advisory councils do not explain significant amounts of achievement variance. However, there is some evidence that the use of instructional aides in the classroom is related to increases in achievement.
The findings of this study indicate that the effects of Title I instructional resource allocations may exist primarily within classrooms rather than between classrooms. Interactions between individual student characteristics and the ways in which instructional resources are allocated by the teacher in the classroom are suggested as a possible source of Title I effects. / Ph. D.
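The incremental-variance logic described above — entering a covariate block first, then a block of resource variables, and attributing the gain in explained variance to the added block — can be sketched as follows. The data and variable names here are invented for illustration; this is a minimal sketch, not the study's actual model:

```python
import numpy as np

def r_squared(X, y):
    # Ordinary least squares with an intercept; returns the coefficient
    # of determination R^2 for the fitted model.
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - (resid @ resid) / ((y - y.mean()) ** 2).sum()

rng = np.random.default_rng(0)
n = 200                                  # hypothetical classrooms (unit of analysis)
pretest = rng.normal(size=n)             # covariate block, entered first
instr_time = rng.normal(size=n)          # hypothetical resource block, entered second
aide_use = rng.normal(size=n)
y = 0.6 * pretest + 0.2 * aide_use + rng.normal(scale=0.5, size=n)

r2_base = r_squared(pretest.reshape(-1, 1), y)
r2_full = r_squared(np.column_stack([pretest, instr_time, aide_use]), y)
incremental = r2_full - r2_base          # variance uniquely added by the resource block
print(f"base R^2={r2_base:.3f}, full R^2={r2_full:.3f}, increment={incremental:.3f}")
```

In a full hierarchical scheme the increment would then be tested for significance (e.g., with an F test on the change in R²); a near-zero increment corresponds to the study's finding that a resource block adds no explanatory power.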
|
242 |
A comprehensive evaluation of the Roanoke County elementary school guidance and counseling program / Lehman, Joanne R. January 1990
One of the earliest and most comprehensive elementary school guidance and counseling programs in the Commonwealth of Virginia is located in Roanoke County. An integral component of every elementary school guidance and counseling program is evaluation. The purpose of this study was to design and conduct a comprehensive evaluation of the Roanoke County Elementary School Guidance and Counseling Program in order to determine program effectiveness. Data were collected through questionnaires disseminated to samples of 261 students, 34 teachers, 13 principals, and 280 parents of children in the program. Additional information was gathered from focus-group interviews with counselors and teachers, and from individual interviews with principals.
The findings indicate that the guidance program is effective in meeting its stated objectives as well as student needs. It was also concluded that program effectiveness could be jeopardized in the future by the growing number of students and the expanding responsibilities required of the counselors. In addition, the data indicated that all populations surveyed exhibit a favorable attitude toward the program. / Ed. D.
|
243 |
Nurses' communication skills: an evaluation of the impact of solution-focused communication training / Mackintosh, Carolyn; Bowles, N.B.; Torn, Alison. January 2001
This paper describes the evaluation of a short training course in solution-focused brief therapy (SFBT) skills. The evaluation examined the relevance of SFBT skills to nursing and the extent to which a short training course affected nurses' communication skills. Nurses' communication skills have been criticized for many years, as has the communication skills training that nurses receive. The absence of a coherent theoretical or practical framework for communication skills training led us to consider the utility of SFBT as a framework for a short training course for qualified nurses, most of whom were registered nurses working with adults. Quantitative and qualitative data were collected: the former using pre- and post-training scales, the latter using a focus group conducted 6 months after the training. Data were analysed using the Wilcoxon signed-rank test and content analysis. Quantitative data indicated positive changes in nurses' practice following the training on four dimensions, and the change in nurses' willingness to communicate with people who are troubled reached statistical significance. Qualitative data uncovered changes to practice, centred on the rejection of problem-orientated discourses and reduced feelings of inadequacy and emotional stress in the nurses. There are indications that SFBT techniques may be relevant to nursing and a useful, cost-effective approach to the training of communication skills. Solution-focused brief therapy provides a framework and an easily understood tool-kit that are harmonious with nursing values.
|
244 |
Les disparités régionales du système d'enseignement zaïrois: étude diagnostique et politique de planification (Regional disparities in the Zairian education system: a diagnostic study and planning policy) / Mulangwa-Kyomba, Katako. January 1986
Doctorat en sciences psychologiques (Doctorate in Psychological Sciences) / info:eu-repo/semantics/nonPublished
|
245 |
Adult learning outcomes based on course delivery methodology / Jenkins, Timothy Edward. 01 January 2005
This study compared student satisfaction and academic performance in online and face-to-face classes. A total of 105 ITT Technical Institute students who were simultaneously enrolled in one online course and two on-campus courses were surveyed and interviewed. Factors examined included student-to-instructor communication, student-to-student interaction, content selection for online courses, and course management for online courses. Sixty-four percent of the students did not pass their online courses and expressed dissatisfaction with the learning process. Course components and processes that could be improved were identified.
|
246 |
The role of integrated quality management system to measure and improve teaching and learning in South African further education and training sector / Dhlamini, Joseph Thabang. 12 1900
Since 1994, the South African education system has undergone continuous transformation, which has affected the quality of teaching and learning. High school and FET college learners underperformed to such an extent that, for many years, universities were forced to offer bridging courses before enrolling new students. Furthermore, the misalignment of the colleges' National Technical Diploma (NATED) programmes, which did not allow college graduates to register with universities or universities of technology, called the quality of teaching and learning in the FET college sector into question. In tabling unified quality improvement plans for education in South Africa, the Education Ministry introduced an integrated approach to measuring teaching and learning with a view to identifying improvement strategies. However, once this integrated tool, the Integrated Quality Management System (IQMS), was implemented, educators and managers attached ambiguous meanings to it. The IQMS instrument is meant to be a dependable quality assurance tool for measuring and improving the quality of teaching and learning; the ambiguity lies in educators and managers viewing it either as a means to acquire a 1% pay progression or as a possible return of the old apartheid system's inspectorate. This research study was prompted by concern about the effectiveness and efficiency of implementing the IQMS instrument to measure the quality of teaching and learning in the South African FET sector. In exploring the literature on quality teaching and learning in the FET sector in South Africa, the researcher found that similar trends of integrating quality management systems into education are being followed globally; the South African system differs in attaching a 1% salary progression as a performance incentive. In view of the introduction of the new system of education and training, the researcher realized that 'short cut' processes were followed in preparing educators to offer new education programmes using the OBE system of teaching and learning. This appeared to be a further shortfall in the adequacy of implementing the IQMS as a quality assurance instrument to measure the quality of teaching and learning in the FET sector in South Africa.
In addition, there appeared to be conflicting trends in the FET sector: the same sector provided Curriculum 2005 programmes for schools, which differed from the college programmes offering the National Certificate Vocational (NC(V)). Both were expected to use the IQMS as a tool to measure the quality of teaching and learning with a view to enhancing it. Furthermore, the end product of the FET sector for both schools and colleges is the Further Education and Training Certificate (FETC). Unfortunately, it was difficult for the education department to achieve its objectives because the time frames for preparing educators, and the critical requirement of adequate human resources for implementing the IQMS, could not be met through Umalusi, the national quality assurance body for the sector.
The FET sector, which is expected to deliver education and training that produces quality students for the HE sector and the world of work, is faced with shortfalls in quality delivery. The driving force of this research study was to explore the dependability and adequacy of implementing the IQMS as a quality assurance instrument for measuring, effectively and efficiently, whether the quality of teaching and learning meets the expected outcomes. It is in this regard that the researcher, through empirical evidence, realized that the IQMS has no theoretical grounding; hence there are no principles, procedures or processes that govern the implementation of this very important system.
In addition, the empirical evidence from the qualitative study showed that the quality of teaching and learning has been monitored using diverse assessment practices: tools such as TQM and QMS exist in FET colleges alongside the summative IQMS in FET schools, and all three practices are premised on quality management. Quality management refers to a process whereby quality delivery in a school, college or any other organization is systematically managed to maintain the competence of the organization. In this regard, TQM, QMS and IQMS all denote quality assurance practices in an organization geared to effective and efficient client relations. / Teacher Education / D.Ed. (Education Management)
|
247 |
The development and testing of an evaluation model for special education / Langford, Lyndon Limuel. 23 September 2010
The purpose of this study was to develop and test a model which addresses special education program evaluation needs. As such, the focus was on development. Development and research are often seen as one (e.g., Department of R & D; Director of R & D); they are, however, distinctly different in process and product. The model developed provides the general and special education leaders responsible for special education services with high-quality data and procedures for decision making related to special education. Providing special education services is a complex responsibility: not only are critical lifelong decisions made about students and their parents, but there are stringent federal laws, complex state agency policies, detailed financial and programmatic reporting requirements, and often linkages to a variety of outside professionals, service provider agencies, and organizations. There is thus a need for an effective program evaluation model suited to the uniqueness of special education. A variety of program evaluation models have been used in education and other organizational environments (e.g., Mabry, 2002; Patton, 2002; Posavac & Carey, 2003; Renger & Titcomb, 2002; Warburton, 2003), but their application to special education has been limited and often ineffective or inefficient in addressing its evaluation needs. This model development drew on the best knowledge and procedures of existing evaluation models and adapted them to the uniqueness of special education. The model developed, named Program Effectiveness in Special Education (PEiSE), identified espoused and in-use actions in a school district. This information, together with analysis, discussions, and data, provided powerful special education information.
To form the structure of the model, PEiSE utilized aspects of the CIPP Evaluation Model developed by Stufflebeam (2002), the Logic Framework Model (Suchman, 1967), and the Utilization-Focused Evaluation Model (Patton, 1978). The process brought a number of stakeholders from within the district (Brunner & Guzman, 1989) and outside it into the development process, providing expertise that enhanced the model's effectiveness (Eisner, 1983). Information gathered from all stakeholders came in various forms and contained data acquired with little or no bias in the instruments or processes used (Scriven, 1974; Provus, 1971; Cronbach, 1981; Stake, 1973). These processes had the potential not only to improve the special education programs but also to improve the evaluation process itself (Eraut, 1984). The model also considered the resource limitations of special education services (Stufflebeam, 1971; Tripodi, Pellin, & Epstein, 1971; Gold, 1988). Finally, the process proved instrumental in bringing the primary discipline of general education and the complementary discipline of special education physically, philosophically, and practically together for the benefit and improvement of services to all students. In conceptualizing the process, a flowchart of events was developed utilizing the form and philosophy of existing best practices in evaluation models and the foundational theory of organizational and program improvement and effectiveness (Argyris & Schon, 1974). PEiSE required the development of plans to reduce or eliminate discrepancies between the practices that are espoused and those actually in use by practitioners.
The PEiSE process included twelve phases: Point of Contact; Scope of Evaluation; Identify Formal Decision Makers; Structured Interviews with Formal Decision Makers; Compose List of Best Practices with Definitions; Formal Decision Makers Meeting/Approval of Best Practices List; Compose Espoused/In-Use Questionnaire; Collect/Analyze Questionnaire/Supportive Data; Recommendations for Action; Generate Action Plans Designed to Reduce or Eliminate Discrepancies; Execute Action Plans; and Measure Progress. An emphasis throughout PEiSE was that change is a necessary and welcome part of organizational effectiveness as well as an integral part of organizational learning (Argyris & Schon, 1974). PEiSE guided administrators through the process of clearly articulating the change needed, with development and implementation of action plans for change. PEiSE facilitated bringing together general and special education in a mutually beneficial manner to improve the quality and success of services to students with special needs. Specific differences in community and school district approaches to responding to the compliance requirements and intent of local, state, and federal regulations and initiatives are managed in the model. PEiSE was tested in a large, suburban school district. The testing indicated the model's potential to: 1) advance evaluation of special education; 2) suggest new collaborative models for general and special education; 3) identify needed areas of research on evaluation, organization, issues of responsibilities, and professional expertise; 4) identify needed areas of pre-service and continuing professional preparation and development; and 5) promote research-based programs related to student success. It was recommended that PEiSE include an additional phase of practitioner input on concerns and complaints about existing espoused best practices, with suggestions or recommendations for different practices the district should consider.
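The espoused-versus-in-use discrepancy analysis at the heart of PEiSE can be illustrated with a toy computation. The practices and ratings below are hypothetical, not taken from the tested district:

```python
# Mean questionnaire ratings (1-5): how strongly a practice is espoused
# by decision makers vs. how often practitioners report it is in use.
espoused = {"co-teaching": 4.6, "progress monitoring": 4.8, "parent contact": 4.2}
in_use = {"co-teaching": 3.1, "progress monitoring": 4.5, "parent contact": 2.9}

# Discrepancy per practice, then ranked largest-first for action planning.
gaps = {p: round(espoused[p] - in_use[p], 2) for p in espoused}
priorities = sorted(gaps, key=gaps.get, reverse=True)
for practice in priorities:
    print(f"{practice}: gap {gaps[practice]}")
```

In PEiSE terms, action plans would target the top of this ranked list, and the later measure-progress phase would re-administer the questionnaire to check whether the gaps have narrowed.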
|
248 |
公立高中教師教學評鑑指標建構之研究 (A study on constructing teaching evaluation indicators for public senior high school teachers) / 卓子瑛. Unknown Date
本研究主要在於建構公立高中教師教學評鑑指標,以供公立高中教師教學自我評鑑之用,並提供教育行政單位實施教師教學評鑑之參考。
為達到上述目的,本研究透過文獻探討,參考Danielson(2007)教學專業實踐架構(Professional practice-a framework for teaching)、德州(1986)教學視導系統(Texas teacher appraisal system,TTAS)、麻薩諸塞州(2005)中小學教師有效教學原則(Principles of effective teaching),形成評鑑指標初稿,再以半開放式德懷術專家問卷調查法,進行指標審查、修正與刪減。問卷回收後應用SPSS統計軟體中之敘述統計進行分析,以平均數、中位數、眾數判斷評鑑指標之重要性,以四分差判斷專家群看法之一致性。經由前後三次德懷術問卷調查統計分析之結果,獲得以下結論:
一、就教學評鑑領域的重要性而言,其重要性依次為:教學規劃準備、班級經營管理、呈現有效教學、實現專業責任。
二、就「教學規劃準備」指標重要性而言,其排序為「1」者有6項,分別為「1-1-2,1-2-1,1-2-2,1-2-4,1-3-1,1-4-4」;排序為「2」者有9項,分別為「1-1-3,1-1-4,1-2-3,1-3-2, 1-3-4,1-4-2, 1-4-3,1-5-1, 1-5-2」。
三、就「班級經營管理」指標重要性而言,其排序為「1」者有7項,分別為「2-1-3,2-2-2,2-3-1,2-3-3, 2-4-2,2-4-4, 2-5-2」;排序為「2」者有7項,分別為「2-1-1,2-1-2,2-1-4, 2-2-4, 2-3-4,2-4-1,2-5-1」。
四、就「呈現有效教學」指標重要性而言,其排序為「1」者有7項,分別為「3-1-1,3-1-2,3-3-3,3-4-1,3-4-4, 3-5-1, 3-5-2」;排序為「2」者有10項,分別為「3-1-3,3-1-4,3-2-1, 3-2-2,3-2-4,3-3-1,3-3-2,3-3-4,3-4-3, 3-5-3」。
五、就「實現專業責任」指標重要性而言,其重要性等級排序為「1」者有4項,分別為「4-1-3,4-1-4,4-3-1,4-4-1」;排序為「2」者有7項,分別為「4-1-1,4-1-2,4-2-1,4-3-3,4-5-1, 4-5-3,4-5-4」。
六、就專家群看法的一致性而言,其四分差數值介於.000至.500之間,顯示專家群的看法具高度一致性。
七、就教學評鑑指標之建構內容而言,本研究建構之教學評鑑指標包括(一)教學規劃準備;(二)班級經營管理;(三)呈現有效教學;(四)實現專業責任。4個領域,及20個規準、57個指標項目。
關鍵字:教育評鑑、教學評鑑、評鑑指標 / The thesis attempts to build teaching evaluation indicators for senior high school teachers. The indicators will be the reference both for teachers who want to self- assess, and for senior-high school administration which want to evaluate performance of teachers.
The teacher evaluation indicators are derived from well-known frameworks: "Professional practice: a framework for teaching" (Danielson, 2007), the "Texas teacher appraisal system (TTAS)" (1986), and the "Principles of effective teaching" from the Massachusetts Department of Education (2005). The draft indicators were then reviewed, revised, and reduced through a semi-open Delphi survey.
After the survey outcomes were analysed with SPSS, the following seven conclusions were drawn:
1) Regarding the importance of the teaching evaluation areas, the ranking is: planning and preparation, the classroom environment, effective instruction, and professional responsibilities.
2) For planning and preparation, the most important indicators are 1-1-2, 1-2-1, 1-2-2, 1-2-4, 1-3-1, and 1-4-4; the second most important are 1-1-3, 1-1-4, 1-2-3, 1-3-2, 1-3-4, 1-4-2, 1-4-3, 1-5-1, and 1-5-2.
3) For the classroom environment, the most important indicators are 2-1-3, 2-2-2, 2-3-1, 2-3-3, 2-4-2, 2-4-4, and 2-5-2; the second most important are 2-1-1, 2-1-2, 2-1-4, 2-2-4, 2-3-4, 2-4-1, and 2-5-1.
4) For effective instruction, the most important indicators are 3-1-1, 3-1-2, 3-3-3, 3-4-1, 3-4-4, 3-5-1, and 3-5-2; the second most important are 3-1-3, 3-1-4, 3-2-1, 3-2-2, 3-2-4, 3-3-1, 3-3-2, 3-3-4, 3-4-3, and 3-5-3.
5) For professional responsibilities, the most important indicators are 4-1-3, 4-1-4, 4-3-1, and 4-4-1; the second most important are 4-1-1, 4-1-2, 4-2-1, 4-3-3, 4-5-1, 4-5-3, and 4-5-4.
6) Regarding the coherence of the expert panel, the quartile deviations lie between .000 and .500, showing a high degree of consensus among the experts.
7) Regarding the content of the teaching evaluation indicators, the indicators for senior high school teachers cover 4 areas: 1. planning and preparation; 2. the classroom environment; 3. effective instruction; and 4. professional responsibilities. These 4 areas comprise 20 standards and 57 indicators.
Keywords: education evaluation, teaching evaluation, evaluation indicators
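Conclusion 6 uses the quartile deviation (semi-interquartile range) of the experts' ratings as its consensus index, with values of .500 or below read as high agreement. A minimal sketch of that computation (the ratings below are invented, not the study's data):

```python
import statistics

def quartile_deviation(ratings):
    # Semi-interquartile range Q = (Q3 - Q1) / 2, a common Delphi consensus index.
    q1, _, q3 = statistics.quantiles(ratings, n=4)
    return (q3 - q1) / 2

# Hypothetical importance ratings (1 = most important) from ten Delphi experts
ratings = [1, 1, 1, 2, 1, 1, 2, 1, 1, 1]
q = quartile_deviation(ratings)
print(f"Q = {q}")
```

In a full study the same index would be computed per indicator, alongside the mean, median, and mode used to rank importance.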
|
249 |
Hodnocení výsledků v globálním rozvojovém vzdělávání / Evaluation of results in global and development education / Vondráčková, Jana. January 2013
This diploma thesis deals with the evaluation of results in global development education. It describes the foundations and content of global development education and its expected results. It compares quantitative and qualitative methods of measuring results with respect to their ability to capture the real contribution of the evaluated activity, highlights the need to demonstrate the validity and reliability of a measurement, and explains the role of values, criteria and indicators in evaluation. Furthermore, the thesis deals with the measurement of contribution specifically in the area of global development education: it provides an overview of measurements made in global development education, ordered by their validity, and offers a list of quality criteria used in the field. The research part presents the results of an investigation into how selected Czech organizations engaged in global development education evaluate the results of their activities, and into what facilitates the evaluation of results and what hinders it. The findings derive from a qualitative analysis of interviews and written materials provided by the organizations.
|
250 |
Avaliação de processos educativos formais para profissionais da área da saúde: revisão integrativa de literatura / Assessment of formal educational processes for health professionals: an integrative review of literature / Otrenti, Eloá. 23 May 2011
O aumento dos investimentos em processos educativos formais para profissionais trouxe consigo a necessidade de avaliar o efeito dessas atividades, seja sobre o trabalhador treinado, seja sobre a organização. Os instrumentos mais utilizados para avaliação aparentemente não estão atingindo os objetivos, e pouco contribuem para retroalimentar atividades educativas. Na área da saúde são realizados muitos treinamentos, no entanto, nem sempre são avaliados. Sendo assim, surgiu a necessidade de compreender mais sobre o tema e, assim, iniciar a construção de uma metodologia de avaliação capaz de mostrar as informações indispensáveis para retroalimentação dos programas e, por conseguinte, melhorar a prática assistencial que é a finalidade precípua das ações educativas na área da saúde. Assim, esse estudo teve como objetivos analisar a produção científica publicada sobre avaliação de processos educativos formais de profissionais da área da saúde e analisar as características dos instrumentos encontrados na revisão de literatura. Para isso, utilizamos a revisão integrativa, método pelo qual pesquisas primárias são analisadas para elaboração de uma síntese do conhecimento produzido sobre o tema investigado. Foram revisadas as bases de dados da BVS, Pubmed e Cochrane, no período de Janeiro de 2000 a Julho de 2010; a amostra final foi de 19 artigos científicos. Os resultados evidenciaram que não é utilizada uma metodologia validada e sistematizada para avaliar processos educativos formais, que o foco da avaliação é principalmente o aprendizado do participante, com pouca atenção ao processo de ensino e ao comportamento no cargo, sendo assim, ampliar os níveis de avaliação é essencial. Não basta olhar para a satisfação do treinando e a aquisição de conhecimento; também é importante conhecer o quanto esse novo conhecimento é aplicado no trabalho e o que isso impacta na instituição contratante e no usuário do sistema de saúde. 
A utilização dos resultados de avaliação apenas para captar se o treinando adquiriu algum conhecimento ou se ficou satisfeito com a ação também é um uso restrito das ferramentas de avaliação de processos educativos formais para profissionais. A ausência de avaliações pode reduzir o valor do treinamento que precisariam de alterações. / The increased investment in formal educational processes for professionals has brought with it the need to assess the effects of these activities, both on the trained worker and on the organization. The instruments most widely used for assessment apparently are not achieving their goals and contribute little feedback to educational activities. In the health field many training activities are conducted, yet they are not always evaluated. Hence the need to understand more about the subject and so begin building an evaluation methodology capable of yielding the information needed to feed back into the programmes and, thereby, to improve care practice, which is the primary aim of educational actions in health. This study therefore aimed to analyse the published scientific literature on the evaluation of formal educational processes for health professionals and to analyse the characteristics of the instruments found in the literature review. To this end we used the integrative review, a method in which primary studies are analysed to synthesize the knowledge produced on the topic under investigation. We searched the BVS, PubMed and Cochrane databases for the period from January 2000 to July 2010; the final sample comprised 19 scientific articles. The results showed that no validated, systematic methodology is used to evaluate formal educational processes, and that the focus of evaluation is mainly participant learning, with little attention to the teaching process or to on-the-job behaviour; broadening the levels of evaluation is therefore essential. It is not enough to look at trainee satisfaction and knowledge acquisition; it is also important to know how far this new knowledge is applied at work and what impact it has on the contracting institution and on users of the health system. Using evaluation results only to capture whether the trainee acquired some knowledge, or was satisfied with the activity, is likewise a restricted use of the tools for evaluating formal educational processes for professionals. The absence of evaluation can diminish the value of training programmes that need changes.
|