1 |
Examining Factors Affecting Evaluation Use: A Concurrent, Qualitative Study
Lejeune, Andrew J. (Unknown Date)
No description available.
|
2 |
Non-formal Educator Use of Evaluation Findings: Factors of Influence
Baughman, Sarah (17 September 2010)
Increasing demands for accountability in educational programming have resulted in more frequent calls for program evaluation activity in educational organizations. Many organizations include conducting program evaluations as part of the job responsibilities of program staff. Cooperative Extension is a national system offering non-formal educational programs through land grant universities. Many Extension services require non-formal educational program evaluations be conducted by its locally-based educators.
Research on evaluation practice has focused primarily on the evaluation efforts of professional, external evaluators. The evaluation work of program staff who hold many responsibilities, including program evaluation, has received little attention. This study examined how non-formal educators in Cooperative Extension use the results of their program evaluation efforts and what factors influence that use. A conceptual framework adapted from the evaluation use literature guided the examination of how evaluation characteristics, organizational characteristics, and stakeholder involvement influence four types of evaluation use: instrumental use, conceptual use, persuasive use, and process use. Factor analysis (illustrated in the sketch below) indicated ten types of evaluation use practiced by non-formal educators. Of the variables examined, stakeholder involvement was most influential, followed by evaluation characteristics and organizational characteristics.
The research implications of the study include empirical confirmation of the framework developed by previous researchers, as well as the need for further exploration of potentially influential factors. Practical implications include delineating accountability and program improvement tasks within Extension in order to improve the results of both. There is some evidence that evaluation capacity building efforts may be increasing instrumental use by educators who evaluate their own programs. Non-formal educational organizations are encouraged to involve stakeholders at all levels of evaluation work as one means of increasing the use of evaluation findings. / Ph.D.
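For readers unfamiliar with the method named in this abstract, the following is a minimal sketch of how an exploratory factor analysis over survey items about evaluation use might look. The data, item counts, and loading threshold are illustrative assumptions for the sketch, not the study's actual instrument or results.

```python
# Minimal, hypothetical sketch of exploratory factor analysis over
# Likert-scale survey items about evaluation use. All numbers and
# names here are illustrative assumptions, not the study's data.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(42)

# Placeholder responses: 200 educators rating 30 items on a 1-5 scale.
# Real survey data would contain correlated items; random data is used
# here only so the sketch runs end to end.
n_respondents, n_items = 200, 30
responses = rng.integers(1, 6, size=(n_respondents, n_items)).astype(float)

# Standardize items so factor loadings are comparable across items.
responses = (responses - responses.mean(axis=0)) / responses.std(axis=0)

# Extract ten factors, mirroring the ten use types the study reports.
fa = FactorAnalysis(n_components=10, rotation="varimax", random_state=0)
fa.fit(responses)

# Items loading strongly on a factor (|loading| > 0.4 is a common rule
# of thumb) would be interpreted together as one "type of use".
loadings = fa.components_.T  # shape: (n_items, n_factors)
for factor in range(loadings.shape[1]):
    strong_items = np.where(np.abs(loadings[:, factor]) > 0.4)[0]
    print(f"Factor {factor + 1}: items {strong_items.tolist()}")
```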
|
3 |
Evaluations that matter in social work
Petersén, Anna (January 2017)
A great many evaluations are commissioned and conducted in social work every year, but research reports that the results are little used. This may depend on how the evaluations are conducted, but it may also depend on how social workers use evaluation results. The aim of this thesis is to explore and analyse evaluation practice in social work from an empirical, normative, and constructive perspective. The objectives are partly to increase understanding of how evaluation results can yield relevant and useful knowledge for social work, and partly to give concrete suggestions for improving how evaluations are conducted. The empirical data are organised as four cases, each an evaluation of a temporary programme in social work. The source materials are documents and interviews. The results show that findings from evaluations of temporary programmes are sparingly used in social work. The evaluations seem to have unclear intentions, with little relevance for learning and improvement. In contrast, the evaluators themselves use the data for new purposes. These empirical findings are elaborated further through the knowledge form phronesis, which can be translated as practical wisdom. The overall conclusion is that social work needs knowledge that social workers find relevant and useful in practice. To meet this need, researchers and evaluators must broaden their view of knowledge and begin to include practical knowledge, instead of relying solely on scientific knowledge, when conducting evaluations. Finally, a new evaluation model is suggested: phronesis-based evaluation, which is argued to have great potential to address and include professionals’ praxis-based knowledge. It advocates a view that takes social work’s dynamic context seriously and acknowledges values and power as important components of the evaluation process.
|
4 |
Evaluation of Emergency Response: Humanitarian Aid Agencies and Evaluation Influence
Oliver, Monica LaBelle (16 April 2008)
Organizational development is a central purpose of evaluation. Disasters and other emergency situations carry significant implications for evaluation, given that they are often unanticipated and involve multiple relief efforts on the part of INGOs, governments, and international organizations. Two particularly common reasons for INGOs to evaluate disaster relief efforts are 1) accountability to donors and 2) the desire to enhance the organization's response capacity. This thesis briefly reviews the state of the evaluation field for disaster relief in order to reflect on how it needs to go forward. The conclusion is that evaluation of disaster relief efforts is alive and well. Though evaluation for accountability seems fairly straightforward, determining just how the evaluation influences the organization and beyond is not. Evaluation use has long been a central thread of discussion in evaluation theory, with the richer idea of evaluation influence only recently taking the stage. Evaluation influence takes the notion of evaluation use a few steps further by encompassing more complex, subtle, and sometimes unintentional ways in which an evaluation might improve a situation. This study contributes to the very few empirical studies of evaluation influence by looking at one organization.
|
5 |
Evaluation of Emergency Response: Humanitarian Aid Agencies and Evaluation Influence
Oliver, Monica LaBelle (19 May 2008)
Organizational development is a central purpose of evaluation. Disasters and other emergency situations carry significant implications for evaluation, given that they are often unanticipated and involve multiple relief efforts on the part of INGOs, governments, and international organizations. Two particularly common reasons for INGOs to evaluate disaster relief efforts are 1) accountability to donors and 2) the desire to enhance the organization's response capacity. This thesis briefly reviews the state of the evaluation field for disaster relief in order to reflect on how it needs to go forward. The conclusion is that evaluation of disaster relief efforts is alive and well. Though evaluation for accountability seems fairly straightforward, determining just how the evaluation influences the organization and beyond is not.
Evaluation use has long been a central thread of discussion in evaluation theory, with the richer idea of evaluation influence only recently taking the stage. Evaluation influence takes the notion of evaluation use a few steps further by encompassing more complex, subtle, and sometimes unintentional ways in which an evaluation might improve a situation. This study contributes to the very few empirical studies of evaluation influence by looking at one organization in depth, and it concludes that evaluation does exert influence in useful ways.
|
6 |
Evaluations, Actors and Institutions. The Case of Research, Technology and Innovation Policy in Austria
Streicher, Jürgen (06 April 2017)
Evaluations have gained popularity as a means of improving public policy measures, programmes and institutions in the field of research, technology and innovation (RTI). Though the frequency and quality of evaluations have increased, in terms of impact indicators and methodological diversification, concerns have been raised about their effectiveness in fuelling change in policy making. This raises the issue of the low absorption of evaluation findings in policy making generally, and in Austria in particular.
Recent research emphasises the need for a holistic perspective on the benefits and usefulness of evaluations, allowing more thorough consideration of the complex interdependencies and effects that can occur at different levels and in different forms. While previous research has put much emphasis on the conduct of evaluations and their implementation, there are fewer empirical studies that address institutional or contextual explanations of the effects of evaluations. This study aims to help narrow this gap in the literature by investigating how individual and composite actors (such as organisations), as well as the policy itself, are affected by policy evaluations, drawing attention to the factors and mechanisms that shape evaluation effects.
Drawing on the concepts of "policy learning", actor-centred institutionalism, and recent research in the field of evaluation utilisation, this study developed a conceptual framework that proposes three groups of conditioning factors and mechanisms: actors and their interactions, the institutional context, and the evaluation itself. A multiple case study approach, using evaluated programmes in the Austrian RTI policy scene, was employed to examine the effects of evaluations at various levels, the conditioning factors and mechanisms, as well as the ensuing pathways of effects.
Results indicate that evaluations generate a wide range of diverse effects beyond individual learning, and that they clearly and visibly influence programme development. Several contextual aspects shape evaluation effects. Current structures and practices treat evaluations as routine, which may reduce the chances of broader learning and distance the evaluation, and the opportunity to learn from it, from an interested audience. The thesis concludes with implications for theory and practice, and suggestions for paths of future research.
|
7 |
An investigation of outcomes assessment utilization in the context of teacher education program accreditation
Myhlhousen-Leak, Georgetta Ann Daisy (01 May 2011)
Scholarship on the uses of program evaluation in general is extensive, but little empirical research has specifically addressed the uses made of teacher education program reviews. The purpose of this study was to investigate empirically the factors affecting uses of teacher education program review processes and findings in each of four cases, selected from a prior state-wide population survey to include both higher- and lower-use exemplars. Results indicated that uses of program reviews included both process uses and findings uses, and that a number of personal, contextual, and other factors influenced the types of use, the recognition of uses that actually occurred, and the amount of use. Sometimes internal formative improvements were reported as taking place and were recognized as benefits, but were not originally identified as uses of the review processes and findings. This discrepancy occurred because the program staff and higher education administrators focused primarily on accreditation and viewed the successful accreditation outcome as the only use of the review, even when significant program improvements had resulted from the process. Relying primarily on interviews and documentation, the study described in detail three types of process use and three types of findings use. Process use was the most often reported type of use. Human, contextual, and procedural factors were important influences on all types of use. Human factors influenced how the review was conducted and used. Contextual factors determined how the review was completed and how use occurred, whether for accreditation alone or for accreditation and program improvement. Procedural factors affected stakeholder involvement and how the administration related to and valued the program review processes and findings.
|
8 |
How Do Data Dashboards Affect Evaluation Use in a Knowledge Network? A Study of Stakeholder Perspectives in the Centre for Research on Educational and Community Services (CRECS)
Alborhamy, Yasmine (02 November 2020)
Since there is limited research on the use of data dashboards in the evaluation field, this study explores the integration of a data dashboard in a knowledge network, the Centre for Research on Educational and Community Services (CRECS), as part of its program evaluation activities. The study used three phases of data collection and analysis, investigating the process of designing a dashboard for a knowledge network and the different uses of a data dashboard in a program evaluation context through interviews and focus group discussions. Four members of the CRECS team participated in one focus group; two other members participated in individual interviews. Data were analyzed for thematic patterns. Results indicate that the process of designing a data dashboard consists of five steps, reflecting the iterative nature of design and the need for sufficient consultation with stakeholders. Moreover, the data dashboard has the potential to be used internally, within CRECS, and externally with other stakeholders. The data dashboard is also believed to be beneficial in a program evaluation context as a monitoring tool, for evaluability assessment, and for evaluation capacity building. In addition, it can be used externally for accountability, reporting, and communication. The study sheds light on the potential of data dashboards in organizations, yet longer and broader studies are needed to confirm these uses and their sustainability.
|
9 |
Steeping the Organization’s Tea: Examining the Relationship Between Evaluation Use, Organizational Context, and Evaluator Characteristics
Allen, Marisa (14 June 2010)
No description available.
|
10 |
Peacebuilding Evaluations within International Organisations. Investigation of their relevance, roles and effects
Vredeveld, Sabine (January 2021)
Responding to and preventing violent conflict continue to be major concerns on the international agenda. However, the results of peacebuilding projects are often mixed, and some interventions have even proven harmful in the past. In the debates on aid effectiveness, evaluations have been advocated as an effective instrument for better understanding the results of development and peacebuilding projects and thereby, ultimately, for improving practice. However, despite a long tradition of evaluation utilisation research dating back to the 1970s, the effects of peacebuilding evaluations are far from understood. The concept of evaluation use is too narrow and does not take the diversity of potential positive and negative evaluation effects into account, and there is little evidence concerning the organisational factors that influence the use and effects of evaluations. Using a comparative case study analysis of three organisations implementing peacebuilding activities (Deutsche Gesellschaft für Internationale Zusammenarbeit, Saferworld and the World Bank), this study examines the roles and effects of peacebuilding evaluations within international organisations. The results show a wide range of positive and negative evaluation effects that are promoted or hindered by different attitudes and the process of the evaluation, in addition to organisational and other contextual factors. To improve our understanding of the interlinkages in this context, evaluation pathways causally linking different effects and factors are proposed.
|