About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Examining Factors Affecting Evaluation Use: A Concurrent, Qualitative Study

Lejeune, Andrew J Unknown Date
No description available.
2

Non-formal Educator Use of Evaluation Findings: Factors of Influence

Baughman, Sarah 17 September 2010 (has links)
Increasing demands for accountability in educational programming have led to more frequent calls for program evaluation in educational organizations. Many organizations include conducting program evaluations among the job responsibilities of program staff. Cooperative Extension is a national system offering non-formal educational programs through land-grant universities. Many Extension services require that non-formal educational program evaluations be conducted by their locally based educators. Research on evaluation practice has focused primarily on the evaluation efforts of professional, external evaluators; the evaluation work of program staff who carry many responsibilities, of which evaluation is only one, has received little attention. This study examined how non-formal educators in Cooperative Extension use the results of their program evaluation efforts and what factors influence that use. A conceptual framework adapted from the evaluation use literature guided the examination of how evaluation characteristics, organizational characteristics, and stakeholder involvement influence four types of evaluation use: instrumental use, conceptual use, persuasive use, and process use. Factor analysis indicated ten types of evaluation use practiced by non-formal educators. Of the variables examined, stakeholder involvement was the most influential, followed by evaluation characteristics and organizational characteristics. The research implications include empirical confirmation of the framework developed by previous researchers as well as the need for further exploration of potentially influential factors. Practical implications include delineating accountability and program improvement tasks within Extension in order to improve the results of both. There is some evidence that evaluation capacity building efforts may be increasing instrumental use by educators evaluating their own programs. Non-formal educational organizations are encouraged to involve stakeholders at all levels of evaluation work as one means of increasing the use of evaluation findings. / Ph.D.
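For readers unfamiliar with the factor-analytic step this abstract mentions, a minimal sketch follows. It uses simulated survey data and scikit-learn's FactorAnalysis rather than the study's actual instrument or analysis, so every specific here (item count, response scale, number of factors) is an illustrative assumption, not a detail from the dissertation.

```python
# Illustrative only: simulated Likert-scale survey data standing in for
# educators' responses about how they use evaluation results.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_respondents, n_items = 200, 30
# Hypothetical responses to 30 survey items on a 1-5 scale.
X = rng.integers(1, 6, size=(n_respondents, n_items)).astype(float)

# Extract 10 latent factors, mirroring the ten types of use the
# abstract reports; the loadings show which items group together.
fa = FactorAnalysis(n_components=10, random_state=0)
fa.fit(X)
loadings = fa.components_.T  # shape: (n_items, n_factors)

# Inspect the three items loading most strongly on each factor.
for j in range(loadings.shape[1]):
    top_items = np.argsort(np.abs(loadings[:, j]))[::-1][:3]
    print(f"Factor {j + 1}: strongest items {top_items.tolist()}")
```

With real survey data, the item groupings behind each factor would be interpreted and named (e.g., as instrumental or conceptual use); with the random data above, the factors are meaningless and the code only demonstrates the mechanics.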
3

Evaluations that matter in social work

Petersén, Anna January 2017 (has links)
Many evaluations are commissioned and conducted every year in social work, yet research reports that their results are seldom used. This may depend on how the evaluations are conducted, but it may also depend on how social workers use evaluation results. The aim of this thesis is to explore and analyse evaluation practice in social work from an empirical, a normative, and a constructive perspective. The objectives are partly to increase understanding of how evaluation results can provide knowledge that is relevant and useful for social work, and partly to give concrete suggestions for improving how evaluations are conducted. The empirical data are organised as four cases, each an evaluation of a temporary programme in social work; the source materials are documents and interviews. The results show that findings from evaluations of temporary programmes are used sparingly in social work. The evaluations seem to have unclear intentions, with little relevance for learning and improvement. In contrast, the evaluators themselves use the data for new purposes. These empirical findings are elaborated further through the knowledge form phronesis, which can be translated as practical wisdom. The overall conclusion is that social work needs knowledge that social workers find relevant and useful in practice. To meet this need, researchers and evaluators must broaden their view of knowledge and begin to include practical knowledge, rather than relying solely on scientific knowledge, when conducting evaluations. Finally, a new evaluation model is suggested: phronesis-based evaluation, which is argued to have great potential to address and include professionals' praxis-based knowledge. It advocates a view that takes social work's dynamic context seriously and acknowledges values and power as important components of the evaluation process.
4

Evaluation of Emergency Response: Humanitarian Aid Agencies and Evaluation Influence

Oliver, Monica LaBelle 16 April 2008 (has links)
Organizational development is a central purpose of evaluation. Disasters and other emergency situations carry significant implications for evaluation, given that they are often unanticipated and involve multiple relief efforts on the part of INGOs, governments, and international organizations. Two particularly common reasons for INGOs to evaluate disaster relief efforts are 1) accountability to donors and 2) the desire to enhance the organization's response capacity. This thesis briefly reviews the state of the evaluation field for disaster relief in order to reflect on how it should move forward. The conclusion is that evaluation of disaster relief efforts is alive and well. Though evaluation for accountability seems fairly straightforward, determining just how an evaluation influences the organization and beyond is not. Evaluation use has long been a central thread of discussion in evaluation theory, with the richer idea of evaluation influence only recently taking the stage. Evaluation influence takes the notion of evaluation use a few steps further by encompassing the more complex, subtle, and sometimes unintentional ways that an evaluation might improve a situation. This study contributes to the very few empirical studies of evaluation influence by looking at one organization.
5

Evaluation of emergency response: Humanitarian Aid Agencies and evaluation influence

Oliver, Monica LaBelle 19 May 2008 (has links)
Organizational development is a central purpose of evaluation. Disasters and other emergency situations carry significant implications for evaluation, given that they are often unanticipated and involve multiple relief efforts on the part of INGOs, governments, and international organizations. Two particularly common reasons for INGOs to evaluate disaster relief efforts are 1) accountability to donors and 2) the desire to enhance the organization's response capacity. This thesis briefly reviews the state of the evaluation field for disaster relief in order to reflect on how it should move forward. The conclusion is that evaluation of disaster relief efforts is alive and well. Though evaluation for accountability seems fairly straightforward, determining just how an evaluation influences the organization and beyond is not. Evaluation use has long been a central thread of discussion in evaluation theory, with the richer idea of evaluation influence only recently taking the stage. Evaluation influence takes the notion of evaluation use a few steps further by encompassing the more complex, subtle, and sometimes unintentional ways that an evaluation might improve a situation. This study contributes to the very few empirical studies of evaluation influence by looking at one organization in depth, and concludes that evaluation does influence in useful ways.
6

Evaluations, Actors and Institutions. The Case of Research, Technology and Innovation Policy in Austria

Streicher, Jürgen 06 April 2017 (has links) (PDF)
Evaluations have gained popularity as a means of improving public policy measures, programmes, and institutions in the field of research, technology and innovation (RTI). Though the frequency and quality of evaluations have increased in terms of impact indicators and methodological diversification, concerns have been raised about their effectiveness in fuelling change in policy making. This raises the issue of the low absorption of evaluation findings by policy making in general, and in Austria in particular. Recent research emphasises the need for a holistic perspective on the benefits and usefulness of evaluations, to allow a more thorough consideration of the complex interdependencies and effects that can occur at different levels and in different forms. While previous research has put much emphasis on the conduct of evaluations and their implementation, fewer empirical studies address institutional or contextual explanations of the effects of evaluations. This study aims to help narrow this gap in the literature by investigating how individual and composite actors (such as organisations), as well as the policy itself, are affected by policy evaluations, drawing attention to the factors and mechanisms that shape evaluation effects. Drawing on the concepts of "policy learning", actor-centred institutionalism, and recent research on evaluation utilisation, the study develops a conceptual framework that proposes three groups of conditioning factors and mechanisms: actors and their interactions, the institutional context, and the evaluation itself. A multiple case study approach, using evaluated programmes in the Austrian RTI policy scene, was employed to examine the effects of evaluations at various levels, the conditioning factors and mechanisms, and the ensuing pathways of effects. Results indicate that evaluations generate a wide range of effects beyond individual learning and clearly and visibly shape programme development. Several contextual aspects shape evaluation effects. Current structures and practices treat evaluations as routine, which may reduce the chances of broader learning and distance the evaluation, and the possibility of learning from it, from an interested audience. The thesis concludes with implications for theory and practice, and suggestions for future research.
7

An investigation of outcomes assessment utilization in the context of teacher education program accreditation

Myhlhousen-Leak, Georgetta Ann Daisy 01 May 2011 (has links)
Scholarship on the uses of program evaluation in general is extensive, but little empirical research has addressed the uses made of teacher education program reviews. The purpose of this study was to investigate empirically the factors affecting uses of teacher program review processes and findings in four cases, selected from a prior state-wide population survey to include both higher- and lower-use exemplars. Results indicated that uses of program reviews included both process uses and findings uses, and that a number of personal, contextual, and other factors influenced the types of use, the recognition of uses that actually occurred, and the amount of use. Sometimes internal formative improvements were reported as taking place and were recognized as benefits, but were not originally identified as uses of the review processes and findings. This discrepancy occurred because program staff and higher education administrators focused primarily on accreditation and viewed the successful accreditation outcome as the only use of the review, even when significant program improvements had resulted from the process. Relying primarily on interviews and documentation, the study described in detail three types of process use and three types of findings use. Process use was the most often reported type of use. Human, contextual, and procedural factors were important influences on all types of use. Human factors influenced how the review was conducted and used. Contextual factors determined how the review was completed and how use occurred, whether for accreditation alone or for accreditation and program improvement. Procedural factors affected stakeholder involvement and how the administration related to and valued the program review processes and findings.
8

How Do Data Dashboards Affect Evaluation Use in a Knowledge Network? A Study of Stakeholder Perspectives in the Centre for Research on Educational and Community Services (CRECS)

Alborhamy, Yasmine 02 November 2020 (has links)
Since there is limited research on the use of data dashboards in the evaluation field, this study explores the integration of a data dashboard in a knowledge network, the Centre for Research on Educational and Community Services (CRECS), as part of its program evaluation activities. The study used three phases of data collection and analysis, investigating the process of designing a dashboard for a knowledge network and the different uses of a data dashboard in a program evaluation context through interviews and focus group discussions. Four members of the CRECS team participated in one focus group; two other members participated in individual interviews. Data were analyzed for thematic patterns. Results indicate that designing a data dashboard involves five steps, reflecting an iterative design process and the need for sufficient consultation with stakeholders. Moreover, the data dashboard has the potential to be used internally, within CRECS, and externally with other stakeholders. The data dashboard is also believed to be beneficial in a program evaluation context as a monitoring tool, for evaluability assessment, and for evaluation capacity building. In addition, it can be used externally for accountability, reporting, and communication. The study sheds light on the potential of data dashboards in organizations, though longer and broader studies are needed to confirm these uses and their sustainability.
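To make the "dashboard as monitoring tool" idea concrete, here is a minimal sketch of a program-monitoring dashboard built with pandas and matplotlib. The indicators, values, and layout are invented for illustration; this is not the CRECS dashboard, whose design the study derives from stakeholder consultation.

```python
# Illustrative only: a toy monitoring dashboard for a program evaluation
# context, with invented indicators; not the CRECS dashboard itself.
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical quarterly indicators a knowledge network might track.
df = pd.DataFrame({
    "quarter": ["Q1", "Q2", "Q3", "Q4"],
    "participants": [120, 150, 140, 180],
    "reports_published": [3, 5, 4, 6],
    "partner_orgs": [8, 9, 11, 12],
})

# One panel per indicator, side by side, as a simple dashboard view.
fig, axes = plt.subplots(1, 3, figsize=(12, 3))
for ax, col in zip(axes, ["participants", "reports_published", "partner_orgs"]):
    ax.bar(df["quarter"], df[col])
    ax.set_title(col.replace("_", " "))
fig.suptitle("Program monitoring dashboard (illustrative)")
fig.tight_layout()
plt.show()
```

A production dashboard would typically sit on live data and an interactive framework rather than a static figure, but the core pattern is the same: a small set of agreed-upon indicators, refreshed regularly and visible to both internal and external stakeholders.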
9

Steeping the Organization’s Tea: Examining the Relationship Between Evaluation Use, Organizational Context, and Evaluator Characteristics

Allen, Marisa 14 June 2010 (has links)
No description available.
10

Utvärdera Informationssystem : Pragmatiskt perspektiv och metod / Evaluating Information Systems : Pragmatic perspective and method

Lagsten, Jenny January 2009 (has links)
Purpose: The overall purpose of the thesis is to develop a stakeholder-based evaluation method for evaluating information systems. The method is called VISU (a Swedish acronym, Verksamhetsutvecklande Informationssystemutvärdering, roughly "IS evaluation for work practice development"). In VISU, the evaluation process is understood as a social process in which people work together to determine the qualities and values of the information system under evaluation. The point of departure is that evaluation is to be used in practice by people in order to create change and improvement in organisations. Questions: Two overarching research questions have guided the study: 1) How should a method for evaluating information systems be designed in order to contribute to the development of an organisation? and 2) What is the significance of a pragmatically grounded model for IS evaluation? Method: The research was conducted according to Canonical Action Research. The action research strategy was chosen to meet the investigative needs of method development: the need to try the method out in the practical evaluation settings where it is intended to be used. Further, VISU is grounded in pragmatic knowledge theories and theories of evaluation (particularly the stakeholder model), as well as in research on IS evaluation, and is anchored in the school of interpretive IS evaluation. Contribution: The practical contribution of the thesis is VISU, a method for evaluating information systems. The theoretical contributions are a rationality for a pragmatic evaluation process, a multi-paradigm model for IS evaluation, and a model for evaluation use.
