1

Program evaluation capacity for nonprofit human services organizations: an analysis of determining factors

Alaimo, Salvatore. January 2008
Thesis (Ph.D.), Indiana University, 2008. Department of Philanthropic Studies, Indiana University-Purdue University Indianapolis (IUPUI). Advisors: David A. Reingold, Debra Mesch, David Van Slyke, Patrick Rooney. Includes vita and bibliographical references (leaves 314-330).
2

The Role of Evaluation Policy in Organizational Capacity to Do and Use Evaluation

Al Hudib, Hind 14 September 2018
Despite recent calls by scholars in the evaluation field regarding the importance of evaluation policy and its influence on evaluation practice, there remains a lack of empirical evidence on the relationship between evaluation policy and evaluation capacity building (ECB). This study sought to explore the role of evaluation policy in building, or impeding, organizational capacity to do and use evaluation. Through three interconnected studies (a review of an extensive sample of evaluation policies; interviews with scholars and practitioners from Canada, the United States, and Europe; and focus groups with evaluation community members in Jordan and Turkey), the research identified ten categories of evaluation policy and then developed and validated an ecological framework depicting the relationship between evaluation policy and organizational capacity to do and use evaluation.

The findings suggest that the role of evaluation policy in building organizational evaluation capacity is moderated by variables operating at the contextual, organizational, and individual levels, and that focusing ECB efforts requires an in-depth understanding of the dynamic, unfolding, and ongoing connections between ECB, on the one hand, and the broader social, economic, political, and cultural systems associated with an organization, on the other. While the findings reveal that evaluation policy has so far played a limited role in leveraging organizational evaluation capacity, they also offer evidence that a policy carefully designed to privilege learning as a central and desirable function of evaluation is more likely to have a positive influence on the organizational capacity to do and use evaluation. The investigation advances understanding of these connections and provides insight into the components of evaluation policies and the role they might play in shaping the future of evaluation practice.

This thesis makes an important contribution to the body of knowledge on organizational evaluation capacity: although much has been published in the evaluation literature on ECB, its relationship to evaluation policy had not previously been explored or described based on empirical data. The main practical implication of the research is that organizations seeking to develop ECB-oriented evaluation policies, or to review and update their current policies to make them more ECB-friendly, can use the ecological framework and the set of evaluation policy categories as guides. Future research may focus on expanding the scope of the framework and its applicability to different types of organizations in different contexts. Finally, it is argued that the development of policies designed to promote learning is a necessary step towards the advancement of evaluation practice.
3

Evaluation in Competence by Design Medical Education Programs

Milosek, Jenna D. 29 March 2023
To ensure medical residents are prepared to work in complex and evolving settings, postgraduate medical education is transitioning to competency-based medical education, known in Canada as Competence by Design (CBD). Understanding how CBD is operationalized within specific residency programs, and how it contributes to patient, faculty, and learner outcomes, requires program evaluation. However, the extent to which, the reasons why, and the methods by which CBD programs engage in program evaluation remain unclear. Furthermore, minimal attention has been given to building program evaluation capacity within medical education programs (i.e., capacity to do evaluation and to use evaluation findings). In this research project, I explore and formally document: (a) the extent to which and the ways in which CBD programs are engaging in program evaluation; (b) the reasons why these programs are or are not engaging in program evaluation; (c) the actual and potential positive and negative consequences of these programs engaging in program evaluation; (d) the ways that these programs build their capacities to do program evaluation and use evaluation findings; (e) the ways that program evaluators currently support these programs; and (f) the ways that program evaluators can help stakeholders build their capacities to do program evaluation and use evaluation findings. Through this research, I contribute to the limited body of empirical research on program evaluation in medical education. Confirming how CBD programs engage in program evaluation can advise stakeholders and program evaluators on how best to support CBD programs in building their capacities to do program evaluation and use evaluation findings, inform the design and implementation of other medical education programs, and, ultimately, inform program evaluation research on authentic and current evaluation practices in medical education.

To meet the objectives of this study, I used a three-phase, sequential mixed methods approach. In Phase 1, I surveyed Canadian program directors whose programs have transitioned to CBD to determine: (a) the extent to which CBD programs engage in program evaluation, and (b) the reasons why they do or do not. In Phase 2, I interviewed interested program directors to explore: (c) how CBD programs engage in program evaluation, and (d) the ways in which they can build their capacities to do program evaluation and use evaluation findings. In Phase 3, I interviewed Canadian program evaluators to investigate: (e) how program evaluators currently support CBD programs in program evaluation, and (f) how program evaluators can help CBD programs build their capacities to do program evaluation and use evaluation findings.

The Phase 1 findings show that: (a) over three quarters of respondents indicated that their program engages in program evaluation, and most invite stakeholders to participate; however, most programs rarely leverage the expertise of a program evaluator and acknowledge that interpreting quantitative program evaluation data is a challenge. Additionally, (b) most programs engage in program evaluation to improve their program and make decisions, yet most do not have an employee whose primary responsibility is program evaluation, and they do not receive funding for program evaluation, which limits their ability to engage in it. Moreover, some programs do not engage in program evaluation because they do not know how. The Phase 2 findings show that: (c) when program directors do engage in program evaluation, they use ad hoc evaluation methods and a team-based format; however, program directors of CBD programs struggle to engage in program evaluation because of limited buy-in and limited resources (i.e., time, funding, human resources, and technology infrastructure). Additionally, (d) program directors are building their capacity to do evaluation and to use the findings from their specialty/subspecialty program evaluations. The Phase 3 findings show that: (e) program evaluators support CBD programs reactively, as temporary and external evaluation consultants. Finally, (f) program evaluators can help CBD programs build their capacities to do program evaluation and use the findings by using a participatory evaluation approach, leveraging existing data, encouraging evaluation approaches appropriate to the CBD implementation context, and encouraging programs to share findings, which establishes an accountability cycle. In light of these findings, I discuss ways to engage in program evaluation, build capacity to do evaluation, and build capacity to use evaluation findings in CBD programs.
4

The Relationship between Motivation and Evaluation Capacity in Community-based Organizations

Sen, Anuradha 11 June 2019
Community-based organizations increasingly face the need to systematically gather and provide data, information, and insights on the quality of their services and performance to governments, donors, and funding agencies. To meet these demands, community-based organizations have identified the need to build their own evaluation capacity. Increasing the evaluation capacity of an organization requires evaluation capacity building at the individual level, which may be affected by other factors such as employee work motivation. This quantitative study examines the relationship between employee work motivation and individual evaluation capacity using the Multidimensional Work Motivation Scale and the Evaluation Capacity Assessment Instrument. I found that employees with higher intrinsic motivation have higher evaluation capacity, whereas those with higher amotivation have lower evaluation capacity. The study also investigates the relationships between motivation and evaluative thinking and between evaluation capacity and evaluative thinking, finding that individual evaluation capacity and evaluative thinking are closely related. By elucidating the links between employee motivation, evaluation capacity, and evaluative thinking, this study can help organizations improve their evaluation practice and help employees progress in their careers. / Master of Science in Life Sciences
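As a rough illustration of the kind of analysis such a quantitative study might run, the sketch below correlates per-employee subscale scores from the two named instruments. The data values and variable names are invented for illustration; they are not drawn from the thesis.

```python
import numpy as np
from scipy import stats

# Hypothetical per-employee subscale averages (Likert-style, 1-5).
# The study used the Multidimensional Work Motivation Scale (MWMS) and the
# Evaluation Capacity Assessment Instrument (ECAI); these numbers are invented.
intrinsic_motivation = np.array([4.2, 3.8, 4.5, 2.9, 3.5, 4.8, 3.1, 4.0])
evaluation_capacity = np.array([3.9, 3.5, 4.4, 2.7, 3.2, 4.6, 3.0, 3.8])

# Pearson correlation: a positive r would be consistent with the reported
# finding that higher intrinsic motivation accompanies higher capacity.
r, p = stats.pearsonr(intrinsic_motivation, evaluation_capacity)
print(f"r = {r:.2f}, p = {p:.4f}")
```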
5

Building Evaluation Capacity in Schools

Maras, Melissa Ann 15 July 2008
No description available.
6

Evaluation that empowers: an iterative investigation of how organisations can become evaluation-minded

Greenaway, Lesley January 2016
This research grew out of my concern that the dominant discourse about evaluation in the UK limits how it is defined, recognised and practised: it is a discourse which primarily values performance, accountability, results and value for money. In this research, 'Evaluation that Empowers' (EtE) aims to present a different discourse about evaluation, one that recognises other voices within the evaluation mix. This perspective embraces a broader definition of evaluation in which learning and development are a priority and the roles of evaluator and participants are collaborative and mutually recognised. The purpose of this research was to explore, develop, test and refine the EtE theoretical model against real-life evaluation experience and practice in organisations. The EtE Model develops the notion of 'evaluation-mindedness': the capacity of an organisation to create a deep and sustainable change in how it thinks about evaluation and embeds evaluation practices into its day-to-day actions.

The research used a theory-building approach over four distinct iterative studies. The literature review provided a guiding framework for the empirical studies; the EtE Model was applied and refined in the context of a single longitudinal case study; further literature provided a critical review of the EtE Model in relation to current Evaluation Capacity Building literature; and finally, the EtE Model was developed into an evaluative conversation (the EtE Toolkit) and field tested in two organisations.

Findings suggest that organisations benefited from staff and volunteers engaging in critical discussion and self-assessment of their evaluation practices. For one organisation, the EtE conversation highlighted broader organisational issues; another organisation planned to adapt the EtE process to support self-evaluation across its service teams; and for one participant, an emerging story of professional development was generated. This research has made an original contribution to the theory and practice of evaluation by developing a model and toolkit for engaging key evaluation stakeholders in a critical review of evaluation policy and practice, or a meta-evaluation of evaluation. It has explored and developed the concept of evaluation-mindedness, which can be applied to organisations, teams and individuals.
7

Design and Validation of an Evaluation Checklist for Organizational Readiness for Evaluation Capacity Development

Walker-Egea, Connie F. 09 October 2014
Evaluation capacity development (ECD) has been acknowledged as a system of processes that helps organizations achieve sustainable evaluation practice. Examining an organization's existing evaluation capacity before starting an ECD process is necessary and increases the likelihood of success, as determined by the establishment or strengthening of an evaluation system within the organization. In response to this need, this study involved the design and initial validation of the Organizational Readiness for Evaluation Capacity Development (ORECD) checklist, using a mixed methods research design. The study was conducted in four phases: (a) the design of the ORECD checklist based on a review of the literature; (b) a review of the checklist by five experts to obtain face and content validity evidence, with emphasis on the relevance and clarity of the items and how well the items fit the corresponding component; (c) pretesting of the wording of the items and the format of the checklist with a sample of doctoral students with formal training in evaluation and professional evaluators; and (d) a field study with 32 nonprofit organizations to determine the utility and benefits of using the ORECD checklist and potential improvements to the instrument. This final phase generated information about the instrument's psychometric properties as well as consequential validity evidence.

Findings indicated that the ORECD checklist has great potential to determine the readiness of an organization to develop evaluation capacity, as demonstrated by the feedback received from various groups of participants, establishing face, content, and consequential validity. Results from the psychometric analysis showed correlations that, for the most part, suggested the components measure aspects of the same construct. In addition, the alpha for most of the components supported the reliability of the ORECD checklist; the two components with alphas close to but below .70 required modifications to improve their reliability, and some items needed to be modified or reworded. Ongoing efforts should provide information about how the changes made to the ORECD checklist are working, along with additional validity evidence such as that obtainable through factor analysis, which would allow exploration of the underlying structure of the checklist and its components. It is expected that the ORECD checklist will contribute to the body of literature on ECD by helping organizations address readiness in order to support and sustain the development of evaluation capacity.
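For readers unfamiliar with the alpha statistic cited above, here is a minimal sketch of how Cronbach's alpha is computed from item-level responses. The response matrix below is invented for illustration and is not data from the ORECD field study.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) response matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                              # items in the component
    item_variances = items.var(axis=0, ddof=1)      # sample variance per item
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of summed scores
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Invented example: 6 respondents answering a 4-item component on a 5-point scale.
responses = np.array([
    [4, 5, 4, 4],
    [3, 3, 4, 3],
    [5, 4, 5, 5],
    [2, 3, 2, 3],
    [4, 4, 3, 4],
    [3, 2, 3, 2],
])
print(f"alpha = {cronbach_alpha(responses):.2f}")  # below ~.70 flags a component for revision
```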
8

How Do Data Dashboards Affect Evaluation Use in a Knowledge Network? A Study of Stakeholder Perspectives in the Centre for Research on Educational and Community Services (CRECS)

Alborhamy, Yasmine 02 November 2020
Since there is limited research on the use of data dashboards in the evaluation field, this study explores the integration of a data dashboard in a knowledge network, the Centre for Research on Educational and Community Services (CRECS), as part of its program evaluation activities. The study used three phases of data collection and analysis, investigating the process of designing a dashboard for a knowledge network and the different uses of a data dashboard in a program evaluation context through interviews and focus group discussions. Four members of the CRECS team participated in one focus group; two other members participated in individual interviews. Data were analyzed for thematic patterns. Results indicate that the process of designing a data dashboard consists of five steps, reflecting the iterative nature of design and the need for sufficient consultation with stakeholders. The data dashboard has the potential to be used internally, within CRECS, and externally with other stakeholders. It is also believed to be beneficial in a program evaluation context as a monitoring tool, for evaluability assessment, and for evaluation capacity building; externally, it can be used for accountability, reporting, and communication. The study sheds light on the potential of data dashboards in organizations, though longer and broader studies are needed to confirm these uses and their sustainability.
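To give a rough sense of what a minimal monitoring dashboard of this kind can look like in code, the sketch below uses the open-source Plotly Dash library. The metrics, data, and layout are invented for illustration and bear no relation to the actual CRECS dashboard.

```python
import pandas as pd
import plotly.express as px
from dash import Dash, dcc, html

# Invented program-monitoring data; a real dashboard would pull from the
# knowledge network's own records.
df = pd.DataFrame({
    "quarter": ["Q1", "Q2", "Q3", "Q4"],
    "participants_served": [120, 145, 160, 175],
    "reports_published": [2, 3, 1, 4],
})

app = Dash(__name__)
app.layout = html.Div([
    html.H2("Program Monitoring Dashboard (illustrative)"),
    dcc.Graph(figure=px.bar(df, x="quarter", y="participants_served")),
    dcc.Graph(figure=px.line(df, x="quarter", y="reports_published")),
])

if __name__ == "__main__":
    app.run(debug=True)  # serves the dashboard locally at http://127.0.0.1:8050
```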
9

Exploring Evaluation Competency Amongst Public Health Nurses in Canada: A Scoping and Document Review

McKay, Kelly 14 April 2022
This study sought to better understand program evaluation capacity and competency amongst public health nurses. Program evaluation plays a vital role in public health and is an identified core competency for public health practice (Public Health Agency of Canada). In Part One, I conducted a scoping review to systematically map the current literature on this topic and to identify important areas for future research. Twenty-three articles were selected based on pre-established inclusion and exclusion criteria, with the assistance of a secondary reviewer. The articles highlighted the value of program evaluation in public health and its importance as a nursing skill amidst an evolving health care sector. Identified themes included: a broader lack of public health competencies (including program evaluation) among all public health professionals; the complexities and challenges of evaluating public health interventions; and uncertainty about what constitutes adequate evaluation competency in public health. My review also noted inconsistent terminology describing the public health nurse role and the need for further exploration of the specific evaluation capacity of public health nurses.

In Part Two, I explored the stated or expected evaluation competencies for public health nurses through a document review of relevant Canadian public health nursing core competencies, guidelines, and standards for practice. The identification of 52 stated evaluation competencies demonstrates an assumption that public health nurses have competency and/or capacity related to program evaluation, which contrasts with the themes identified in my scoping review. Furthermore, the documents I reviewed made no specific reference to the Canadian Evaluation Society (CES), although some of their content did align with the CES Program Evaluation Standards. This study demonstrates a misalignment between the discourse in the reviewed literature on evaluation competency amongst public health nurses and the stated or assumed evaluation competencies put forth in leading public health nursing documentation. In the absence of standardized evaluation training and preparation for public health nurses, further exploration is needed of what these broad evaluation competencies mean in practice and how they can be objectively assessed, exhibited, and better integrated into public health nursing education and evaluation capacity building activities. These questions warrant further investigation to ensure that public health interventions are properly evaluated and that public health nurses have the competencies required for effective public health practice.
10

Program Evaluation Capacity for Nonprofit Human Services Organizations: An Analysis of Determining Factors

Alaimo, Salvatore 13 October 2008
Indiana University-Purdue University Indianapolis (IUPUI)

The increasing call for accountability, combined with increasing competition for resources, has given program evaluation more importance, prominence and attention within the United States nonprofit sector. It has become a major focus for nonprofit leaders, funders, accrediting organizations, board members, individual donors, the media and scholars. Within this focus, however, there is emerging attention and literature on the concept of evaluation capacity building, aimed at discovering what organizations require to evaluate their programs effectively and efficiently. This study examines the topic within the environment and stakeholder relationship dynamics of nonprofit human service organizations. A multi-stakeholder research approach, using qualitative interviews of executive directors, board chairs, program staff, funders and evaluators as well as two case studies, is employed to provide insight into the factors that determine an organization's evaluation capacity. The overarching goal of this research is to impart this information to stakeholders interested in program evaluation by analyzing elements of capacity beyond the more common, narrow scope of financial resources and evaluation skills. This purposeful approach intends to broaden our understanding of evaluation capacity building to encompass developing the necessary resources, culture, leadership and environments in which meaningful evaluations can be conducted for nonprofit human service programs.

Results indicated that effective evaluation capacity building requires more than just funds, personnel and expertise. Important factors impacting the process included leadership; value orientations; congruence among stakeholders in their perceptions of evaluation terms and concepts; resource dependency; quality signaling; stakeholder involvement and understanding of their role in program evaluation; organizational culture; organizational learning; personal preferences; and the utilization of available evaluation tools. This study suggests that stakeholders interested in effectively building capacity to evaluate programs should be cognizant of these political, financial, social, intellectual, practical, structural, cultural and contextual implications.
