1. The Role of Evaluation Policy in Organizational Capacity to Do and Use Evaluation. Al Hudib, Hind (14 September 2018)
Despite recent calls by scholars in the evaluation field regarding the importance of evaluation policy and its influence on evaluation practice, there remains a lack of empirical evidence on the relationship between evaluation policy and evaluation capacity building (ECB). This study explored the role of evaluation policy in building, or impeding, organizational capacity to do and use evaluation. Through three interconnected studies (a review of an extensive sample of evaluation policies; interviews with scholars and practitioners from Canada, the United States, and Europe; and focus groups with evaluation community members in Jordan and Turkey), the research identified a set of 10 categories of evaluation policy and then developed and validated an ecological framework depicting the relationship between evaluation policy and organizational capacity to do and use evaluation. The findings suggest that the role of evaluation policy in building organizational capacity for evaluation is moderated by variables operating at the contextual, organizational, and individual levels, and that an in-depth understanding of the dynamic, unfolding, and ongoing connections between ECB and the broader social, economic, political, and cultural systems associated with an organization is essential in focusing ECB efforts. While the findings reveal that the role of evaluation policy in leveraging organizational evaluation capacity has been limited, they also show some evidence that an evaluation policy carefully designed to privilege learning as a central and desirable function of evaluation is more likely to have a positive influence on organizational capacity to do and use evaluation.
The investigation advances understanding of these connections and provides insight into the components of evaluation policies and the role they might play in shaping the future of evaluation practice. The thesis makes an important contribution to the body of knowledge on organizational evaluation capacity: although much has been published in the evaluation literature on ECB, its relationship to evaluation policy had not been explored or described based on empirical data. The main practical implication is that organizations seeking to develop ECB-oriented evaluation policies can use the ecological framework and the set of evaluation policy categories as guides; organizations seeking to review and update their current policies to make them more ECB-friendly stand to benefit in the same way. Future research may focus on expanding the scope of the framework and its applicability to different types of organizations in different contexts. Finally, it is argued that the development of policies designed to promote learning is a necessary step towards the advancement of evaluation practice.
2. Evaluation in Competence by Design Medical Education Programs. Milosek, Jenna D. (29 March 2023)
To ensure medical residents are prepared to work in complex and evolving settings, postgraduate medical education is transitioning to competency-based medical education, known in Canada as Competence by Design (CBD). To understand how CBD is operationalized within specific residency programs and how it contributes to patient, faculty, and learner outcomes, there is a need to engage in program evaluation. However, the extent to which, the reasons why, and the methods by which CBD programs engage in program evaluation remain unclear. Furthermore, minimal attention has been given to building program evaluation capacity within medical education programs (i.e., doing evaluation and using evaluation findings).
In this research project, I explore and formally document: (a) the extent to which and the ways in which CBD programs are engaging in program evaluation, (b) the reasons why these programs are engaging or not engaging in program evaluation, (c) the actual and potential positive and negative consequences of these programs engaging in program evaluation, (d) the ways that these programs build their capacities to do program evaluation and use evaluation findings, (e) the ways that program evaluators currently support these programs, and (f) the ways that program evaluators can help stakeholders build their capacities to do program evaluation and use evaluation findings. Through this research, I contribute to the limited body of empirical research on program evaluation in medical education. Confirming how CBD programs are engaging in program evaluation can advise stakeholders and program evaluators on how best to support CBD programs in building their capacities to do program evaluation and use evaluation findings, inform the design and implementation of other medical education programs, and, ultimately, enlighten program evaluation research on authentic and current evaluation practices in medical education.
To meet the objectives of this study, I used a three-phase, sequential mixed methods approach. In Phase 1, I conducted a survey of Canadian program directors whose programs have transitioned to CBD to determine: (a) the extent to which CBD programs are engaging in program evaluation, and (b) the reasons why CBD programs are engaging or not engaging in program evaluation. In Phase 2, I interviewed interested program directors to explore: (c) how CBD programs are engaging in program evaluation, and (d) the ways in which CBD programs can build their capacities to do program evaluation and use evaluation findings. In Phase 3, I interviewed Canadian program evaluators to investigate: (e) how program evaluators are currently supporting CBD programs in program evaluation, and (f) how program evaluators can help CBD programs build their capacities to do program evaluation and use evaluation findings.
Overall, the Phase 1 findings show that: (a) over three quarters of respondents indicated that their program does engage in program evaluation, and most invite stakeholders to participate. However, most programs rarely leverage the expertise of a program evaluator and acknowledge that interpreting quantitative program evaluation data is a challenge. Additionally, (b) most programs engage in program evaluation to improve their program and make decisions. However, most programs do not have an employee whose primary responsibility is program evaluation, and they do not receive funding for program evaluation, which affects their ability to engage in it. Moreover, some programs do not engage in program evaluation because they do not know how to do it. The Phase 2 findings show that: (c) when program directors do engage in program evaluation, they use ad hoc evaluation methods and a team-based format. However, program directors of CBD programs are struggling to engage in program evaluation because of limited available resources (i.e., time, financial and human resources, and technology infrastructure) and limited buy-in. Additionally, (d) program directors are building their capacity to do evaluation and use the findings from their specialty/subspecialty program evaluations. The Phase 3 findings show that: (e) program evaluators are supporting CBD programs reactively, as temporary, external evaluation consultants. Finally, (f) program evaluators can help CBD programs build their capacities to do program evaluation and use the findings by using a participatory evaluation approach, leveraging existing data, encouraging the use of program evaluation approaches appropriate to the CBD implementation context, or encouraging programs to share findings, which establishes an accountability cycle. In light of these findings, I discuss ways to engage in program evaluation, build capacity to do evaluation, and build capacity to use evaluation findings in CBD programs.
3. Evaluation that empowers: an iterative investigation of how organisations can become evaluation-minded. Greenaway, Lesley (January 2016)
This research grew out of my concern that the dominant discourse about evaluation in the UK limits how it is defined, recognised and practised. It is a discourse which primarily values performance, accountability, results and value for money. In this research, ‘Evaluation that Empowers’ (EtE) aims to present a different discourse about evaluation that recognises other voices within the evaluation mix. This perspective embraces a broader definition of evaluation in which learning and development are a priority, and in which the roles of evaluator and participants are collaborative and mutually recognised. The purpose of this research was to explore, develop, test and refine the EtE theoretical model against real-life evaluation experience and practice in organisations. The EtE Model develops the notion of ‘evaluation-mindedness’ as the capacity of an organisation to create a deep and sustainable change in how it thinks about evaluation and embeds evaluation practices into its day-to-day actions. The research used a theory-building approach over four distinct iterative studies. The literature review provided a guiding framework for the empirical studies; the EtE Model was applied and refined in the context of a single longitudinal case study; and further literature provided a critical review of the EtE Model in relation to current Evaluation Capacity Building literature. Finally, the EtE Model was developed into an evaluative conversation (the EtE Toolkit) and field tested in two organisations. Findings suggest that organisations benefited from staff and volunteers engaging in critical discussion and self-assessment of their evaluation practices. For one organisation, the EtE conversation highlighted broader organisational issues; another organisation planned to adapt the EtE process to support self-evaluation across its service teams; and for one participant an emerging story of professional development was generated. This research has made an original contribution to the theory and practice of evaluation by developing a model and toolkit for engaging key evaluation stakeholders in a process of critical review of evaluation policy and practice, or a meta-evaluation of evaluation. It has explored and developed the concept of evaluation-mindedness, which can be applied to organisations, teams and individuals.
4. Design and Validation of an Evaluation Checklist for Organizational Readiness for Evaluation Capacity Development. Walker-Egea, Connie F. (09 October 2014)
Evaluation capacity development (ECD) has been acknowledged as a system of processes to help organizations achieve sustainable evaluation practice. Examining an organization's existing evaluation capacity before starting an ECD process is necessary and increases the likelihood of success, as determined by the establishment or strengthening of an evaluation system within the organization. In response to this need, this study involved the design and initial validation of the Organizational Readiness for Evaluation Capacity Development (ORECD) checklist, using a mixed methods research design. The study was conducted in four phases: (a) the design of the ORECD checklist based on a review of the literature; (b) a review of the ORECD checklist by five experts to obtain face and content validity evidence, with emphasis on the relevance and clarity of the items and how well the items fit the corresponding component; (c) pretesting of the wording of the items and the format of the ORECD checklist with a sample of doctoral students with formal training in evaluation and professional evaluators; and (d) a field study with 32 nonprofit organizations to determine the utility and benefits of using the ORECD checklist and potential improvements to the instrument. This final phase generated information about the instrument's psychometric properties as well as consequential validity evidence. Findings indicated that the ORECD checklist has great potential to determine the readiness of an organization to develop evaluation capacity, as demonstrated by the feedback received from various groups of participants, establishing face, content, and consequential validity. Results from the psychometric analysis showed correlations that, for the most part, suggested the components are measuring aspects of the same construct. In addition, the alpha for most of the components supported the reliability of the ORECD checklist; the two components with alphas close to but below .70 required modifications to improve their reliability, and some items needed to be modified or reworded. Ongoing efforts should provide information about how the changes made to the ORECD checklist are working, along with additional validity evidence such as that obtainable through factor analysis, which would allow exploration of the underlying structure of the ORECD checklist and its components. It is expected that the ORECD checklist can contribute to the body of literature on ECD by helping to address organizational readiness in order to support and sustain the development of evaluation capacity within organizations.
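The reliability analysis described above centers on Cronbach's alpha for each checklist component. As a hedged illustration only (this is not the author's code; the component size, rating scale, and data below are hypothetical), a minimal Python sketch of that computation:

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    scores = np.asarray(scores, dtype=float)
    n_items = scores.shape[1]
    item_var_sum = scores.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = scores.sum(axis=1).var(ddof=1)        # variance of total scores
    return (n_items / (n_items - 1)) * (1 - item_var_sum / total_var)

# Hypothetical: 32 organizations rating a 4-item component on a 1-5 scale.
rng = np.random.default_rng(0)
component = rng.integers(1, 6, size=(32, 4))
print(f"alpha = {cronbach_alpha(component):.2f}")
# Components with alpha close to but below .70, as in the study, would be
# flagged for modification to improve reliability.
```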
5. How Do Data Dashboards Affect Evaluation Use in a Knowledge Network? A Study of Stakeholder Perspectives in the Centre for Research on Educational and Community Services (CRECS). Alborhamy, Yasmine (02 November 2020)
Since there is limited research on the use of data dashboards in the evaluation field, this study explores the integration of a data dashboard in a knowledge network, the Centre for Research on Educational and Community Services (CRECS), as part of its program evaluation activities. The study used three phases of data collection and analysis, investigating the process of designing a dashboard for a knowledge network and the different uses of a data dashboard in a program evaluation context through interviews and focus group discussions. Four members of the CRECS team participated in one focus group; two other members participated in individual interviews. Data were analyzed for thematic patterns. Results indicate that the process of designing a data dashboard consists of five steps, reflecting the iterative nature of design and the need for sufficient consultation with stakeholders. Moreover, the data dashboard has the potential to be used internally, within CRECS, and externally with other stakeholders. The data dashboard is also believed to be beneficial in a program evaluation context as a monitoring tool, for evaluability assessment, and for evaluation capacity building. In addition, it can be used externally for accountability, reporting, and communication. The study sheds light on the potential of data dashboards in organizations, though longer and broader studies are needed to confirm these uses and their sustainability.
6. Examining the Evaluation Capacity, Evaluation Behaviors, and the Culture of Evaluation in Cooperative Extension. Vengrin, Courtney Ahren (28 January 2016)
Evaluation is a burgeoning field and remains fairly young by most standards. Within Cooperative Extension, evaluation practices have been implemented at a variety of levels, given that evaluation is mandatory for much of the funding Cooperative Extension receives. With evaluation in high demand, most Extension educators are expected to perform some level of evaluation as a routine part of their jobs. To perform the required evaluations, an Extension educator must exhibit some level of knowledge and skill regarding evaluation. While much research to date has examined the level of evaluation within the organization, there is a lack of understanding regarding the evaluation competencies that Extension educators must possess and the culture of evaluation within the organization. This study set out to examine the evaluation competencies, culture, and evaluation behaviors within Cooperative Extension. Using an online survey and quantitative methodology, a widely accepted set of evaluation competencies was examined for importance within Cooperative Extension. A panel of 13 experts was selected to examine the competencies, and it was determined that all competencies in the list were necessary for Extension educators to exhibit in their jobs. The list of competencies was then combined with a subscale regarding culture and a subscale based on the Theory of Planned Behavior (Ajzen, 1991). A total of 419 Extension educators in four Extension systems participated in the study, with 222 generating usable data for a response rate of 13%. Extension educators self-reported their skill levels, from which the highest and lowest skill levels for the competencies were determined. Perceptions of the importance of each competency were also examined, and the highest and lowest importance rankings were compared to those of the expert panel. A path analysis was conducted using a modified Theory of Planned Behavior model, along with multiple regression analysis. Mean weighted discrepancy scores were calculated to determine the differences between skill level and perceived importance for each of the competencies. The culture subscale was examined for potential areas of Evaluation Capacity Building (ECB) within the organization. Results show that while there was much agreement between the expert panel and Extension educators regarding the importance of competencies, experts ranked all competencies as important while Extension educators did not. The path analysis determined that intention and perceived behavioral control explained 3.9% of the variance in evaluation behavior as exhibited by skill. Subjective norm and attitude explained 11.8% of the variance within intention. Perceived behavioral control, attitude, and culture accounted for 13.1% of the variance in subjective norm. Culture and perception accounted for 7.1% of the variance in attitude. Perception, program area, college major, location, training in evaluation, degree level, and years of experience explained 28% of the variance within evaluation culture. Finally, recommendations for practice and future research were made based on these findings. / Ph. D.
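Mean weighted discrepancy scores in Extension competency studies typically follow the Borich needs-assessment model: for each competency, the importance-minus-skill discrepancy is weighted by the mean importance rating and averaged across respondents. Assuming that model (the abstract does not reproduce the formula, and all data below are hypothetical), a minimal sketch:

```python
import numpy as np

def mwds(importance: np.ndarray, skill: np.ndarray) -> np.ndarray:
    """Mean weighted discrepancy score per competency.

    Both inputs are (n_respondents, n_competencies) rating matrices.
    """
    importance = np.asarray(importance, dtype=float)
    skill = np.asarray(skill, dtype=float)
    discrepancy = importance - skill                   # per-respondent gap
    weighted = discrepancy * importance.mean(axis=0)   # weight by mean importance
    return weighted.mean(axis=0)                       # average across respondents

# Hypothetical 1-5 ratings from 222 educators on 10 competencies.
rng = np.random.default_rng(1)
imp = rng.integers(1, 6, size=(222, 10))
skl = rng.integers(1, 6, size=(222, 10))
print(mwds(imp, skl))  # larger scores suggest greater training need
```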
7. Aligning Cultural Responsiveness in Evaluation and Evaluation Capacity Building: A Needs Assessment with Family Support Programs. Cook, Natalie E. (08 January 2016)
Family support programs serve vulnerable families by providing various forms of support, such as education, health services, financial assistance, and referrals to community resources. A major feature of evaluation involves assessing program effectiveness and learning from evaluation findings (Mertens and Wilson, 2012). Collaboration and cultural responsiveness are important topics in evaluation that remain largely distinct in the literature. However, evaluation capacity building provides a context for exploring possible intersections.
Data about seven programs were collected via semi-structured interviews and document analysis. This study revealed that the program leaders feel their programs are unique, complex, and misunderstood. The findings also suggest that program leaders believe evaluation is important for program improvement and funding. Although participants did not anticipate evaluation capacity building and did not readily express a desire to develop their own evaluation skills, participants from all seven programs enthusiastically expressed interest in evaluation capacity building once it was explained.
Although participants did not discuss cultural responsiveness as it relates to race, they expressed a need to overcome a community culture of reluctance to participate in programs and aversion to educational pursuits. Given the programs' shared population of interest, similar outcomes, and common challenges, evaluation capacity building in a group setting may give Roanoke family support program leaders the evaluation knowledge, skills, and peer support to engage in program evaluation that is both collaborative and culturally responsive. / Master of Science in Life Sciences
8. Evaluation Capacity Building (ECB) as a Vehicle for Social Transformation: Conceptualizing Transformative ECB and Kaleidoscopic Thinking. Cook, Natalie E. (18 February 2020)
Program evaluation has become an increasingly urgent task for organizations, agencies, and initiatives that have the obligation or motivation to measure program outcomes, demonstrate impact, improve programming, tell their program story, and justify new or continued funding. Evaluation capacity building (ECB) is an important endeavor not only to empower program staff to understand, describe, and improve their programs, but also to enable programs to effectively manage limited resources. Accountability is important as public funds for social programs continue to dwindle and program administrators must do their best to fulfill their program missions in ethical, sustainable ways despite insufficient resources. While ECB on its own is valuable, as it can promote evaluative thinking and help build staff's evaluation literacy and competency, ECB presents a ripe opportunity for program staff to understand the principles of equity and inclusivity and to see themselves as change agents for societal transformation. In the present study, I developed, tested, and evaluated the concept of transformative ECB (TECB), a social justice-oriented approach, rooted in culturally responsive evaluation, critical adult education, and the transformative paradigm, which promotes not only critical and evaluative thinking, but also kaleidoscopic thinking. Kaleidoscopic thinking (KT) is thinking that centers social justice and human dignity through intentional consideration (turning of the kaleidoscope) of multiple perspectives and contexts while attending to the intersectional planes of diversity, such as culture, race, gender identity, age, belief system, and socioeconomic status. KT involves reflexivity, creativity, respect for diversity, compassion, and hope on the part of the thinker when examining issues and making decisions. / Doctor of Philosophy / Program evaluation has become increasingly important for organizations seeking to measure program outcomes, demonstrate impact, improve programming, tell their program story, and make the case for new or continued funding. Evaluation capacity building (ECB) includes training that is important not only to help program staff understand, describe, and improve their programs, but also to allow programs to successfully "do more" with less. While ECB on its own is valuable, as it can help program staff become more evaluation-minded and skilled, ECB presents a ripe opportunity for program staff to understand the principles of equity and inclusivity and to see themselves as drivers of social change. In this study, I developed, tested, and evaluated the idea of transformative ECB (TECB), a social justice-oriented approach rooted in culturally responsive evaluation, critical adult education, and the transformative (social justice-related) framework. The TECB approach promotes not only critical thinking and evaluative thinking, but also kaleidoscopic thinking (KT), which focuses on social justice and human dignity. KT involves reflexivity, creativity, respect for diversity, compassion, and hope on the part of the thinker when examining issues and making decisions.
9. Building Evaluation Capacity in Schools. Maras, Melissa Ann (15 July 2008)
No description available.
10. An Empirical Study of the Process of Evaluation Capacity Building in Higher Education. Mahato, Seema (23 September 2020)
No description available.