Evaluation in Competence by Design Medical Education Programs

To ensure medical residents are prepared to work in complex and evolving settings, postgraduate medical education is transitioning to competency-based medical education, known in Canada as Competence by Design (CBD). To understand how CBD is operationalized within specific residency programs and how it contributes to patient, faculty, and learner outcomes, there is a need to engage in program evaluation. However, the extent to which, the reasons why, and the methods by which CBD programs engage in program evaluation remain unclear. Furthermore, minimal attention has been given to building program evaluation capacity within medical education programs (i.e., the capacity to do evaluation and to use evaluation findings).

In this research project, I explore and formally document: (a) the extent to which and the ways in which CBD programs are engaging in program evaluation, (b) the reasons why these programs are engaging or not engaging in program evaluation, (c) the actual and potential positive and negative consequences of these programs engaging in program evaluation, (d) the ways that these programs build their capacities to do program evaluation and use evaluation findings, (e) the ways that program evaluators currently support these programs, and (f) the ways that program evaluators can help stakeholders build their capacities to do program evaluation and use evaluation findings. Through this research, I contribute to the limited body of empirical research on program evaluation in medical education. Documenting how CBD programs engage in program evaluation can advise stakeholders and program evaluators on how best to support CBD programs in building their capacities to do program evaluation and use evaluation findings, inform the design and implementation of other medical education programs, and, ultimately, ground program evaluation research in authentic and current evaluation practices in medical education.

To meet the objectives of this study, I used a three-phase, sequential mixed methods approach. In Phase 1, I conducted a survey of Canadian program directors whose programs have transitioned to CBD to determine: (a) the extent to which CBD programs are engaging in program evaluation, and (b) the reasons why CBD programs are engaging or not engaging in program evaluation. In Phase 2, I interviewed interested program directors to explore: (c) how CBD programs are engaging in program evaluation, and (d) the ways in which CBD programs can build their capacities to do program evaluation and use evaluation findings. In Phase 3, I interviewed Canadian program evaluators to investigate: (e) how program evaluators are currently supporting CBD programs in program evaluation, and (f) how program evaluators can help CBD programs build their capacities to do program evaluation and use evaluation findings.

The Phase 1 findings show that: (a) over three quarters of respondents indicated that their program does engage in program evaluation, and most invite stakeholders to participate. However, most programs rarely leverage the expertise of a program evaluator, and many acknowledge that interpreting quantitative program evaluation data is a challenge. Additionally, (b) most programs engage in program evaluation to improve their program and make decisions. However, most programs do not have an employee whose primary responsibility is program evaluation, nor do they receive funding for program evaluation, which limits their ability to engage in it. Moreover, some programs do not engage in program evaluation because they do not know how to do it.

The Phase 2 findings show that: (c) when program directors do engage in program evaluation, they use ad hoc evaluation methods and a team-based format. However, program directors of CBD programs struggle to engage in program evaluation because of limited resources (i.e., time, funding, human resources, and technology infrastructure) and limited buy-in. Additionally, (d) program directors are building their capacity to do evaluation and to use the findings from their specialty/subspecialty program evaluations.

The Phase 3 findings show that: (e) program evaluators currently support CBD programs reactively, as temporary, external evaluation consultants. Finally, (f) program evaluators can help CBD programs build their capacities to do program evaluation and use the findings by adopting a participatory evaluation approach, leveraging existing data, encouraging the use of evaluation approaches that are appropriate to the CBD implementation context, and encouraging programs to share their findings, which establishes an accountability cycle.

In light of these findings, I discuss ways to engage in program evaluation, build capacity to do evaluation, and build capacity to use evaluation findings in CBD programs.

Identifier: oai:union.ndltd.org:uottawa.ca/oai:ruor.uottawa.ca:10393/44766
Date: 29 March 2023
Creators: Milosek, Jenna D.
Contributors: Moreau, Katherine
Publisher: Université d'Ottawa / University of Ottawa
Source Sets: Université d'Ottawa
Language: English
Type: Thesis
Format: application/pdf
Rights: Attribution 4.0 International, http://creativecommons.org/licenses/by/4.0/
