The Development and Testing of a Measurement System to Assess Intensive Care Unit Team Performance

Teamwork is essential for ensuring the quality and safety of healthcare delivery in the intensive care unit (ICU). Complex procedures are conducted by a diverse team of clinicians with unique roles and responsibilities. Information about care plans and goals must be developed, communicated, and coordinated across multiple disciplines and transferred effectively between shifts and personnel. The intricacies of routine care are compounded during emergency events, which require ICU teams to adapt to rapidly changing patient conditions while facing intense time pressure and stress. Realities such as these underscore the need for teamwork skills in the ICU. The measurement of teamwork serves a number of purposes, including routine assessment, directing feedback, and evaluating the impact of improvement initiatives. Yet no behavioral marker system exists in critical care for quantifying teamwork across multiple task types.

This study contributes to the state of the science and practice in critical care by taking a (1) theory-driven, (2) context-driven, and (3) psychometrically driven approach to the development of a teamwork measure. Development of the marker system considered the state of the science and practice surrounding teamwork in critical care, the application of behavioral marker systems across the healthcare community, and interviews with frontline clinicians. The resulting ICU behavioral marker system covers four core teamwork dimensions especially relevant to critical care teams: Communication, Leadership, Backup and Supportive Behavior, and Team Decision Making, with each dimension subsuming other relevant subdimensions.

This study provided an initial assessment of the reliability and validity of the marker system by focusing on a subset of teamwork competencies relevant to a subset of team tasks. Two raters scored the performance of 50 teams along six subdimensions during rounds (n = 25) and handoffs (n = 25). In addition to calculating traditional forms of reliability evidence [intraclass correlations (ICCs) and percent agreement], this study applied generalizability (G) theory to model the systematic variance in ratings associated with raters, instances of teamwork, subdimensions, and tasks. G theory was also employed to provide evidence that the marker system adequately distinguishes the teamwork competencies targeted for measurement.

The marker system differentiated teamwork subdimensions both when the data for rounds and handoffs were combined and when the data were examined separately by task (G coefficients greater than 0.80). Variance associated with instances of teamwork, subdimensions, and their interaction constituted the greatest proportion of variance in scores, while variance associated with rater and task effects was minimal. That said, a large percentage of residual error remained across analyses. Single-measures ICCs were fair to good when the data for rounds and handoffs were combined, depending on the competency assessed (0.52 to 0.74). ICCs ranged from fair to good when examining handoffs alone (0.47 to 0.69) and fair to excellent when considering rounds alone (0.53 to 0.79). Average-measures ICCs were always greater than single-measures ICCs, ranging from good to excellent (overall: 0.69 to 0.85; handoffs: 0.64 to 0.81; rounds: 0.70 to 0.89). The percentage of exact agreement was generally substandard, ranging from 0.44 to 0.80 across the task analyses. The percentage of scores within a single point of one another, however, was nearly perfect, ranging from 0.80 to 1.00 for the combined data, handoffs alone, and rounds alone.

The confluence of evidence supported the expectation that the marker system differentiates among teamwork subdimensions, yet the different reliability indices suggested varying levels of confidence in rater consistency depending on the teamwork competency measured. Because this study applied a psychometric approach, areas for future development and testing to redress these issues were identified. There is also a need to assess the viability of this tool in other research contexts to evaluate its generalizability in settings with different norms and organizational policies, as well as for tasks that emphasize different teamwork skills. Further, it is important to increase the number of users able to make assessments through low-cost, easily accessible rater training and guidance materials, with particular emphasis on areas where rater reliability was less than ideal. This would allow future researchers to evaluate team performance, provide developmental feedback, and determine the impact of future teamwork improvement initiatives.
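
Two standard relationships underlie the reliability figures reported above; they are offered here as a reference sketch with generic variance components, not as a reproduction of the dissertation's exact measurement design. The generic G coefficient expresses the proportion of score variance attributable to the object of measurement relative to relative-error variance:

    E\rho^2 = \frac{\sigma^2_{\tau}}{\sigma^2_{\tau} + \sigma^2_{\delta}}

and, with k = 2 raters, average-measures ICCs follow from single-measures ICCs via the Spearman-Brown prophecy formula:

    ICC_{avg} = \frac{k \, ICC_{single}}{1 + (k - 1)\, ICC_{single}}

For example, the combined-data bounds reproduce as 2(0.52)/(1 + 0.52) ≈ 0.69 and 2(0.74)/(1 + 0.74) ≈ 0.85, matching the reported range of 0.69 to 0.85.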
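
To illustrate how single- and average-measures ICCs of this kind can be computed from a fully crossed ratings matrix, the following Python sketch implements the two-way random-effects formulas of Shrout and Fleiss (1979). It is a minimal illustration with hypothetical data; the function name and the example ratings are not drawn from the study.

    import numpy as np

    def icc_two_way_random(x):
        # Two-way random-effects ICCs (Shrout & Fleiss, 1979) for an
        # n_targets x k_raters matrix of scores (fully crossed design).
        x = np.asarray(x, dtype=float)
        n, k = x.shape
        grand = x.mean()
        row_means = x.mean(axis=1)   # per-target means
        col_means = x.mean(axis=0)   # per-rater means

        bms = k * np.sum((row_means - grand) ** 2) / (n - 1)   # between-targets MS
        jms = n * np.sum((col_means - grand) ** 2) / (k - 1)   # between-raters MS
        resid = x - row_means[:, None] - col_means[None, :] + grand
        ems = np.sum(resid ** 2) / ((n - 1) * (k - 1))         # residual MS

        icc_single = (bms - ems) / (bms + (k - 1) * ems + k * (jms - ems) / n)
        icc_average = (bms - ems) / (bms + (jms - ems) / n)
        return icc_single, icc_average

    # Hypothetical example: 2 raters scoring 5 teams on one subdimension
    ratings = np.array([[3, 4], [5, 5], [2, 3], [4, 4], [1, 2]])
    single, average = icc_two_way_random(ratings)
    print(single, average)

For the same data, the average-measures value always exceeds the single-measures value, consistent with the pattern reported in the abstract.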

Identifier: oai:union.ndltd.org:ucf.edu/oai:stars.library.ucf.edu:etd-5780
Date: 01 January 2014
Creators: Dietz, Aaron
Publisher: STARS
Source Sets: University of Central Florida
Language: English
Detected Language: English
Type: text
Format: application/pdf
Source: Electronic Theses and Dissertations