A Systematic Review of Team Training in Health Care

The Joint Commission Journal on Quality and Patient Safety. 2016 (article in press).

A Systematic Review of Team Training in Health Care: Ten Questions

Shannon L. Marlow, MS; Ashley M. Hughes, PhD; Shirley C. Sonesh, PhD; Megan E. Gregory, PhD; Christina N. Lacerenza, MS; Lauren E. Benishek, PhD; Amanda L. Woods, BS; Claudia Hernandez, BS; Eduardo Salas, PhD

Background: As a result of the recent proliferation of health care team training (HTT), there was a need to update previous systematic reviews examining the underlying structure driving team training initiatives.

Methods: This investigation was guided by 10 research questions. A literature search identified 197 empirical samples detailing the evaluation of team training programs within the health care context; 1,764 measures of HTT effectiveness were identified within these samples. Trained coders extracted information related to study design and training development, implementation, and evaluation to calculate percentages detailing the prevalence of certain training features.

Results: HTT was rarely informed by a training needs analysis (k = 47, 23.9%) and most commonly addressed communication strategies (k = 167, 84.8%). HTT programs that incorporated practice (k = 163, 82.7%) often employed high-fidelity patient simulators (k = 38, 25.2%) and provided participants with feedback opportunities (k = 107, 65.6%). Participants were typically practicing clinicians (k = 154, 78.2%), with a lower prevalence of health care students (k = 35, 17.8%). Evaluations primarily relied on repeated measures designs (k = 123, 62.4%) and self-reported data (k = 1,257, 71.3%). Additional trends were identified and are discussed.

Conclusions: Many trends in HTT practice and evaluation were identified. The results of this review suggest that HTT programs reported in the literature are following recommendations for training design and implementation (for example, providing feedback) more frequently than in previous reviews. However, there remain many areas in which HTT could be improved to better support patient care.

The nature of patient care has changed; having once involved the coordination of a single nurse and physician, it can now require as many as 15 clinicians to care for a typical patient.1 Despite the increased prevalence of team-based care, health care teams face stressful conditions that can hinder teamwork. The Joint Commission found that breakdowns in teamwork factors (for example, communication) are the root cause in as many as 68.3% of preventable sentinel events.2 To improve patient safety, many health care organizations have adopted team training as a tactic for improving teamwork.3 Team training is defined as a set of instructional activities used to foster requirements (that is, knowledge, skills, and attitudes [KSAs]) necessary for effective teamwork.4 Recent evidence suggests that health care team training (HTT) is effective for improving health care provider teamwork.5,6 Previous investigations of HTT have shed light on the state of the literature through their examination of training design and evaluation components7–12 and its practice across medical disciplines.13–17 However, the use of HTT continues to expand; 75% of American medical students now undergo team training.18 To stay abreast of current trends associated with this increase, our work replicates and updates a review conducted by Weaver et al.7 We expand on this review in several ways. First, we applied a more comprehensive approach, broadening our review to encompass all available evaluations of HTT programs and incorporating an additional 155 samples; this was necessary to determine if findings from the original review can be applied to the broader HTT literature. Second, we sought to uncover additional insight into the questions posed in the original review7 by, in some instances, focusing on additional information or reporting information in a different manner; specifically, we examine the type of study design employed in each sample, identify the exact number of measures used across all samples, calculate percentages regarding information about measures by the number of measures rather than by the number of studies, and report duration differently, providing a separate range for programs that report duration in hours, days, weeks, and months. We also took a deeper dive into the role of simulation technologies in HTT by describing the types of technologies employed during HTT programs. This review also drew from an HTT meta-analysis by Hughes et al.5 Our overarching goal was to update the HTT literature such that practitioners can leverage findings to inform team development initiatives, as well as to provide an updated summary of the current state of HTT to inform research needs. Our investigation was guided by 10 research questions.

1553-7250/$-see front matter © 2016 The Joint Commission. Published by Elsevier Inc. All rights reserved. http://dx.doi.org/10.1016/j.jcjq.2016.12.004



Design, Delivery, and Evaluation Features of HTT

A training needs analysis can uncover information pertaining to who needs training, teamwork competencies necessary for effective team performance, and other task requirements that can be leveraged to inform training features.19,20 Using this technique is purported to enhance training effectiveness.21 Thus, we investigated the following question: (1) Are HTT programs designed using a training needs analysis? HTT is applicable to clinicians and students in a variety of settings, as teamwork has been identified as integral for effective patient care across multiple medical contexts.16,22 More importantly, health care teams often consist of members with diverse disciplinary and professional backgrounds. Siassakos and colleagues23 reviewed the literature and noted that teams perform more effectively following training when the training mirrors actual situations team members will encounter in terms of team composition. They emphasized that trained teams should be organized into the same composition, in terms of represented disciplines and professions, that will be reflected on-the-job to maximize training effectiveness. To determine if HTT is delivered to trainees from diverse organizations and teams organized in an interprofessional fashion, we examined the following: (2) Who is trained in HTT? Conducting training on-site (for example, on the unit where trainees are located) is argued to enhance training transfer, as it increases the extent to which training conditions reflect the actual job environment.24 However, conducting training on-site may also limit access to additional resources. Providing training at both on-site and off-site locations may be a viable solution. Therefore, we investigated the following: (3) Where is HTT held?
Several research initiatives have sought to identify teamwork competencies associated with effective teamwork that generalize across contexts.25,26 Accordingly, the teamwork competencies embedded in HTT programs are of critical importance. To determine if training programs are providing content consistent with these findings, we asked the following: (4) What is the content of HTT? Findings indicate that when training is delivered across multiple sessions, rather than provided in one session, it may result in increased learning.27,28 However, the effect of training duration on training outcomes is unclear. A meta-analysis conducted on team training suggested that there is little effect of duration on training outcomes; however, an overall longer duration of training may result in weaker outcomes for subjective measures of performance.29 Moreover, a longer training duration generally requires additional resources to implement, making this issue salient to practitioners, frontline care providers who may engage in HTT, and health care organizations. Consequently, we examined the following: (5) What is the duration of HTT? Research demonstrates the potential effectiveness of training,30 particularly when an instructional strategy is coupled with feedback mechanisms grounded in the science of training21 and learning.31,32 Researchers have categorized training instructional strategies into the following broad categories: (1) information (for example, lecture), (2) demonstration (for example, video), and (3) practice (for example, simulation).33 Various forms of practice have been incorporated into HTT.7 Research indicates that feedback should also be delivered to assist trainees with improvement in teamwork KSAs when practice is provided.31,32 To understand how participants engage in HTT, we investigated the following questions: (6) What instructional strategies are being used to teach HTT? (7) What practice strategies are used in HTT? (8) Is feedback provided after HTT practice sessions? To accurately evaluate training effectiveness, a repeated measures or independent groups design is recommended.34 Similarly, evaluations of HTT rely on the measurement source used to collect data; the source can affect the accuracy of the findings. For example, a high reliance on self-report measures may artificially inflate scores.35 To examine evaluation practices, we asked the following: (9a) What study designs are used? (9b) How are measurements collected?
Kirkpatrick's widely adopted training evaluation framework classifies outcomes into four categories: (1) reactions (the degree to which trainees enjoyed training and perceived it to be useful), (2) learning (the degree to which trainees acquired targeted knowledge and skills), (3) transfer (the degree to which trained competencies are exhibited on-the-job after training has ceased), and (4) results (organizational outcomes).36,37 Although no formal time line is recommended for evaluation of these criteria, Kirkpatrick recommends evaluating more distal criteria (for example, organizational results) at a time that is distal to the dissemination of the training intervention, such that use of training can first occur.36,37 Kirkpatrick also suggests that measures should be collected shortly after training if they align with a more proximal outcome.34,36,37 We thus examined the following questions: (10a) What types of outcomes are measured? (10b) When are outcomes evaluated?

METHODS

We conducted a literature search of relevant journals and databases using representative search terms (Table 1) from the earliest available start date of each database through April 2015. We also contacted authors with relevant work to identify additional unpublished manuscripts. Moreover, we searched the reference list of Weaver et al.7 but were unable to include all of the articles evaluated within their review because we used different criteria for evaluating the relevance of studies. First, we included only studies using quantitative methods of evaluation. Second, we included only articles written in English. Third, a team training program had to be administered to health care personnel and/or health care students; samples composed of clinical personnel not directly involved in health care (for example, social workers) were excluded. We included all programs regardless of the setting (for example, primary care, outpatient/ambulatory) if they met these criteria. Finally, we removed samples that dedicated less than 50% of training content to nontechnical teamwork KSAs. Teamwork training content was calculated by dividing the number of teamwork-focused competencies by the total number of competencies trained; this step was completed to ensure that included studies primarily reflected health care teamwork training content, because HTT often trains mixed content. A flow diagram depicting the article identification process is presented in Figure 1. Our final sample included 184 articles, with 197 independent samples (Appendixes 1 and 2, available in online article). We defined an independent sample as a study using a unique sample and HTT program. For example, one article may present two independent studies with two different sets of participants, which we would conceptualize as two samples. We also reviewed the quality of each study using the Effective Public Health Practice Project (EPHPP) quality assessment tool38 and found that the majority of the studies had either moderate or strong quality (k = 122, 63.08%).

Table 1. Search Methodology*

Databases Searched: Academic Search Premier, Business Source Premier, Google Scholar, MEDLINE, PubMed, Ovid, PsycINFO, ScienceDirect

Search Terms: Hospital, Health Care, Medical, Medicine, Medical Facility, Medical Students, Nursing Students, Team, Teamwork, Non-Technical Skills, Training, Education, TeamSTEPPS, Intervention, Crew Resource Management

Journals: Academic Medicine, Academy of Management Journal, Academy of Management Learning & Education, Cognitive Modeling, Ergonomics in Design, Human Factors, International Journal of Training and Development, Journal of Applied Psychology, Medical Education, Simulation in Healthcare

*Search terms were used in various combinations (e.g., Hospital AND Team AND Training).
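The 50% teamwork-content screen described above amounts to a simple proportion. A minimal sketch of that calculation follows; the competency list, labels, and function name are hypothetical (the actual coding was performed by trained raters):

```python
def teamwork_content_share(competencies):
    """Return the fraction of trained competencies that are teamwork-focused.

    `competencies` maps a competency name to True if it is a nontechnical
    teamwork competency, or False if it is technical/clinical content.
    """
    if not competencies:
        return 0.0
    teamwork = sum(1 for is_teamwork in competencies.values() if is_teamwork)
    return teamwork / len(competencies)

# Hypothetical coded sample: three teamwork competencies, one technical skill.
sample = {
    "communication": True,
    "situation awareness": True,
    "mutual support": True,
    "central line insertion": False,  # technical, not teamwork
}

share = teamwork_content_share(sample)
include = share >= 0.5  # the review retained samples with >= 50% teamwork content
print(share, include)  # 0.75 True
```

A sample training mostly technical skills (for example, one teamwork competency out of four) would fall below the threshold and be excluded.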
We extracted information relevant to the following: (a) article overview (for example, sample description), (b) training design (for example, training needs analysis), (c) implementation (for example, training strategy), and (d) training evaluation (for example, whether the study design was repeated measures, post-only, independent groups, or a combination of repeated measures and independent groups). A more detailed description of the variables coded is available upon request. Percentages were calculated using the total independent sample, unless otherwise specified. Six coders independently extracted information from each of the articles included. Any discrepancies were resolved through discussion. Inter-coder agreement was calculated as the percentage of training features coded in agreement and was 85.8%.

RESULTS

Question 1. Are HTT Programs Designed Using a Training Needs Analysis?

A training needs analysis was conducted in 47 (23.9%) of the 197 independent samples included in our review. Of these 47 samples, 13 (27.7%) focused the training needs analyses on identifying areas of teamwork requiring improvement, 9 (19.1%) on organizational needs, 5 (10.6%) on team task requirements, 4 (8.5%) on task requirements, and 4 (8.5%) on identifying people who require training (that is, person analysis). Combinations of different types of training needs analyses were used in 12 (25.5%) samples.

Question 2. Who Is Trained in HTT?

The majority of HTT trainees were practicing clinicians (k = 154, 78.2%), followed by health care students (k = 35, 17.8%), and a mix of both (k = 8, 4.1%). Of the training programs administered in a hospital (k = 151, 76.6%), most trainees came from the surgery department/operating suite (k = 26, 17.2%), followed by the emergency department (k = 17, 11.3%), intensive and critical care units (k = 11, 7.3%), trauma units (k = 5, 3.3%), labor and delivery units (k = 4, 2.6%), and general medical wards (k = 4, 2.6%). Many programs engaged trainees from multiple hospital units (k = 32, 21.2%) or unspecified units or units that did not align with broader categories (k = 52, 34.4%). After hospitals, the next most common source of training samples was academic settings (for example, a BSN program) (k = 21, 10.7%), followed by clinics (k = 5, 2.5%), elderly care and nursing homes (k = 3, 1.5%), and unspecified facilities (k = 5, 2.5%). Finally, 3 (1.5%) samples trained participants from more than one type of facility, and 9 (4.6%) samples were drawn from facilities that did not align with broader categories. Within samples that contained students (k = 43), medical students were most commonly included (k = 29, 67.4%), followed by nursing students (k = 22, 51.2%), nurse practitioner students (k = 4, 9.3%), pharmacy students (k = 2, 4.7%), physician assistant students (k = 2, 4.7%), nurse anesthetist students (k = 2, 4.7%), physical therapy students (k = 2, 4.7%), respiratory therapy students (k = 1, 2.3%), and multidisciplinary women's health students (k = 1, 2.3%). In addition, 75.3% (k = 149) of samples included interprofessional teams (that is, teams composed of individuals with different professions, such as a team of nurses and physicians), and 24.7% (k = 49) of samples did not.

Figure 1. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) flow diagram depicting the article identification process: records identified through database searching (n = 58,862); records after duplicates removed (n = 25,609); records screened (n = 1,631); records excluded (n = 1,150); full-text articles assessed for eligibility (n = 481); full-text articles excluded, with reasons (n = 297); articles included in qualitative synthesis (n = 184). Prepared in accordance with Moher D, et al. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. J Clin Epidemiol. 2009;62:1006–1012.
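Note that the percentages in this section are computed against different denominators: the full set of 197 samples for facility types, the 151 hospital-based samples for unit breakdowns, and the 43 student-containing samples for student disciplines. A brief sketch using counts reported above (the helper function and constant names are illustrative only):

```python
def pct(k, denominator):
    """Percentage of k out of a given denominator, rounded to one decimal."""
    return round(100 * k / denominator, 1)

TOTAL = 197     # all independent samples in the review
HOSPITAL = 151  # samples administered in a hospital
STUDENTS = 43   # samples containing students

print(pct(HOSPITAL, TOTAL))  # 76.6 -> hospital share of all samples
print(pct(26, HOSPITAL))     # 17.2 -> surgery share of hospital samples only
print(pct(29, STUDENTS))     # 67.4 -> medical-student share of student samples only
```

These values reproduce the figures reported in the text; reading every percentage against the 197-sample total would understate the subgroup findings.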

Question 3. Where Is HTT Held?

Training was most commonly provided at on-site locations (k = 102, 51.8%), followed by off-site locations (k = 24, 12.2%), and both on- and off-site locations (k = 8, 4.1%). However, 63 (32.0%) of the samples did not report enough information to determine training location.

Question 4. What Is the Content of HTT?

Communication was the most frequently trained teamwork competency (k = 167, 84.8%), followed by situation awareness (that is, awareness of environmental conditions that will affect task completion) (k = 101, 51.3%), leadership (k = 95, 48.2%), shared understanding (for example, shared cognition) (k = 79, 40.1%), mutual support (k = 45, 22.8%), debriefing skills (k = 42, 21.3%), and decision making (k = 37, 18.8%). Additional trained competencies included coordination (k = 30, 15.2%), team role knowledge (k = 35, 17.8%), conflict management (k = 25, 12.7%), and goal setting (k = 7, 3.6%).

Question 5. What Is the Duration of HTT?

The duration of HTT programs varied widely. Therefore, training duration was reported in denominations of hours, days, weeks, and months across all programs. To avoid overinflating estimates, we report the ranges for each of these time measurements separately. Of the 108 (54.8%) samples that reported duration in hours, HTT ranged in length from 0.27 to 34 hours and averaged 4.8 hours. Duration was reported in days in 43 (21.8%) samples; within these samples, training ranged in length from 1 to 10 days. An additional 5 (2.5%) programs reported duration in weeks, and these ranged in length from 3 to 12 weeks. Duration was noted in months or years in 9 (4.6%) interventions; these programs ranged in length from 2.5 to 36 months. Finally, 32 (16.2%) programs did not report training duration.

Question 6. What Instructional Strategies Are Being Used to Teach HTT?

Many HTT programs (k = 73, 37.1%) relied on a mixture of information-, demonstration-, and practice-based instructional methods. An additional 57 (28.9%) samples employed HTT that integrated both information- and practice-based methods. Moreover, 8 (4.1%) programs combined information- and demonstration-based methods, and 1 (0.5%) program combined demonstration- and practice-based methods. An additional 15 (7.6%) programs relied solely on information-based methods, and 32 (16.2%) programs incorporated only practice-based methods. Finally, 11 (5.6%) programs did not specify the instructional strategies employed.

Question 7. What Practice Strategies Are Used in HTT?

The majority of HTT programs reviewed in the current study reported using some form of practice (k = 163, 82.7%). Of these programs, 38 (25.2%) used high-fidelity human patient simulators (HPS), 18 (11.0%) incorporated lower-fidelity manikins, 20 (12.3%) reported use of role play (that is, engagement in a scenario without the use of a manikin or simulator), and 3 (1.8%) used patient actors. Finally, 57 (35.0%) samples used practice strategies that were either not fully specified or could not be placed in a distinct category of simulators, and 27 (16.6%) used a combination of techniques.

Question 8. Is Feedback Provided After HTT Practice Sessions?

The majority of HTT programs that employed practice opportunities used feedback or debriefing (k = 107, 65.6%). Of the 107 HTT programs using debriefing, 72 (67.3%) samples reported using verbal methods for providing feedback, while 35 (32.7%) relied on video-based methods.

Question 9a. What Study Designs Are Used?

Most samples included in our review employed a repeated measures design (k = 123, 62.4%) to evaluate HTT, followed by a posttest-only design (k = 38, 19.3%), a combination of repeated measures and independent groups design (k = 21, 10.7%), and a solely independent groups design (k = 15, 7.6%).


Question 9b. How Are Measurements Collected?

Of 1,764 separate measures used within the 197 samples, the majority were completed via self-report strategies (k = 1,257, 71.3%), followed by observer reports (k = 341, 19.3%), reporting systems (for example, logged case start times; k = 87, 4.9%), patient chart reviews (k = 48, 2.7%), automated/objective performance reports (for example, type of anesthesia administered; k = 28, 1.6%), and lastly, a combination of methods (for example, self-report data averaged with observer data to produce one score; k = 3, 0.2%).

Question 10a. What Types of Outcomes Are Measured?

We categorized the type of evaluations by level of Kirkpatrick's36,37 training evaluation framework. Out of 1,764 evaluations, we found 311 (17.6%) evaluations of trainee reactions to HTT, 603 (34.2%) learning evaluations, 538 (30.5%) transfer evaluations, 254 (14.4%) evaluations of organizational outcomes, and 58 (3.3%) patient care outcome evaluations.

Question 10b. When Are Outcomes Evaluated?

On average, HTT outcomes were assessed 165.6 days after training, with days to evaluation spanning a range of 0 days after training (same day as training) to 1,460 days (that is, 4 years) after training, with a median of 0 days. We found that 31 (15.7%) programs collected evaluations at more than one time point, while 125 (63.5%) programs did not. In 41 (20.8%) programs it was unclear or unreported whether data were collected at multiple time points.

DISCUSSION

Our findings offer several insights into the current state of HTT and serve as an update to the review conducted by Weaver et al.7 Our overarching goal was to update the HTT literature such that these results identify common deficiencies in HTT interventions, providing practitioners implementing HTT programs awareness of problems that should be minimized. These results also highlight areas where future research is most needed. We summarize the key implications of our findings for practice and future evaluation efforts in Table 2. First, only approximately a quarter of HTT programs were informed by a training needs analysis (k = 47, 23.9%). It is critical to conduct a training needs analysis before implementing training, as the results of such analyses can guide multiple aspects of training.19 Further, we found that training content primarily focuses on several core teamwork principles, including communication, situation awareness, and shared understanding, which are critical for team effectiveness.26 However, research suggests other competencies (for example, leadership) also foster effective teamwork.39 Future work might examine the utility of embedding additional teamwork competencies into HTT programs and how specific competencies can most effectively be implemented within training simultaneously.

Table 2. Summary of Implications of Findings for Future Evaluations of Health Care Team Training (HTT)
- Implement HTT programs in additional facilities. (The majority of HTT programs are implemented in hospitals and academic settings.) Develop an HTT program for the patient-centered medical home model of primary care.
- Incorporate HTT into medical school training to reach diverse student disciplines. (The majority of students receiving HTT are medical students.)
- Evaluate the impact of HTT using diverse teamwork competencies. (Communication is trained in the majority of HTT programs.)
- Shift focus from singular training sessions to distributed training sessions.
- Incorporate control groups in addition to using repeated measures in HTT evaluations. (The majority of HTT evaluations use a repeated measures design only.)
- Leverage additional sources of measurement. (Self-report dominates HTT evaluation methods.) Assess changes due to HTT using observer reports, reporting systems, patient chart reviews, and automated/objective performance reports for a better understanding of the multilevel impact of HTT.
- Evaluate HTT and teamwork's impact on patient care outcomes. Incorporate measures of patient care quality and clinical outcomes.

Next, we found that HTT is primarily taking place with staff and students from hospitals and academic settings. Based on a recent examination of HTT effectiveness5 and the current findings, there is an increased need to further expand the types of health care facilities receiving HTT. For instance, we found few clinics/office-based practices offering HTT in the published literature. As such institutions are moving toward a patient-centered medical home model,40 HTT is becoming even more critical for the effectiveness of these practices. Our review thus suggests that there is an opportunity for the development of an HTT program for this model of primary care, and for HTT to be increasingly implemented within the context of primary care more broadly. Moreover, the majority of students receiving HTT were medical students. With the increased need for interprofessional teamwork on the job,1 it is essential for all types of health care students (for example, nurses) to receive teamwork training. This must begin with attaining buy-in from the administration of these institutions. The location in which HTT takes place is consistent with the findings of Weaver et al.7 in that the majority of HTT programs are administered on-site (k = 102, 51.8%). We also found that the majority of samples used practice-based methods.
We encourage widespread adoption of opportunities to practice so that trainees can engage in realistic scenarios in which teamwork skills can be used; further, we recommend the continued use of feedback to shape trainee behavior.31,32 We also suggest that information and demonstration be implemented to increase learning opportunities and minimize information overload.41 Concerning HTT duration, we suggest that more HTT programs leverage distributed sessions rather than one-time singular sessions to instill the importance of teamwork in clinicians and students over time.42 Taken together, these findings inform areas for future practice in the design and implementation of HTT initiatives. Finally, HTT was typically evaluated via self-report strategies using a repeated measures design; evaluations focused primarily on learning as an outcome. Although these evaluation methods are common, they may not be optimal for demonstrating HTT effectiveness. Specifically, these approaches may limit evaluations in that (1) self-report strategies typically produce artificially inflated scores of evaluation (for example, perceived training transfer),35 (2) repeated measures designs may not have as much buy-in with the medical community as randomized controlled trials,43 and (3) learning and reaction evaluations, although necessary for subsequent transfer and use on-the-job, are not sufficient criteria for determining the effectiveness of HTT, as other factors and conditions play a role in facilitating transfer.44 In particular, use of teamwork on-the-job is also necessary to ensure that learned competencies affect organizational outcomes.5 As such, we advocate for the use of more robust evaluation methods, criteria, and design strategies to gain buy-in for the utility of HTT. Future work should use more objective measures and evaluate both transfer and organizational results (for example, return on investment [ROI]) to more effectively measure the effect of HTT. We further suggest that future evaluations of HTT effectiveness assess patient outcomes; we found that only 3.3% of measures assessed results pertaining to patients. As one of the primary goals of HTT programs is to mitigate medical error and improve patient safety, it is critical that future HTT researchers evaluate outcomes consistent with these goals and ensure that the patient is a primary focus of HTT initiatives.
Coupling the findings of this review with recent quantitative evidence on HTT’s impact on patient clinical care outcomes,5 we strongly encourage widespread adoption of HTT as a patient safety strategy.3


Limitations

Although the present review offers a contribution by summarizing the current state of the HTT literature, there are several limitations. First, we were only able to assess HTT programs that were reported in published and unpublished literature. Consequently, trends that may be apparent in practice may not be reflected in the current review (for example, a particular hospital implements an HTT program periodically in a prespecified and consistent way). We were also unable to test for publication bias, as this effort does not statistically combine effect sizes. However, we note that the results of several publication bias tests recently conducted on similar literature (that is, HTT) indicated that publication bias did not have an impact on results.5 As the current review focuses on the same literature base, we believe the results of these publication bias tests can be applied to this effort with some confidence. Second, we were only able to include articles written in English. Articles written in additional languages may offer further insight into HTT initiatives. Finally, some studies included within our review were classified as having low quality, as rated with the EPHPP quality assessment tool.38 However, we chose to include weak studies because we are interested in the trends evident across all HTT interventions, not just those apparent in the higher-quality studies. This is in line with recommendations from Petticrew, who suggests that, in the case of complex interventions, "it may be particularly valuable to see the whole range of evidence" and not exclude studies on the basis of quality.45(p. 4)

CONCLUSIONS

Our update of the HTT literature review by Weaver et al.7 demonstrates that the use of HTT in practice widely varies. There is not a silver bullet to effective HTT; rather, this review begins to uncover the various strategies and approaches to HTT that practitioners might adopt given their particular needs. In future research, it is important for researchers to report descriptions of their training in a way that core details can be determined. This will ensure that design and delivery features that contribute to HTT effectiveness are identified and tested. Reporting key details will also allow for more systematic examinations of the effectiveness of HTT features in facilitating outcomes such as reduced patient mortality. We advocate for future HTT programs to leverage the trends highlighted within this review, the science of team training, and robust training evaluation measures to better understand the impact of HTT on critical outcomes relevant to patient care.

Disclaimer. The views expressed in this article are those of the authors and do not necessarily reflect the position or policy of the Department of Veterans Affairs or the United States government. Conflicts of Interest. All authors report no conflicts of interest.


Shannon L. Marlow, MS, is Graduate Research Associate, Department of Psychology, Rice University, Houston. Ashley M. Hughes, PhD, is Health Science Specialist, Houston Center for Innovations in Quality, Effectiveness and Safety (IQuESt), Michael E. DeBakey VA Medical Center, Baylor College of Medicine, Houston. Shirley C. Sonesh, PhD, is Adjunct Professor, AB Freeman School of Business, Tulane University, New Orleans. Megan E. Gregory, PhD, is Advanced Fellow in Educational Leadership, Houston Center for Innovations in Quality, Effectiveness and Safety (IQuESt), Michael E. DeBakey VA Medical Center, Baylor College of Medicine, Houston. Christina N. Lacerenza, MS, is Graduate Research Associate, Department of Psychology, Rice University, Houston. Lauren E. Benishek, PhD, is Research Fellow, Armstrong Institute for Patient Safety and Quality, Johns Hopkins University School of Medicine, Baltimore. Amanda L. Woods, BS, is Graduate Research Associate, Department of Psychology, Rice University, Houston. Claudia Hernandez, BS, is Graduate Research Associate, Institute for Simulation and Training (IST), Department of Psychology, University of Central Florida, Orlando. Eduardo Salas, PhD, is Professor and Allyn R. & Gladys M. Cline Chair in Psychology, Department of Psychology, Rice University, Houston, and Member, Editorial Advisory Board, The Joint Commission Journal on Quality and Patient Safety. Please address correspondence to Eduardo Salas, [email protected].

ONLINE-ONLY CONTENT

See the online version of this article for:
Appendix 1. Study Description Table.
Appendix 2. Articles Included in Review.

REFERENCES

1. Gawande A. Atul Gawande: how do we heal medicine? Mar 2012. Accessed Dec 19, 2016. https://www.ted.com/talks/atul_gawande_how_do_we_heal_medicine?language=en.
2. The Joint Commission. Sentinel event data: root causes by event type 2004–2014. Accessed Jan 1, 2017. http://www.tsigconsulting.com/tolcam/wp-content/uploads/2015/04/TJC-Sentinel-Event-Root_Causes_by_Event_Type_2004-2014.pdf.
3. Shekelle PG, et al. The top patient safety strategies that can be encouraged for adoption now. Ann Intern Med. 2013 Mar 5;158:365–368.
4. Salas E, Cannon-Bowers JA. Methods, tools, and strategies for team training. In: Quinones MA, Ehrenstein A, eds. Training for a Rapidly Changing Workplace: Applications of Psychological Research. Washington, DC: American Psychological Association, 1997:249–279.
5. Hughes AM, et al. Saving lives: a meta-analysis of team training in healthcare. J Appl Psychol. 2016;101:1266–1304.
6. Salas E, et al. Does team training work? Principles for health care. Acad Emerg Med. 2008;15:1002–1009.
7. Weaver SJ, et al. The anatomy of health care team training and the state of practice: a critical review. Acad Med. 2010;85:1746–1760.
8. Weaver SJ, Dy SM, Rosen MA. Team-training in healthcare: a narrative synthesis of the literature. BMJ Qual Saf. 2014;23:359–372.
9. Weaver SJ, et al. Simulation-based team training at the sharp end: a qualitative study of simulation-based team training design, implementation, and evaluation in healthcare. J Emerg Trauma Shock. 2010;3:369–377.
10. Baker D, et al. Medical team training programs in health care. In: Henriksen K, et al., eds. Advances in Patient Safety: From Research to Implementation, vol. 4: Programs, Tools, and Products. Rockville, MD: Agency for Healthcare Research and Quality, 2005. Accessed Dec 19, 2016. https://www.ncbi.nlm.nih.gov/books/NBK20580/.

11. Buljac-Samardzic M, et al. Interventions to improve team effectiveness: a systematic review. Health Policy (New York). 2010;94:183–195.
12. Gordon M, Darbyshire D, Baker P. Non-technical skills training to enhance patient safety: a systematic review. Med Educ. 2012;46:1042–1054.
13. Cumin D, et al. A systematic review of simulation for multidisciplinary team training in operating rooms. Simul Healthc. 2013;8:171–179.
14. Eppich W, et al. Simulation-based team training in healthcare. Simul Healthc. 2011;6(suppl):S14–S19.
15. Gough S, et al. A review of undergraduate interprofessional simulation-based education (IPSE). Collegian. 2012;19:153–170.
16. Manser T. Teamwork and patient safety in dynamic domains of healthcare: a review of the literature. Acta Anaesthesiol Scand. 2009;53:143–151.
17. Merién A, et al. Multidisciplinary team training in a simulation setting for acute obstetric emergencies: a systematic review. Obstet Gynecol. 2010;115:1021–1031.
18. Association of American Medical Colleges. Press release: annual medical school graduation survey shows gains in team training. Aug 2, 2013. Accessed Dec 19, 2016. https://www.aamc.org/newsroom/newsreleases/351120/080213.html.
19. Goldstein IL. Training in Organizations: Needs Assessment, Development, and Evaluation. Belmont, CA: Brooks/Cole, 1986.
20. Gould D, et al. Training needs analysis: a literature review and reappraisal. Int J Nurs Stud. 2004;41:471–486.
21. Salas E, Cannon-Bowers JA. The science of training: a decade of progress. Annu Rev Psychol. 2001;52:471–499.
22. Neily J, et al. Association between implementation of a medical team training program and surgical mortality. JAMA. 2010 Oct 20;304:1693–1700.
23. Siassakos D, et al. The active components of effective training in obstetric emergencies. BJOG. 2009;116:1028–1032.
24. Baldwin TT, Ford JK. Transfer of training: a review and directions for future research. Pers Psychol. 1988;41:63–105.
25. Salas E, et al. The wisdom of collectives in organizations: an update of the teamwork competencies. In: Salas E, Goodwin GF, Burke CS, eds. Team Effectiveness in Complex Organizations: Cross-Disciplinary Perspectives and Approaches. New York City: Taylor & Francis, 2009:39–79.
26. Salas E, Sims DE, Burke CS. Is there a "Big Five" in teamwork? Small Group Res. 2005;36:555–599.
27. Bahrick H, Hall LK. The importance of retrieval failures to long-term retention: a metacognitive explanation of the spacing effect. J Mem Lang. 2005;52:566–577.
28. Schmidt RA, Bjork RA. New conceptualizations of practice: common principles in three paradigms suggest new concepts for training. Psychol Sci. 1992;3:207–217.

29. Salas E, et al. The effect of team building on performance: an integration. Small Group Res. 1999;30:309–329.
30. Arthur W Jr, et al. Effectiveness of training in organizations: a meta-analysis of design and evaluation features. J Appl Psychol. 2003;88:234–245.
31. Hysong SJ. Meta-analysis: audit and feedback features impact effectiveness on care quality. Med Care. 2009;47:356–363.
32. Kluger AN, DeNisi A. The effects of feedback interventions on performance: a historical review, a meta-analysis, and a preliminary feedback intervention theory. Psychol Bull. 1996;119:254–284.
33. Cannon-Bowers JA, et al. Toward an integration of training theory and technique. Hum Factors. 1991;33:281–292.
34. Kirkpatrick DL. How to start an objective evaluation of your training program. J Am Soc Train Dir. 1956;10:18–22.
35. Blume BD, et al. Transfer of training: a meta-analytic review. J Manage. 2010;36:1065–1105.
36. Kirkpatrick DL. Evaluation of training. In: Craig R, ed. Training and Development Handbook: A Guide to Human Resource Development. 2nd ed. New York City: McGraw-Hill, 1976:18-1–18-27.
37. Kirkpatrick D. Great ideas revisited: revisiting Kirkpatrick's four-level model. Train Dev. 1996;50:54–59.
38. Effective Public Health Practice Project. Quality Assessment Tool for Quantitative Studies. Hamilton, ON: Effective Public Health Practice Project, 1998.
39. Morgeson FP, DeRue DS, Karam EP. Leadership in teams: a functional approach to understanding leadership structures and processes. J Manage. 2010;36:5–39.
40. Rittenhouse DR, Shortell SM. The patient-centered medical home: will it stand the test of health reform? JAMA. 2009 May 20;301:2038–2040.
41. Low R, Sweller J. The modality principle in multimedia learning. In: Mayer RE, ed. The Cambridge Handbook of Multimedia Learning. 2nd ed. New York City: Cambridge University Press, 2009:227–246.
42. Nakata T. English vocabulary learning with word lists, word cards and computers: implications from cognitive psychology research for optimal spaced learning. ReCALL. 2008;20:3–20.
43. Smith GC, Pell JP. Parachute use to prevent death and major trauma related to gravitational challenge: systematic review of randomised controlled trials. BMJ. 2003 Dec 20;327:1459–1461.
44. Alliger GM, et al. A meta-analysis of the relations among training criteria. Pers Psychol. 1997;50:341–358.
45. Petticrew M. Time to rethink the systematic review catechism? Moving from "what works" to "what happens". Syst Rev. 2015 Mar 28;4:36.