The National Emergency Department Safety Study: Study Rationale and Design Ashley F. Sullivan, MS, MPH, Carlos A. Camargo Jr., MD, DrPH, Paul D. Cleary, PhD, James A. Gordon, MD, MPA, Edward Guadagnoli, PhD, Rainu Kaushal, MD, MPH, David J. Magid, MD, MPH, Sowmya R. Rao, PhD, David Blumenthal, MD, MPP

Abstract The significance of medical errors is widely appreciated. Given the frequency and significance of errors in medicine, it is important to learn how to reduce their frequency; however, the identification of factors that increase the likelihood of errors poses a considerable challenge. The National Emergency Department Safety Study (NEDSS) sought to characterize organizational- and clinician-associated factors related to the likelihood of errors occurring in emergency departments (EDs). NEDSS was a large multicenter study coordinated by the Emergency Medicine Network (EMNet; www.emnet-usa.org). It was designed to determine if reports by ED personnel about safety processes are significantly correlated with the actual occurrence of errors in EDs. If so, staff reports can be used to accurately identify processes for safety improvements. Staff perceptions were assessed with a survey, while errors were assessed through chart review of three conditions: acute myocardial infarction, acute asthma, and reductions of dislocations under procedural sedation. NEDSS also examined the characteristics of EDs associated with the occurrence of errors. NEDSS is the first comprehensive national study of the frequency and types of medical errors in EDs. This article describes the methods used to develop and implement the study. ACADEMIC EMERGENCY MEDICINE 2007; 14:1182–1189 ª 2007 by the Society for Academic Emergency Medicine Keywords: emergency department, patient safety, methodology

The significance of medical errors is well known.1–6 While it is essential to reduce the frequency of errors rapidly and cost-effectively, the identification of unsafe health care processes poses analytic and logistical challenges. Moreover, the development of systems for the detection of preventable adverse events is expensive and difficult. These challenges suggest the value of developing tools for rapid and direct identification of unsafe clinical processes and associated systemic factors, without mandating extensive data collection in every health care organization. The National Emergency Department Safety Study (NEDSS) responded to the need for methods to reduce error with a focus on the correction of suboptimal safety processes in emergency health care. Using the emergency department (ED) as the study environment, the study aimed to determine if reports by personnel about processes of care and attributes of the ED and its clinical environment are significantly correlated with the occurrence of errors. If so, this reporting system could be used to determine an ED’s level of risk of errors and identify processes for quality improvement. Should the survey information prove helpful in the identification of EDs at higher or lower risk of errors, we anticipate that most EDs will use the information to minimize errors and improve the quality of the delivered care. Although the survey was designed specifically for the ED, other specialties could revise and refine the survey to meet their specialty-specific needs.

From the Department of Emergency Medicine (AFS, CAC, JAG), Biostatistics Unit (SRR), and Department of Medicine (DB), Massachusetts General Hospital, Harvard Medical School, Boston, MA; Institute for Health Policy, Massachusetts General Hospital (CAC, JAG, SRR, DB), Boston, MA; Yale School of Public Health (PDC), New Haven, CT; Department of Health Care Policy, Harvard Medical School (EG), Boston, MA; Weill Medical College of Cornell University and New York-Presbyterian Hospital (RK), New York, NY; and Clinical Research Unit, Colorado Permanente Medical Group (DJM), Denver, CO.

Received May 24, 2007; revision received July 18, 2007; accepted July 23, 2007. An overview of the National ED Inventory, created to assist with sampling for this study, was presented in poster form at the 2004 SAEM annual meeting, Orlando, FL, May 2004, and partial survey results from this study were presented in poster form at the 2007 SAEM annual meeting, Chicago, IL, May 2007. Supported by grant 5 R01 HS013099 from the Agency for Healthcare Research and Quality. Contact for correspondence: Ashley F. Sullivan, MS, MPH; e-mail: [email protected].

1182

ISSN 1069-6563 PII ISSN 1069-6563583

ª 2007 by the Society for Academic Emergency Medicine doi: 10.1197/j.aem.2007.07.014

ACAD EMERG MED • December 2007, Vol. 14, No. 12 • www.aemj.org

METHODS

Study Design
We surveyed ED personnel to assess perceptions regarding potentially unsafe processes and important safety-related ED attributes, and we conducted chart reviews to measure the actual occurrence of errors in EDs. Study implementation required a two-stage sample. First, we identified a sample of 85 EDs. Next, within each ED, data collection included 1) administration of a survey to ED personnel, 2) collection of data on error rates through chart review, and 3) collection of other relevant site-specific data using a key informant survey. The institutional review boards at all participating institutions approved the study.

Sample of ED Sites
The 85 EDs recruited for data collection consisted largely of sites affiliated with the Emergency Medicine Network (EMNet), an ED-based research collaboration.7 We excluded military and Veterans Administration hospitals, as well as hospitals in U.S. territories. Children’s hospitals were excluded because acute myocardial infarction (AMI), one of the study conditions of interest, is uncommon in children. Because many EMNet sites are affiliated with an emergency medicine residency program (i.e., are academic EDs), we sought to increase the generalizability of the study by also recruiting nonacademic, nonmetropolitan EDs. To accomplish this goal, we created the 2001 National ED Inventory.8 We then selected EDs with annual visit volumes between 28,000 and 45,000. We excluded EDs with fewer than 28,000 annual visits (the median annual visit volume of EDs that see at least one patient per hour), because these EDs were unlikely to see the volume of cases needed for the chart review component of the study. Academic EDs have a median annual visit volume of 48,920,8 so the cutoff of 45,000 visits was used to capture EDs more likely to be nonacademic. We gauged an ED’s potential interest in a research collaboration by the presence of at least one published article.
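The volume-based selection rule described above can be sketched as a simple filter. This is purely illustrative; the ED records and the field values below are hypothetical, not NEDSS data:

```python
# Illustrative sketch of the NEDSS volume-based site-selection rule.
# The ED records below are hypothetical examples.

def volume_eligible(annual_visits):
    """Keep EDs with 28,000-45,000 annual visits: at or above the median
    volume of EDs seeing at least one patient per hour (28,000) and below
    the academic-ED median of 48,920, approximated by a 45,000 cutoff."""
    return 28_000 <= annual_visits <= 45_000

eds = [
    ("Site A", 19_500),   # too small for the chart review component
    ("Site B", 31_200),   # candidate nonacademic ED
    ("Site C", 52_000),   # above the cutoff, likely academic
]
candidates = [name for name, visits in eds if volume_eligible(visits)]
print(candidates)  # ['Site B']
```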
We performed a MEDLINE search for all hospitals with annual ED visit volumes between 28,000 and 45,000 to determine if any member of the ED had published an article between 1996 and April 2004. Because fewer than ten nonmetropolitan hospitals had publications, we expanded the MEDLINE search to hospitals in metropolitan statistical areas. Community hospitals in both nonmetropolitan and metropolitan areas with at least one publication were invited to participate in the study (n = 18); none of these hospitals participated. Of the 241 sites invited to participate in the study, 102 agreed, 49 declined, and 90 did not respond. Over the course of the study, 32 sites dropped out, leaving a total of 70 participating sites. Sites withdrew due to attrition of key research personnel, the prohibition of participation by the site’s institutional review board or management, or the absence of administrative or research support.

Survey Design and Population. All 70 participating EDs administered the NEDSS survey to members of their staff. To develop the survey, the study team revised a previously


developed instrument that was based on a human factors framework. Although piloted in two institutions, the survey had not been subjected to psychometric testing and required modification for use in the ED setting. We added questions to assess specific ED process failures and attributes that might contribute to errors. Items in each domain probed whether principles of human factors were applied in the ED. For example, within the staffing domain, respondents were asked whether staffing is sufficient to handle the patient care load during busy periods. To further refine the survey, investigators conducted confidential, in-depth personal interviews with key informants in three EDs. Key informants included ED medical directors, nurse managers, physicians, nurses, and administrators. Interviews followed a structured protocol that covered 1) specific clinical processes that interviewees observed to be associated with medical errors in ED care and 2) systemic factors that may be generally associated with the occurrence of errors in the ED. Investigators conducted focus groups at the three EDs that followed the same protocol as the structured interviews. In addition, the survey underwent cognitive testing within the focus groups to explore the different ways in which emergency medicine providers interpreted and responded to survey questions. Questions were then revised to assure consistent and accurate interpretation of survey items. Participants included emergency physicians (EPs), nurses, administrators, and other ED staff to ensure that a variety of perspectives were represented. Both the interviews and focus groups were recorded, transcribed, and reviewed by the investigators, but we did not undertake further qualitative analysis of these data. Ten EDs served as sites for psychometric testing. We administered a paper-based version of the survey to all eligible ED staff at these ten sites. 
Data were used to establish the psychometric properties of the survey, with particular attention to clustering of systemic factors. Based on preliminary factor analyses, the investigators (see Appendix A) deleted certain questions from the survey and analyzed substantively coherent clusters of items. Decisions about which items to drop also were based on face validity. Items then were organized into prespecified domains, scales were developed (using the items expected to represent the domains), and reliability statistics were calculated. In some cases, when the domain of a variable was ambiguous, the variable was tested in more than one scale. The process resulted in a revised survey with nine psychometrically coherent domains: physical environment, equipment, triage and monitoring, staffing, nursing, teamwork, culture, information coordination, and inpatient coordination. Based on these data, we were able to decrease the survey length by approximately 20%. We administered the final survey (available as an online Data Supplement at http://www.aemj.org/cgi/content/full/j.aem.2007.07.014/DC1) to a random sample of 80 ED staff at each of the 60 remaining NEDSS sites. Sites with fewer than 80 eligible staff administered the survey to all eligible personnel. Potential respondents were informed of their right not to complete the survey and of the measures taken to assure their confidentiality. Informed consent was implied by completion of the survey.
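As an illustration of the reliability statistics mentioned above, the internal consistency of a survey domain can be summarized with Cronbach's alpha, computed from item responses. The Likert data below are invented for illustration and are not NEDSS data:

```python
# Sketch of a scale-reliability statistic (Cronbach's alpha) of the kind
# used to test whether items in a survey domain form a coherent scale.
# The 5-point Likert responses below are hypothetical.

def variance(xs):
    """Sample variance (n - 1 denominator)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(responses):
    """responses: one row per respondent, one column per item."""
    k = len(responses[0])                      # number of items in the scale
    items = list(zip(*responses))              # transpose to item columns
    item_var = sum(variance(list(col)) for col in items)
    total_var = variance([sum(r) for r in responses])
    return (k / (k - 1)) * (1 - item_var / total_var)

# Four respondents rating three hypothetical "staffing" items
# (1 = strongly disagree ... 5 = strongly agree).
staffing = [[4, 4, 5], [2, 3, 2], [5, 4, 4], [1, 2, 1]]
print(round(cronbach_alpha(staffing), 2))  # 0.94
```

Domains with low alpha would be candidates for dropping or reorganizing items, which is the kind of decision the factor analyses above supported.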


Sullivan et al. • NEDSS: STUDY DESIGN AND RATIONALE

The final survey was administered to ED staff who worked at least one shift per week and provided clinical care. Eligible survey respondents (Table 1) were ED staff who had been employed in the ED for at least three months, with the exception of residents, who were required to have worked in the ED for at least one month.

Content and Administration. The survey assessed perceptions of working conditions and clinical care in the ED, with a focus on the integrity of certain generic processes and systemic properties that are important to the safety of all ED patients. The survey asked about equipment, staffing, organizational factors, and the coordination of care in the ED. In addition, the survey asked respondents about the management of AMI, asthma, and dislocations that were reduced using procedural sedation (i.e., the three conditions chosen for chart review). Although some individual survey questions concerned specific practice patterns, most questions were designed to capture broad issues relevant to safety of care in the ED (e.g., staffing). The survey was not designed to capture specific types of error. Respondents replied to statements using a five-point Likert scale. Respondents were also asked to provide personal background information, including their position and length of employment in that ED, because personal attributes might influence perceptions. Although the distributed surveys were paper-based instruments, staff could also complete the survey online. Site coordinators distributed surveys to staff, but respondents returned completed surveys directly to the EMNet Coordinating Center. Nonrespondents received two additional surveys at two-week intervals, for a total of three surveys over six weeks. To improve response rates, the site stipend included funds for a modest honorarium for survey respondents. The choice of honorarium was at the discretion of the site (e.g., cash, gift cards, or application of these funds to an ED-related event or project).

Definition of Key Terms. Following the work of Reason9–12 and the Institute of Medicine,4 we defined an error as the failure of a planned action to be completed as intended or the use of a wrong plan to achieve an aim. We broadly defined an adverse event to be an injury (pathologic alteration in condition) that might have resulted from a medical intervention (or lack thereof) in the ED. In this study, an adverse event could not be solely and definitively due to the natural progression of the underlying condition of the patient and could not have existed before ED presentation. We defined a preventable adverse event as an adverse event associated with an error. Examples of a preventable adverse event might include respiratory arrest in an asthmatic patient who failed to receive β-agonists in a time frame consistent with national guidelines, development of bradycardia and hypotension in a patient with AMI who received an excessive dose of a β-blocker, or hypotension in a patient undergoing procedural sedation whose blood pressure was not monitored in compliance with established guidelines. We defined a near miss as an error that had the potential to cause injury but did not do so because of specific circumstances, patient characteristics, or chance or because the error was intercepted before the injury occurred. For the purpose of this study, we used the failure to comply with established guidelines and the identification of preventable adverse events as approaches to detection of error and actual harm. Near misses were also of interest and were analyzed both separately and in combination with potential adverse events and failure to comply with guidelines.
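The event taxonomy defined above can be sketched as a small decision rule. The `ChartFinding` fields are hypothetical; in NEDSS these judgments were made by physician reviewers, not by an automated rule:

```python
# Sketch of the event taxonomy defined above (error, adverse event,
# preventable adverse event, near miss). The ChartFinding fields are
# hypothetical simplifications of a physician reviewer's judgments.

from dataclasses import dataclass

@dataclass
class ChartFinding:
    error_occurred: bool        # planned action failed, or wrong plan used
    injury_occurred: bool       # pathologic alteration in condition
    injury_pre_existing: bool   # injury present before ED presentation

def classify(f: ChartFinding) -> str:
    # An adverse event must not have existed before ED presentation.
    if f.injury_occurred and not f.injury_pre_existing:
        return "preventable adverse event" if f.error_occurred else "adverse event"
    if f.error_occurred:
        return "near miss"      # error with potential, but no actual, injury
    return "no event"

# An intercepted medication order for an allergic patient: error, no injury.
print(classify(ChartFinding(True, False, False)))  # near miss
```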
Accordingly, participating EDs conducted on-site, explicit chart review for 1) failure to comply with condition-specific ED guidelines for our three target conditions (e.g., not giving an aspirin in the ED to eligible patients with AMI), 2) the occurrence of potential adverse events (e.g., allergic reaction), and 3) the occurrence of near misses (e.g., an order for a medication cancelled for a patient with a documented allergy to that medication). The chart review included response options that enabled chart abstractors to communicate differences in care so that we could accurately judge guideline violations. A physician review panel (described in the following text) made implicit decisions about the occurrence and preventability of each adverse event identified through chart review. We also defined a subset of process failures that, based on the literature and in the consensus view of the EMNet Steering Committee (see Appendix A), represent lapses in care sufficiently serious to be classified as errors (Table 2). These were lapses that violated well-established guidelines and that were highly likely to adversely affect patient outcomes, even though adverse events may not have been documented or may not have occurred during the particular ED stay (e.g., failure to provide aspirin to patients with AMI who had no contraindications to aspirin therapy). Given the time-limited nature of the ED experience, the measurement of serious lapses in care was

Table 1
Emergency Department Staff by Survey Eligibility

Eligible for Survey
  Physicians: attendings, residency director, fellows, residents, interns
  Nurses: registered nurses, licensed practical nurses
  Midlevel providers: nurse practitioners, physician assistants

Ineligible for Survey
  Administrative staff: nurse administrators, clinical care managers, medical director (if not physician), ED administrator, unit coordinators, ED registration, research director (if not physician)
  Other staff: technicians (e.g., radiology, orthopedic), nursing assistants (or equivalent), phlebotomists, interns, ED volunteers, social workers/counselors/mental health workers, emergency medical technicians

Chart Review
The chart review identified the prevalence and characteristics of errors that occurred in our sample of EDs.


Table 2
Condition-specific Care Guidelines and Error Criteria for Serious Guideline Violations

AMI
- Timely ECG. Patient population: all patients evaluated in the ED who present with symptoms suggestive of MI (e.g., chest, jaw, neck, shoulder, or arm pain). Error criterion: door-to-ECG time >15 minutes.
- Delivery of aspirin/antiplatelet agent. Patient population: all patients who present with symptoms suggestive of MI, have a confirmed MI, or have an admitting diagnosis of AMI, acute coronary syndrome, or rule-out AMI/acute coronary syndrome. Error criterion: not giving an aspirin in the ED to eligible patients.
- Delivery of β-blocker. Patient population: all ED patients with confirmed AMI (positive cardiac enzymes or diagnostic ECG). Error criterion: not giving a β-blocker in the ED to eligible patients.
- Delivery of reperfusion therapy. Patient population: all ED AMI patients who meet criteria for reperfusion therapy (i.e., ECG with ST-segment elevation or left bundle branch block not known to be old). Error criterion: not delivering reperfusion therapy to eligible ED patients (e.g., not providing fibrinolytic therapy or not transferring to the catheterization laboratory for percutaneous coronary intervention).
- Timely thrombolytic therapy. Patient population: all AMI patients evaluated in the ED who are given a thrombolytic. Error criterion: door-to-needle time >45 minutes.
- Timely percutaneous coronary intervention therapy. Patient population: all AMI patients evaluated in the ED who are sent for primary angioplasty. Error criterion: door-to-ED disposition time >60 minutes.

Asthma
- β-agonist administration. Patient population: all patients presenting to the ED with an exacerbation of asthma.
- Corticosteroid administration. Patient population: ED patients with asthma with moderate to severe exacerbations (peak expiratory flow <70% predicted).
- Oral corticosteroids at discharge.

Dislocation
- Pain medication.
- Assessment of vital signs in patients receiving procedural sedation.
- Success of reduction.
- Meperidine given to a patient taking a monoamine oxidase inhibitor.

Asthma exclusion criteria
- Age >54 years
- History of chronic obstructive pulmonary disease or emphysema
- No history of asthma before index visit
- ED visit not prompted, in large part, by exacerbation of asthma

Dislocation exclusion criteria
- Age <14 or >89 years
- No dislocated joint of interest
- Acromioclavicular shoulder dislocation
- No joint relocation procedure*
- No intravenous or intramuscular sedative or anesthetic administered*

* A partial chart abstraction was performed during screening for this exclusion criterion.


Two-physician teams, each including at least one EP, reviewed each chart that screened positive on the adverse event screening form. Each physician reviewed the chart independently and then discussed the case with the paired colleague to reach consensus. Where reviewers could not reach consensus, a third EP made a final decision on the classification of the event. Charts were randomly distributed to physician reviewers on a rolling basis. Reviewers were not permitted to review charts from their own ED. In cases where a guideline violation or lapse in care was responsible for a preventable adverse event, the data point was entered into the analysis only as a preventable adverse event, to prevent double counting.

Other Site-specific Data
Attributes of EDs may influence either perceptions of errors or actual error rates. To explore these possible relationships, we distributed a key informant survey at each site. This survey assessed ED attributes, including the volume of ED visits in the past year, the number of full-time equivalent ED staff during the past year, the average number of hours of ED divert per month over the past year, average patient waiting times over the past year, and the proportion of patients arriving by ambulance.

Data Analysis
In testing all hypotheses and study questions, we are interested in institution-level inferences. We will examine the relationship between reports of safety processes (as determined on the survey) and the occurrence of medical errors (as determined by chart abstraction). We will calculate the means, medians, and distributions of key variables. Correlation coefficients will be calculated to relate chart review measures and ED personnel reports of safety processes across sites. We will account for attributes of the staff (average time of service in the ED, turnover), ED workload (volume of visits per year, visits per full-time equivalent), and patient acuity (proportion of patients arriving by ambulance).
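The core of the planned site-level analysis — correlating a survey-derived safety score with a chart-review error rate across sites — can be sketched as follows. The values are invented for illustration, and the sketch omits the covariate adjustments (staffing, workload, acuity) described above:

```python
# Hedged sketch of the planned site-level validation analysis: one data
# point per ED, correlating staff-reported safety with observed error
# rates. All values are hypothetical, not NEDSS results.

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# One point per ED: mean staff-reported safety score (1-5 Likert) and
# errors per 100 reviewed charts (both hypothetical).
safety_scores = [4.2, 3.1, 3.8, 2.6, 4.5, 3.3]
error_rates   = [3.0, 7.5, 4.1, 9.2, 2.4, 6.8]
print(round(pearson_r(safety_scores, error_rates), 2))  # -0.99
```

A strong negative correlation of this kind is what would support using staff reports to flag high-risk EDs; the actual NEDSS analysis also planned regression models with the survey factors as independent variables.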
We will assess relationships among the institution-level variables in an attempt to identify any collinear groups of variables in constructing a well-defined model. Statistical assumptions will also be assessed (e.g., normality, heteroscedasticity), and necessary adjustments will be made to make these assumptions more plausible. We will adjust for the survey design and also attempt to reduce any nonresponse bias after examining differences in the covariate distributions between respondents and nonrespondents. For the main analysis, undertaken to validate the survey instrument by correlating it with errors at the site level, the dependent variables will be the number of preventable adverse events, near misses, and serious deviations from condition-specific guidelines. The independent variables will be the factors generated from survey responses, as determined through psychometric testing.

DISCUSSION
System and legal barriers, among other obstacles, can obstruct open disclosure of errors in the ED.26 In addition, documentation of errors and the development of systems for detecting errors are expensive and difficult. NEDSS intended to refine and validate a tool that identifies unsafe clinical processes and the factors underlying them, without relying exclusively on error disclosure or extensive data collection. Limited prior work suggests that survey data reflect quality measures in the ED context.27 If staff reports are related to the occurrence of adverse events, the survey could be used to determine an ED’s level of risk of committing errors and thereby allow prospective interventions. The survey questions are designed to capture information about broad factors, so we do not believe the survey will be used to distract attention from problems not specifically identified in the survey. In addition, we do not anticipate that staff will manufacture responses to obtain a good overall result. The process of validating staff reports of safety problems will also enable us to address questions that are highly relevant to understanding and reducing the occurrence of medical errors in the ED: 1) Can reports by ED personnel be useful in setting priorities for addressing unreliable processes and systemic factors that may be causing errors? 2) How common are errors in a national sample of EDs? 3) What characteristics of EDs are associated with the occurrence of errors?

NEDSS has the potential to contribute significantly to understanding the rates and types of errors occurring in EDs and the factors associated with such errors. Focused studies of ED care have identified areas of suboptimal care, yet the research experience is limited.19,28,29 In a study of five EDs, Burstin et al. found that care conformed with guidelines in only 59% of shortness of breath cases and 65% of chest pain episodes.16 More comprehensive studies dedicated to ED errors do not exist. Research by Brennan et al. focused on the hospital experience generally1,2; their work revealed that 2.9% of adverse events in hospitalized patients in New York State occurred in EDs.
This made EDs the third most common site of occurrence for adverse events, after operating rooms (41% of adverse events) and patient care rooms (27% of adverse events).30 Negligent adverse events (defined as adverse events resulting from ‘‘care that fell below the standard expected of physicians’’) were most common in EDs (70% of adverse events, compared with 14% in the operating room and 41% in the patient’s hospital room).2 A similar study in Utah and Colorado found that EDs had the highest percentage of negligent adverse events (53%).23 Although only 1.7% of all adverse events in the study were attributed to EPs, 95% of these adverse events were judged negligent. In comparison, surgeons were associated with 46% of all adverse events in the study (22% were negligent) and internists were associated with 23% of events (45% were negligent). There is reason to believe that prior studies either underestimated or overestimated the incidence of errors in EDs. For example, adverse events and negligent adverse events were detected through reviewing charts of hospitalized patients; problems that might have occurred among ED patients discharged without admission were not included. Conversely, the methodology for classifying inpatient error may not recognize mitigating factors unique to ED-based care. NEDSS promises a more definitive review of such care nationwide.


LIMITATIONS
Our sample consisted predominantly of EDs affiliated with an emergency medicine residency program. This is highly relevant from a policy standpoint, because these institutions train the majority of EPs and thus have a disproportionate impact on the quality of current and future ED care. All of the participating EDs were in metropolitan statistical areas; because no rural EDs participated in the study, we cannot generalize our findings to those sites, although the majority (72%) of U.S. EDs are in an urban setting.8

Securing the participation of sites was difficult due to the nature of the study. Some hospitals prohibited their ED from participating because the study collected data on medical errors. While the confidentiality of participating sites was protected and no identifiable individual ED data will be reported, some hospitals believed the legal risk was unacceptable.

The retention of sites and on-site study personnel was a challenge. Due to the rigorous training requirements for chart abstractors, many individuals never completed the necessary training. Study administration, training, and data collection lasted for more than one year at most sites, so many EDs encountered personnel changes, which caused interruptions and delays, as well as the loss of some sites (e.g., in cases where the site principal investigator moved). Some EDs with less research experience did not have dedicated research staff available, so they required additional time to complete the project. Despite these difficulties, approximately 60 sites are on track to complete both the survey and chart reviews.

The chart review at most sites has yielded fewer than 70 dislocation cases reduced using procedural sedation.
Although this possibility was discussed during study planning, we selected this condition because a possible decrease in the number of charts might be offset by a higher error rate.31 To further mitigate the lower number of cases, we asked sites to abstract data from dislocation cases managed without procedural sedation, including the patient’s level of pain and the analgesics administered. Quality-of-care issues (e.g., assessment of vascular status in the extremity with the dislocated joint) will be examined using data gathered on patients with dislocations who did not receive procedural sedation. Some sites did not have 70 eligible AMI and/or asthma cases. While this limits the available data, including EDs that yielded fewer cases (e.g., 50) increases generalizability and statistical power.

Our study may underestimate the incidence of errors in EDs and their preventability. For example, because adverse events are detected through the review of ED records and hospital discharge summaries, preventable adverse events that manifested after discharge from the ED would not have been included. In addition, our criteria for determining the preventability of events are conservative. Physician reviewers were instructed to judge events as preventable only if they were clearly preventable. Given the limited chart materials available to make this determination, and because routine complications of standard procedures were assumed to be nonpreventable unless there was evidence to the contrary, our study may underestimate preventable adverse events. Finally, there are inherent limitations of chart review in re-creating the actual care that a patient received. Failure to adequately document the care given or patient outcomes limits our ability to ascertain what care actually occurred. For the purpose of this study, we assumed that in the absence of documentation, care was not given.

CONCLUSIONS
NEDSS could contribute to significantly reducing medical errors not only in EDs but also in other medical settings. If reports by ED personnel prove valid indicators of the occurrence and causes of ED errors, then surveys may offer comparatively inexpensive and rapid methods for managers to identify concrete patient safety interventions throughout the hospital. NEDSS should provide the most comprehensive existing data on the epidemiology of errors in the nation’s EDs. The project also constitutes a first step toward exploring whether reports of health care personnel have widespread application beyond the ED in identifying and correcting causes of errors.

The authors thank the site principal investigators and local chart abstractors for their ongoing dedication to this national study, the 40 physicians serving on the physician review panel, and several others who consulted on study design and implementation (Michael Ho, Amal Mattu, Lisa Morse, Laura Peterson, Pamela Peterson, Jeff Souza, David Studdert, Jeffrey Tabas, and Eric Thomas) or data management (Uchechi Acholonu, Sarah Kunz, David Murman, and Stefan Vanderweil).

References
1. Brennan TA, Leape LL, Laird NM, et al. Incidence of adverse events and negligence in hospitalized patients. Results of the Harvard Medical Practice Study I. N Engl J Med. 1991; 324:370–6.
2. Leape LL, Brennan TA, Laird NM, et al. The nature of adverse events in hospitalized patients. Results of the Harvard Medical Practice Study II. N Engl J Med. 1991; 324:377–84.
3. Bates DW, Spell N, Cullen DJ, et al. The costs of adverse drug events in hospitalized patients. JAMA. 1997; 277:307–11.
4. Institute of Medicine. To Err Is Human: Building a Safer Health System. Washington, DC: National Academy Press, 2000.
5. Weingart SN, Wilson RM, Gibberd RW, Harrison B. Epidemiology of medical error. BMJ. 2000; 320:774–7.
6. Kaushal R, Bates DW, Landrigan C, et al. Medication errors and adverse drug events in pediatric inpatients. JAMA. 2001; 285:2114–20.
7. Emergency Medicine Network. Emergency Medicine Network (EMNet) website. Available at: http://www.emnet-usa.org. Accessed Dec 17, 2006.
8. Sullivan AF, Richman IB, Ahn CJ, et al. A profile of U.S. emergency departments in 2001. Ann Emerg Med. 2006; 48:694–701.
9. Reason J. Human Error. Cambridge, England: Cambridge University Press, 1990.

ACAD EMERG MED · December 2007, Vol. 14, No. 12 · www.aemj.org

10. Reason JT. Foreword. In: Bogner MS, ed. Human Error in Medicine. Hillsdale, NJ: Lawrence Erlbaum Associates, 1994, pp vii–xv.
11. Reason JT. Managing the Risks of Organizational Accidents. Brookfield, VT: Ashgate Publishing Company, 1997.
12. Reason J. Human error: models and management. BMJ. 2000; 320:768–70.
13. Ryan TJ, Antman EM, Brooks NH, et al. 1999 update: ACC/AHA guidelines for the management of patients with acute myocardial infarction: executive summary and recommendations. A report of the American College of Cardiology/American Heart Association Task Force on Practice Guidelines (Committee on Management of Acute Myocardial Infarction). Circulation. 1999; 100:1016–30.
14. National Heart, Lung, and Blood Institute. National Asthma Education and Prevention Program Expert Panel Report 2: Guidelines for the Diagnosis and Management of Asthma. Bethesda, MD: National Heart, Lung, and Blood Institute, Jul 1997.
15. American Society of Anesthesiologists Task Force on Sedation and Analgesia by Non-Anesthesiologists. Practice guidelines for sedation and analgesia by non-anesthesiologists. Anesthesiology. 2002; 96:1004–17.
16. Burstin HR, Conn A, Setnick G, et al. Benchmarking and quality improvement: the Harvard Emergency Department Quality Study. Am J Med. 1999; 107:437–9.
17. Saketkhou BB, Conet FJ, Noris M, et al. Emergency department use of aspirin in patients with possible acute myocardial infarction. Ann Intern Med. 1997; 127:126–9.
18. Magid DJ, Calonge BN, Rumsfeld JS, et al. Relation between hospital primary angioplasty volume and mortality for patients with acute MI treated with primary angioplasty vs thrombolytic therapy. JAMA. 2000; 284:131–8.
19. Pope JH, Aufderheide TP, Ruthazer R, et al. Missed diagnoses of acute cardiac ischemia in the emergency department. N Engl J Med. 2000; 342:1163–70.
20. Mehta RH, Eagle KA. Missed diagnoses of acute coronary syndromes in the emergency room—continuing challenges. N Engl J Med. 2000; 342:1207–10.
21. Gurwitz JH, Gore JM, Goldberg RJ, et al. Risk for intracranial hemorrhage after tissue plasminogen activator treatment for acute myocardial infarction. Ann Intern Med. 1998; 129:597–604.
22. Emond SD, Woodruff PG, Lee EY, Singh AK, Camargo CA Jr. Effect of an emergency department asthma program on acute asthma care. Ann Emerg Med. 1999; 34:321–5.
23. Thomas EJ, Studdert DM, Burstin HR, et al. Incidence and types of adverse events and negligent care in Utah and Colorado. Med Care. 2000; 38:261–71.
24. Bates DW, Cullen DJ, Laird N, et al. Incidence of adverse drug events and potential adverse drug events: implications for prevention. JAMA. 1995; 274:29–34.
25. Thomas EJ, Studdert DM, Newhouse JP, et al. Costs of medical injuries in Utah and Colorado. Inquiry. 1999; 36:261–71.
26. Moskop JC, Geiderman JM, Hobgood CD, Larkin GL. Emergency physicians and disclosure of medical errors. Ann Emerg Med. 2006; 48:523–31.
27. Crain EF, Clark S, Woodruff PG, Camargo CA Jr. How do reports of emergency department pediatric asthma practice compare with actual practice? [abstract]. Abstract Book. J Ambul Pediatr Assoc, 1999.
28. Pierce JM, Kellermann AL, Oster C. "Bounces": an analysis of short-term return visits to a public hospital emergency department. Ann Emerg Med. 1990; 19:752–7.
29. Trautlein JJ, Lambert RL, Miller J. Malpractice in the emergency department—review of 200 cases. Ann Emerg Med. 1984; 13:709–11.
30. Leape LL. The preventability of medical injury. In: Bogner MS, ed. Human Error in Medicine. Hillsdale, NJ: Lawrence Erlbaum Associates, 1994, pp 13–24.
31. Miller MA, Levy P, Patel MM. Procedural sedation and analgesia in the emergency department: what are the risks? Emerg Med Clin North Am. 2005; 23:551–72.

APPENDIX A

Principal Investigator: David Blumenthal, MD, MPP

Coinvestigators: Carlos A. Camargo Jr., MD, DrPH; Paul D. Cleary, PhD; James A. Gordon, MD, MPA; Edward Guadagnoli, PhD; Rainu Kaushal, MD, MPH; David J. Magid, MD, MPH; Sowmya R. Rao, PhD

Project Director: Ashley F. Sullivan, MS, MPH

EMNet Steering Committee: Edwin D. Boudreaux, PhD; Carlos A. Camargo Jr., MD, DrPH (Chair); Jonathan M. Mansbach, MD; Steven Polevoi, MD; Michael S. Radeos, MD, MPH; Ashley F. Sullivan, MS, MPH