ACAD EMERG MED · August 2003, Vol. 10, No. 8 · 883 · www.aemj.org

SPECIAL CONTRIBUTIONS

Advanced Statistics: Applying Statistical Process Control Techniques to Emergency Medicine: A Primer for Providers

Charles D. Callahan, PhD, David L. Griffen, MD, PhD

Abstract. Emergency medicine faces unique challenges in the effort to improve efficiency and effectiveness. Increased patient volumes, decreased emergency department (ED) supply, and an increased emphasis on the ED as a diagnostic center have contributed to poor customer satisfaction and process failures such as diversion/bypass. Statistical process control (SPC) techniques developed in industry offer an empirically based means to understand our work processes and manage by fact. Emphasizing that meaningful quality improvement can occur only when it is exercised by ‘‘front-line’’ providers, this primer presents robust yet accessible SPC concepts and techniques for use in today's ED.

Key words: emergency medicine; statistical process control; process failures; quality improvement. ACADEMIC EMERGENCY MEDICINE 2003; 10:883–890.

The modern health care environment demands ever-increasing efficiency and effectiveness, yet a number of indicators suggest that our industry is failing the challenge. The Institute of Medicine, in its groundbreaking report, To Err Is Human: Building a Safer Health System,1 has concluded that ‘‘at least 44,000 and perhaps as many as 98,000 Americans die in hospitals each year as a result of medical errors.’’ Further, due to national shortages of essential clinical staff, escalating costs of medical high technology (machinery and medications), decreased reimbursement, and admittedly less-than-nimble management in some cases, it is reported that by 2005, nearly 60% of America's hospitals will be losing money.2

In many respects, the paradigm shifts that befell other core U.S. industries in prior decades now challenge health care. Like the automotive and electronics industries before it, the U.S. health care industry appears to be in a transitional phase marked by shrinking cash and customer satisfaction, amid increasing competition for a share of the pie.

Emergency medicine (EM) faces unique challenges in the effort to improve efficiency and effectiveness. The causes and effects of emergency department (ED) overcrowding have been well articulated,3,4 as have

the unique aspects of EM that contribute to error.5 Increased patient volume demand, decreased ED supply, and heightened expectation for a rapid yet definitive diagnosis for each patient have resulted in commonplace occurrences of service delays and process failures (e.g., diversion and bypass). As a result, ED patients as a whole are the least satisfied of any group of hospital patients, with only 60% indicating they would recommend that ED to their family or friends.6

It has been suggested that a potential solution to these challenges is the application of principles derived from the manufacturing industry that focus on reducing unwanted variation in health care production processes, leading to improved quality and profitability.7 Simply put, quality is achieved through the elimination or reduction of process and outcome variation. Total Quality Management (TQM), also referred to as Continuous Quality Improvement (CQI), is defined as:

From Emergency Medical Services, Memorial Medical Center, Springfield, IL (CDC, DLG). Received March 15, 2002; revision received January 27, 2003; accepted February 6, 2003. Series editor: Roger J. Lewis, MD, PhD, Department of Emergency Medicine, Harbor–UCLA Medical Center, Torrance, CA. Address for correspondence and reprints: Charles D. Callahan, PhD, ABPP, Memorial Medical Center, 701 North First Street, Springfield, IL 62781. Fax: 217-788-6459; e-mail: callahan.chuck@mhsil.com.

‘‘A leadership philosophy that demands a relentless pursuit of quality and the stamina for continuous improvement in all aspects of operations: product, service, processes, and communications. The major components of quality management are leadership, a customer focus, continuous improvement, employee empowerment, and management by fact.’’8 (italics added)

Berwick,9 perhaps the leading proponent of applying TQM principles to health care, states that efforts at quality improvement in our industry too often boil down to attempts to discover and remove the ‘‘bad apples’’ via inspection against some threshold of acceptable performance. The fallacy of such an approach, Berwick argues, is that most problems are built right into our complex production processes and that

‘‘defects in quality could only rarely be attributed to a lack of will, skill, or benign intention among the people involved in the processes.’’ Weeding out so-called ‘‘bad apples’’ only produces a culture of fearful, demoralized, and underproductive staff vigilant only to self-preservation, not excellence in clinical care. Alternately, adoption of a TQM management philosophy focused on continuous improvement refocuses energy on the reduction of waste, rework, and complexity. It becomes clear that the scope of the problem is beyond the individual ED care provider, and that ‘‘working harder’’ is not the answer.5 Such was the mantra of W. Edwards Deming10: individuals work within a system process, and it is that process that determines how they perform, not their individual skills. Understanding and simplifying the complex production processes that produce errors from well-meaning, well-trained professionals is key if we are to effect lasting, meaningful change.

Within this overall management philosophy, a set of analytical techniques known as statistical process control (SPC) has been developed that offers a powerful and pragmatic mechanism for advancing this cause of ‘‘managing by fact’’ through the development of process knowledge. While there has been some initial advocacy for the application of SPC to EM,11,12 published reports of SPC for ED processes remain uncommon.

A central premise of this paper is that successful transformation to a total quality culture requires that front-line service providers function as ‘‘knowledge workers’’13 in that they not only provide intervention, but also are the keepers of real-time data necessary for ongoing performance calibration and improvement. Initiatives that rely upon external experts or ‘‘quality gurus’’ to impose change upon the clinical department are doomed to short-lived success or even decline.
It is the knowledge workers internal to the process, who in fact operate and are operated upon by that process, who will be best able to achieve sustainable quality gains. The resulting improvements in employee and customer satisfaction, as well as productivity and clinical outcomes, can be dramatic.

Unfortunately, few providers are proficient in the methods of measurement or applied statistical analysis, due to lack of knowledge, disinterest, or outright aversion. Traditional ‘‘null hypothesis’’ data analysis techniques typically used in academic research are poorly matched to current applied requirements for reasons of training, sampling, relevance, and cost. Simply, front-line knowledge workers don't have the time, skills, or inclination to collect large samples of randomized data for eventual manipulation by a statistician months or years after the fact. Rather, what is needed is a real-time, analytically robust, graphic portrayal of information that quickly and meaningfully identifies trends, problems, and issues. To be utilized, data tools need to pass the cost–benefit threshold of being worth the resources they consume to deploy. A focused SPC approach provides such tools, and can be effectively adopted by front-line knowledge workers in EM.

INTRODUCING THE INDIVIDUAL CONTROL CHART

All processes exhibit variation, which can be defined as random ups and downs in values around some central tendency, or average. This expected variation is a defining feature of the process itself, and can be thought of as its background ‘‘noise.’’ Deming14 refers to this noise as ‘‘common cause variation.’’ All processes contain noise and, therefore, common cause. Some processes also contain notable, unusual events, which emerge as distinct ‘‘signals’’ against this background noise. Deming referred to these extraordinary signals that stand out from the noise as ‘‘special cause variation.’’ Common cause variation is predictable (within a range of expected limits), random, and inherent to that process. Special cause variation is unpredictable, intermittent, and not part of the process.

When special cause events are recognized, they are typically attributable to one of four situations: 1) operator change, 2) input change, 3) equipment change, or 4) process/method change. Ishikawa has alternately referred to these as his famous 4 Ms: manpower, material, machine, and method.15 Notably, the ability to recognize true special cause variation in real time fosters effective management of the process in one of several ways: prevent the special cause from happening, fix or repair the special cause, standardize/repeat desirable special causes (i.e., process improvements), or even rationally choose to do nothing at all.

Thus, SPC techniques offer an empirically based means to examine the inherent variation that exists within any work process, and to determine whether that variation is of sufficient magnitude to warrant interpretation or intervention. An important aspect of processes that exhibit only common cause variation is that they are said to be stable, predictable, and in control (hence the acronym SPC also serves as a mnemonic).
Conversely, a process containing special cause variation is by definition unstable, unpredictable, and out of control. Few reliable predictions about future performance can be made from systems that are out of control. Wheeler16 emphasizes that ‘‘while every data set contains noise, some data sets may contain signals. Therefore, before you can detect a signal within any given data set, you must first filter out the noise.’’ A control chart is a graphical representation of data that serves as a robust decision-making tool to filter out such noise. Control charts were developed by Shewhart and his team at Bell Laboratories in the 1920s as a means to understand and reduce variability in manufacturing processes.17

All control charts begin as a line graph, with the independent variable (often a time variable such as hour or date) on the horizontal (x) axis and the dependent variable (laboratory values, deaths, patient visits, etc.) on the vertical (y) axis. Such a line graph is often referred to as a run chart, to reflect its nature as a run of consecutive data points. The control chart is produced by the addition of two horizontal lines symmetrically placed an equal distance above and below the mean of the run of data points. These ‘‘control limits’’ specify a statistically defined confidence interval representing the variance of that process, and form the basis of the SPC analytic strategy for ‘‘managing by fact.’’

The data that will be encountered in ED management problems come in two forms: attributes and variables. Attributes are discrete ‘‘count’’ data indicating a yes/no or on/off decision of whether the process conforms to some standard. Examples include the proportion of medical charts deemed ‘‘complete,’’ the percentage of patients presenting to the ED with nonurgent complaints, and conformance with utilization of care pathways. Variables reflect ‘‘continuous’’ data measured along a scale of values beginning with zero and allowing for partial increments between whole numbers, such as patient weight, length of a laceration measured in centimeters (e.g., 1.4 cm), a laboratory value reflecting a physiological process, or turnaround time. Some data, such as number of trauma-related deaths, visit volumes, or patient age, share some characteristics of both: they are expressed as frequency count data, yet are measured on a scale with a true zero point (i.e., one can have zero occurrences of the variable).

Over the years, many types of control charts have been developed to address these different types of data and special situations encountered in management.
All share the essential features described above, though the statistical rules governing the placement of the control limits vary considerably. As in other areas of statistics, detailed decision trees or flow diagrams have been designed to assist the end user in matching the situation to the most appropriate control chart ‘‘test’’ of that data18,19 (see Carey and Lloyd, 1995, p 72, and Hart and Hart, 2002, p 285, for examples). Unfortunately, the statistical rationale underlying the deployment of these various control charts, and the task of reliably weighing their pros and cons in addressing problems of concern in daily health care operations, are far beyond the training, interest, or time allotment of even the most dedicated front-line ED staff. Again, TQM and SPC principles will only transform the practice of health care to the extent that they can be ‘‘drilled down’’ to the level of the front-line knowledge worker. The level of statistical complexity in many SPC texts and presentations arguably contributes to the apparent lack of penetration of these powerful data management tools in health care.7 In addition, it can be shown that such complexity (i.e., variability) is not necessary for fostering ‘‘management by fact’’ in EM.

This primer will focus on the Average Moving Range (also referred to as the XmR) individual control chart. The XmR is particularly suited to health care applications in which data are collected intermittently and in clusters of one (i.e., each data point represents one patient, or one discrete performance/score achieved by a patient). In addition, while other control charts (such as np, p, c, and u) use theoretical control limits, the XmR chart uses empirical (natural) limits. When the theoretical model (e.g., binomial, Poisson) used is inappropriate to the data it is applied to, the limits will be incorrect, while the empirical approach will still yield appropriate results. Further, interpreting control charts based on theoretical limits can be difficult, made more so by base rate issues inherent to some applied situations. For a variety of practical and theoretical reasons, Wheeler20 concludes that one should always use the XmR individual control chart. This broad-based utility suggests the XmR as the ‘‘Swiss Army knife’’ of control charts.
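The XmR chart's empirical (natural) limits can be sketched in a few lines of code. The turnaround-time data below are invented for illustration; the scaling constants 2.66 (= 3/d2, with d2 = 1.128 for moving ranges of size two) and 3.267 are the standard Shewhart constants for individuals charts.

```python
def xmr_limits(xs):
    """Compute the XmR (individuals) chart center line and natural process limits.

    Limits are X-bar +/- 2.66 * mR-bar, where 2.66 = 3/d2 (d2 = 1.128 for
    moving ranges of size two) -- the standard constant for individuals charts.
    """
    n = len(xs)
    xbar = sum(xs) / n                                   # center line
    moving_ranges = [abs(xs[i] - xs[i - 1]) for i in range(1, n)]
    mr_bar = sum(moving_ranges) / len(moving_ranges)     # average moving range
    ucl = xbar + 2.66 * mr_bar                           # upper natural limit
    lcl = xbar - 2.66 * mr_bar                           # lower natural limit
    mr_ucl = 3.267 * mr_bar                              # upper limit for the mR chart
    return xbar, lcl, ucl, mr_ucl

# Hypothetical laboratory turnaround times (minutes)
data = [45, 50, 48, 52, 47, 49, 51, 46, 50, 48]
xbar, lcl, ucl, mr_ucl = xmr_limits(data)
print(f"center={xbar:.1f}  LCL={lcl:.1f}  UCL={ucl:.1f}")
```

Note that the limits come entirely from the data themselves (the moving ranges), not from an assumed binomial or Poisson model, which is the sense in which they are ‘‘empirical.’’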

DETERMINING SPECIAL CAUSE WITH CONTROL CHARTS

The calculation of the ‘‘confidence interval’’ defined by these empirical control limits, the area in which the common cause variation naturally resides, is a mathematical function of the arithmetic mean of the data points adjusted by a multiple of the range (or ‘‘spread’’) of the data around that mean. The boundaries or limits of this confidence interval are known as the upper and lower control limits (UCL and LCL, respectively), and are calculated according to formulas designed to always yield a confidence interval of 3 standard deviations around the ‘‘spread’’ of the data. As most contemporary users generate their control charts via personal computer software, the reader interested in the mathematical details of these calculations is referred to other sources.18,19,21

The implication of selecting ‘‘3 sigma’’ as the limits of common cause is important, and stems from the statistical fact that 99.73% of the normal distribution falls within 3 standard deviations of its arithmetic average. This means that the likelihood of an observation's falling outside 3 sigma control limits is only 0.27%.22 This identifies these low-likelihood events as truly exceptional statistically, hence reflecting the influence of some special cause.

A typical control chart is portrayed in Figure 1. One will notice that Figure 1 actually contains two charts: a chart of individual observations (X chart) and a moving range chart (R chart). The former is typically described as representing the process position of the data; the latter, its dispersion. While many treatments of SPC advise interpretation of both charts, this is arguably unnecessary and in our opinion only contributes to front-line provider confusion and resistance. As stated by Nelson,23 and supported by others,19 ‘‘advice varies on this point, but I recommend that the moving ranges not be plotted. . .the chart of the individual observations actually contains all the information available.’’

Figure 1. Control chart basics.

A number of very specific rules for empirically determining the presence of special cause in a process data set have been published, the core of which are commonly referred to as ‘‘the Western Electric rules,’’ in homage to the seminal work of Shewhart and colleagues.18–20,24 The most direct and useful of these rules are summarized by the mnemonic ‘‘ones, runs, and trends,’’ defined as:

1. Any one point falls outside the control limits (i.e., above the UCL or below the LCL).
2. Seven or more consecutive points all above or below the center line (the mean); this situation is referred to as a ‘‘run.’’
3. Seven or more consecutive points moving up or down, bisecting the center line; this situation is referred to as a ‘‘trend.’’
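The three rules above translate directly into code. The following sketch is illustrative (function names and the demonstration data are our own; the seven-point threshold follows the text):

```python
def ones(xs, lcl, ucl):
    """Rule 1 ('ones'): any single point outside the control limits."""
    return [i for i, x in enumerate(xs) if x > ucl or x < lcl]

def runs(xs, center, length=7):
    """Rule 2 ('runs'): `length` or more consecutive points all on one side of the center line."""
    hits, streak, side = [], 0, 0
    for i, x in enumerate(xs):
        s = 1 if x > center else (-1 if x < center else 0)
        streak = streak + 1 if (s != 0 and s == side) else (1 if s != 0 else 0)
        side = s
        if streak >= length:
            hits.append(i)  # index at which a run of `length` is completed
    return hits

def trends(xs, length=7):
    """Rule 3 ('trends'): `length` or more consecutive points moving steadily up or down."""
    hits, streak, direction = [], 1, 0
    for i in range(1, len(xs)):
        d = 1 if xs[i] > xs[i - 1] else (-1 if xs[i] < xs[i - 1] else 0)
        streak = streak + 1 if (d != 0 and d == direction) else (2 if d != 0 else 1)
        direction = d
        if streak >= length:
            hits.append(i)  # index at which a trend of `length` is completed
    return hits

# Invented example: the second point breaches an upper limit of 10
print(ones([5, 12, 4], lcl=0, ucl=10))  # -> [1]
```

Each function returns the indices at which a signal is detected, so a flagged point can be traced back to the shift (e.g., the date or patient) where it occurred.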

By utilizing these three rules, graphically portrayed in Figures 2–4, the knowledge worker can robustly identify special causes in their work processes. Why is identifying ‘‘special cause’’ so important for CQI? To intervene without process knowledge, reacting as if common cause is in fact special cause, demonstrably weakens the process. It drives the process to disorganization, or entropy. Deming14 referred to this tendency as ‘‘tampering.’’ The use of 3 sigma control limits is important in this regard. Control limits based on less than 3 sigma lead us to waste resources (time, money, energy) by looking for nonexistent special causes. Further, needless adjustments (tampering) lead to increased process variation, increased problems and defects, increased cost, and very likely staff demoralization (‘‘We tried to fix the problem, and have made things worse!’’). The use of 3 sigma control limits sets a defensible boundary between attending to those variations that are important, namely, those with special cause, and those that are not.10

Stated in another, more formal way, applying empirically defined control limits assists one in minimizing Type I and Type II errors. Type I error is defined as labeling a finding as significant when in fact it is not (what statisticians refer to as ‘‘rejecting the null hypothesis when in fact it is correct’’). In most research situations, great efforts are expended to reduce the risk of Type I error. For example, one would not want to erroneously conclude that a new drug treatment is efficacious when in fact it yields no clinical effect. Thus, in SPC terms, Type I error refers to concluding that a special cause exists when in fact it does not. Conversely, Type II error refers to the situation where excessive conservatism leads to ignoring of important special causes. The risk of Type I error is that we tamper too much and too often. The risk of Type II is that we do nothing. Clearly, effective leadership requires balance between these extremes. Control charts give an objective basis to mitigate these extremes, and manage by fact.
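This balance can be made concrete with a small numerical sketch. The baseline data are invented, and for simplicity the limits here use the overall standard deviation of an in-control baseline (the XmR chart's moving-range limits, as recommended above, are the preferred practice):

```python
from statistics import mean, stdev

# Hypothetical baseline from an in-control process
# (e.g., daily door-to-doctor minutes); numbers are invented for illustration.
baseline = [52, 48, 50, 49, 51, 47, 53, 50, 49, 51, 48, 52]

center = mean(baseline)
sigma = stdev(baseline)
ucl = center + 3 * sigma  # 3 sigma limits: wide enough to avoid tampering,
lcl = center - 3 * sigma  # tight enough to catch genuine signals

def classify(x):
    """Leave common cause (noise) alone; investigate special cause (signal)."""
    return "special cause" if (x > ucl or x < lcl) else "common cause"

print(classify(54))  # unusual-looking, but within the limits -> common cause
print(classify(74))  # far outside the limits -> special cause
```

Reacting to the first value would be tampering (a Type I error); ignoring the second would be a Type II error.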

ADDITIONAL ADVANTAGES OF SPC

Despite the need for formal analytic tools for today's applied health care data, the application of ‘‘null hypothesis’’ statistics common to academic research is fraught with difficulty. Such statistics are based upon core assumptions (such as normality of data distributions and random sampling) that may not be met in many applied situations. A continually raging debate in academic research statistics is ‘‘what N is needed?’’—how many subjects are enough? Typically, the answer is ‘‘more is better,’’ with a common understanding that 10–15 subjects/observations per variable is desirable. This means that many traditional research projects require dozens or hundreds of data points to achieve an acceptable level of statistical power. Studies with small Ns (typically defined as <30) are regarded as preliminary at best, and suspect at worst. In contrast, in the applied statistics world of SPC, 25 or more observations gathered through rational sampling give a robust test for stability, providing a meaningful portrayal of process position and dispersion. Practically speaking, 12–15 data points provide a good test for stability.20

Figure 2. Determining a special cause: ‘‘one.’’

Data collection, analysis, and interpretation are expensive. Adopting data management strategies that allow efficient use of resources enhances the likelihood of their use in applied situations such as the ED. The objective of ‘‘managing by fact’’ argues for more widespread deployment of efficient, cost-effective analytic strategies such as SPC.

Finally, ‘‘null hypothesis’’ research statistics are based on the central limit theorem. Here, individual observations/subject data may be overshadowed or obscured by the overall group tendency, particularly in skewed distributions (an underlying reason for the push for large sample sizes as a means to reduce the effect of skewness on overall interpretation). In applied situations, however, even one outlier (for example, one death due to adverse drug reaction or one defective ‘‘O-ring’’ on a booster rocket) may be unacceptable and cause for action. Applied SPC statistics allow detection of the individual data point, through emphasis on small samples and real-time analysis conducted by the front-line workers (as opposed to post-hoc analysis months or years after data were collected). These features of traditional academic and applied SPC approaches are summarized in Table 1.

LISTENING TO THE VOICE OF THE PROCESS

Another implication of the central limit theorem emerges in the way that many health care providers have approached outcome assessment. In the manufacturing industry, ‘‘benchmarking’’ refers to comparing one's own performance with the best-in-class performance in the field—the world leader. Unfortunately, health care's analytic roots are in academic medicine, and thus the tendency has been to compare one's own performance with the population (industry) average. For example, the mortality rate at hospital A is said to be better than the national average of ‘‘x percent,’’ leading to the conclusion that hospital A is a safe/successful provider. In this example, the outcome assessment is binary in that one is either better or worse than the average. Obviously, such binary outcome determination means that 50% of cases will be better than the average, and 50% will be worse. This does not make for particularly fine-grained analysis, nor prompt much process-improvement action.

Figure 3. Determining a special cause: ‘‘run.’’


Figure 4. Determining a special cause: ‘‘trend.’’

In this way of thinking, half the providers will be lulled into a state of security that things are OK, while the other half are likely in a state of panic and are being called upon to make some explanation to their superiors.16 In fact, the real quality differences between some providers on either side of the line will not be large, prompting rapid cycling between states of confidence and panic. That this is true can be seen in the typically poor year-to-year correlation in named providers on nationally advertised health care provider report cards. In addition, since the ‘‘benchmark’’ is defined as the average, one's aspirations are not toward superiority, but rather mediocrity (i.e., the middle of the pack).

TABLE 1. Comparison between Traditional Statistical Methods and Statistical Process Control (SPC) Methods of Analysis

    Traditional           SPC
    Random sampling       Acceptance sampling
    Large Ns              Small Ns
    Arbitrary p-values    3 sigma
    Data tables           Graphic plots
    Post-hoc              Real-time
    Conceptual            Applied
    Outcome-oriented      Process-oriented

Nonetheless, health care providers seem to take comfort (and want the consumer to do so as well) in beating the industry ‘‘benchmark.’’ Such an orientation is antithetical to true process improvement. If one is better than the benchmark (the average), there is little perceived impetus to change. When one is worse, there are often a variety of rationalizations for why the benchmark is an inappropriate mark of comparison, or for why the data themselves are suspect.16 Neither stance is likely to pave the way for efforts to improve the system itself.

Wheeler16 argues that many approaches to ‘‘quality improvement’’ never achieve their desired goal because they do not reveal any insights into how the process works. Comparisons with some external specification (whether an industry average, or often an arbitrary figure altogether) may tell you where you are, but they will not tell you how you got there. Rather, only through trend-line analysis and control charts does one get the true ‘‘voice of the process,’’ which is needed to instantly distinguish signal from noise, and point to conditions requiring attention. Control charts characterize the behavior of the time series of observations as either ‘‘in control’’ or ‘‘out of control’’ relative to the control limits and decision rules presented previously. As Wheeler16 states,

A process which does not display a reasonable degree of statistical control is unpredictable. This distinction between predictability and unpredictability is important because prediction is the essence of doing business. . . . In fact, attempting to make plans using a time series that is unpredictable results in more frustration than success. Prediction requires knowledge, explanation does not.

Listening to the voice of the process fosters internal focus and gives direction to sustained honest efforts at system improvement.

AN ILLUSTRATIVE EXAMPLE FROM EMERGENCY MEDICINE

Many errors in the ED are not classic errors of commission, but more often errors of omission, in which suboptimal care occurs due to time delays. As Adams and Bohan correctly point out:

There is growing evidence that time to treatment affects outcome (time to open coronary artery; time to open cerebral artery; time to antibiotics in pneumonia; time to pain relief). We currently characterize delays in care as a ‘‘deficiency’’ but it is clear that a failure to perform an action that leads to an adverse outcome fits under the definition of error.25

Figure 5 presents control chart data from an interdisciplinary EM/cardiology improvement initiative focused on reducing door-to-reperfusion times in patients with acute ST-elevation myocardial infarction. Consistent with ACC/AHA guidelines for such patients,26 a goal of an upper control limit (UCL) ≤ 90 minutes was desired. Operational changes made in the ED included: 1) requiring that stat electrocardiograms (ECGs) be obtained on all patients aged 30 years or more presenting with chest pain, with the ECG taken directly to an emergency physician for stat reading; 2) placing control for activating the on-call cardiac catheterization staff with the emergency physician; and 3) institution of a single pager number for all cardiac catheterization on-call team members.

Figure 5. Acute myocardial infarction door-to-reperfusion time individual control chart. UCL = upper control limit; LCL = lower control limit.

As can be seen from these data, there is a run of 14 data points below the mean after the operational changes were made, which signals a significant change in the process. From the control chart, it is apparent that the efforts were successful in changing the process in the expected direction (i.e., reducing door-to-reperfusion times). The UCL for the entire data series is 206 minutes. Examination of the data collected since the process change was implemented (beginning with patient 19) indicates the new UCL is 140 minutes. The XmR chart supports the conclusion that the data show a statistically significant change in process, and that the new process is stable, predictable, and in control. Most importantly, the new process produces consistent UCL reperfusion times that are one third less than those previously produced. Efforts will continue to address other aspects of the process to further reduce the UCL and then maintain those improvements.
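The before/after comparison in this example can be reproduced in outline. The door-to-reperfusion values below are invented (the actual Figure 5 data are not reproduced here); the sketch simply recomputes the XmR natural upper limit separately for the segments before and after a process change:

```python
def xmr_ucl(xs):
    """XmR natural upper limit: X-bar + 2.66 * mR-bar (2.66 = 3/d2, d2 = 1.128)."""
    xbar = sum(xs) / len(xs)
    mrs = [abs(a - b) for a, b in zip(xs, xs[1:])]  # successive moving ranges
    return xbar + 2.66 * (sum(mrs) / len(mrs))

# Hypothetical door-to-reperfusion times (minutes), before and after the change
before = [120, 145, 110, 160, 130, 150, 125, 140, 135, 155]
after = [80, 95, 85, 100, 90, 75, 88, 92, 83, 97]

print(f"UCL before change: {xmr_ucl(before):.0f} min")
print(f"UCL after change:  {xmr_ucl(after):.0f} min")
```

Recomputing the limits only from the post-change data, as the article does beginning with patient 19, is what allows the new, lower UCL to characterize the improved process.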

CONCLUSIONS

Control charts are an affordable, trainable, reproducible, and analytically robust methodology that fosters ongoing examination of the very processes that produce the outputs. Their simplicity and visual nature make communicating important insights into health care processes possible in ways that traditional statistical data tables do not. Finally, a real-time focus allows the opportunity for intervention while the process is still occurring (i.e., while the patient is still alive; while the program is still open; while the fiscal year is still ongoing).

To be successful, these powerful SPC tools must be delivered to front-line provider staff in a clear-cut manner that demonstrates their utility and impact on daily operations. Many health care providers are attracted to clinical care, and away from academic research and complicated statistics, because of its applied nature. We must remember this as we attempt to train staff in methods of TQM and SPC, and distill for them the key tools they can realistically use to impact their world. Thus, this primer has attempted to present a very succinct yet empirically supported model for the use of control charts in EM.

Most contemporary control chart users rely upon personal computers for database management and chart construction. A number of specific SPC software packages are available, and SPC modules are contained in the major statistical software suites. However, consistent with our emphasis on making SPC accessible to the front-line providers, it should be noted that excellent control charts can be generated from common, easy-to-use, and inexpensive spreadsheet programs, which are now ubiquitous on hospital computers.27

Improved outcomes in health care can come only as a result of thoughtful analysis and understanding of the processes that produce those outcomes. Unfortunately, health care has largely ignored lessons from other industrial sectors that have learned that true, lasting quality improvement comes only from listening to the ‘‘voice of the process.’’ While EM has embraced some elements of the TQM movement (such as self-directed multidisciplinary work teams and standardized objective outcome data elements), incorporation of powerful statistical process control techniques has not been widespread. As presented in this primer, control charts allow the operator to recognize significant events within their processes, and to identify in real time when and where they occurred. At that point, root cause analysis of changes in manpower, machine, materials, or methods can occur. In the current health care environment, scarce resources are increasingly strained amid competing simultaneous objectives. It is imperative that these resources not be wasted in fruitless

and unfocused quality assurance activities. Rather, true quality improvement occurs when signals are filtered from common noise, and when empirically based tools are deployed so that we may manage by fact. It is concluded that control charts offer a time-tested, scientifically validated, and pragmatically appealing way in which to gain control over EM processes so that we may more effectively, efficiently, and reliably assist the persons we seek to serve.

The authors thank James Bente' for his insight and comment during the development of the manuscript.

References

1. Kohn LT, Corrigan JM, Donaldson MS (eds); Institute of Medicine. To Err Is Human: Building a Safer Health System. Washington, DC: National Academy Press, 1999, p 22.
2. Lewin Group. The Impact of the Medicare Balanced Budget Refinement Act on Medicare Payments to Hospitals. Washington, DC: American Hospital Association, Feb 1, 2000.
3. Derlet RW, Richards JR. Overcrowding in the nation's emergency departments: complex causes and disturbing effects. Ann Emerg Med. 2000; 35:63–8.
4. McCabe JB. Emergency department overcrowding: a national crisis. Acad Med. 2001; 76:672–4.
5. Vincent C, Simon R, Sutcliffe K, Adams JG, Biros MH, Wears RL. Errors conference: executive summary. Acad Emerg Med. 2000; 7:1180–2.
6. Advisory Board Company. The Clockwork ED, Volume I: Expediting Time to Physician. Washington, DC: Advisory Board Company, 1999.
7. Institute of Medicine. Crossing the Quality Chasm: A New Health System for the 21st Century. Washington, DC: National Academy Press, 2000.
8. U.S. General Accounting Office. Management Practices: U.S. Companies Improve Performance through Quality Efforts. Washington, DC: 1991. GAO/OCG91.
9. Berwick DM. Continuous improvement as an ideal in health care. N Engl J Med. 1989; 320:53–6.


10. Walton M. The Deming Management Method. New York, NY: Perigree, 1986.
11. Schwab RA, DelSorbo SM, Cunningham MR, Craven K, Watson WA. Using statistical process control to demonstrate the effect of operational interventions on quality indicators in the emergency department. J Healthc Qual. 1999; 21(4):38–41.
12. Mayer TA. Industrial models of continuous quality improvement: implications for emergency medicine. Emerg Med Clin North Am. 1992; 10:523–47.
13. Gates B. Business @ the Speed of Thought: Succeeding in the Digital Economy. New York, NY: Warner Books, 1999.
14. Deming WE. Out of the Crisis. Cambridge, MA: MIT, 1986.
15. Ishikawa K. Guide to Quality Control. Tokyo, Japan: Productivity Press, 1976.
16. Wheeler DJ. Understanding Variation: The Key to Managing Chaos. Knoxville, TN: SPC Press, 1993.
17. Berwick DM. Controlling variation in health care: a consultation from Walter Shewhart. Med Care. 1991; 29:1212–25.
18. Carey RG, Lloyd RC. Measuring Quality Improvement in Health Care: A Guide to Statistical Process Control Applications. New York, NY: Quality Resources, 1995.
19. Hart MK, Hart RF. Statistical Process Control for Health Care. Pacific Grove, CA: Duxbury, 2002.
20. Wheeler DJ. Understanding Variation: The Key to Managing Chaos, 2nd ed. Knoxville, TN: SPC Press, 2000.
21. Wheeler DJ. Advanced Topics in Statistical Process Control: The Power of Shewhart's Charts. Knoxville, TN: SPC Press, 1995.
22. Plsek PE. Tutorial: introduction to control charts. Qual Manage Health Care. 1992; 1(1):65–74.
23. Nelson LS. Control charts for individual measurements. J Qual Technol. 1982; 14(3):172–3.
24. Shewhart WA. Economic Control of Quality of Manufactured Product. New York, NY: Van Nostrand Company, 1931.
25. Adams JG, Bohan JS. System contributions to error. Acad Emerg Med. 2000; 7:1189–93.
26. Ryan TJ, Antman EM, Brooks NH, et al. 1999 update: ACC/AHA guidelines for the management of patients with acute myocardial infarction. A report of the American College of Cardiology/American Heart Association Task Force on Practice Guidelines (Committee on Management of Acute Myocardial Infarction). J Am Coll Cardiol. 1999; 34:890–911.
27. Zimmerman SM, Icenogle ML. Statistical Quality Control: Using Excel. Milwaukee, WI: ASQ Quality Press, 1999.