Can the Emergency Department Algorithm Detect Changes in Access to Care?

Robert A. Lowe, MD, MPH, Rongwei Fu, PhD

Abstract

Objectives: The "emergency department algorithm" (EDA) uses emergency department (ED) diagnoses to assign probabilities that a visit falls into each of four categories: nonemergency, primary care–treatable emergency, preventable emergency needing ED care, and nonpreventable emergency. The EDA's developers report that it can evaluate the medical safety net because patients with worse access to care will use EDs for less urgent conditions. After the Oregon Health Plan (OHP, Oregon's expanded Medicaid program) underwent cutbacks affecting access to care in 2003, the authors tested the ability of the EDA to detect changes in ED use.

Methods: All visits to 22 Oregon EDs during 2002 were compared with visits during 2004. For each payer category, mean probabilities that ED visits fell into each of the four categories were compared before versus after the OHP cutbacks.

Results: The largest change in mean probabilities after the cutbacks was 2%. Attempts to enhance the sensitivity of the EDA through other analytic strategies were unsuccessful. By contrast, ED visits by the uninsured increased from 6,682/month in 2002 to 9,058/month in 2004, and the proportion of uninsured visits leading to hospital admission increased by 51%.

Conclusions: The EDA was less useful in demonstrating changes in access to care than were other, simpler measures. Methodologic concerns with the EDA that may account for this limitation are discussed. Given the widespread adoption of the EDA among health policy researchers, the authors conclude that further refinement of the methodology is needed.

ACADEMIC EMERGENCY MEDICINE 2008; 15:506–516 © 2008 by the Society for Academic Emergency Medicine

Keywords: access to care, emergency department, Medicaid, measurement, methodology

The concept of ambulatory care–sensitive hospitalizations1,2 led to a valuable tool for monitoring access to primary care. This methodology was developed by Billings and colleagues,1,2 who demonstrated that hospitalization rates for conditions such as asthma, diabetes mellitus, and hypertension were strongly associated with poor access. Their work has been replicated and expanded by others.3–10 Ambulatory care–sensitive hospitalization rates have become a widely recognized tool for assessing access to care; this approach has been incorporated into a toolkit developed by the Agency for Healthcare Research and Quality (AHRQ) for monitoring the safety net.11,12 This same toolkit includes a methodology developed by Billings and colleagues for studying emergency department (ED) visits as a measure of access to

From the Department of Emergency Medicine (RAL, RF), Center for Policy and Research in Emergency Medicine (RAL, RF), Department of Public Health and Preventive Medicine (RAL, RF), and Department of Medical Informatics and Clinical Epidemiology (RAL), Oregon Health & Science University, Portland, OR; and the Leonard Davis Institute of Health Economics, University of Pennsylvania (RAL), Philadelphia, PA. Received June 1, 2007; revision received July 20, 2007; accepted July 28, 2007. This work has been presented at the Society for Academic Emergency Medicine Western Regional Research Forum, Portland, OR, March 16, 2007; the Society for Academic Emergency Medicine Annual Meeting, Chicago, IL, May 16, 2007; and the AcademyHealth Annual Research Meeting, Orlando, FL, June 3, 2007. This project was supported by the Office for Oregon Health Policy and Research through grants from the Robert Wood Johnson Foundation State Coverage Initiatives and the Robert Wood Johnson Foundation Changes in Health Care Financing & Organization Initiative. This work incorporates the intellectual contribution of Keith William Neely, PhD, MPA (January 27, 1950–January 28, 2001), whose pioneering work on ED utilization by underinsured populations laid the groundwork for key aspects of this investigation. Address for correspondence and reprints: Robert A. Lowe, MD, MPH; e-mail: [email protected].


ISSN 1069-6563

© 2008 by the Society for Academic Emergency Medicine doi: 10.1111/j.1553-2712.2008.00108.x

ACAD EMERG MED • June 2008, Vol. 15, No. 6



www.aemj.org

care.13–15 ED utilization rates vary by insurance status, socioeconomic status, race, and other economic and social factors,16–25 confirming the clinical experience of many emergency physicians (EPs): patients often use the ED as a safety net for lack of access to care elsewhere. A common perception is that patients with poor access to care often use the ED for "nonurgent" or "inappropriate" problems that could be treated in less expensive outpatient settings. Therefore, Billings et al. developed the "emergency department algorithm" (EDA) for classifying ED visits as to level of urgency. This algorithm (described below under Methods) can be applied to large data sets of ED visits to measure differences in access between populations or changes in access over time.

The EDA holds great promise as such a tool. It has been used in several research studies11,14,15,26,27 and is publicly available on the Internet.12 However, our previous experience in applying the EDA28 raised concerns that methodologic limitations of the algorithm might compromise its validity. Few opportunities exist to test such an instrument against a criterion standard, i.e., to study differences in EDA values in populations that clearly experience differences in access to care. This project took advantage of a natural experiment, in which access to care deteriorated because of cutbacks in the Oregon Health Plan (OHP, Oregon's Medicaid expansion program).
In the early 1990s, the OHP had added about 100,000 enrollees to the 300,000 Oregon residents previously covered by Medicaid.29,30 After March 1, 2003, strict enforcement of premiums for the 103,000 Oregonians in the OHP "expansion population" led to a fall to 51,000 enrollees by late 2003, approximately a 12% decrease in the entire OHP population.31 Copayments (including a $50 copayment for ED visits) and reduced scope of services for the expansion population32 created barriers to traditional sources of care.33–35 As summarized in one review:

"Many of the people affected by the Medicaid waiver changes had very limited incomes and serious physical and mental health problems . . . Premiums led to significant Medicaid coverage losses and most of those who lost Medicaid became uninsured . . . People who lost coverage faced significant difficulties obtaining care. An early survey of people who disenrolled following the premium changes found significant access problems, with 60% reporting an unmet health need and nearly 80% reporting an unmet mental health need. Those with chronic conditions were particularly adversely affected."35

These cutbacks also affected ED utilization. We examined ED use in 22 representative Oregon EDs, finding that ED visits by the uninsured underwent an abrupt and sustained increase in volume, from 6,682/month in 2002 to 9,058/month in 2004.36 Therefore, we reasoned that if the EDA functioned as a valid measure of changes in access to care, it should detect the impact of the OHP cutbacks. The purpose of this study was to assess the ability of the EDA to detect a known change in access to care and ED utilization patterns.


METHODS

Study Design

This study used administrative data to compare EDA classifications between payer categories before versus after the March 1, 2003, OHP cutbacks. The study was approved by the Institutional Review Board of Oregon Health & Science University.

Study Setting and Population

We obtained electronic claims data from a purposive sample of 22 of Oregon's 58 EDs, for all ED visits from August 1, 2001, through February 28, 2005 (a total of 43 months). The selected EDs represent approximately 55% of all Oregon ED visits. Further information about selection of EDs and patient visits has been published elsewhere.36

Measurements

The EDA. To recognize both the potential value of the EDA and its potential limitations, one must understand the algorithm in detail. It was developed with input from a panel of EPs, based on data abstracted from 5,700 ED records.11,12,14,15 Patients who were admitted to the hospital or who had principal diagnoses reflecting trauma, drug, alcohol, or psychiatric conditions were excluded. The panel of EPs reviewed data on chief complaint, age, gender, duration of symptoms, temperature, respiratory rate, pulse rate, and medical history. Based on this information, they classified each case as "emergent" (needing care in less than 12 hours) or "nonemergent." Then, each emergent case was classified as to whether the procedures performed and resources used are typically available in a primary care setting. Based on this determination, cases were classified as "emergency, primary care–treatable" or "emergency, ED needed." Because the system was designed to be used with administrative data sets, the chief complaints in the sample were then "mapped" to principal diagnosis at ED discharge (International Classification of Disease–ninth revision [ICD-9] codes). For instance, multiple patients in the sample were discharged with ICD-9 code 789.00 (abdominal pain, unspecified site). All were deemed to require care within 12 hours and were classified as emergent. Two-thirds of these patients were managed with resources available in primary care settings, while one-third received interventions not available outside the ED. Therefore, the ICD-9 code 789.00 is assigned a 0.67 probability of emergency, primary care–treatable and a 0.33 probability of emergency, ED needed (Table 1). If there were not sufficient cases with a given discharge diagnosis in the sample data, that diagnosis was left unclassified. Finally, all emergency, ED needed cases were classified as to whether the emergency was potentially preventable or avoidable with timely and effective outpatient care, using the classification system developed by Billings et al. to identify ambulatory care–sensitive hospital admissions.1,2,6 Examples of the probabilities assigned to selected ED diagnoses are shown in Table 1. The code to implement the EDA in SAS and Microsoft Access is updated periodically and made available on the Internet, as part of AHRQ's Safety Net Monitor-


Lowe and Fu



EMERGENCY DEPARTMENT ALGORITHM

Table 1
Examples of the EDA Assigned Probabilities

ICD-9 Code  Description                                                 NE     PCT    EDP    EDNP
381.00      Acute nonsuppurative otitis media not otherwise specified   0      1      0      0
428.0       Congestive heart failure, not otherwise specified           0      0.04   0.96   0
493.92      Asthma not otherwise specified, with acute exacerbation     0      0.02   0.98   0
522.5       Periapical abscess                                          0      0.76   0.24   0
525.9       Dental disorder not otherwise specified                     0.90   0.10   0      0
692.9       Dermatitis, not otherwise specified                         0.75   0.21   0      0.04
780.2       Syncope and collapse                                        0      0.33   0      0.67
786.05      Shortness of breath                                         0      0.40   0      0.60
789.00      Abdominal pain, unspecified site                            0      0.67   0      0.33

NE = nonemergency; PCT = emergency, primary care–treatable; EDP = emergency, ED needed, preventable/avoidable; EDNP = emergency, ED needed, not preventable/avoidable; ED = emergency department; EDA = emergency department algorithm; ICD-9 = International Classification of Disease–ninth revision.
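The lookup logic behind Table 1 can be sketched in a few lines. This is an illustration only, not the distributed SAS/Microsoft Access implementation: the probability values come from Table 1, while the data structure and function names are our own.

```python
# Illustrative sketch of the EDA lookup (not the published SAS code).
# Each ICD-9 code maps to probabilities of (nonemergency;
# emergency, primary care-treatable; emergency, ED needed,
# preventable; emergency, ED needed, not preventable).
# Values below are taken from Table 1.
EDA_PROBS = {
    "428.0":  (0.00, 0.04, 0.96, 0.00),  # congestive heart failure NOS
    "493.92": (0.00, 0.02, 0.98, 0.00),  # asthma NOS, acute exacerbation
    "789.00": (0.00, 0.67, 0.00, 0.33),  # abdominal pain, unspecified site
}

def classify(icd9_code):
    """Return the four category probabilities, or None if the diagnosis
    was too rare in the derivation sample to be classified."""
    return EDA_PROBS.get(icd9_code)

def mean_probabilities(visits):
    """Mean probability per category over all classifiable visits;
    unclassifiable visits are dropped, as in the EDA itself."""
    classified = [p for p in (classify(c) for c in visits) if p is not None]
    if not classified:
        return None
    n = len(classified)
    return tuple(sum(col) / n for col in zip(*classified))
```

Note that visits whose diagnoses are absent from the lookup are simply discarded, which mirrors how the algorithm leaves rare diagnoses unclassified (the 140,902 unassignable visits reported in the Results).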

ing Initiative.12 This study used SAS code dated September 2003.

Data Analysis

Because of evidence that the March 1, 2003, OHP cutbacks substantially impacted access to care and ED use, we compared classifications before versus after March 1, 2003. If the EDA is a measure of access to care, one would expect a change in overall EDA scores as the number and proportion of uninsured ED visits increased. Therefore, our primary analysis was of changes in overall EDA scores after the OHP cutbacks. We also examined differences in EDA scores between payer categories and changes after the cutbacks for each payer category separately.

The EDA assigns a probability that a given ED visit falls into each of the four categories. We calculated the mean probabilities for all classifiable visits and for subgroups including payer categories and time periods before and after the OHP cutbacks. Mean probabilities were compared using t-tests to compare before versus after the OHP cutbacks and analysis of variance to compare payers. To avoid the impact of seasonal variation, we compared visits for calendar year 2002 to visits for 2004.

Because of the possibility of time trends in ED use that would not be reflected by a simple before/after analysis, we used data from the full 43 months to graph the mean probability of ED visits falling into each of the EDA categories by month, for all payer groups combined and for each payer group separately. Also, we assessed the change in mean monthly probabilities after the OHP cutbacks using an interrupted autoregressive integrated moving average (ARIMA) model with an indicator variable for the postcutback period (on or after March 1, 2003).37 The advantage of an interrupted ARIMA model is that it can estimate the effect of events (such as the policy change here) while taking autocorrelation and seasonality into account.
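As a simplified illustration of the before-versus-after comparison of mean monthly probabilities, the t-test step can be sketched as follows. This is our sketch, not the study's SAS code; the monthly values are hypothetical, and unlike the interrupted ARMA models described above, the sketch ignores autocorrelation and seasonality.

```python
import statistics as st

def welch_t(before, after):
    """Welch two-sample t statistic for the change in mean monthly
    EDA probability after the cutbacks. Simplified relative to the
    paper's interrupted ARMA models, which also account for
    autocorrelation and seasonality in the monthly series."""
    m1, m2 = st.mean(before), st.mean(after)
    v1, v2 = st.variance(before), st.variance(after)  # sample variances
    return (m2 - m1) / (v1 / len(before) + v2 / len(after)) ** 0.5

# Hypothetical monthly mean probabilities of "nonemergency" visits
pre = [0.36, 0.37, 0.38, 0.37]    # illustrative precutback months
post = [0.35, 0.36, 0.34, 0.35]   # illustrative postcutback months
t_stat = welch_t(pre, post)       # negative: mean fell after the cutbacks
```

With the sample sizes in this study (hundreds of thousands of visits), even a 0.01 change in mean probability can be statistically significant, which is why the paper emphasizes the magnitude of the changes rather than their p-values.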

Because the concept of "mean probability" that ED visits fell into one of the categories is difficult for some clinicians and policy-makers to incorporate into decision-making, we also created three dichotomous outcomes: 1) ≤75% versus >75% probability of being an emergency, neither primary care–treatable nor preventable; 2) ≤75% versus >75% probability of being an emergency requiring ED care but potentially preventable or avoidable; and 3) ≤75% versus >75% probability of being either nonemergency or emergency, primary care–treatable. These variables can be interpreted as flagging visits that have a substantial probability of being unavoidable emergencies, probably preventable emergencies, or probably primary care–treatable conditions.28 Associations between payer type and each of these dichotomous outcomes were assessed with chi-square tests; associations between time period and each outcome were calculated as relative risks (RRs) with 95% confidence intervals (CIs).

In addition to calculating unadjusted RRs for each of these outcomes in 2004 versus 2002, we developed a logistic regression model that compared proportions of ED visits in each category before versus after March 1, 2003, while adjusting for patient age and gender. Additional covariates in this model included monthly and secular time trends, as well as a binary variable to indicate whether the observations occurred in the postcutback period (on or after March 1, 2003). The correlation structure was modeled as first-degree autoregressive [AR(1)], which allows for correlation in the error term of sequential observations. Results were qualitatively insensitive to less restrictive assumptions about the correlation structure. This approach allowed us to use all 43 months of data.

We also considered the possibility that the EDA's methodology of omitting admitted patients from the analysis might bias the results. For instance, consider two groups of patients, one of which had a 30% admission rate from ED visits and the other of which had a 10% admission rate. The EDA scores assigned to the nonadmitted patients might be identical between the two groups, but we would still view the group with more admissions as having more emergency, ED care needed visits. Therefore, in the analyses of dichotomous outcomes, we supplemented the EDA analysis by comparing the proportion of ED visits in each payer class leading to hospital admission before versus after March 1, 2003.

RESULTS

There were 2,265,769 ED visits to the study EDs by Oregon residents during the 43-month data collection period. Of these visits, 1,864 (0.08%) had missing insurance information and were omitted from analysis, leaving 2,263,905 visits. Of the 2,263,905 visits, the EDA could not assign probabilities to 1,141,158 (50%). Visits to which the EDA did not apply included 335,832 (15%) admissions and, among nonadmitted patients, 588,401 injuries; 14,933 alcohol-related visits; 5,233 drug abuse–related visits; 55,857 visits for other psychiatric problems; and 140,902 visits with diagnoses that had occurred too rarely in the EDA derivation data set for probabilities to be assigned. Thus, 1,122,747 (50%) visits remained for classification into the categories of urgency. Only 817,888 (36%) visits were assigned at least a 75% probability of falling into one of the EDA categories.

For those visits that the EDA could classify, there were only small differences between payers (Table 2). The largest differences were between Medicare patients and other payer classes, a finding that may be explained better by differences in age and disease patterns than by differences in access to care. Comparing 2004 versus 2002, changes in mean probabilities after versus before the cutbacks never exceeded 0.02 (Table 2). Because of the large sample size, most differences were statistically significant even when the results shown in Table 2 appear identical because of rounding.
Review of graphs of EDA scores by month failed to reveal meaningful patterns (graphs are available as an online Data Supplement at http://www.blackwell-synergy.com/doi/suppl/10.1111/j.1553-2712.2008.00108.x/suppl_file/acem_108_sm_FigureS1.pdf). Results from interrupted ARIMA analysis using the full 43 months of data are shown in Table 3, including lags for autoregressive (AR) terms, moving average (MA) terms, and estimates for changes in mean probabilities after versus before the OHP cutbacks. Because all EDA probabilities by month, for all payers combined and for each payer category, were stationary series, there was no integrating process, so ARMA models were fitted. Estimates for changes in mean probabilities from the interrupted ARMA models were consistent with results from comparing 2002 and 2004 data. Although statistically significant differences were often detected, the maximum change was only 0.025.

In examining the dichotomous outcomes by payer class, OHP-sponsored and uninsured visits were more likely to be classified as nonemergency or emergency, primary care–treatable and were least likely to be assigned to the categories of ED needed, potentially preventable or ED needed, not preventable (Table 4). For all payers combined, there was a 1% decrease in the proportion of visits with ≥75% probability of being nonemergency or emergency, primary care–treatable and a 12% increase in the proportion likely to be ED care needed, not preventable after the OHP cutbacks (Table 4), the opposite of what the EDA's conceptual framework predicts with a deterioration in access to care. Among uninsured patients, the increase in the proportion of uninsured visits classified as ED needed, not preventable (RR = 1.16) was also greater than the increase in the proportion classified as ED care needed, potentially preventable (RR = 1.09), and in nonemergency or emergency, primary care–treatable (RR = 0.99). There was an unexpected 22% increase in the proportion of commercially insured visits classified as ED needed, not preventable.

There were statistically significant increases in the proportion of ED visits leading to admission for all payer groups (Table 5). The largest proportional increase was for the uninsured, rising from 4.7% to 7.0% (RR = 1.51).

The differences in these dichotomous outcomes remained small after adjusting for patient characteristics and seasonal and secular trends (Table 6), with the exception of the pattern for hospital admission. For all payers, there was a 7% increase in the odds that an ED visit would lead to hospitalization, a 2% decrease in the odds that a visit would be classified as nonemergency or emergency, primary care–treatable, a 6% decrease in the odds that a visit was ED care needed, potentially preventable, and a 5% increase in the odds that a visit was ED needed, not preventable. As with the unadjusted analyses, these findings are the opposite of what the EDA's conceptual framework predicts with deterioration in access to care.
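The unadjusted relative risks and 95% CIs reported here follow the standard log-RR (Katz) construction; a minimal sketch with hypothetical counts (our illustration, not the study's data or code):

```python
import math

def relative_risk(events_post, n_post, events_pre, n_pre):
    """Relative risk of a flagged outcome (e.g., a visit with >75%
    probability of 'ED needed, not preventable') in the postcutback
    vs. precutback period, with a 95% CI from the standard
    log-RR variance formula."""
    rr = (events_post / n_post) / (events_pre / n_pre)
    se = math.sqrt(1 / events_post - 1 / n_post
                   + 1 / events_pre - 1 / n_pre)
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    return rr, lo, hi

# Hypothetical: 50 flagged visits of 1,000 post vs. 40 of 1,000 pre
rr, lo, hi = relative_risk(50, 1000, 40, 1000)
```

The same construction applies to the admission-rate comparison (e.g., the RR = 1.51 rise in uninsured admissions), although with the visit counts in this study the intervals are far narrower than in this toy example.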
For uninsured patients, the adjusted odds that a visit led to hospital admission rose by 51% after the cutbacks. There were also increases in the adjusted odds of admission for OHP and Medicare, though of smaller magnitude. In adjusted analyses by payer, no odds ratios for changes in the EDA were significantly different from 1.0, except for commercially insured patients, for whom there was a decrease in the odds that a visit was classified as ED needed, potentially preventable and an increase in the odds that a visit was classified as ED needed, not preventable.

DISCUSSION

Summary of Key Findings

Application of the EDA to data from 22 Oregon EDs detected some differences between payer groups but failed to detect substantial differences before versus after the OHP cutbacks, after adjusting for confounding variables. The changes that we found were small in magnitude, inconsistent in direction, and difficult to interpret.

Why Did the EDA Fail to Find Important Differences?

One possible explanation for our inability to detect differences using the EDA was that there were no






Table 2
Changes in ED Algorithm before versus after the OHP Cutbacks: Mean Probability That an ED Visit Falls into Each Category

                                                   Nonemergency                         Emergency, Primary Care–Treatable
Payer                                              Both Years   2002   2004   p-Value*  Both Years   2002   2004   p-Value
All payers (N = 630,328)                           0.36         0.37   0.35
Uninsured (n = 106,407)
OHP (n = 187,041)
Commercial (n = 206,607)
Medicare (n = 102,061)
p-Value (comparing payers, both years combined)