Journal for Healthcare Quality

Improving Wait Times and Patient Satisfaction in Primary Care

Melanie Michael, Susan D. Schaffer, Patricia L. Egan, Barbara B. Little, Patrick Scott Pritchard

Abstract: A strong and inverse relationship between patient satisfaction and wait times in ambulatory care settings has been demonstrated. Despite its relevance to key medical practice outcomes, timeliness of care in primary care settings has not been widely studied. The goal of the quality improvement project described here was to increase patient satisfaction by minimizing wait times using the Dartmouth Microsystem Improvement Curriculum (DMIC) framework and the Plan-Do-Study-Act (PDSA) improvement process. Following completion of an initial PDSA cycle, significant reductions in mean waiting room and exam room wait times (p = .001 and p = .047, respectively) were observed along with a significant increase in patient satisfaction with waiting room wait time (p = .029). The results support the hypothesis that the DMIC framework and the PDSA method can be applied to improve wait times and patient satisfaction among primary care patients. Furthermore, the pretest–posttest preexperimental study design employed provides a model for sequential repetitive tests of change that can lead to meaningful improvements in the delivery of care and practice performance in a variety of ambulatory care settings over time.

Keywords: ambulatory/physician office; community/public health; patient satisfaction; performance improvement models; quality improvement

Ambulatory healthcare is the largest and most widely used segment of the American healthcare system (Schappert & Rechtsteiner, 2008). Office visits in these settings account for 25% of U.S. healthcare expenditures (Centers for Medicare and Medicaid, 2010), fueling pressure for increased accountability from consumers, employers, and payers. In Crossing the Quality Chasm, the Institute of Medicine's (2001) Committee on Quality of Healthcare in America defines six aims for improving healthcare in the United States: safety, effectiveness, patient-centeredness, timeliness, efficiency, and equity. Despite its relevance to practice outcomes and patient satisfaction, timeliness of care in office and other ambulatory care settings is among the least studied (Leddy, Kaldenberg, & Becker, 2003). A strong and inverse relationship between patient satisfaction and wait times in primary care and specialty care physician offices has been demonstrated (Leddy et al., 2003; Press Ganey Associates, Inc. [Press Ganey], 2009). A large proportion of these studies focused on wait time in waiting rooms; however, the amount of time patients spend waiting in an exam room is also important. A large-scale survey of 2.4 million patients across the United States conducted by Press Ganey in 2008 revealed that among a list of key variables associated with patient satisfaction, the fifth strongest correlation was between exam room wait time and the likelihood of recommending the practice to others (Press Ganey, 2009). Medical practices that continually work to minimize wait times can expect to see significant improvement in the overall satisfaction of their patients (Press Ganey, 2009) and associated medical practice outcomes (Drain, 2007; Press Ganey, 2007a; Saxton, Finkelstein, Bavin, & Stawiski, 2008; Stelfox, Gandhi, Orav, & Gustafson, 2005). Furthermore, reducing wait times can lead to improved financial performance of the practice (Drain, 2007; Garman, Garcia, & Hargreaves, 2004; Nelson et al., 2007; Press Ganey, 2007b).

Journal for Healthcare Quality Vol. 35, No. 2, pp. 50–60. © 2013 National Association for Healthcare Quality.

The authors have disclosed they have no significant relationships with, or financial interest in, any commercial companies pertaining to this article.

Intended Improvement

The purpose of the quality improvement (QI) pilot project described here was to increase patient satisfaction by minimizing wait times in a Florida county health department (CHD) Adult Primary Care Unit (APCU) practice using the Dartmouth Microsystem Improvement Curriculum (DMIC) framework (Nelson, Batalden, & Godfrey, 2007) and the Plan-Do-Study-Act (PDSA) improvement process (Institute for Healthcare Improvement, n.d.). Key study objectives included (a) identification of factors that contribute to long waiting room and exam room wait times, (b) identification of opportunities for improvement, (c) implementation of one or more process improvements using the PDSA model for improvement, and (d) evaluation of the impact on patient wait times, patient satisfaction with wait times, and overall satisfaction with the care experience. Project approval was obtained from the Florida Department of Health and the University of Florida Behavioral/Non-Medical Institutional Review Boards.


Methods

Setting

The CHD where the pilot was conducted is the principal primary care safety net provider in the community, with three practice sites and an aggregate practice panel of more than 35,000 patients (Florida Department of Health, 2010). The study was conducted in the APCU at the Health Department's central practice location. In a typical month the practice team in this unit, consisting of two physicians and two advanced practice nurses (APN), provides care for approximately 1,500 patients. Approximately 79% of patients are White, 16% are Black/African American, 2% are Asian, and 23% are Hispanic. Prevalent health problems include hypertension, diabetes, hyperlipidemia, depression, and chronic pain. Patient satisfaction survey scores in the wait time category have historically lagged other satisfaction measures by six to ten percentage points. Clinic managers reported complicated visit routines involving too many steps and delays as key obstacles to timely patient care.

Planning the Intervention

The DMIC framework represents a systems-based approach to clinical QI. It is based on the premise that the clinical microsystem is the smallest replicable healthcare unit, which is, in turn, the essential building block of larger health systems. Each microsystem consists of "a small group of people who work together on a regular basis to provide care and the subpopulation of patients who receive that care" (Nelson et al., 2007, p. 233). Four key principles for improving the performance of all microsystems are fundamental to the DMIC framework: (a) engagement of everyone in the microsystem in continuous process and work improvement, (b) intelligent use of data, (c) establishment of an intimate understanding of the needs of patients served by the microsystem, and (d) development and maintenance of positive and productive connections with other related microsystems (Nelson et al., 2008). The PDSA model for improvement is the method of choice within the DMIC for testing ideas for improvement that can lead to higher performance. It represents a series of structured activities, organized cyclically in four phases, which can be used to conduct repetitive tests of change (i.e., process improvements) in rapid sequence.

Establishing a functional relationship between process change and healthcare outcome variation is fundamental to the PDSA QI methodology (Speroff & O'Connor, 2004). The pretest/posttest preexperimental design is consistent with these objectives and was selected for this project. It is also consistent with iterative learning, which is fundamental to the PDSA method.

An eight-phase implementation plan, based on principles and concepts of the DMIC as described by Nelson et al. (2007), was followed. The project unfolded over a period of 6 months. Key objectives, activities, tools, and methods used in each phase are summarized in Table 1. Members of the project study team and APCU staff members met initially to complete the tasks associated with project phases one through three, which include defining, measuring, and analyzing drivers of patient dissatisfaction with wait times. The results are summarized in the Ishikawa diagram shown in Figure 1. Four main categories of causes emerged: front-end operations, back-end operations, patient work-up, and ancillary services.

Within this study design, as outlined in Table 1, the first phase of the PDSA cycle was launched in project phase four. Key tasks associated with this phase included selection of specific test of change strategies and collection of baseline data for future comparison. The highly participative multivoting method enabled the group to establish a clear set of priorities. Using this method, APCU team members decided to focus the intervention on front-end operations. In addition to tasks associated with patient registration, team members working in this area were also responsible for performing reception duties, answering phones, and responding to inquiries from patients and staff. These additional tasks resulted in a continuous stream of interruptions and delays in completion of registration processes. Baseline data collected during the preintervention data collection period revealed a mean waiting room wait time of 28 min and a mean exam room wait time of 14 min; initial wait time targets for each category were set at 20 and 10 min, respectively.

Implementation

In project phase five, three specific strategies were implemented with the goal of reducing interruptions for the front office staff and allowing them to focus on patient registration tasks. A temporary reception station was created in the main hallway just outside the entrance to the APCU. A receptionist from the clerical float pool was assigned to greet, assist, and direct patients and to field questions. The single multipatient sign-in sheet was replaced with a simple half-page form that allowed each patient to sign in on a separate sheet. This allowed the receptionist to deliver completed forms to the registration team in real time continuously throughout the day. Calls coming into the patient registration area were redirected to other staff members in the unit who were not responsible for direct patient care or services.

Table 1. Project Implementation Plan

Phase 1
  Key Objectives: Increase knowledge of APCU clinical microsystem and opportunities for improvement
  Key Activities: Assessment of clinical microsystem using the 5Ps (Nelson et al., 2007) framework
  Tools/Methods: Primary Care Practice Profile* assessment instrument

Phase 2
  Key Objectives: Identification and selection of a theme for improvement
  Key Activities: Group selection of a theme for improvement; alignment with the IOM (2001) six aims and the APCU mission
  Tools/Methods: Brainstorming; multivoting

Phase 3
  Key Objectives: Focus and align improvement efforts with improvement theme; connect theme to daily work processes
  Key Activities: Document global aim statement (Nelson et al., 2007); flow chart current patient flow process
  Tools/Methods: Global Aim Statement* template; patient flow analysis diagram

Phase 4
  Key Objectives: Plan: Define and focus improvement activities; connect improvement theme and aims to daily work processes
  Key Activities: Conduct cause and effect analysis; select and define hypothesis and initial test of change; define improvement/action plan strategies and data management plan; assign roles/responsibilities; collect pretest baseline wait time and patient satisfaction data for future comparison
  Tools/Methods: Specific Aim Statement* template; Ishikawa diagram template; data collection instruments

Phase 5
  Key Objectives: Do: Implement test of change/improvement
  Key Activities: Operationalize test of change; collect posttest of change wait time and patient satisfaction data
  Tools/Methods: Data collection instruments

Phase 6
  Key Objectives: Study: Evaluate impact of test of change/improvement
  Key Activities: Conduct data analysis; compare/analyze pre- and postimplementation wait times and patient satisfaction; create summary documents and reports; compare hypothesis to what actually happened
  Tools/Methods: Microsoft Access databases; Epi Info; data summary tables and graphs

Phase 7
  Key Objectives: Act: Prepare for next PDSA cycle
  Key Activities: Determine next steps; modify or abandon process improvement; define next test of change; plan and prepare for next PDSA cycle
  Tools/Methods: Based on evaluation and plan for next PDSA cycle

Phase 8
  Key Objectives: Follow through on improvement
  Key Activities: Document/communicate results; retain gains
  Tools/Methods: Project reports; data display tables and graphs; storyboard/data wall

*Accessible at www.clinicalmicrosystem.org.

Figure 1. Causes and Effects: Patient Dissatisfaction With Wait Times

Methods of Evaluation

The evaluation plan included two key wait time process measures: waiting room wait time and exam room wait time. Waiting room wait time was defined as the time elapsed between requesting that the patient be seated in the waiting room and the time he/she was called to be placed in an exam room. Exam room wait time was defined as the amount of time elapsed from the time the patient was seated in an exam room to the time the physician or APN entered the room. Convenience sampling was employed. Waiting room and exam room wait time data were collected for all APCU patients seen during the preimplementation and postimplementation wait time data collection periods. An instrument developed by study team members allowed APCU staff to record (a) time of patient arrival, (b) time the patient was seated in the waiting room, (c) time the patient was seated in an exam room, and (d) time the provider entered the exam room. Staff in the reception and registration areas were responsible for recording patient arrival and waiting room seating times. Clinical support staff, responsible for taking each patient's vital signs, recorded the exam room seating time for each patient. Each provider was responsible for recording his/her own exam room entry times. APCU staff members received training in data collection procedures prior to implementation of the first data collection exercise.

Wait time and patient satisfaction data were collected before and after implementing changes to the front-end operations. Each wait time data collection period lasted 1 week, concurrent with the first week of the pre- and postimplementation patient satisfaction survey data collection periods, which each lasted 2 weeks. Each of the wait time data collection periods was limited to 1 week based on an average weekly visit volume of approximately 375 encounters and the assumption that data would be collected on all, or nearly all, visits. Accordingly, both wait time and patient satisfaction data were collected for patients visiting the APCU during the first week of each data collection period, while only patient satisfaction data were collected for patients seen during the second week of each data collection period.
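Both wait time measures reduce to simple differences between the four recorded timestamps. The following sketch is purely illustrative; the field names and sample times are invented and do not come from the study's instrument or data:

```python
from datetime import datetime

def minutes_between(start: str, end: str) -> float:
    """Elapsed minutes between two 24-hour clock times on the same day."""
    fmt = "%H:%M"
    return (datetime.strptime(end, fmt)
            - datetime.strptime(start, fmt)).total_seconds() / 60

# Hypothetical record mirroring the instrument's four fields:
# (a) arrival, (b) seated in waiting room, (c) seated in exam room, (d) provider entry
visit = {"arrival": "09:02", "seated_waiting": "09:05",
         "seated_exam": "09:33", "provider_entry": "09:47"}

# Waiting room wait: seated in waiting room -> seated in exam room
waiting_room_wait = minutes_between(visit["seated_waiting"], visit["seated_exam"])
# Exam room wait: seated in exam room -> provider entry
exam_room_wait = minutes_between(visit["seated_exam"], visit["provider_entry"])
print(waiting_room_wait, exam_room_wait)  # 28.0 14.0
```

Averaging these per-visit differences over all visits in a collection period yields the mean wait times reported for the pre- and postimplementation weeks.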

Patient Satisfaction Instrument

Patient satisfaction was defined as (a) patient satisfaction with waiting room wait time, (b) patient satisfaction with exam room wait time, and (c) the likelihood of referring friends and relatives to the practice as a proxy measure associated with overall satisfaction and the likelihood of returning for care in the future. A patient satisfaction survey instrument developed and selected by the Health Resources and Services Administration (n.d.) for use in Federally Qualified Community Health Centers was used. The survey is in the public domain, available in English and in Spanish, and is accessible at http://bphc.hrsa.gov/patientsurvey/patients.htm. It is well suited to this project given the similarities of the safety net populations served by Community Health Centers and the CHD's primary care patient population. The instrument provides for anonymous collection of data and includes a total of 29 items in three response formats, including items that allow for collection of data specifically relating to patient satisfaction with wait times. A summary question asks patients to rate the likelihood of referring friends and relatives. Three open-ended questions provide an opportunity for patients to comment on what they like best about the practice and what they like least about the practice, and to make suggestions for improvement. A disadvantage associated with the use of the instrument is the lack of historical and comparison information on its psychometric properties. Cronbach's alpha was calculated for questions in the Likert-scale category, and the instrument was found to have high internal reliability (25 items; α = .98).

Patient satisfaction surveys were distributed at the point of care by study team members. Trained medical interpreters were available for non-English speaking patients. The oral invitation to participate was guided by a standard script that covered all relevant elements of informed consent. Study team members received information on the informed consent process prior to interacting with patients. Written informed consent was not required as no unique patient identifier information was collected and participation was voluntary. Patients were able to return completed surveys via a secure lock box located in the APCU checkout area or via the U.S. Postal Service using a stamped, self-addressed envelope. Approximately 86% (1,259/1,470) of patients seen in the APCU during the first and second data collection periods verbally accepted the initial invitation to participate. The remaining 14% either declined to participate or declined to speak with a study team member regarding participation. Both patient satisfaction survey data collection periods lasted 2 weeks. A parallel cutoff point of 20 calendar days was established for return of patient satisfaction surveys for both data collection periods. Surveys returned after the cutoff were excluded from data analysis.
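The internal-consistency statistic reported above (Cronbach's alpha over the Likert-scale items) follows a standard formula and can be reproduced with a short routine. The response matrix below is invented for illustration (6 respondents, 4 items, 5-point scale); it is not the study's survey data:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a respondents-by-items matrix of scores."""
    k = items.shape[1]                         # number of items
    item_vars = items.var(axis=0, ddof=1)      # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of respondents' total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-point Likert responses (rows = respondents, cols = items)
responses = np.array([
    [5, 4, 5, 5],
    [4, 4, 4, 5],
    [2, 2, 3, 2],
    [5, 5, 5, 4],
    [3, 3, 2, 3],
    [4, 5, 4, 4],
])
print(f"alpha = {cronbach_alpha(responses):.2f}")
```

An alpha near 1 indicates that the items move together across respondents, as in the study's reported result (25 items; α = .98).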

Analysis

All data were initially entered into two Microsoft Office Access databases created exclusively for the purpose of managing patient wait time data and patient satisfaction data, respectively. The data were subsequently imported into Microsoft Excel and analyzed using Excel and the Centers for Disease Control's Epi Info. Two primary analyses were conducted. The t-test was used to compare mean wait times prior to and following implementation of the test of change intervention. Chi-square was used to examine and compare patient satisfaction with waiting room and exam room wait times, as well as the likelihood of referring friends and family, for the pre- and postimplementation periods. An alpha level of .05 was used for all statistical tests.

Results

Sample Description

A comparison of the age and gender characteristics of the sample population and the entire APCU population is summarized in Table 2. The proportion of patients in the 18- to 44-year age group was significantly lower and the proportion in the 45- to 64-year age group was significantly higher in the sample population when compared to the entire APCU patient population. Wait time data were captured for 98% (349/355) of patients seen by APCU providers during the preimplementation wait time data collection period and for 97% (365/375) of patients seen during the postimplementation wait time data collection period. Missing and ambiguous data elements were identified in one or both data categories for 6% of visits sampled.


Table 2. Comparison of Demographic Characteristics of Patient Satisfaction Survey Participants and the APCU Population*

              Preimplementation    Postimplementation    APCU
              Participants         Participants          Population
Age Groups    (n = 262)            (n = 285)             (N = 10,057)
18–44         124 (47%)            133 (47%)             6,258 (62%)
45–64         119 (45%)            135 (47%)             3,240 (32%)
>65           19 (7%)              17 (6%)               559 (6%)

Gender        (n = 263)            (n = 284)             (N = 10,057)
Male          111 (42%)            126 (44%)             4,046 (40%)
Female        152 (58%)            158 (56%)             6,011 (60%)

χ² = 52.99, df = 4