
BRIEF REPORT

Objective Assessment and Thematic Categorization of Patient-audible Information in an Emergency Department

Xiao C. Zhang, MD, MS, Leo Kobayashi, MD, Markus Berger, MArch, Pranav M. Reddy, Darin B. Chheng, Sara A. Gorham, Shivany Pathania, Sarah P. Stern, Elio Icaza Milson, Gregory D. Jay, MD, PhD, and Jay M. Baruch, MD

Abstract

Objectives: The objective was to assess and categorize the understandable components of patient-audible information (e.g., provider conversations) in emergency department (ED) care areas and to initiate a baseline ED soundscape assessment.

Methods: Investigators at an academic referral hospital accessed 21 deidentified transcripts of recordings made with binaural in-ear microphones in patient rooms (n = 10) and spaces adjacent to nurses’ stations (n = 11), during ED staff sign-outs as part of an approved quality management process. Transcribed materials were classified by speaker (health care provider, patient/family/friend, or unknown). Using qualitative analysis software and predefined thematic categories, two investigators then independently coded each transcript by word, phrase, clause, and/or sentence for general content, patient information, and HIPAA-defined patient identifiers. Scheduled reviews were used to resolve any data coding discrepancies.

Results: Patient room recordings featured a median of 11 (interquartile range [IQR] = 2 to 33) understandable words per minute (wpm) over 16.2 (IQR = 15.1 to 18.4) minutes; nurses’ station recordings featured 74 (IQR = 47 to 109) understandable wpm over 17.0 (IQR = 15.4 to 20.3) minutes. Transcript content from patient room recordings was categorized as follows: clinical, 44.8% (IQR = 17.7% to 62.2%); nonclinical, 0.0% (IQR = 0.0% to 0.0%); inappropriate (provider), 0.0% (IQR = 0.0% to 0.0%); and unknown, 6.0% (IQR = 1.7% to 58.2%). Transcript content from nurses’ stations was categorized as follows: clinical, 86.0% (IQR = 68.7% to 94.7%); nonclinical, 1.2% (IQR = 0.0% to 19.5%); inappropriate (provider), 0.1% (IQR = 0.0% to 2.3%); and unknown, 1.3% (IQR = 0.0% to 7.1%). Limited patient information was audible on patient room recordings. Audible patient information at nurses’ stations was coded as follows (median words per sign-out sample): general patient history, 116 (IQR = 19 to 206); social history, 12 (IQR = 4 to 19); physical examination, 39 (IQR = 19 to 56); imaging results, 0 (IQR = 0 to 21); laboratory results, 7 (IQR = 0 to 22); other results, 0 (IQR = 0 to 3); medical decision-making, 39 (IQR = 10 to 69); management (general), 118 (IQR = 79 to 235); pain management, 4 (IQR = 0 to 53); and disposition, 42 (IQR = 22 to 60). Medians of 0 (IQR = 0 to 0) and 3 (IQR = 1 to 4) patient name identifiers were audible on in-room and nurses’ station sign-out recordings, respectively.

Conclusions: Sound recordings in an ED setting captured audible and understandable provider discussions that included confidential, protected health information and discernible quantities of nonclinical content.

From the Department of Emergency Medicine (XCZ, LK, PMR, DBC, SAG, SP, SPS, GDJ, JMB), Alpert Medical School of Brown University, Providence, RI; Lifespan Medical Simulation Center (LK), Providence, RI; the Department of Interior Architecture, Department of Industrial Design (MB, EIM), Rhode Island School of Design, Providence, RI; and the School of Engineering of Brown University (GDJ), Providence, RI. Received March 10, 2015; revisions received April 17, May 10, and May 14, 2015; accepted May 15, 2015. The PEAP program was supported by the Department of Emergency Medicine at the Alpert Medical School of Brown University and a 2014–2015 Brown-RISD Committee for Institutional Collaboration grant. The authors have no relevant financial information or potential conflicts to disclose. This material is based on work supported by the Department of Emergency Medicine, Alpert Medical School of Brown University; Lifespan Risk Management; Rhode Island Hospital (RIH); and University Emergency Medicine Foundation (UEMF). Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the Department, Lifespan, RIH, or UEMF. Supervising Editor: Richard Griffey, MD, MPH. Address for correspondence and reprints: Leo Kobayashi, MD; e-mail: [email protected].



© 2015 by the Society for Academic Emergency Medicine doi: 10.1111/acem.12762



ACADEMIC EMERGENCY MEDICINE 2015;22:1222–1225 © 2015 by the Society for Academic Emergency Medicine

The soundscapes of health care environments factor into the patient experience, the healing process, and provider effectiveness.1–3 Patient and staff discussions, routine and urgent announcements, device sounds, and ambient noise all contribute to what can be heard, some of which compromises patient privacy and confidentiality. Surveys have found that up to 45% of patients overhear provider conversations and that 10% do not have their expectations of privacy met.4–6 Examining and understanding what the patient hears and perceives in these settings may help promote provider awareness and emphasize behaviors that ensure optimal clinical communication, patient safety, and staff professionalism.

Investigators therefore initiated the Patient Environment Audibility and Perception (PEAP) program to 1) objectively examine the understandable information that patients (and staff) can hear in emergency department (ED) settings and 2) conduct a soundscape assessment (i.e., an assessment of all the component sounds heard in a particular environment) of a specific health care setting. The investigative team consisted of ED personnel; faculty, resident physicians, and medical students from the affiliated medical school; and industrial design faculty and students at an art and design college. To study the informational content of ED soundscapes and attendant risks to patient confidentiality, the program first targeted provider “sign-out” periods as defined, scheduled episodes of concentrated information exchange.4,7 This article describes the conduct and results of the objective assessment and qualitative analysis of what patients can potentially hear and understand in their ambient ED soundscape.

METHODS

Study Design

This was an observational study; the overall program was approved as a quality management initiative by the site’s clinical leadership and risk management department. Compliance with institutional and government regulations was confirmed with legal counsel, and the institutional review board exempted the program.

Study Setting and Population

The program was conducted between October 2014 and February 2015 in the urgent care areas of an academic referral hospital ED. Study site urgent care area sign-outs are conducted separately by physicians and nurses in central, partially enclosed, provider-only areas. All ED clinical personnel were notified of the recording sessions and general objectives prior to program start; individual sessions were not explicitly announced. Recorded locations were screened with strict protocols, and written consent was obtained from the occupant(s) when recording in patient rooms. If patients were unwilling or unable to consent, recording was attempted in an alternate study space at the scheduled time.

Study Protocol

Investigators collaborated with a professional sound engineer and interior architecture experts to establish applicable and pertinent sound recording protocols. Four recordists (DBC, SAG, SP, SPS) used a digital recorder with in-ear microphones to complete 24 scheduled recording sessions that were block-randomized by sign-out time (7 AM, 3 PM/5 PM, 11 PM), day-of-week, urgent care area (A, B, C), and recording space (three hallway-opening, single-patient rooms with curtain dividers and three patient-accessible areas adjacent to nurses’ stations, selected a priori). Recordists were instructed to keep the recording environment in situ during the sessions (see Data Supplement S1, available as supporting information in the online version of this paper, for details). Upon securing the session location, the recordist positioned himself or herself at the predetermined patient room or nurses’ station and recorded for the full sign-out duration or for at least 15 minutes. Encrypted recordings were subsequently played back for transcription of audible content through professional-grade headphones; the playback volume was precisely adjusted to accurately replicate recorded sound audibility. Program personnel then scrubbed the transcripts of all eighteen Health Insurance Portability and Accountability Act (HIPAA) patient identifiers (e.g., patient names were replaced with “PHI-patient.name”).

Data Analysis

Each deidentified transcript was assessed independently by two investigators (XCZ, LK) and then jointly reviewed to ensure consistent coding (intraclass correlation coefficients were calculated). All transcripts were characterized by location and duration of recording and number of understandable words. Transcript content was initially categorized by speaker, i.e., health care provider, patient/family/friend, or unknown; for patient room recordings, material attributed to individuals inside the room was excluded from analysis. Words, phrases, clauses, and/or sentences in the nonexcluded transcript content were then coded with NVivo 10 software into predefined thematic categories. Specifically, content that was explicitly inappropriate, defined a priori as profanity and derogatory comments, was first coded with speaker category. General content themes were coded next: clinical (e.g., provider discussion of patient care), nonclinical (e.g., provider discussion of a recreational topic), or unknown. NVivo-calculated quantification data for each category were reported as proportions of the total transcript (e.g., 86% of recording 5 was categorized as clinical). After general characterization, each transcript was then coded for understandable health information pertaining to active ED patients, using the following categories: history, physical examination, test results, medical decision-making, management, and disposition, along with HIPAA-defined protected health information (PHI) identifiers. These NVivo quantification data were reported as approximate counts of audible words per sign-out sample by category. (Investigators used understandable word counts as a primary, overestimating metric, rather than aggregates such as phrases, because a single word in the proper context can inadvertently convey significant, understandable PHI.)
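The deidentification step replaced each detected identifier with a labeled placeholder; the paper gives “PHI-patient.name” as the token used for patient names. In the study this scrubbing was performed by program personnel against all 18 HIPAA identifier types. The sketch below, in Python, only illustrates the substitution logic; the name list, record-number pattern, and the “PHI-patient.mrn” token are hypothetical.

```python
import re

# Hypothetical identifier inputs for illustration only; the study's scrubbing
# covered all 18 HIPAA identifier types and was performed by program personnel.
PATIENT_NAMES = ["John Doe", "Jane Roe"]                 # assumed example names
MRN_PATTERN = re.compile(r"\bMRN\s*\d{6,8}\b", re.I)     # assumed record-number format


def scrub_transcript(text: str) -> str:
    """Replace patient names and record numbers with PHI placeholder tokens."""
    for name in PATIENT_NAMES:
        text = re.sub(re.escape(name), "PHI-patient.name", text, flags=re.IGNORECASE)
    # "PHI-patient.mrn" is an assumed placeholder; only "PHI-patient.name" is from the paper.
    return MRN_PATTERN.sub("PHI-patient.mrn", text)


print(scrub_transcript("Sign-out for John Doe, MRN 1234567, left ankle swollen."))
# Output: Sign-out for PHI-patient.name, PHI-patient.mrn, left ankle swollen.
```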



RESULTS

Twenty-two sign-out recordings were completed after six rescheduled sessions (four missed sessions, one patient refusal, one incarcerated patient); one recording was lost to device malfunction. Ten recordings were in-room and 11 were at nurses’ stations; five morning, eight afternoon, and nine evening sign-outs were recorded, including six weekend sign-outs.

Patient room recordings had a median duration of 16.2 minutes (interquartile range [IQR] = 15.1 to 20.7 minutes) and featured a median of 171 (IQR = 40 to 569) audible, understandable words; 11 (IQR = 2 to 33) words per minute (wpm) were understandable. Nurses’ station recordings lasted a median of 17.0 (IQR = 15.6 to 20.5) minutes and featured 1,467 (IQR = 1,133 to 1,649) audible, understandable words; 74 (IQR = 47 to 109) wpm were understandable. A median of 22.7% (IQR = 0.0% to 65.0%) of patient room recording transcript content was attributable to patients, family, and staff inside the room and was excluded from analysis.

Transcript content was categorized as clinical for 44.8% (IQR = 17.7% to 62.2%) of patient room recordings and 86.0% (IQR = 68.7% to 94.7%) of nurses’ station recordings (see Table 1 for details). Recordings from patient rooms contained limited audible patient information. Patient information audible at nurses’ stations was coded per sign-out sample as follows: general patient history, 116 (IQR = 19 to 206) words; social history, 12 (IQR = 4 to 19) words; physical examination, 39 (IQR = 19 to 56) words; laboratory results, 7 (IQR = 0 to 22) words; medical decision-making, 39 (IQR = 10 to 69) words; management (general), 118 (IQR = 79 to 235) words; and disposition, 42 (IQR = 22 to 60) words. Medians of 0 (IQR = 0 to 0) and 3 (IQR = 1 to 4) patient name identifiers were audible on in-room and nurses’ station recordings, respectively. Intraclass correlation coefficients were high (see Table 1).
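The audibility metrics above are simple summary statistics over the per-recording transcripts. The sketch below, in Python, shows that computation with hypothetical word counts and durations (not the study data); quantile conventions may differ slightly from the statistical software the investigators used.

```python
import statistics

# Hypothetical (understandable word count, duration in minutes) per recording;
# the actual study values appear in Table 1, not here.
recordings = [(171, 16.2), (40, 15.1), (569, 18.4), (120, 17.0), (95, 16.0)]

# Understandable words per minute for each recording
wpm = [words / minutes for words, minutes in recordings]


def median_iqr(values):
    """Return the median and the 25th/75th percentiles of a list of numbers."""
    q1, _, q3 = statistics.quantiles(values, n=4)  # three quartile cut points
    return statistics.median(values), q1, q3


med, q1, q3 = median_iqr(wpm)
print(f"Understandable wpm: median {med:.0f} (IQR = {q1:.0f} to {q3:.0f})")
```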

Table 1. General characteristics of study recordings and the results of transcript coding for content, patient information, and HIPAA-defined patient identifiers.

| | ICC | Patient Room Recording, Median (IQR); n = 10 | Nurses’ Station Recording, Median (IQR); n = 11 | Example Transcript Content |
| Recording description | | | | |
| Duration (min:sec) | n/a | 16:09 (15:07–18:41) | 17:00 (15:38–20:28) | n/a |
| Total recorded word count (audible, understandable; per sign-out recording) | n/a | 574 (156–1,347) | 1,467 (1,135–1,661) | n/a |
| Total analyzed word count (excluding in-room material; per sign-out recording) | n/a | 171 (40–569) | 1,467 (1,133–1,649) | n/a |
| Transcript coding by thematic category for content* (% of transcript) | | | | |
| Inappropriate content (provider) (e.g., swearing, disparaging comments) | 0.946 | 0 (0.0–0.0) | 0.1 (0.0–2.3) | “Why should I go clean up after somebody else that’s lazy?” |
| Clinical, appropriate content | 0.997 | 44.8 (17.7–62.2) | 86.0 (68.7–94.7) | “I would send a culture.” |
| Nonclinical content | 0.982 | 0.0 (0.0–0.0) | 1.2 (0.0–19.5) | “Why do I have a holiday balance of 16?” |
| Content from other patients and accompanying individuals | 0.994 | 0.3 (0.0–3.0) | 1.5 (0.0–2.1) | n/a |
| Unknown (inadequate content and/or context for categorization) | 0.997 | 6.0 (1.7–58.2) | 1.3 (0.0–7.1) | n/a |
| Transcript coding by thematic category for patient information* (words per sign-out sample) | | | | |
| Patient history (general) | 0.956 | 0 (0–27) | 116 (19–206) | “She was 43 . . . the one with breast cancer.” |
| Patient history (social, including substance abuse) | 0.959 | 0 (0–0) | 12 (4–19) | “He was the drunk one and kept trying to walk.” |
| Physical examination | 0.597 | 0 (0–2) | 39 (19–56) | “His left ankle is swollen.” |
| Imaging test results | 0.870 | 0 (0–0) | 0 (0–21) | “I just saw her x-ray, broken arms.” |
| Laboratory test results | 0.926 | 0 (0–0) | 7 (0–22) | “Glucose is 375.” |
| Miscellaneous test results | 0.665 | 0 (0–0) | 0 (0–3) | “His urine is clear.” |
| Medical decision-making | 0.786 | 0 (0–4) | 39 (10–69) | “Consider MRI, he’s got more focal deficits . . .” |
| General patient care | 0.871 | 2 (0–55) | 118 (79–235) | “I need a negative pressure room for my patient in 17.” |
| Pain management | 0.940 | 2 (0–5) | 4 (0–53) | “I gave him morphine . . . he’s gonna get . . . 4.” |
| Disposition | 0.917 | 0 (0–9) | 42 (22–50) | “She is probably going to be admitted to control the migraines.” |
| HIPAA-defined patient identifiers* (identifiers per sign-out sample) | | | | |
| Patient name | n/a | 0 (0–0) | 3 (1–4) | n/a |
| Other patient identifier | n/a | 0 (0–0) | 0 (0–1) | n/a |

HIPAA = Health Insurance Portability and Accountability Act; ICC = intraclass correlation coefficient.
*Analysis of provider content of transcript only, unless otherwise specified.
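Table 1 reports intraclass correlation coefficients (ICCs) for agreement between the two coders’ per-transcript counts in each category. The study does not state which software computed these ICCs; the sketch below shows one way to obtain them in Python, assuming the third-party pingouin package and hypothetical coder counts (not the study data).

```python
import pandas as pd
import pingouin as pg  # third-party package; assumed available (pip install pingouin)

# Hypothetical word counts assigned by two independent coders to the same five
# transcripts for a single thematic category (illustrative values only).
data = pd.DataFrame({
    "transcript": [1, 2, 3, 4, 5] * 2,
    "coder":      ["A"] * 5 + ["B"] * 5,
    "count":      [116, 19, 206, 40, 88, 110, 22, 198, 45, 90],
})

# Long-format input: one rating per row, identified by target (transcript) and rater (coder).
icc = pg.intraclass_corr(data=data, targets="transcript",
                         raters="coder", ratings="count")
print(icc[["Type", "ICC"]])  # several ICC forms are returned; choose the one matching the design
```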


DISCUSSION

As a workplace improvement study, the multiphasic PEAP program is working to address a particular environmental issue that affects the well-being and safety of ED patients and providers. Applying a novel research methodology, investigators recorded audible information from the patient’s perspective and analyzed it to characterize what patients can actually hear during their ED stays at our institution. The transcription and thematic coding of the recordings confirmed that readily comprehensible patient information can be overheard, occasionally with specific and traceable HIPAA-defined patient identifiers such as names.

Although patient privacy and confidentiality are held as essential requirements of medical practice, the study observations are consistent with existing literature indicating recurring problems with the protection of personal health information.4–6 Specifically, our results corroborate a persistent vulnerability in how patient information is handled and communicated, alongside the absence of formal criteria with realistic thresholds for what constitutes “acceptable” exposure during the course of ED patient care. This concern is compounded by the complementary finding that a discernible portion of several recorded patient-audible discussions, occasionally as much as 20%, was nonclinical in nature. Even though patients’ perceptions were not directly studied (i.e., whether individual participating patients did in fact hear and understand what was audible is unknown), it seems reasonable to expect that their expectations and satisfaction could be adversely affected by both identified elements.

Methods of optimally disseminating program findings are being explored to increase ED provider insight and awareness of the need for mindful and appropriate communications. Facilities and settings that use alternative communication modalities such as “walk rounds” and on-body communicators likely face additional difficulties, along with expanded opportunities to balance what is heard against what is deemed acceptable. Program findings, in conjunction with ongoing provider environment soundscaping and a deeper assessment of patient perceptions, will help guide mitigatory interventions.

LIMITATIONS

Operational constraints and funding limited recordings to 3% of sign-outs in three of eight ED areas during the program period; this narrow sampling severely limits the generalizability of the findings within and beyond the care areas studied. Recordist presence may have affected the behavior of those being recorded, potentially altering the nature, quantity, or audibility of recorded materials. Patients were not interviewed about what they heard or understood. The transcription and deidentification processes removed cues that could have altered content categorization. Because this was an exploratory program, the thematic codes employed were broad, and coding was completed by only two physician investigators.

CONCLUSIONS

Sound recordings in an ED patient care setting captured audible and understandable provider discussions that included confidential, protected health information and discernible quantities of nonclinical content. Promoting provider awareness and emphasizing behaviors that ensure optimal clinical communication, patient safety, and staff professionalism should be part of mitigation efforts.

The authors acknowledge Janette Baird, PhD, of the Department of Emergency Medicine at the Alpert Medical School of Brown University for her statistical expertise and input; James R. Moses of the Department of Music at Brown University for his technical expertise and input; and Marvin Abdalah of Rhode Island Hospital Interpreter Services and Rosalie Berrios Candelaria of the research team at Rhode Island Hospital Emergency Department for their assistance in translating the study session data recording consent forms into Spanish.

References

1. Bub B, Friesdorf W. Ergonomics; noise and alarms in healthcare–an ergonomic dilemma. In: Carayon P, ed. Handbook of Human Factors and Ergonomics in Health Care and Patient Safety. Mahwah, NJ: Lawrence Erlbaum, 2007:347–63.
2. Cvach M, Dang D, Foster J, Irechukwu J. Clinical alarms and the impact on patient safety. Initiatives in Safe Patient Care 2. Burlington, VT: Saxe Healthcare Communications, 2011.
3. Welch SJ, Cheung J. Strategies for improving communication in the emergency department: mediums and messages in a noisy environment. Jt Comm J Qual Patient Saf 2013;39:240–88.
4. Mlinek EJ, Pierce J. Confidentiality and privacy breaches in a university hospital emergency department. Acad Emerg Med 1997;4:1142–6.
5. Olsen JC, Sabin BR. Emergency department patient perceptions of privacy and confidentiality. J Emerg Med 2003;25:329–33.
6. Karro J, Dent AW, Farish S. Patient perceptions of privacy infringements in an emergency department. Emerg Med Austral 2005;17:117–23.
7. Cheung DS, Kelly JJ, Beach C, et al. Improving handoffs in the emergency department. Ann Emerg Med 2010;55:171–80.

Supporting Information

The following supporting information is available in the online version of this paper:

Data Supplement S1. Study recording session protocol.