EDITORIALS

substitution study that should have been excluded, the odds ratio for mortality in HITH becomes a near-statistically significant 0.76 (95% CI, 0.57–1.01; P = 0.0599)! The fact that it is not significant is probably a type II error. Interestingly, after removing the failed HITH trial, both groups in the Cochrane analysis show an odds ratio of about 0.76 for mortality, indicating homogeneity. Even with borderline statistical significance, a one-quarter reduction in mortality from 17.8% to 13.4% (a number needed to treat in HITH of 25 to prevent one death) is clinically significant.

Assessment of function in HITH studies shows two patterns. Studies in which HITH substituted for hospital admission found that physical and cognitive function were improved.9,10 In studies in which patients are discharged early to HITH, the general focus on rehabilitation means that both groups attain comparable function. There are insufficient data on nursing home placement to draw conclusions.

The problems with the financial analyses are similar, but simpler. Services where HITH is not a substitute for in-hospital care, but merely add-on care, are bound to be more expensive, no matter how sophisticated the economic analysis.4 Where HITH substitutes for in-hospital care, and the service works at reasonable capacity, HITH is cheaper than hospital care.11

All the pieces are in place, though more evidence is needed to achieve statistical significance. The evidence clearly leads towards the conclusion that HITH offers better health outcomes and a reduction in costs.

Author details
Gideon A Caplan, MBBS, FRACP, Director, Specialist Geriatrician and Acting Director,1 and Conjoint Associate Professor2
1 Hospital in the Home, Post Acute Care Services, Prince of Wales Hospital, Sydney, NSW.

2 School of Public Health and Community Medicine, University of New South Wales, Sydney, NSW.
Correspondence: [email protected]

References
1 Leape LL, Brennan TA, Laird N, et al. The nature of adverse events in hospitalised patients: results of the Harvard Medical Practice Study II. N Engl J Med 1991; 324: 377-384.
2 Larkins RG, Martin TJ, Johnston CI. The boundaryless hospital — a commentary. Aust N Z J Med 1995; 25: 169-170.
3 Shepperd S, Iliffe S. Hospital at home versus in-patient hospital care [review]. Cochrane Database Syst Rev 2005; (3): CD000356.
4 Shepperd S, Harwood D, Gray A, et al. Randomised controlled trial comparing hospital at home care with inpatient care. II: cost minimisation analysis. BMJ 1998; 316: 1791-1796.
5 Caplan GA, Ward JA, Brennan N, et al. Hospital in the home: a randomised controlled trial. Med J Aust 1999; 170: 156-160.
6 Canet J, Raeder J, Rasmussen LS, et al. Cognitive dysfunction after minor surgery in the elderly. Acta Anaesthesiol Scand 2003; 47: 1204-1210.
7 Leff B, Burton L, Mader SL, et al. Hospital at home: feasibility and outcomes of a program to provide hospital-level care at home for acutely ill older patients. Ann Intern Med 1997; 127: 989-995.
8 Caplan GA, Coconis J, Sayers A, Board N. A randomised controlled trial of Rehabilitation of Elderly and Care at Home Or Usual Treatment (The REACH OUT Trial). Age Ageing 2006; 35: 60-65.
9 Leff BA, Burton L, Burl J, et al. Return of physical function among patients eligible for home hospital treatment. The Johns Hopkins home hospital national demonstration and evaluation study. J Am Geriatr Soc 2003; 51 Suppl: S33.
10 Caplan GA, Coconis J, Woods J. Effect of hospital in the home treatment on physical and cognitive function: a randomized controlled trial. J Gerontol Biol Sci Med Sci 2005; 60: 1035-1038.
11 Board N, Brennan N, Caplan GA. A randomised controlled trial of the costs of hospital as compared with hospital in the home for acute medical patients. Aust N Z J Public Health 2000; 24: 305-311.

Should clinical software be regulated? Enrico W Coiera and Johanna I Westbrook

New Australian evaluation guidelines will help inform the debate

It takes something like 10 years for a new compound to go from laboratory to clinical trial, and many more before a drug's safety and efficacy are proven. Why isn't clinical software (which might check for drug–drug interactions and dosage errors, and generate alerts and recommendations to influence prescriber behaviour) treated as rigorously?1

Today, anybody with programming skill could create a rudimentary electronic prescribing package and put it directly onto the desktop of a general practitioner without regulatory approval. No doubt the stand-alone software in routine clinical use has undergone rigorous evaluation by its developers, but in most countries there is no specific regulation that requires this. Commercial vendors still sometimes sell prescribing systems with significant gaps in functionality.2 Some hospital prescribing systems are even sold devoid of the decision rules that check for errors or guide prescribing. The expectation is that a hospital drug committee will have expertise in the development and maintenance of computational knowledge bases, an arcane and highly specialised skill set if there ever was one.
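To make concrete how low the barrier is, a "decision rule" in such a package can be nothing more than a lookup against a hand-maintained table. The sketch below is purely illustrative: the drug pairs, the `INTERACTIONS` table and the `check_interactions` helper are invented for this example and are not drawn from any real, validated knowledge base.

```python
# Illustrative only: a toy interaction checker of the kind anyone could
# write and ship without regulatory approval. The "knowledge base" is a
# hand-maintained set of drug pairs, not a validated clinical source.

INTERACTIONS = {
    frozenset({"warfarin", "aspirin"}): "increased bleeding risk",
    frozenset({"methotrexate", "trimethoprim"}): "bone marrow suppression",
}

def check_interactions(prescription):
    """Return an alert for every known interacting pair in the prescription."""
    alerts = []
    drugs = [d.lower() for d in prescription]
    for i, a in enumerate(drugs):
        for b in drugs[i + 1:]:
            reason = INTERACTIONS.get(frozenset({a, b}))
            if reason:
                alerts.append(f"ALERT: {a} + {b}: {reason}")
    return alerts

print(check_interactions(["Warfarin", "Aspirin", "Paracetamol"]))
```

A system sold "devoid of decision rules" is exactly this code with an empty table: the software runs, but whether it ever alerts depends entirely on a knowledge base that someone must build and maintain.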

Evidence mounts from systematic reviews that there is manifest benefit associated with clinical information technologies.3,4 However, case reports are appearing that indicate clinical software can sometimes cause harm.5 A new debate is building between those who demand that we rapidly introduce new information systems to improve the safety and quality of clinical practice, and those whose view is that the evidence supporting their introduction is still wanting and that, in some situations, there is a real possibility they may do more harm than good.6

Much of the science on both sides of this debate is questionable. A widely reported article in 2005 identified 22 types of possible medication error risk associated with a clinical order-entry system.7 Clinical outcomes were not measured, and no attempt was made to explore whether these potential errors were simply the result of a badly designed system. Recently, Han et al reported that a hospital electronic prescribing system produced a statistically significant increase in mortality, from about 3% to 7%.8 However, assigning the blame for this startling outcome solely to the software is problematic. Introduction of the software altered traditional work patterns and increased the complexity of, and time taken for, prescribing. Yet the new system was implemented in less than a week, an extremely short time in which to introduce a complex new organisational process.

On the technology proponents' side, systematic reviews of decision support systems often try to infer which features are beneficial by lumping together widely dissimilar systems used in very different contexts.4 However, local and sociocultural variables strongly influence the uptake and efficacy of such systems,9 and these are rarely controlled for or quantified in studies, making this type of systematic review hard to interpret. Further, citing lack of evidence for the value of different software features in a review, when the original studies were never designed to test for those features, does not say much.

What should be done? The process guiding the development and testing of most medical treatments and biomedical instrumentation, including software embedded in or linked to clinical devices, is tightly regulated. In contrast, the development of stand-alone clinical software is not. In Australia, stand-alone decision support computer programs, such as electronic prescribing systems, are not considered "therapeutic goods" and are not subject to regulation. Similarly, in the United States, software that relies on manual data input and is not directly used in diagnosis or treatment is usually exempt from the premarket regulatory requirement of the Food and Drug Administration to demonstrate that a device is as safe and as effective as devices already on the market.10 Even if there were strict regulations for clinical software, defining either the process of system development or the knowledge within, and behaviours of, a system, there would be no guarantee that the software would be implemented or used safely.
Information technology is only one component of health services.9 For the whole system to be safe, certification might have to include the skills of those using the software and the organisational processes within which the software is embedded. Consequently, the most appropriate model of governance over the safety and quality of clinical software is far from clear, and may involve elements of industry self-regulation, legislation and best-practice guidance. These models are currently a matter of debate among organisations such as the International Organization for Standardization and the European Committee for Standardization.

Locally, the National E-Health Transition Authority is developing basic technical standards for clinical software that should lead to more uniform and better engineered systems, and early work by the General Practice Computing Group examined the broader need for software accreditation. The United Kingdom's National Programme for IT has moved further, establishing a safety team and embedding a safety management approach in its procurement processes.

The Australian Health Information Council recently published national guidelines for the evaluation of electronic clinical decision support systems, to promote evaluation using rigorous and validated methodologies.11 The guidelines recognise that it is difficult to propose a single evaluation methodology that meets the diverse needs of both the software and clinical communities. Different user groups have different evaluation tasks and objectives. Even the choice of evaluation method is sometimes unclear, given the complexities of health services and the limited opportunities to carry out rigorously controlled trials. The guidelines outline approaches to testing the clinical effectiveness of decision support systems, their integration into existing work practices, user acceptability, and technical evaluation of the software and knowledge bases.
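One concrete form that technical evaluation of a knowledge base can take is automated regression testing of a system's decision rules against a curated panel of cases with known expected behaviour. The sketch below is a hedged illustration only: the dose limits, the `dose_alerts` rule and the test cases are all invented for this example, not validated clinical content or any method prescribed by the guidelines.

```python
# Illustrative sketch: regression-testing a prescribing knowledge base
# against curated cases with known expected alerts. All rules and cases
# here are invented for illustration, not validated clinical content.

# A toy "knowledge base": maximum daily doses in mg (illustrative values).
MAX_DAILY_DOSE_MG = {"paracetamol": 4000, "ibuprofen": 2400}

def dose_alerts(orders):
    """Flag any order whose total daily dose exceeds the knowledge base limit."""
    alerts = []
    for drug, daily_dose_mg in orders:
        limit = MAX_DAILY_DOSE_MG.get(drug)
        if limit is not None and daily_dose_mg > limit:
            alerts.append(f"{drug}: {daily_dose_mg} mg/day exceeds {limit} mg/day")
    return alerts

# Curated evaluation cases: (orders, expected number of alerts).
TEST_CASES = [
    ([("paracetamol", 3000)], 0),   # within limit: no alert expected
    ([("paracetamol", 6000)], 1),   # overdose: an alert is required
    ([("aspirin", 10000)], 0),      # drug unknown to the KB: silent gap
]

for orders, expected in TEST_CASES:
    got = len(dose_alerts(orders))
    status = "PASS" if got == expected else "FAIL"
    print(f"{status}: {orders} -> {got} alert(s)")
```

Note the third case: a drug missing from the knowledge base fails silently rather than loudly, which is precisely why the guidelines call for evaluating the knowledge base itself, and not just the software wrapped around it.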
Urgent debate is needed to move this agenda forward,12 and these guidelines should provide a platform to inform that debate. We can move quickly to develop appropriate models of governance for clinical software, or we can step back and let the courts decide when legal cases of negligence occur. Some will argue that regulation inhibits innovation, but there are good examples of regulation driving technological innovation in other industries. The airline industry is often held up as a safety role model, but that industry was forced to change only after a string of catastrophic disasters. We can do much better by anticipating the potential risks of these technologies rather than reacting to mishap. Over the next few years, even as these new systems save or improve people's lives, some hard lessons may be learned about their safe and effective use.

Competing interests
The national guidelines for the evaluation of electronic decision support and this editorial were written by the Centre for Health Informatics, UNSW, under contract from the Australian Department of Health and Ageing.

Author details
Enrico W Coiera, MB BS, PhD, Director
Johanna I Westbrook, PhD, FACMI, Deputy Director
Centre for Health Informatics, University of New South Wales, Sydney, NSW.
Correspondence: [email protected]

References
1 Miller RA, Gardner RM. Recommendations for responsible monitoring and regulation of clinical software systems. American Medical Informatics Association, Computer-based Patient Record Institute, Medical Library Association, Association of Academic Health Science Libraries, American Health Information Management Association, American Nurses Association. J Am Med Inform Assoc 1997; 4: 442-457.
2 Wang CJ, Marken RS, Meill RC, et al. Functional characteristics of commercial ambulatory electronic prescribing systems: a field study. J Am Med Inform Assoc 2005; 12: 346-356.
3 Garg AX, Adhikari NKJ, McDonald H, et al. Effects of computerized clinical decision support systems on practitioner performance and patient outcomes: a systematic review. JAMA 2005; 293: 1223-1238.
4 Kawamoto K, Houlihan C, Balas EA, Lobach DF. Improving clinical practice using decision support systems: a systematic review of randomised controlled trials to identify system features critical to success. BMJ 2005; 330: 765-768.
5 Ash JS, Berg M, Coiera E. Some unintended consequences of information technology in health care: the nature of patient care information system related errors. J Am Med Inform Assoc 2004; 11: 104-112.
6 Wears RL, Berg M. Computer technology and clinical work — still waiting for Godot. JAMA 2005; 293: 1261-1263.
7 Koppel R, Metlay JP, Cohen A, et al. Role of computerized physician order entry systems in facilitating medication errors. JAMA 2005; 293: 1197-1203.
8 Han YY, Carcillo JA, Venkataraman ST, et al. Unexpected increased mortality after implementation of a commercially sold computerized physician order entry system. Pediatrics 2005; 116: 1506-1512.
9 Coiera E. Four rules for the reinvention of healthcare. BMJ 2004; 328: 1197-1199.
10 United States Food and Drug Administration. Premarket notification 510(k): regulatory requirements for medical devices. Available at: http://www.fda.gov/cdrh/manual/510kprt1.html (accessed May 2006).
11 Australian Health Information Council. Electronic decision support evaluation methodology. Available at: http://www.ahic.org.au/evaluation/index.htm (accessed May 2006).
12 Miller RA, Gardner RM, Johnson KB, Hripcsak G. Clinical decision support and electronic prescribing systems: a time for responsible thought and action. J Am Med Inform Assoc 2005; 12: 403-409.

MJA • Volume 184 Number 12 • 19 June 2006

