Journal of Medical Systems, Vol. 15, Nos. 5/6, 1991

Computer ℞: More Harm Than Good?

Ronald Wall

Clinical information systems (CIS) are health care technologies that can assist clinicians and clinical managers to improve the performance of health care organizations. However, failure to consider scientific evidence of efficacy, effectiveness, and efficiency when selecting CISs is one factor explaining the adoption of systems that do not improve either the quality or efficiency of patient care. This paper discusses a technology assessment framework that can assist decision-makers to evaluate alternative CISs. Existing methodologies developed to evaluate diagnostic and therapeutic technologies can be used by researchers to provide evidence needed by decision-makers at each step of the framework. The rigorous evaluation of CISs prior to their implementation can help decision-makers to avoid adopting "white elephants."

From the Faculty of Medicine, University of Manitoba, S113-750 Bannatyne Avenue, Winnipeg, Manitoba R3E 0W3, Canada.

© 1991 Plenum Publishing Corporation

INTRODUCTION

The efficient provision of quality patient care should be an overarching goal of health care organizations. The production of such care depends upon sound clinical and managerial decision-making. Sound decision-making, in turn, requires timely, relevant, reliable, and accurate information. Lack of appropriate information is cited as one factor explaining the poor performance of health care organizations.1-8

Organizational performance is determined by the quality of patient care and the technical efficiency of the processes providing such care.9 Effective health care organizations improve patient health status (outcomes) by using appropriate types and amounts of correctly sequenced efficacious services (outputs) which are produced using the lowest cost combinations of scarce resources (inputs).9,10

Clinical Information Systems (CIS) provide clinicians and clinical managers with information to support health care decision-making.11-13 Such information can aid clinicians to enhance the quality of clinical activities directly providing patient care (e.g., patient surveillance, automatic reminders, displaying summarized patient data, and decision support) and it can assist clinical managers to increase the efficiency of infrastructures supporting and coordinating patient care (e.g., scheduling, charting, and information sharing/transmission among clinicians). Clearly, CISs can be used to improve the


performance of health care organizations. However, not all CISs are successful in so doing. Some succeed in increasing performance. Others fail to deliver as promised, i.e., the proverbial "white elephant." Still others develop unanticipated side-effects that prevent the system from operating as planned. There is no guarantee that a CIS will "do more good than harm."

All decisions to invest in new health care technology should consider evidence of efficacy, effectiveness, and efficiency.14 Efficacy answers the question, "Can it work, under ideal circumstances?" Effectiveness answers the question, "Does it work, in actual practice?" Efficiency answers the question, "Is it worth it, compared to other uses of the same resources?" Before a technology can be judged worthwhile, all three questions must be answered in the affirmative.

The central idea of this paper is that CISs are health care technologies that are not exempt from the rigorous requirements of technology assessment. The paper presents a framework for the technology assessment of CISs. The framework provides decision-makers with information about the relative advantages and disadvantages of alternative CIS technologies. Hopefully, the paper can help to prevent the adoption of "white elephants."

TECHNOLOGY ASSESSMENT FRAMEWORK

Technology assessment is "a systematic approach to the assembly of those specific subsets of health information necessary for rational decision making on the diffusion of health care technology"14 that provides "decision-makers with valid and timely information concerning the general value of new technology."15 Key steps in technology assessment include specifying quantitative goals for the technology to achieve, assessing candidate technologies for their efficacy, effectiveness, and efficiency, and performing a postimplementation assessment to determine if the goals were achieved.14,16-18 Figure 1 shows the order and linkage of the technology assessment steps for evaluating CISs.
The paper will discuss the steps in detail and use an example to illustrate an application of the framework.

STEP 1: DEFINE THE PERFORMANCE GOALS THAT THE CIS SHOULD ACHIEVE

STEP 2: ASSESS THE EFFICACY OF ALTERNATIVE CISs FOR IMPROVING PERFORMANCE

STEP 3: ASSESS THE EFFECTIVENESS OF THE EFFICACIOUS CISs

STEP 4: ASSESS THE EFFICIENCY OF THE EFFECTIVE CISs

STEP 5: ASSESS THE IMPACT OF THE SELECTED CIS ON PERFORMANCE

Figure 1. CIS assessment framework

Step 1: Define the Performance Goals That the CIS Should Achieve

The starting point for technology assessment is to define the direct organizational goals that the CIS should achieve. Ideally, quantified direct goals stating expected improvements in patient health status and reductions in the costs of health care services used should be stated. The CIS assessment framework shown in Figure 1 defines pertinent goals in step 1 and appraises the efficacy, effectiveness, and efficiency of alternative CISs for achieving these goals. Although indirect goals (e.g., reduced turn-around time for transmitting diagnostic test results, decreased numbers of misplaced specimens) are often specified and evaluated as proxies for direct goals, it is not clear (although often assumed) that achieving these goals will improve patient health status or reduce costs.19 For example, a CIS reducing the response time for reporting diagnostic test results will improve patient outcomes only if the information is relevant, complied with, and available to clinicians prior to the time of decision-making; several factors can violate the linkage between reduction in the time of information transmission and improvements in the quality of patient care. Therefore, direct goals should be specified.

Step 2: Assess the Efficacy of Alternative CISs for Improving Performance

Efficacy is concerned with the technical feasibility of the information provided by a CIS to improve patient outcomes or reduce costs under ideal conditions. In order to assess system efficacy, users must address two concerns. First, does the CIS provide timely, relevant, and accurate information (i.e., satisfy its technical specifications)? Second, does organizational performance improve if clinicians optimally use the information (i.e., does this additional information improve patient management)?

In predicting the ability of proposed CISs to satisfy technical specifications, users should assess the credibility of vendors' claims. On the one hand, as a rule of thumb, the more reputable a manufacturer and the more thoroughly tested the technology, the more credible the claims. On the other hand, innovative CISs designed to satisfy nonstandard information needs face higher risks of failure. Technical feasibility can be increased by developing applications whose technological specifications can be satisfied by proven hardware and software products. A database reporting user satisfaction with computer vendors is available20-22 and regulation by the United States Food and Drug Administration has been proposed.23

Although the impact of information provided by CISs on patient outcomes can be assessed using a variety of evaluation methods, clinical evidence should be derived from well-designed randomized controlled trials.24 These scientific evaluations reduce bias, control for confounding factors, and have sufficient statistical power to distinguish real differences from random background noise.25,26 Other designs are less likely to report valid findings. Although substantial research has evaluated the ability of CISs to achieve intermediate goals, it is not clear that hospital performance was improved.


Step 3: Assess the Effectiveness of the Efficacious CISs

Decision-makers are concerned with the effectiveness of health care technology, i.e., will an efficacious CIS improve the performance of their organizations? Evidence of efficacy is a necessary but not a sufficient condition for effectiveness. Organizational factors (e.g., user compliance) controlled for in "ideal world" efficacy studies (or not considered) may actively reduce CIS effectiveness in "real world" settings.27 Jaeschke and Sackett28 discuss factors that decision-makers should consider when appraising the validity of evidence of health care technology effectiveness. Although randomized controlled trials can be used to provide valid evidence of effectiveness, cohort designs may be superior.28 Evidence of effectiveness should be critically appraised for its validity and generalizability to other organizational settings. For example, in the reported application did a powerful and respected clinical leader motivate other physicians to use information provided by the CIS, and is there a similar individual present in your organization? If the factors determining reported effectiveness are not present, it is unlikely that the CIS will improve performance in your organization.

The resistance of physicians and other clinicians to using CISs is one important factor explaining their limited application to date and high failure rate.13,29-31 Resistance arises from concerns about: (i) potential loss of autonomy and privacy;32 (ii) possible loss of clinical decision-making functions;32-34 (iii) the limited ability of models to adequately represent complex decision-making processes;13,31,32 and (iv) poorly designed function, mode of operation, and human-computer interaction.13 Additional factors possibly reducing CIS effectiveness include insufficient funding, lack of technically sophisticated staff, poor system design, poor planning, ineffectual management, lack of support from senior clinical and administrative managers, and the resistance of administrative staff.16,33,35-37

Therefore, decision-makers appraising evidence of CIS effectiveness should also consider schedule feasibility, motivational feasibility, and operational feasibility.16 Schedule feasibility refers to the probability of a CIS being implemented on time.16 CISs not implemented on schedule may decrease system effectiveness if redesign is required or if shortcuts are used to "make up" for lost time. Motivational feasibility is concerned with the factors that motivate adequately trained and supported users to comply with the information provided by CISs.16 For example, although clinicians should be self-motivated to use information enhancing their patient management capabilities, the perceived loss of traditional clinical decision-making may cause physicians to reject CISs providing such information.13,33 Decision-makers wishing to implement CISs assisting with clinical decision-making must consider means to motivate clinicians both to use them and to comply with the reported information. Operational feasibility is concerned with the ability of implemented CISs to achieve specified goals.16 Efficacious systems provided on schedule to motivated users may still fail to improve performance because of political factors present within the organizational environment. It is important that key clinicians and clinical managers support proposed CISs.

Studies evaluating quality assurance CISs were critically appraised by Haynes and Walker.38 Although the validity of performance improvements claimed for CISs is suspect due to flawed evaluation methodology, they concluded there is evidence that certain CISs can improve the quality of patient care in specific settings. Extensive reviews


of evaluations of medical expert CISs found that some 90% of all systems have not been independently evaluated under real clinical operating conditions using scientific methods.39,40 Hence, the factors contributing to the success and failure of expert CISs are not well known or understood.24

Step 4: Assess the Efficiency of the Effective CISs

Health care resources are scarce and are likely to become even more constrained. Therefore, decision-makers should consider the efficiency of investments in CISs. The real cost to organizations of purchasing and operating CISs is the lost opportunity to invest in other effective diagnostic, therapeutic, or organizational technologies. Economic evaluation is a systematic methodology for comparing the costs and consequences of alternative health care technologies. Methodologies have been developed to measure the costs and consequences of health care technologies.41-43 Due to the existence of relatively advanced cost accounting and economic models,43-45 the measurement of financial costs and consequences is the simpler of the two. In general, the measurement of nonfinancial patient or organizational consequences has been considered a difficult task. In response, analysts and administrators have used readily available indicators (e.g., morbidity and mortality rates as a proxy for the quality of patient care, absenteeism as a proxy for employee productivity) as measures of technological impact. Such measures are not always useful in detecting efficient interventions. The measurement and valuation of consequences should reflect the purpose of health care organizations, i.e., to efficiently improve patient health status. To my knowledge, no attempts have been made to measure the cost-utility of CISs for improving patient health status.

Using incremental analysis, the economic evaluation analytical formulation (EEAF) shown in Figure 2 compares organizational financial costs (e.g., purchase and operating costs) and savings (e.g., reduced costs arising from productivity improvements) in the numerator with patient health status consequences (e.g., increased quality-adjusted life years) in the denominator of alternative CISs.43 When indirect goals are specified, organizational or patient consequences can be used in the denominator.43 Depending upon the goals specified in step 1, one or more formulations derived from the formula shown in Figure 2 can be used. If the direct goal is to reduce financial costs, and if no changes occur in patient health status, a modified cost-benefit analysis (MCBA) using only the numerator can be performed to compare the financial costs and consequences of alternative CISs. However, if the indirect goal is to improve administrative (e.g., measured as the time to perform a task) or clinical (e.g., reduced time to report diagnostic information) productivity, then a cost-effectiveness analysis (CEA), using the numerator for financial costs and the denominator for organizational consequences, can be performed. Similarly, if the direct goal is to improve patient health status along one dimension (morbidity or mortality) CEA can also be used. Finally, when alternative CISs have financial costs and multidimensional health status consequences (morbidity and mortality), a cost-utility analysis (CUA) comparing financial costs in the numerator and patient health status consequences in the denominator should be performed. These three formulations are discussed in more detail in the following sections.
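The selection logic among these three formulations can be expressed as a simple decision rule. The sketch below is illustrative only; the function name and its encoding of goals are my own, not part of the published framework:

```python
# Hypothetical sketch of the formulation choice described above.
# health_dimensions: number of patient health status dimensions affected
# (0 = none, 1 = morbidity OR mortality, 2 = morbidity AND mortality).
# productivity_goal: True if a nonfinancial organizational productivity
# goal is specified.

def choose_formulation(health_dimensions: int, productivity_goal: bool) -> str:
    if health_dimensions >= 2:
        return "CUA"   # multidimensional health consequences -> cost-utility
    if health_dimensions == 1 or productivity_goal:
        return "CEA"   # single-dimension nonfinancial goal -> cost-effectiveness
    return "MCBA"      # purely financial costs and consequences

print(choose_formulation(0, False))  # MCBA
print(choose_formulation(0, True))   # CEA
print(choose_formulation(2, False))  # CUA
```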

EEAF = (FINANCIAL COSTS − FINANCIAL CONSEQUENCES) / (ORGANIZATIONAL OR PATIENT CONSEQUENCES)

Where:

EEAF is the analytical formulation comparing the net incremental financial costs (costs less consequences) to the incremental organizational or patient consequences of alternative CISs ($ per unit organizational or patient consequence).

FINANCIAL COSTS are the incremental expenditures to purchase and operate the compared CISs ($).

FINANCIAL CONSEQUENCES are the incremental resource savings arising from the compared CISs ($).

ORGANIZATIONAL CONSEQUENCES are the incremental changes in administrative or clinical productivity arising from the compared CISs (physical units).

PATIENT CONSEQUENCES are the single- or multi-dimensional incremental changes in patient health status arising from the compared CISs (clinical units).

Figure 2. Economic evaluation analytical formulation (EEAF) assessing CIS efficiency
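As a worked illustration of the formulation in Figure 2, the ratio can be computed directly. All figures below are hypothetical and the function is my own sketch, not part of the original paper:

```python
# Illustrative sketch of the EEAF in Figure 2 (hypothetical numbers).
# Net incremental financial cost = costs less financial consequences (savings);
# the ratio expresses $ per unit of organizational or patient consequence.

def eeaf(financial_costs, financial_consequences, outcome_consequences):
    """Return net incremental cost per unit of incremental consequence."""
    net_cost = financial_costs - financial_consequences
    return net_cost / outcome_consequences

# Comparing a candidate CIS against the status quo (all figures hypothetical):
# $120,000 in purchase/operating costs, $40,000 in resource savings,
# and 16 QALYs gained across the patient population.
ratio = eeaf(120_000, 40_000, 16)
print(f"${ratio:,.0f} per QALY gained")  # $5,000 per QALY gained
```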

MODIFIED COST-BENEFIT ANALYSIS

MCBA is an appropriate method for economic evaluation when the relevant resource implications are valued in monetary units. For example, a CIS reminding physicians to prescribe lower cost but equally effective generic medications will improve organizational efficiency by reducing expenditures without compromising the quality of patient care. Because all costs (e.g., CIS purchase and installation costs plus annual operating costs) and savings (e.g., annual reduction in expenditures on medications) can be valued in monetary units (i.e., there are no changes in organizational productivity and no single- or multi-dimension changes in patient health status), MCBA is the appropriate economic evaluation formulation.

CISs automating or increasing the productivity of clinical (e.g., laboratory technicians) or administrative (e.g., clerical staff) activities may reduce organizational workloads. Methods for estimating and valuing (in both physical and monetary units) workload reductions are available.46-49 Assuming that the projected workload changes are translated into actual reductions in paid hours and that the quality of patient care is not affected by productivity changes, all costs and consequences to organizations can be valued in monetary units. Under these circumstances, MCBA is the appropriate economic evaluation method. However, decision-makers should be aware that the valuation of productivity changes in monetary units is methodologically difficult and can produce controversial estimates if valid evidence linking improvements in productivity to actual cost-savings has not been established. MCBA methodology has been used to evaluate the financial impact of CISs in laboratory,50-52 pharmacy,53-55 radiology,56,57 ambulatory care,58 oncology,59 and dental practice.60

Note that MCBA is in fact a special case of cost-benefit analysis (CBA). When patient outcomes are affected by the technologies being assessed, a complete CBA taking into account changes in patients' health status would be required. Although methods capable of valuing patient outcomes in terms of monetary units are being developed, health status measurement is usually made using relevant physical units (see below).

COST-EFFECTIVENESS ANALYSIS

When methods valuing intangible consequences in monetary units are controversial (e.g., human life and suffering61-63), CEA methods are appropriate for comparing alternative technologies capable of satisfying a nonfinancial goal.43 In CEA, all financial costs and consequences are incorporated in the numerator of Figure 2; intangible consequences are incorporated in the denominator using a single-dimension index (valued in an appropriate physical unit) measuring the effectiveness of the available CISs for achieving the specified goal. Physical measures of effectiveness include organizational output (e.g., numbers of specimens lost, decreased response time to report test results, changes in available operating room time) and patient health status (e.g., morbidity or mortality rates). CEA is the appropriate economic evaluation method when decision-makers have specified a quantitative, single-dimension, nonfinancial goal. In such cases, the task of the decision-makers is to select the most efficient technology in terms of net financial costs (costs less consequences) to produce the desired effect.
The evaluation of a replacement automated technology in a haematology laboratory by Szczepura and Stilwell64 is an excellent example of a CEA; another example is the evaluation of automation in bacteriology.65

Single-dimension measures of output may fail to capture all relevant (or even the most important) consequences. For example, the consequences of CISs auditing drug prescribing in nursing units could include changes in patient morbidity and mortality and the productivity of clinicians. A single-dimension index capturing all three of these diverse outcomes may be difficult to construct. Furthermore, single-dimension measures may be difficult to measure quantitatively and they may not be reproducible.19 Although methods measuring the impact of CISs on clerical and clinical productivity (indirect goals) are available38 and have been used in evaluations by the United States Federal Government, vendors of commercial systems, and hospitals, the validity of these studies is suspect because of their use of flawed evaluation methodology.48 Furthermore, the relevance of these reported findings to actual performance goals is not clear because the studies did not measure changes in patient health status.66

COST-UTILITY ANALYSIS

In recent years a more sophisticated and sensitive approach has been developed: health state utilities and quality-adjusted life years (QALYs). The QALY is an index


designed to take account of the quality as well as the duration of survival in assessing the outcomes of health care interventions. The advantage of using QALYs gained as a measure of patient outcome is that it is a common and comprehensive unit of measure that can be used to compare alternative health care technologies. For example, a CIS can be compared to other CISs as well as to additional technology investments available to the organization. In CUA, patient outcomes are measured in QALY units. (An alternative measure of patient outcomes for cost-utility analysis, the healthy years equivalent [HYE], has been suggested.67)

QALYs are simply life years, weighted by the quality of the years. For the weighting scheme, each definable health state is assigned a weight on a scale where death is zero and full health is one. The time spent in a given health state is multiplied by the corresponding weight, and the products are added together to yield the number of quality-adjusted life years. One way of incorporating patient preferences into the process, and of measuring the weights to be used in the calculation of QALYs, is the utility approach.68 Incremental improvements for patients (QALYs gained) are compared to incremental changes in costs for each alternative CIS. This relationship captures the two primary factors of hospital performance: patient health status and medical care costs.

Depending on the application, health state utilities are either measured on patients who are in the health state or using raters who may, or may not, have experience with the health state. There are a number of techniques and instruments available for the measurement of health state utilities. The three methods used most frequently are rating scales, standard gamble, and time trade-off. Two additional techniques, equivalence and ratio scaling, have been used occasionally. More about QALYs and the utility approach to measuring health-related quality of life can be found elsewhere.68-70 A similar approach is called for in evaluating CISs.
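The QALY calculation described above can be sketched in a few lines. The health states, durations, and utility weights below are hypothetical (in practice the weights would come from a rating scale, standard gamble, or time trade-off exercise):

```python
# Hypothetical sketch of the QALY calculation described above: each health
# state gets a utility weight (death = 0, full health = 1), and time spent
# in each state is multiplied by its weight and summed.

def qalys(path):
    """path: list of (years_in_state, utility_weight) tuples."""
    return sum(years * weight for years, weight in path)

# A patient spending 2 years at utility 0.6, then 3 years at 0.9,
# versus the same patient at 0.7 for the final 3 years without the CIS:
with_cis = qalys([(2, 0.6), (3, 0.9)])     # about 3.9 QALYs
without_cis = qalys([(2, 0.6), (3, 0.7)])  # about 3.3 QALYs
print(f"QALYs gained: {with_cis - without_cis:.1f}")  # QALYs gained: 0.6
```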

Step 5: Assess the Impact of the Selected CIS on Performance

In general, the actual impact of information systems on organizational performance is usually less than that predicted by experts, users, and vendors.16,58,71,72 Postimplementation assessment assures decision-makers that implemented CISs are meeting specified goals.14,16 If the CIS is not meeting its goals, factors responsible for suboptimal performance should be identified and corrected;72 if CIS performance remains inadequate, the system should be removed.

AN ASSESSMENT EXAMPLE: A CIS MONITORING MEDICATION UTILIZATION

Inappropriate prescribing of medication is a concern of patients, clinicians, and clinical managers.73 Clinicians and clinical managers are concerned about maintaining high quality patient care. Patients suffering adverse reactions to medications may experience temporary or long-term reductions in their health status, and the care of these patients will consume resources that could have been used to care for other individuals. Even when patients do not suffer adverse reactions, inappropriate utilization (i.e., incorrect prescribing or failure to use the lowest cost but equally effective generic substitute)


will impose needless financial burdens on health care organizations. This section will use the CIS technology assessment framework to assess the efficacy, effectiveness, and efficiency of a computer-based CIS that monitors medication utilization.

Step 1: Define the Organizational Goals

CISs auditing utilization of medications are implemented to change physician prescribing behaviour. Indirect goals could include specifying a reduction in the rate of exceptions to medication prescribing protocols74 or a decrease in the number of suboptimal medication prescribing days.75 Direct goals could specify expected improvements in the health status of patients receiving drug therapy and expected cost-savings to the health care organization.76,77 If the direct and indirect outcomes reported in the literature can be generalized to other settings, organizations could use these values as a guide for estimating their quantitative goals.

Step 2: Assess the Efficacy of Alternative CISs

Soumerai and Avorn73 concluded from a thorough and extensive review of published studies that clinical audit followed by repetitious feedback was the only efficacious intervention (out of six) for modifying physician medication prescribing behavior in ambulatory care settings; Haynes and Walker38 concluded from a critical appraisal of the computer-based quality assurance literature that ongoing feedback was able to change physician prescribing behavior in family medicine and general medicine clinics. However, because patient health status (morbidity, mortality, or QALYs) was not measured, it is not clear if these reported changes in physician prescribing behavior improved the quality of patient care. Therefore, the evaluated CISs are efficacious only in terms of the indirect goal of changing physician prescribing behavior.

Step 3: Assess the Effectiveness of Alternative CISs

The generalizability of the above evidence of efficacy to other settings may be influenced by other factors affecting physician behavior. For example, were there financial, authority, or other factors in addition to feedback that may have motivated physicians to change their drug prescribing behavior? Because physicians in the cited studies were randomized among alternative programmes (feedback interventions and no intervention), both known and unknown confounding factors should have been controlled for. However, it is not clear if clinical and organizational factors will influence the generalizability of these experimental findings. Decision-makers must determine if factors (e.g., a respected and powerful clinician) present in their organizations could influence the effectiveness of the CIS. If such factors are present and cannot be mitigated or adjusted for, it may not be feasible to implement the CIS. Assuming that such factors are not present, clinical audit combined with ongoing feedback appears to be an effective intervention for changing physician medication prescribing behavior. (As noted above, however, it is not clear that this CIS will improve patient health status.)

Step 4: Determine the Efficiency of Effective CISs

Although cost-savings arising from changes in physician medication prescribing behavior have been reported,76,77 it is not clear that these savings accrued to the health


care organizations (they may have accrued to patients and/or third party insurers paying for the prescribed medications). Also, the cited studies are incomplete economic evaluations because they did not include the purchase and operating costs of CISs and they did not measure changes in patient health status. Assuming (i) that the purchase and discounted operating costs are less than the reported savings, and (ii) that patient health status was not affected, the physician feedback and audit CIS (compared to no intervention) appears to be an economically efficient investment. However, if capital funds are limited, it is not clear what rank this CIS would have compared to the other investment opportunities available to organizations.78
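Assumption (i) involves comparing the purchase cost plus the present value of future operating costs against the present value of the reported savings. A minimal sketch of such a comparison; the figures, five-year horizon, and 5% discount rate are all hypothetical:

```python
# Sketch of assumption (i) above: compare purchase cost plus the present
# value of future operating costs against the present value of annual savings.
# All figures and the 5% discount rate are hypothetical.

def present_value(annual_amount, rate, years):
    """Discount a constant annual amount back to present value."""
    return sum(annual_amount / (1 + rate) ** t for t in range(1, years + 1))

purchase = 50_000
pv_operating = present_value(10_000, 0.05, 5)  # 5 years of operating costs
pv_savings = present_value(25_000, 0.05, 5)    # 5 years of drug-cost savings

net = purchase + pv_operating - pv_savings
print(f"Net cost: ${net:,.0f}")  # a negative net cost favours adopting the CIS
```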

DISCUSSION AND SUMMARY

The technology assessment framework discussed in this paper can be used by decision-makers to select CISs that provide clinicians and clinical managers with information enabling them to improve organizational performance. The proposed CIS assessment framework is consistent with technology assessment frameworks used to assess new diagnostic and therapeutic technologies. Evaluation methodologies required by the framework can be adapted from well-known tools used in clinical research.

The medication utilization technology assessment example discussed in the paper demonstrates some of the problems decision-makers encounter in assessing CISs, particularly in relating indirect goals to direct and meaningful organizational cost-savings and improvements in patient health status. Research is needed to develop and refine appropriate methodologies (tools) for evaluating the impact of CISs on patient health status and health care provider productivity. Empirical studies to implement these tools and to spread knowledge of them are also required.

If a health care organization contemplating the purchase of a CIS cannot wait for research findings, it faces the difficult task of making a recommendation in the absence of hard evidence. Two approaches have been used to cost-justify CISs.19 First, in the case of projected large cost-savings (as in the above example), CISs may be justified on purely financial grounds without any consideration of their impact on patient health status. Second, if clinical outcomes are important, then the impact of the proposed CIS on patient health status should be considered. One way to accomplish this is to use intermediate measures of clinical effectiveness. However, as already noted, the relationship between intermediate measures and patient health status is tenuous and uncertain. Technology assessment based on such measures may mislead decision-makers into adopting "white elephants."
Health care organizations wishing to use CISs to improve organizational performance should address three concerns before purchasing a system. First, the decision-making process should receive inputs from qualified individuals. Large hospitals should have staff with computer expertise, and smaller hospitals should have access to expertise through consultants (private and other hospitals), the ministry of health, universities, or hospital associations.

Second, implementing a CIS may require major organizational changes. Hire, train, or develop staff who have the required expertise and place them in a separate (unbiased) department reporting directly to the chief administrator. Evaluate and restructure the


current organization to utilize the features and strengths of the CIS. Applying a computer to a poorly performing organization will not improve its effectiveness and efficiency. Computers are conservative influences as well as harbingers of change. Using computers to improve performance in health care organizations will require new interactions between medicine and the information and management sciences that will demand new skills and attitudes on the part of decision-makers.32

Third, technology assessments of CISs should be undertaken in the context of the overall health care organization. Both the costs and consequences of CISs can transcend individual functional areas.

This paper argues that CISs can and should be evaluated on the basis of their impact on patient health status and their costs, the two primary components of organizational performance. Using health state utilities and QALYs, the technology assessment framework provides clinicians with known tools to evaluate the impact of CISs on their activities and areas of responsibility. Decision-makers contemplating the purchase of a CIS should require computer vendors to demonstrate the efficacy of their systems. Health services researchers should evaluate the effectiveness and efficiency of CISs.

ACKNOWLEDGMENTS

This research was carried out while the author was the research coordinator for health economics with the Centre for Health Economics and Policy Analysis (CHEPA) at McMaster University. An earlier version of the paper was presented at the CATCH 1987 conference (Toronto, Ontario). I am grateful to Drs. Gafni and Torrance (McMaster University) for their encouragement, suggestions, and valued comments on earlier drafts of this paper. Financial support was provided by an Ontario Ministry of Health operating grant to the Centre for Health Economics and Policy Analysis and through a fellowship to the author from the Manitoba Centre for Health Policy and Evaluation.

REFERENCES

1. Anderson, G.M., and Lomas, J., The development of utilization analysis: How, why, and where it's going. In Proceedings of the First Annual Health Policy Conference, Reviewing Utilization: The Methods and Promise of Utilization Analysis for the Canadian Health Care System (C. Fooks and J. Lomas, eds.), Hamilton, Ontario, McMaster University Centre for Health Economics and Policy Analysis, 1988, pp. 1-27.
2. Evans, R., Strained Mercy: The Economics of Canadian Health Care. Toronto, Butterworths, 1984.
3. Evans, R., Incomplete vertical integration: The distinctive structure of the health-care industry. In Health, Economics, and Health Economics (J. Van der Gaag and M. Perlman, eds.), Amsterdam, North-Holland, 1981, pp. 329-354.
4. Lomas, J., Quality assurance and effectiveness in health care: An overview. Quality Assurance in Health Care 2:5-12.
5. Martin, J.B., The environment and future of health information systems. J. Health Admin. Ed. 8:11-24, 1990.
6. Ostrowski, C.P., New input to hospital decision making. J. Med. Syst. 8:65-71, 1984.
7. Pierskalla, W.P., and Woods, D., Computers in hospital management and improvements in patient care: New trends in the United States. J. Med. Syst. 12:411-428, 1988.
8. Sutherland, R.W., and Fulton, M.J., Health Care in Canada: A Description of Canadian Health Services. Ottawa, The Health Group, 1988.
9. Charns, M.P., and Schaefer, M., Health Care Organizations: A Model for Management. Englewood Cliffs, NJ, Prentice-Hall, 1983.
10. Feeny, D., New health care technologies: Their effect on health and the cost of health care. In Health Care Technology: Effectiveness, Efficiency & Public Policy (D. Feeny, G. Guyatt, and P. Tugwell, eds.), Montreal, Quebec, The Institute for Research on Public Policy, 1986, pp. 5-24.
11. Blum, B.I., Clinical Information Systems. New York, Springer-Verlag, 1986.
12. Blum, B.I., Information systems for patient care. In Information Systems for Patient Care (B.I. Blum, ed.), New York, Springer-Verlag, 1984.
13. Shortliffe, E., Computer programs to support clinical decision-making. J. Am. Med. Assoc. 258:61-66, 1987.
14. Tugwell, P., Bennett, K., Feeny, D., et al., A framework for the evaluation of technology: The technology assessment iterative loop. In Health Care Technology: Effectiveness, Efficiency & Public Policy (D. Feeny, G. Guyatt, and P. Tugwell, eds.), Montreal, Quebec, The Institute for Research on Public Policy, 1986, pp. 41-56.
15. Brorsson, B., and Wall, S., Assessment of Medical Technology: Problems and Methods. Falun, Sweden, Swedish Medical Research Council, 1985.
16. Davis, G.B., and Olson, M.H., Management Information Systems: Conceptual Foundations, Structure, and Development. New York, McGraw-Hill, 1985.
17. Eddy, D.M., Selecting technologies for assessment. Int. J. Technol. Assess. Health Care 5:485-502, 1989.
18. Glasser, J.H., The aims and methods of technology assessment. Health Policy 3:239-240, 1988.
19. Covvey, D.H., Craven, N.H., and McAlister, N.H., Computers in Health Care, Volume I: Concepts and Issues in Health Care Computing. St. Louis, The C.V. Mosby Company, 1985.
20. Grams, R.R., Evaluation tools for hospital computer equipment and systems. J. Med. Syst. 8:229-237, 1984.
21. Grams, R.R., Peck, G.C., Massey, J.K., and Austin, J.J., Review of hospital data processing in the United States (1982-1984). J. Med. Syst. 9:175-269, 1985.
22. Grams, R.R., and Peck, G.C., National survey of hospital data processing, 1985. J. Med. Syst. 10:423-568, 1986.
23. Young, F.E., Validation of medical software: Present policy of the Food and Drug Administration. Ann. Int. Med. 106:628-629, 1987.
24. Lundsgaarde, H.P., Evaluating medical expert systems. Social Sci. Med. 24:805-819, 1987.
25. Hennekens, C.H., and Buring, J.E., Epidemiology in Medicine. Boston, Little, Brown and Company, 1987.
26. Sackett, D.L., Haynes, R.B., and Tugwell, P., Clinical Epidemiology: A Basic Science for Clinical Medicine. Boston, Little, Brown and Company, 1985.
27. Kaplan, B., Initial impact of a clinical laboratory computer system. J. Med. Syst. 11:137-147, 1987.
28. Jaeschke, R., and Sackett, D.L., Research methods for obtaining primary evidence. Int. J. Technol. Assess. Health Care 5:503-520, 1989.
29. Barnett, C.O., The modular hospital information system. In Computers in Biomedical Research (Volume IV) (R.W. Stacy and B.D. Waxman, eds.), New York, Academic Press, 1982, pp. 243-266.
30. Jay, S.J., and Anderson, J.G., Computerized hospital information systems: Their future role in medicine. J. Roy. Soc. Med. 75:303-305, 1982.
31. Mandell, S.F., Resistance to computerization: An examination of the relationship between resistance and the cognitive style of the clinician. J. Med. Syst. 11:311-317, 1987.
32. Anderson, J.G., Jay, S.J., Schweer, H.M., and Anderson, M.M., Why doctors don't use computers: Some empirical findings. J. Roy. Soc. Med. 79:142-144, 1986.
33. Kaplan, B., The medical computing 'lag': Perceptions of barriers to the application of computers to medicine. Int. J. Technol. Assess. Health Care 3:123-136, 1987.
34. Teach, R.L., and Shortliffe, E.H., An analysis of physician attitudes regarding computer-based clinical consultation systems. Comput. Biomed. Res. 14:542-558, 1981.
35. Friedman, B.A., and Martin, J.B., The physician as a locus of authority, responsibility, and operational control of medical systems. J. Med. Syst. 12:389-396, 1988.
36. Vegoda, P.R., and Banacore, E., Hospital information systems: Friend or foe? J. Med. Syst. 10:11-23, 1986.
37. Dowling, A.F., Do hospital staff interfere with computer system implementation? Health Care Manag. Rev. 5:23-32, 1980.
38. Haynes, R.B., and Walker, C.J., Computer-aided quality assurance. Arch. Int. Med. 147:1297-1301, 1986.
39. Duda, R.O., and Shortliffe, E.H., Expert systems research. Science 220:261-268, 1983.
40. Reggia, J.A., Computer assisted medical decision making: A critical review. Ann. Biomed. Eng. 9:605-619, 1981.
41. Drummond, M.F., Guidelines for health technology assessment: Economic evaluation. In Health Care Technology: Effectiveness, Efficiency & Public Policy (D. Feeny, G. Guyatt, and P. Tugwell, eds.), Montreal, Quebec, The Institute for Research on Public Policy, 1986, pp. 107-128.
42. Drummond, M.F., Methods for economic appraisal of health technology. In Economic Appraisal of Health Technology in the European Community (M.F. Drummond, ed.), Oxford, Oxford University Press, 1987.
43. Drummond, M.F., Stoddart, G.L., and Torrance, G.W., Methods for the Economic Evaluation of Health Care Programmes. Oxford, Oxford University Press, 1987.
44. Boyle, M.H., Torrance, G.W., Horwood, S.P., and Sinclair, J.C., A Cost Analysis of Providing Neonatal Intensive Care to 500-1499 Gram Birth Weight Infants. Hamilton, Ontario, McMaster University Programme for Quantitative Studies in Economics and Population, Research Report 51, 1982.
45. Luce, B.R., and Elixhauser, A., Estimating costs in the economic evaluation of medical technologies. Int. J. Technol. Assess. Health Care 6:57-76, 1990.
46. Bradley, E., and Campbell, J.G., Information system selection: Methods for comparing service benefits. In Proceedings of the Fifth Annual Symposium on Computer Applications in Medical Care, Silver Springs, MD, IEEE Computer Society Press, 1981.
47. Cascio, W.F., Costing Human Resources: The Financial Impact of Behaviour in Organizations. Boston, Kent, 1982.
48. Drazen, E., Methods for evaluating costs of automated hospital information systems. In Information Systems for Patient Care (B.I. Blum, ed.), New York, Springer-Verlag, 1980, pp. 427-437.
49. Spencer, L.M., Calculating Human Resource Costs and Benefits: Cutting Costs and Improving Productivity. New York, Wiley, 1986.
50. McLaughlin, C.P., An economic analysis of the operation of automated clinical laboratories. In Computers in Biomedical Research (Volume IV) (R.W. Stacy and B.D. Waxman, eds.), New York, Academic Press, 1974, pp. 181-213.
51. Rappoport, A.E., and Gennaro, W.D., The economics of computer-coupled automation in the clinical chemistry laboratory of the Youngstown Hospital Association. In Computers in Biomedical Research (Volume IV) (R.W. Stacy and B.D. Waxman, eds.), New York, Academic Press, 1974, pp. 243-266.
52. Wolfe, H.B., Cost-benefit of laboratory computer systems. J. Med. Syst. 10:1-9, 1986.
53. Arenson, R.L., and London, J.W., Radiology operations management computer system, Hospital of the University of Pennsylvania. In Proceedings of the American College of Radiology Sixth Conference on Computer Applications in Radiology, 1979.
54. Mishelevich, D.J., Gipe, W.G., Roberts, J.R., et al., Cost-benefit analysis in a computer-based hospital information system. In Proceedings of the Third Annual Symposium on Computer Applications in Medical Care, Silver Springs, MD, IEEE Computer Society Press, 1979, pp. 339-347.
55. Szota, P., A cost-effectiveness and cost-benefit analysis of the unit-dose medication system and the traditional system. Unpublished project prepared for the Design, Measurement, and Evaluation Programme course MS737, McMaster University, 1985.
56. Ellis, W.T., and Oliver, D.M., A multifaceted approach to a computerized radiotherapy system. In Proceedings of the Third Annual Symposium on Computer Applications in Medical Care, Silver Springs, MD, IEEE Computer Society Press, 1979, pp. 358-361.
57. Gall, J.E., et al., Demonstration and evaluation of a total hospital information system. Washington, DC, National Centre for Health Services Research NTIS Report PB-262-106, 1975.
58. Mackintosh, D., Hertz, R., St. Clair, D., and Brower, L., A comparative evaluation of two multiphasic health testing systems. In Proceedings of the Third Annual Symposium on Computer Applications in Medical Care, Silver Springs, MD, IEEE Computer Society Press, 1979, pp. 362-371.
59. Enterline, J.P., Evaluation and future directions. In A Clinical Information System for Oncology (J.P. Enterline, R.E. Lenhard, and B.I. Blum, eds.), New York, Springer-Verlag, 1989, pp. 217-240.
60. Kerr, D.R., Quantifying the cost-benefits of computer dental management systems. In Proceedings of the Sixth Annual IEEE Symposium on Computer Applications in Medical Care, Silver Springs, MD, IEEE Computer Society Press, 1982, pp. 458-461.
61. Jones-Lee, M.W., The Value of Life: An Economic Analysis. London, Martin Robertson, 1976.
62. Mishan, E., Cost-Benefit Analysis: An Informal Introduction. London, George Allen & Unwin, 1975.
63. Mooney, G., Human life and suffering. In The Valuation of Social Cost (D. Pearce, ed.), London, George Allen & Unwin, 1978, pp. 120-139.
64. Szczepura, A.K., and Stilwell, J.A., Information for decision makers at hospital laboratory level: An example of a graphical method of representing costs and effects for a replacement automated technology in a haematology laboratory. Social Sci. Med. 26:715-725, 1988.
65. Sailly, J.C., Lebrun, T., et al., The medical and economic consequences of automation in bacteriology: A case study in a French university hospital. Social Sci. Med. 21:1163-1166, 1985.
66. Drazen, E., Feely, R., Metzger, J., and Wolfe, H., Methods for evaluating costs of automated hospital information systems. Report by Arthur D. Little, Inc. to the National Centre for Health Services Research. Washington, DC, National Centre for Health Services Research NTIS Report PB-80-178-593, 1980.
67. Mehrez, A., and Gafni, A., Quality-adjusted life-years, utility theory, and healthy years equivalent. Med. Decision Making 9:142-149, 1989.
68. Torrance, G.W., Measurement of health state utilities for economic appraisal: A review. J. Health Econ. 5:1-30, 1986.
69. Torrance, G.W., Utility approach to measuring health-related quality of life. J. Chronic Dis. 40:593-600, 1987.
70. Torrance, G.W., and Feeny, D., Utilities and quality-adjusted life years. Int. J. Technol. Assess. Health Care 5:559-579, 1989.
71. Lezotte, D.C., Quality controlling interpretive reporting systems: A changing responsibility. J. Med. Syst. 12:153-167, 1988.
72. Glasser, J.P., Drazen, E.L., and Cohen, L.A., Maximizing the benefits of health care information systems. J. Med. Syst. 10:51-56.
73. Soumerai, S.B., and Avorn, J., Efficacy and cost-containment in hospital pharmacotherapy: State of the art and future directions. Milbank Memorial Fund Quarterly/Health and Society 62:447-474, 1984.
74. Siegel, C., Alexander, M.J., Dlugacz, Y.D., and Fischer, S., Evaluation of a computerized drug review system: Impact, attitudes, and interactions. Comput. Biomed. Res. 17:419-435, 1984.
75. Levine, M.A.H., A randomized controlled trial for improving physician prescribing behaviour. Unpublished thesis submitted to the School of Graduate Studies in partial fulfillment of the requirements for the degree Master of Science, McMaster University, 1988.
76. Gehlbach, S.H., Wilkinson, W.E., Hammond, W.E., et al., Improving drug prescribing in a primary care practice. Medical Care 22:193-201, 1984.
77. Hershey, C.O., Porter, D.K., Breslau, D., et al., Influence of simple computerized feedback on prescription charges in an ambulatory clinic: A randomized clinical trial. Medical Care 24:427-481, 1986.
78. Birch, S., and Donaldson, C., Applications of cost-benefit analysis to health care: Departures from welfare economic theory. J. Health Econ. 6:211-225, 1987.