Do Information Systems Actually Improve Problem-Solving and Decision-Making Performance? An Analysis of 3 Different Approaches to the Design of Information Systems

Carlos Nakamura, Department of Educational and Counselling Psychology, McGill University, Canada, [email protected]
Susanne P. Lajoie, Department of Educational and Counselling Psychology, McGill University, Canada, [email protected]
Gloria C. Berdugo, Department of Educational and Counselling Psychology, McGill University, Canada, [email protected]

Abstract: We present three different approaches to the research and development (R&D) of information systems: problem solving, decision making, and case-based reasoning. In contrast to case-based reasoning, problem solving and decision making are rule-based approaches. Problem solving emphasizes the sequential process of searching for a solution path. Decision making focuses on the nature of the decision outcome. We present the results of a selection of studies on the effects of these systems on the problem-solving and decision-making performance of their users. Finally, we discuss the limitations of each of the three approaches and their implications for future research.

Introduction

Although learning is most often associated with schoolwork, it does not stop when we graduate from school. Learning also happens in the workplace, although there it ceases to be the final goal. In any job, to a lesser or greater extent, people are expected to constantly learn and adapt. Every time we solve a new problem we learn something from it, and that knowledge can then be used to solve future problems. Our ability to solve new problems is constrained by our problem-solving skills and by the limitations of our knowledge base. In many situations, our knowledge base is incomplete: we often do not have all the knowledge required to solve a specific problem. Consequently, it makes sense to include the role of external sources of information in research on problem solving and decision making.

Every time people gather more information to solve a problem, each new piece of information can potentially change what they thought they knew about the problem. New information about a problem can even change the initial representation of the problem itself. For example, results from laboratory tests can change the initial hypotheses a physician holds about a patient's disease. In more complex medical cases, newly found information about a patient can even change the pathophysiological approach (i.e., which physiological anomalies are related to the disease).

We view these external sources of information as cognitive tools (Lajoie, 2000; Lajoie & Derry, 1993). Cognitive tools can accomplish four functions: they support cognitive processes (such as memory and metacognitive processes); they share the cognitive load by supporting lower-level cognitive skills so that resources are left over for higher-order thinking skills; they allow learners to engage in cognitive activities that would otherwise be out of their reach; and they allow learners to generate and test hypotheses in the context of problem solving.

This paper is about learning that must occur just in time, as we work on a problem, either at school or at work. More specifically, it focuses on the role of the cognitive tools we devise to help us solve those problems. We investigate to what extent and in which ways information systems can fill the gaps in our knowledge base and enhance our problem-solving and decision-making performance.

Most of the empirical studies cited in this paper dealt with information systems used in educational settings, although such systems are equally used in professional settings. In these studies, learning was measured as the difference between the scores of assisted and unassisted performance. This type of assessment reflects an educational perspective in which learning is viewed (and measured) not as the amount of knowledge one gathers but as the amount of knowledge one can manipulate and apply to solve specific problems. In other words, we will be focusing on learning that is situated within specific activities, contexts, and cultures (Lave & Wenger, 1991).

Research and Development Approaches to Information Systems

The research and development (R&D) of information systems is, above all, based on the reasoning process the system is intended to support. In this regard, there are two pairs of polarizing perspectives: problem solving versus decision making, and rule-based versus case-based reasoning (see Fig. 1).

Fig. 1: Polarizing perspectives on reasoning

In contrast to case-based reasoning, problem solving and decision making are rule-based approaches. In rule-based reasoning, knowledge gained from past experience is converted into generalized rules that can then be applied to solve new problems. For example, when dealing with a challenging clinical case, physicians will follow a set of diagnostic procedures in order to generate and test hypotheses about the disease. In case-based reasoning, prior episodes are applied directly to the problem at hand; it is a pattern-matching approach. For example, experienced physicians can quickly diagnose diseases in their areas of specialization by recalling prior episodes that match the present patient's condition. (An illustrative sketch of the rule-based side of this contrast follows at the end of this section; a companion sketch of case-based reasoning appears in the corresponding section below.) Problem solving usually relates to fixing agendas, setting goals, and designing actions; decision making relates to evaluating and choosing (Simon et al., 1986). "Problem solving research emphasizes the sequential process of searching for a solution path, whereas decision research focuses more on the nature of the decision outcome and how it may deviate from an acceptable normative standard" (Patel, Kaufman, & Arocha, 2002). In the next sections we summarize the theoretical and empirical findings of existing research on the perspectives on reasoning presented above.
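As a purely illustrative sketch of rule-based reasoning, the Python fragment below implements a minimal forward-chaining engine over generalized if-then rules. The rules, facts, and conclusions are our own invented examples, not drawn from any system discussed in this paper.

# Illustrative sketch only: rule-based reasoning as forward chaining over
# generalized if-then rules. Rules and facts below are invented examples.

def forward_chain(facts, rules):
    """Repeatedly apply rules whose conditions are satisfied until no new
    conclusions can be drawn (chaining generalized rules together)."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

if __name__ == "__main__":
    # Hypothetical diagnostic rules, written as ({conditions}, conclusion):
    rules = [
        ({"fever", "cough"}, "suspect_infection"),
        ({"suspect_infection", "positive_culture"}, "bacterial_infection"),
    ]
    print(forward_chain({"fever", "cough", "positive_culture"}, rules))
    # -> the facts plus 'suspect_infection' and then 'bacterial_infection'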

Problem Solving

Current research on problem solving still follows the model created by Simon and his colleagues in the 1960s. The model proposed by Newell and Simon (1972) consists of a problem space with an initial state, a goal state, and a set of operators that the problem solver uses to move from one state to another. Problem solvers do not necessarily have the whole problem space represented in their minds at one time. Furthermore, some problem spaces are so large that the problem solver cannot search through all possible intermediate states. Consequently, strategies for selecting the most promising paths are necessary.

One of the simplest problem-solving strategies is known as hill climbing. In hill climbing, the problem solver moves to the next intermediate state that appears most likely to lead to the goal state. One limitation of hill climbing is that, in the absence of a panoramic view of the problem space, a move that appears to bring the problem solver closer to the goal state may in fact lead him or her further from it. A more effective problem-solving strategy is means-ends analysis. Means-ends analysis is a decomposition or subgoaling strategy: the problem solver starts by tracing intermediate states and subgoals between the initial state and the goal state. These subgoals can be solved with relative independence from the rest of the problem. If a subgoal cannot be solved, it can be further subdivided (Dunbar, 1998). (An illustrative sketch contrasting these two strategies appears at the end of this section.)

The most thorough investigations of the relationship between problem-solving performance and the use of information systems have been conducted by a research team at the University of North Carolina at Chapel Hill (de Bliek, Friedman, Wildemuth, Martz, Twarog, & File, 1994; Wildemuth, de Bliek, Friedman, & File, 1995; Wildemuth, Friedman, Keyes, & Downs, 2000). The North Carolina team conducted several studies on medical students' use of Inquirer, a database on bacteriology the team developed in 1986 and later expanded to include information on toxicology and pharmacology.

The study by de Bliek et al. (1994) assessed medical students' ability to solve problems in bacteriology. A treatment group (database-assisted) and a control group (unassisted) were tested three times: right before a bacteriology course, right after the course, and six months later. The control group had low scores on the first assessment (12%); their scores rose on the second assessment (48%) but decreased by the third (25%). The treatment group rose from 44% on the first assessment to 57% on the second and 75% on the third. This study showed that the use of databases can turn the rise-and-fall trend in students' problem-solving scores into a linearly increasing one, which seems to indicate that, in the long run, there is a synergistic relation between clinical reasoning and the use of medical databases.

Wildemuth et al. (1995) conducted a study on the relationship between domain knowledge, information-searching proficiency, and database-assisted problem-solving performance. Sixty-four first-year medical students participated. Participants were assessed on four different occasions (between Fall 1990 and Spring 1992) in three different domains (bacteriology, pharmacology, and toxicology). The methodology was analogous to that of de Bliek et al. (1994), and an expanded version of Inquirer was used. The study's primary finding was that there is little correlation between domain knowledge and information-searching proficiency.
Only four of the 28 correlations between domain knowledge scores and searching factor scores were statistically significant, and three of the four were negative. The secondary findings showed a high correlation between information-searching proficiency and the successful use of information in problem solving: students' ability to select appropriate terms for their searches was "consistently related to their ability to respond to problems with database assistance" across assessment occasions. Trend analysis confirmed the correlation by showing that the two trends were parallel. Wildemuth once again replicated the study by de Bliek et al. (1994) as part of a larger study (Wildemuth et al., 2000) that included the use of two different database interfaces, with similar results: unassisted scores were highest just after the course and lowest just before it, whereas database-assisted scores were similar just before and after the course but higher six months later.
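To make the two search strategies described at the start of this section concrete, the sketch below contrasts greedy hill climbing with means-ends (subgoaling) search on a toy numeric state space. The states, operators, and heuristic are invented for illustration and are not part of the studies reviewed above.

# Illustrative sketch only: two classic search strategies over a toy
# one-dimensional state space. States, operators, and the heuristic
# below are invented for demonstration.

def hill_climb(start, goal, operators, heuristic, max_steps=100):
    """Greedy search: always move to the neighbor that looks closest to
    the goal. Without a panoramic view of the space, it can stall at a
    local optimum that is not the goal."""
    state = start
    for _ in range(max_steps):
        if state == goal:
            return state
        best = min((op(state) for op in operators),
                   key=lambda s: heuristic(s, goal))
        if heuristic(best, goal) >= heuristic(state, goal):
            return state  # no neighbor looks better: stuck
        state = best
    return state

def means_ends(start, goal):
    """Subgoaling: trace an intermediate state between start and goal,
    then solve each half with relative independence, subdividing further
    when a subgoal cannot be reached in one step."""
    if abs(goal - start) <= 1:
        return [start, goal] if start != goal else [start]
    subgoal = (start + goal) // 2
    first = means_ends(start, subgoal)
    second = means_ends(subgoal, goal)
    return first + second[1:]  # join the two half-solutions at the subgoal

if __name__ == "__main__":
    operators = [lambda s: s + 1, lambda s: s - 1]  # hypothetical moves
    distance = lambda s, g: abs(g - s)              # distance-to-goal heuristic
    print(hill_climb(0, 8, operators, distance))    # -> 8
    print(means_ends(0, 8))                         # -> [0, 1, 2, ..., 8]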

Decision Making

Decision making relates to one's ability to reach a decision even with incomplete evidence. A decision is reached by weighing the relative probabilities of competing hypotheses. Decision-making theory developed from two different sources: the theory of expected utility (EU), or subjective expected utility (SEU), and Bayes' theorem. The rationale for SEU is that "in making decisions one should maximize one's gain, which is calculated as the ratio of chance taken by amount of payoff" (Patel, Kaufman, & Arocha, 2002). Bayes' theorem, on the other hand, "prescribes how people should take account of new information and how they should respond to incomplete information" (Simon et al., 1986). "Bayes' theorem is a relation among conditional and marginal probabilities. It can be viewed as a means of incorporating information, from an observation, for example, to produce a modified or updated probability distribution" (Free Software Foundation, 2001).

The decision-making perspective focuses on how the decisions made by individuals deviate from a normative standard. These deviations are attributed to biases or violations of consistency whereby individuals selectively attend to some variables and ignore others (Patel, Kaufman, & Arocha, 2002). Owing to these biases, some probabilities are overestimated while others are underestimated.

Decision support systems are information systems specifically developed to support decision making. These systems are usually based on Bayesian networks or a similar probabilistic model (a minimal numeric sketch of this kind of Bayesian updating appears at the end of this section). "In a typical [clinical] decision support system, an explicit representation of medical knowledge is applied to the specific circumstances of a case to provide advice to clinicians regarding the diagnosis or management of that case" (Friedman, Elstein, Wolf, Murphy, Franz, et al., 1999). Once again we will restrict our discussion to systems developed for clinical purposes.

Published studies show mixed results regarding the effects of decision support systems on clinical performance and patient outcomes. A review of controlled clinical trials assessing the effects of decision support systems (Hunt, Haynes, Hanna, & Smith, 1998) showed that such systems "can enhance clinical performance for drug dosing, preventive care, and other aspects of medical care, but not convincingly for diagnosis". It also showed that there is still a lack of evidence regarding the effects of decision support systems on patient outcomes. A later article by Friedman and colleagues (Friedman et al., 1999) published the results of a study conducted over four years and involving 216 participants distributed among three academic medical centers. The study involved the use of two decision support systems and assessed, among other measures, the diagnostic accuracy of physicians before and after use of the selected systems. There were significant differences between the two conditions: the correct diagnosis appeared in participants' hypothesis lists in 39.5% of the cases before the consultation and in 45.4% after. Larger effects were observed among students (P = .048) than among residents or faculty. The authors concluded that, given these results, there is a potential educational role for these systems.
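As a purely illustrative example of the Bayesian updating that underlies many decision support systems, the sketch below revises the probabilities of three hypothetical diagnoses after a positive test result. The prior probabilities and test likelihoods are invented for demonstration, not clinical data.

# Illustrative sketch only: Bayesian updating over competing diagnostic
# hypotheses. Priors and likelihoods below are invented, not clinical data.

def bayes_update(priors, likelihoods):
    """Return the posterior P(H | evidence) for each hypothesis H.

    priors      -- dict mapping hypothesis -> P(H)
    likelihoods -- dict mapping hypothesis -> P(evidence | H)
    """
    # Marginal probability of the evidence: sum over all hypotheses.
    marginal = sum(priors[h] * likelihoods[h] for h in priors)
    # Bayes' theorem: P(H | e) = P(e | H) * P(H) / P(e)
    return {h: priors[h] * likelihoods[h] / marginal for h in priors}

if __name__ == "__main__":
    priors = {"disease_A": 0.70, "disease_B": 0.25, "disease_C": 0.05}
    # P(positive lab test | disease), hypothetical values:
    likelihoods = {"disease_A": 0.10, "disease_B": 0.60, "disease_C": 0.90}
    posteriors = bayes_update(priors, likelihoods)
    for h, p in sorted(posteriors.items(), key=lambda kv: -kv[1]):
        print(f"{h}: {p:.3f}")
    # The positive result shifts belief from disease_A toward disease_B,
    # mirroring how new lab results can reorder a physician's hypotheses.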

Case-Based Reasoning

The case-based reasoning perspective posits that reasoning is primarily based on remembering and reapplying the lessons of prior episodes. In case-based reasoning, one solves a new problem by (1) retrieving relevant episodes from long-term memory; (2) establishing the correspondence between the new and the old problem; and (3) adapting the solutions of the prior episode to the current one (Leake, 1998). In addition, case-based reasoning theory assumes that learning occurs as a byproduct of this process, as the knowledge gained from solving the new problem is stored for future use. (An illustrative sketch of this retrieve-adapt-store cycle appears at the end of this section.) This description bears a close resemblance to the mechanisms of analogical reasoning; Reimann and Schult (1996) in fact argue that case-based reasoning is equivalent to analogical reasoning when the analogies refer back to prior episodes from the same domain as the problem being solved. Case-based reasoning can also be equated with what some researchers call a pattern-matching or pattern-recognition approach: discerning patterns in the current situation that are then used to search long-term memory for phenomena with similar patterns. Leake (1998) makes the following argument regarding the advantages of case-based reasoning over rule-based models of reasoning:

Reasoning from prior episodes has a number of functional benefits compared to traditional rule-based models of reasoning. First, it helps to generate effective solutions in situations whose causal structure is not completely understood. Rule-based models assume that conclusions are drawn by chaining together generalized rules, and that learning involves deriving and storing new generalized rules for future use. However, real-world events are often too complex and too imperfectly understood to immediately distill them into generalized rules. The case-based reasoning process avoids this problem by deriving conclusions only when they are needed, directly from the episodes themselves, in the context of the new situation. (p. 467)

Most of the literature on case-based systems is either descriptive or prescriptive in nature. Some studies have been conducted on the performance of case-based systems themselves: Bareiss (1989), for example, compared the performance of a case-based system to that of clinicians and students in classifying hearing disorders. However, we were not able to find any study that assessed the effects of case-based systems on their users' problem-solving or decision-making performance.
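The sketch below illustrates the retrieve-adapt-store cycle described above with a minimal nearest-neighbor case library. The case representation and the similarity measure are our own illustrative assumptions, not drawn from Bareiss's system or any other system discussed in this paper.

# Illustrative sketch only: a minimal case-based reasoning cycle.
# Cases, features, and the similarity measure are invented for demonstration.

def similarity(case_a, case_b):
    """Count shared features: a crude stand-in for pattern matching."""
    return len(case_a["features"] & case_b["features"])

def solve_by_cbr(new_case, library):
    # (1) Retrieve the prior episode most similar to the new problem.
    best = max(library, key=lambda c: similarity(c, new_case))
    # (2) Establish correspondence and (3) adapt the prior solution.
    solution = best["solution"]  # trivial adaptation: reuse as-is
    # Learning as a byproduct: store the solved case for future use.
    library.append({"features": new_case["features"], "solution": solution})
    return solution

if __name__ == "__main__":
    library = [
        {"features": {"fever", "cough"}, "solution": "treatment_X"},
        {"features": {"rash", "itching"}, "solution": "treatment_Y"},
    ]
    new_case = {"features": {"fever", "cough", "fatigue"}}
    print(solve_by_cbr(new_case, library))  # -> treatment_X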

Implications for Future Research

Each of the R&D approaches described above has its own limitations. Next, we discuss these limitations and their implications for future research on database-assisted problem solving and decision making.

The studies on database-assisted problem solving cited in this paper measured the effects of database use through the diagnostic accuracy of medical students. This type of approach provides only a partial view of the nature of those effects. In exactly which ways does the use of an information system improve diagnostic accuracy? To answer this question, we must take a closer look at how medical students reason about a problem and discriminate which parts of that reasoning process are affected by the use of the information system. A few studies have produced a detailed mapping of the diagnostic reasoning process, such as the one by Patel and Groen (1991); however, none of those studies included the use of an information system. The same criticism can be made of the research on decision support systems. Even though research on decision making focuses on outcomes rather than on process, we cannot ignore the reasoning that produces those outcomes. We still need a detailed mapping of the process used to reach a decision (which is specific to each domain) in order to provide support to that process.

One practical way to break down the reasoning process in any of the three approaches is to discriminate how people generate and test hypothetical solutions to a problem. The recursive process of hypothesis generation and testing is a key feature of reasoning in both the social and the natural sciences. These two complementary processes are very different in nature, and their relative weights vary by domain and task. Take the example of clinical diagnosis: after getting acquainted with a patient case, physicians generate a list of hypotheses that are tested through further examinations and laboratory tests. If the correct hypothesis is not on the initial list, the hypothesis-testing process will not produce a diagnosis (or, worse, it will produce a wrong one). Moreover, if all existing hypotheses are disconfirmed, the process of hypothesis testing itself will seldom generate alternative hypotheses. This demonstrates how complementary yet independent the processes of hypothesis generation and hypothesis testing are (a minimal sketch of this point appears at the end of this section). Research on medical expertise has shown that experienced physicians almost always include the correct hypothesis in their initial hypothesis list, whereas medical students almost always do not (Faremo, 2004). These findings have important implications for medical education in particular and for science education (social or natural) in general.

This recursive process of hypothesis generation and testing is complemented by a third process: information seeking. When trying to solve a problem, people often must seek additional information, and the nature of the information they need differs depending on whether they need support to generate or to test a hypothesis. Medical databases, whether bibliographic (such as Medline) or full-text (such as Harrison's Online), can offer reasonable support to hypothesis testing. However, they are of little help if one does not have any working hypothesis.
Decision support systems, on the other hand, can offer some level of support to hypothesis generation. Nonetheless, decision support systems were conceived to suggest diagnoses, not hypotheses. The difference between suggesting diagnoses and suggesting hypotheses is that the first aims for precision while the second aims for breadth: when generating a list of hypotheses, one wants to include every possibility, whereas when generating a list of possible diagnoses, one wants to restrict the list to the most likely possibilities. The goals of the latter are far more ambitious. Because they attempt to accomplish so much, decision support systems become very complex, and increasing complexity tends to make a system less user-friendly. The less user-friendly a system, the less likely it is to be adopted by its intended audience. So far, decision support systems have not been incorporated into medical practice or medical education in very significant ways. We believe that a simpler tool, such as a hypothesis generation system, may turn out to be much more useful to the diagnostic reasoning process because it tackles the problem in a more specific and more tangible manner. We have just started a new study on the effects of a hypothesis generation tool on medical students' clinical reasoning and should have results within the next year.

The hypothesis generation and testing approach is not new: it has been used by Mengel and Fields (1997) as an instructional model of clinical diagnosis and by Green and Trotman (2001) to assess the reasoning process in auditing. However, it has not yet been used to guide the design of information systems. We believe this kind of approach can make a significant contribution to the R&D of database-assisted problem solving and decision making.

The reasoning process required to solve a problem or make a decision is not homogeneous: it involves different stages that impose different kinds of cognitive demands. We have argued that the systems discussed in this paper provide support to either hypothesis generation or hypothesis testing; further, because they were not designed specifically to support one or the other, their usefulness is limited. We argue that research on problem solving and decision making in any domain needs to take a closer look at the different stages involved in those processes in order to better understand the specific cognitive demands of each stage. Once we have that understanding, we will be more likely to develop information systems that provide optimal support to problem solving and decision making.
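To make the independence of the two processes concrete, the sketch below separates hypothesis generation from hypothesis testing in a toy diagnostic loop. The hypotheses, findings, and matching rules are invented for illustration; the point is that if the generation step omits the correct hypothesis, no amount of testing can recover it, which is the gap a dedicated hypothesis generation tool would address.

# Illustrative sketch only: hypothesis generation and testing as separate,
# complementary processes. All hypotheses and findings are invented.

def generate_hypotheses(initial_findings, knowledge_base):
    """Generation aims for breadth: include every hypothesis consistent
    with at least one initial finding."""
    return [h for h, expected in knowledge_base.items()
            if initial_findings & expected]

def test_hypotheses(hypotheses, all_findings, knowledge_base):
    """Testing aims for precision: keep only hypotheses whose expected
    findings are fully confirmed. Testing cannot add new hypotheses."""
    return [h for h in hypotheses
            if knowledge_base[h] <= all_findings]

if __name__ == "__main__":
    knowledge_base = {
        "disease_A": {"fever", "cough"},
        "disease_B": {"fever", "rash"},
        "disease_C": {"fatigue", "rash"},
    }
    initial = {"fever"}
    confirmed = {"fever", "rash"}          # findings after further tests
    candidates = generate_hypotheses(initial, knowledge_base)
    print(candidates)                      # -> ['disease_A', 'disease_B']
    print(test_hypotheses(candidates, confirmed, knowledge_base))
    # -> ['disease_B']; disease_C is never considered because it was
    # missing from the initial list -- testing alone cannot recover it.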

References

Bareiss, R. (1989). The experimental evaluation of a case-based learning apprentice. In Proceedings of a Workshop on Case-Based Reasoning (pp. 162-167). San Mateo, CA: Morgan Kaufmann.

Berner, E. S., Webster, G. D., Shugerman, A. A., Jackson, J. R., Algina, J., Baker, A. L., Ball, E. V., Cobbs, C. G., Dennis, V. W., Frenkel, E. P., Hudson, L. D., Mancall, E. L., Rackley, C. E., & Taunton, O. D. (1994). Performance of four computer-based diagnostic systems. The New England Journal of Medicine, 330(25), 1792-1796.

Dunbar, K. (1998). Problem solving. In W. Bechtel & G. Graham (Eds.), A companion to cognitive science (pp. 289-298). Malden, MA: Blackwell.

Faremo, S. (2004). Medical problem solving and post-problem reflection in BioWorld. Unpublished doctoral dissertation, McGill University, Montreal.

Free Software Foundation. (2001). Wikipedia. Retrieved June 6, 2005, from http://en.wikipedia.org/wiki/Bayes%27_theorem

Friedman, C. P., Elstein, A. S., Wolf, F. M., Murphy, G. C., Franz, T. M., et al. (1999). Enhancement of clinicians' diagnostic reasoning by computer-based consultation: A multisite study of 2 systems. Journal of the American Medical Association, 282(19), 1851-1856.

Green, W., & Trotman, K. (2001). An examination of different performance outcomes in an analytical procedures task. Retrieved September 1, 2005, from The University of New South Wales, School of Accounting Web site: http://www2.accounting.unsw.edu.au/nps/servlet/portalservice?GI_ID=System.LoggedOutInheritableArea&maxWnd=_Research_WorkingPaperSeries_2001#5

Hunt, D. L., Haynes, R. B., Hanna, S. E., & Smith, K. (1998). Effects of computer-based clinical decision support systems on physician performance and patient outcomes. JAMA, 280(15), 1339-1346.

Lajoie, S. P. (Ed.). (2000). Computers as cognitive tools (Vol. 2): No more walls. Mahwah, NJ: Erlbaum.

Lajoie, S. P., & Derry, S. J. (Eds.). (1993). Computers as cognitive tools. Hillsdale, NJ: Erlbaum.

Lave, J., & Wenger, E. (1991). Situated learning: Legitimate peripheral participation. Cambridge, UK: Cambridge University Press.

Leake, D. B. (1998). Case-based reasoning. In W. Bechtel & G. Graham (Eds.), A companion to cognitive science (pp. 465-476). Malden, MA: Blackwell.

Mengel, M. B., & Fields, S. A. (1997). Introduction to clinical skills: A patient-centered textbook. New York: Plenum Medical Book Company.

Newell, A., & Simon, H. A. (1972). Human problem solving. Englewood Cliffs, NJ: Prentice-Hall.

Patel, V. L., & Groen, G. J. (1991). The general and specific nature of medical expertise: A critical look. In K. A. Ericsson & J. Smith (Eds.), Toward a general theory of expertise: Prospects and limits (pp. 93-125). Cambridge: Cambridge University Press.

Patel, V. L., Kaufman, D. R., & Arocha, J. F. (2002). Emerging paradigms of cognition in medical decision making. Journal of Biomedical Informatics, 35(1), 52-75.

Reimann, P., & Schult, Th. J. (1996). Turning examples into cases: Acquiring knowledge structures for analogical problem solving. Educational Psychologist, 31, 123-132.

Simon, H. A., Dantzig, G. B., Hogarth, R., Plott, C. R., Raiffa, H., Schelling, T. C., Shepsle, K. A., Thaler, R., Tversky, A., & Winter, S. (1986). Decision making and problem solving (Report of the Research Briefing Panel on Decision Making and Problem Solving). Washington, DC: National Academy Press. Retrieved June 9, 2005, from http://dieoff.org/page163.htm

Wildemuth, B. M., Friedman, C. P., Keyes, J., & Downs, S. M. (2000). A longitudinal study of database-assisted problem solving. Information Processing and Management, 36, 445-459.

Wildemuth, B. M., de Bliek, R., Friedman, C. P., & File, D. D. (1995). Medical students' personal knowledge, searching proficiency, and database use in problem solving. Journal of the American Society for Information Science, 46(8), 590-607.