PrimaVera Working Paper Series

PrimaVera Working Paper 2005-07

An Information Perspective of Organizational Disasters
Chun Wei Choo

March 2005

Category: academic

University of Amsterdam
Department of Information Management
Roetersstraat 11, 1018 WB Amsterdam
http://primavera.fee.uva.nl

Copyright ©2005 by the Universiteit van Amsterdam. All rights reserved. No part of this article may be reproduced or utilized in any form or by any means, electronic or mechanical, including photocopying, recording, or by any information storage and retrieval system, without permission in writing from the authors.

An Information Perspective of Organizational Disasters

Prof. Dr. Chun Wei Choo
Faculty of Information Studies, University of Toronto
[email protected], http://choo.fis.utoronto.ca

Abstract: A common initial reaction by managers is that most organizational disasters are caused by human error. While human error may be the event that precipitates an accident, organizational failures always have multiple causes, and focusing only on human error misses the systemic contexts in which the failure occurred and can happen again in the future. From an information perspective, most organizational disasters can be prevented. Organizational disasters incubate over long gestation periods during which errors and warning signals build up. While these signals become painfully clear in hindsight, the challenge for organizations is to develop the capability to recognize and treat these precursor conditions before they spiral into failure. In this paper, we look at theories that explain why disasters happen, and highlight information practices that can reduce the risk of catastrophic failure.

An Information Perspective of Organizational Disasters

From an information perspective, most organizational disasters can be prevented. Organizational disasters develop gradually over time, producing many warning signals. Unfortunately, there are information barriers that keep us from recognizing these warnings. Organizations can increase their vigilance by developing attentive information practices that surmount these barriers.

In September 2004, Merck initiated the largest prescription-drug withdrawal in history. After more than 80 million patients had taken Vioxx for arthritis pain since 1999, the company withdrew the drug because of an excessive risk of heart attack and stroke. As early as 2000, the New England Journal of Medicine had published the results of a Merck trial which showed that patients taking Vioxx were four times as likely to have a heart attack or stroke as patients taking naproxen, a competing drug. Dr Edward Scolnick, then Merck's chief scientist, had e-mailed colleagues lamenting that the cardiovascular risks with Vioxx "are clearly there." Merck argued that the difference was due to the protective effects of naproxen and not to any danger from its own drug. In 2001, the Journal of the American Medical Association published a study by Cleveland Clinic researchers which found that the "available data raise a cautionary flag about the risk of cardiovascular events" with Vioxx and other cox-2 inhibitors (Topol 2004). Merck did not answer the call for more studies to be done. In 2003, Circulation, a journal of the American Heart Association, published a study showing that Vioxx was associated with a higher risk of heart attacks. Merck had participated in the study but then removed the name of its co-author. In 2004, Merck was testing whether Vioxx could also prevent a recurrence of polyps in the colon. An external panel overseeing the clinical trial recommended stopping the trial because patients on the drug were twice as likely to have a heart attack or stroke as those on a placebo. Merck finally decided to withdraw Vioxx, four years after the introduction of the blockbuster drug.

Most organizational disasters incubate over long gestation periods during which errors and warning signals build up. While these signals become painfully clear in hindsight, the challenge for organizations is to develop the capability to recognize and treat these precursor conditions before they spiral into failure. In this paper, we look at theories that explain why disasters happen, and highlight information practices that can reduce the risk of catastrophic failure.

Theories of Organizational Disasters

Why do catastrophic accidents happen in organizations? A common initial reaction by managers is that most accidents are caused by human error. While human error may be the event that precipitates an accident, organizational accidents always have multiple causes, and focusing only on human error misses the systemic contexts in which the accident occurred and can happen again in the future (Reason 1997). In fact, Perrow's Normal Accident Theory (1999) maintains that accidents and disasters are inevitable in complex, tightly-coupled technological systems, such as nuclear power plants. In his theory, complex systems show unexpected interactions between independent failures, and tight coupling between subsystems propagates and escalates initial failures into a general breakdown. It is this combination of complexity and tight coupling that makes accidents inevitable or "normal."

Accidents also occur in non-complex, low-technology work settings. Rasmussen (1997) and Vicente (2003) argue that accidents can happen when work practices migrate beyond the boundary of safe and acceptable performance. Over time, work practices drift or migrate under the influence of two sets of forces. The first is the desire to complete work with a minimum expenditure of mental and physical energy, which moves work practices towards least effort. The second is management pressure, which moves work practices towards cost reduction or a minimum expenditure of resources. The combined effect is that work practices drift towards, and perhaps beyond, the boundary of safety.

Can organizational disasters be foreseen? The surprising answer is yes. According to the seminal work of Turner and Pidgeon (1997), organizational disasters are neither chance events nor 'acts of God' but "failures of foresight." Turner analyzed 84 official accident reports published by the British Government over an eleven-year period. He discovered that disasters develop over long incubation periods during which warning signals fail to be noticed "because of erroneous assumptions on the part of those who might have noticed them; because there were information handling difficulties; because of a cultural lag in precautions; or because those concerned were reluctant to take notice of events which signaled a disastrous outcome" (Turner and Pidgeon 1997, pp. 85-86). Three types of information problems are important. We look at each with a recent example.

1. Signals are not seen as warnings because they are consistent with organizational beliefs and aspirations. The Congressional Subcommittee investigating the fall of Enron concluded that "there were more than a dozen red flags that should have caused the Enron Board to ask hard questions, examine Enron policies, and consider changing course. Those red flags were not heeded." (US Senate Report 2002, p. 59) The investigation found that the Board received substantial information about Enron's activities and explicitly authorized many of the improper transactions. During the 1990s, Enron had created an online trading business that bought and sold contracts for energy products. Enron believed that to succeed it would need to access significant lines of credit to settle its contracts daily, and to reduce the large quarterly earnings fluctuations which affected its credit ratings. To address these financial needs, Enron developed a number of practices, including "prepays," an "asset light" strategy, and the "monetizing" of its assets. Because it was hard to find parties willing to invest in Enron assets and bear the significant risks involved, Enron began to sell or syndicate its assets, not to independent third parties, but to "unconsolidated affiliates." These were entities that were not on Enron's financial statements but were so closely associated that their assets were considered part of Enron's own holdings. When warning signals appeared about these questionable methods, Board members were not worried: they saw these practices as part of the way of doing business at Enron. In the end, the Board knowingly allowed Enron to move at least $27 billion, or almost half of its assets, off its balance sheet.

2. Warning signals are noticed but those concerned do not act on them. In February 1995, one of England's oldest merchant banks was bankrupted by $1 billion of unauthorized trading losses. The Bank of England report into the collapse of Barings Bank concluded that "a number of warning signs were present" but that "individuals in a number of different departments failed to face up to, or follow up on, identified problems" (sec. 13.12). In mid-1994, an internal audit of BFS (Baring Futures Singapore) reported as unsatisfactory that Nick Leeson was in charge of both the front office and the back office at BFS, and recommended a separation of the two roles. This report was regarded as important and was seen by the CEO of the Baring Investment Bank Group, the Group Finance Director, the Director of Group Treasury and Risk, and the Chief Operating Officer. Yet by February 1995 nothing had been done to segregate duties at BFS. In January 1995, SIMEX (Singapore International Monetary Exchange) sent two important letters to BFS about a possible violation of SIMEX rules and the ability of BFS to fund its margin calls. There were no follow-up investigations into these concerns. During all this time, Barings in London continued to fund the trading of BFS: significant funds were remitted regularly to BFS without knowing how they were being applied. Senior management continued to act on these requests without question, even as the level of funding increased and the lack of information persisted. Trading losses ballooned quickly and the insolvent bank was sold to the Dutch bank ING for one pound sterling in March 1995.

3. Different groups have partial information and interpretations, and no one has a view of the situation as a whole. In August 2000, Bridgestone/Firestone announced a recall of more than 6.5 million tires, mostly mounted on Ford Explorers, because of accidents caused by tire treads separating from the tire cores. Ford had followed industry practice in allowing its tire supplier Firestone to choose how it would meet performance and tire safety specifications. Ford and other car makers also did not keep safety records on their tires. In mid-1998, an insurance firm informed the National Highway Traffic Safety Administration (NHTSA) of a pattern of tread separation in Firestone ATX tires. Later that year, in Venezuela, Ford noted problems of tread separation in Firestone tires on Explorers, and sent failed tires to Firestone for analysis. In 1999, Ford replaced tires on Explorers sold in Saudi Arabia affected by tread-separation problems as a "customer notification enhancement action." Ford and Firestone began to blame each other as outside safety concerns about Explorer tires intensified. In May 2000, NHTSA launched a formal investigation into alleged tread separation on Firestone tires. Three months later Firestone announced the recall. Subsequent analysis determined that the tire tread separations were caused by several factors acting in combination: the tread design, manufacturing practices at a Firestone plant, the low tire inflation pressure recommended by Ford, and overloading of the vehicle. A Ford spokesman noted that "in the interest of public safety, some more sharing of information between the tire and auto makers may be desirable." (Aeppel et al 2000)

How can organizations guard against catastrophic failures? There are strategies that an organization can adopt to raise its information vigilance (Choo, in press; MacIntosh-Murray and Choo, in press).

At the individual level, people in organizations can increase their information alertness by being aware of the biases in the ways that we use information to make judgments. For example, Kahneman and Tversky (2000) describe how the way a situation is framed can affect our perception of risk. Imagine that two programs (A, B) are available to fight a disease that will kill 600 persons. In A, 200 persons are saved; in B there is a 33% chance of saving everyone and a 67% chance that no one is saved. Most people would choose the certainty of A. The same problem is then framed differently. In program C, 400 persons die. In program D there is a 33% chance that nobody dies and a 67% chance that everyone dies. Most people now choose D, the riskier option. Thus, people facing losses become risk takers: they gamble in the hope of reducing the loss.
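To make the symmetry of the two framings concrete, the short Python sketch below computes the expected number of survivors under each program. It is illustrative only; the program labels and probabilities simply follow the example above. Programs A and C, and programs B and D, describe identical outcomes, so the typical preference reversal is driven entirely by whether the outcomes are worded as lives saved or lives lost.

```python
# Illustrative sketch of the framing example above (Kahneman and Tversky 2000).
# Each outcome is a (probability, number of survivors out of 600) pair.

def expected_survivors(outcomes):
    """Expected number of survivors for a program."""
    return sum(p * survivors for p, survivors in outcomes)

programs = {
    # "Gain" frame: described in terms of lives saved.
    "A": [(1.00, 200)],              # 200 people are saved for certain
    "B": [(0.33, 600), (0.67, 0)],   # 33% chance all are saved, 67% chance none are
    # "Loss" frame: described in terms of lives lost.
    "C": [(1.00, 200)],              # 400 people die for certain (200 survive)
    "D": [(0.33, 600), (0.67, 0)],   # 33% chance nobody dies, 67% chance everyone dies
}

for name, outcomes in programs.items():
    print(name, round(expected_survivors(outcomes)))

# A and C both yield 200 expected survivors; B and D both yield about 198
# (exactly 200 if the probabilities were 1/3 and 2/3). The reversal of
# preference between the two frames is therefore a framing effect, not a
# difference in the underlying outcomes.
```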

Research has identified many other information biases that are common among business executives: we prefer information that confirms our actions and abilities; we feel over-confident about our judgments; we over-rely on stereotypes; and we tend to be unrealistically optimistic (Lovallo and Kahneman 2003, Gilovich et al 2002). We can offset these tendencies by using different frames of reference to look at a problem, imagining improbable or unpopular outcomes, and balancing optimism with realism. For example, in crisis decision making, we might generate multiple views of the future rather than stick to one scenario; we might pretend that the crisis has ended badly, and then imagine the reasons why; and we might examine the experience of others in order to arrive at a more realistic estimate of our own chances of success.

When a course of action has gone very wrong, and objective information indicates that withdrawal is necessary to avoid further losses, many managers decide to persist, often pouring in more resources (Ross and Staw, 1993). Although past decisions are sunk costs that are irrecoverable (Arkes and Blumer, 1985), they still weigh heavily in our minds, mainly because we do not want to admit error to ourselves, much less expose our mistakes to others. If facts challenge a project's viability, we find reasons to discredit the information. If the information is ambiguous, we select favorable facts that support the project. Culturally, we associate persistence with strong leaders who stay the course, and view withdrawal as a sign of weakness. How can managers know if they have crossed the line between determination and over-commitment? Staw and Ross (1987) suggest asking these questions: Do I have trouble defining what would constitute failure for this decision? Would failure in this project radically change the way I think of myself as a manager? If I took over this job for the first time today and found this project going on, would I want to get rid of it?

At the group level, we are concerned with how a group's ability to make risky decisions may be compromised by groupthink and group polarization. Groupthink occurs when members hide or discount information in order to preserve group cohesiveness (Janis, 1982). The group overestimates its ability and morality, closes its mind to contradicting information, and applies pressure to maintain conformity. Recently, groupthink was identified as a cause of the erroneous intelligence assessment on WMD in Iraq. The US Senate Select Committee Report (2004) found that intelligence community personnel "demonstrated several aspects of groupthink: examining few alternatives, selective gathering of information, pressure to conform with the group or withhold criticism, and collective rationalization." (p. 18) Groupthink can be prevented. The same team of President Kennedy and his advisors that launched the disastrous Bay of Pigs invasion subsequently handled the 1962 Cuban Missile Crisis effectively, creating a model of crisis management.

Group polarization (Stoner 1968) happens when a group collectively makes a decision that is riskier than what each member would have made on their own. Sunstein (2003) explains group polarization as the combined effect of informational cascades and reputational cascades. Informational cascades (Bikhchandani et al, 1992) arise because most of what we know comes not from firsthand knowledge but from what we learn from what others do and think. If we see that a large number of people believe that some proposition is true, we think there is reason to believe that the proposition is in fact true. We then make a decision based on the observation of what others have done, even if our own information points the other way. Reputational cascades refer to the desire to have the good opinion of others. If a number of people seem to believe something, there is an incentive not to disagree with them, at least not in public. The desire to maintain the good opinion of others breeds conformity and silences dissent, especially in closely-knit groups.
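To show the mechanics of an informational cascade, here is a small, hypothetical Python simulation in the spirit of the Bikhchandani et al. (1992) model; it is a simplified counting version for illustration, not the authors' original formulation, and all parameter names and values are assumptions. Each agent receives a noisy private signal about the true state but also observes the choices made by everyone before it; once the observed choices lean strongly enough one way, later agents rationally ignore their own signals, and the whole group can lock onto an answer, sometimes the wrong one.

```python
import random

# Simplified sketch of an informational cascade (in the spirit of
# Bikhchandani, Hirshleifer and Welch, 1992). The true state is +1 or -1;
# each agent gets a private signal that is correct with probability
# p_correct, observes all earlier choices, and follows the balance of the
# evidence it can see (earlier choices plus its own signal).

def run_cascade(n_agents=20, p_correct=0.7, true_state=1, seed=None):
    rng = random.Random(seed)
    choices = []
    for _ in range(n_agents):
        # Private signal: equals the true state with probability p_correct.
        signal = true_state if rng.random() < p_correct else -true_state
        lean = sum(choices)       # net direction of the earlier choices
        total = lean + signal     # combine observed choices with own signal
        if total > 0:
            choice = 1
        elif total < 0:
            choice = -1
        else:
            choice = signal       # tie: fall back on own signal
        choices.append(choice)
    return choices

if __name__ == "__main__":
    print(run_cascade(seed=42))
    # Once the running total of earlier choices reaches +2 or -2, an agent's
    # private signal can no longer change its decision: everyone after that
    # point imitates the crowd. With an unlucky early run of wrong signals,
    # the group can settle on the wrong answer even though most private
    # signals, pooled openly, would have pointed the right way.
```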

Groupthink and group polarization can be controlled. To overcome conformity tendencies, the leader should create a group environment that encourages the frank exchange of dissimilar views. The leader should be impartial and avoid stating preferences at the outset. To counter closed-mindedness, the group should actively seek information from outside experts, including those who can challenge the group's core views. The group could divide into multiple subgroups that work on the same problem with different assumptions. A member could play the role of devil's advocate, looking out for missing information, doubtful assumptions, and flawed reasoning.

At the organizational level, organizations need to cultivate an information culture that can not only recognize and respond to unexpected warning signals, but also enable the organization to contain or recover from incipient errors. Research on "high reliability organizations" (such as nuclear aircraft carriers and hospital emergency departments that do risky work but remain relatively accident-free) reveals that these organizations depend on a culture of 'collective mindfulness' (Weick and Sutcliffe 2001, Roberts and Bea 2001, La Porte 1996). Such organizations observe five information priorities. They are preoccupied with the possibility of failure, and they do what they can to avoid it: they encourage error reporting, analyze experiences of near misses, and resist complacency. They recognize that the world is complex, and rather than accepting simplified interpretations, they seek a more complete and nuanced picture of what is happening. They are attentive to operations at the front line, so that they can notice anomalies early, while they are still tractable and can be isolated. They develop capabilities to detect, contain, and bounce back from errors, and so create a commitment to resilience. They push decision-making authority to the people with the most expertise, regardless of their rank.

Ultimately, a vigilant information culture is a continuing set of conversations and reflections about safety and risk that is backed up by the requisite imagination and political will to act (Pidgeon 1998, Westrum 1992, Bazerman and Watkins 2004). In today's environment, an organization's focus on vigilance and safety is constantly being eroded by the forces of competitive pressure and economic scarcity. Perhaps the most important condition of an alert information culture is that senior management makes vigilance and safety an organizational priority (Boin and Lagadec, 2000). Where there is a fundamental understanding that serious errors are a realistic threat, there is the resolve to search for, and then treat, the precursor conditions (Flin 1998, Rosenthal et al 2001). An alert information culture requires an ongoing willingness to set aside existing assumptions about failures and safety, to scan for unintended consequences, to seek out and listen seriously to perspectives which depart from administratively and politically defined norms, and the curiosity to treat information that does not fit expectations as interesting and potentially critical. Reason (1997) has pointed out that constant unease is the price of vigilance, or as Andrew Grove of Intel so famously observed, "only the paranoid survive."
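The first of the five priorities, encouraging error reporting and analyzing near misses, lends itself to a simple information practice: pooling incident reports and watching for precursor conditions that recur. The hypothetical Python sketch below is only an illustration of that idea; the report fields, values, and threshold are assumptions, not drawn from the paper or from any specific organization.

```python
from collections import Counter

# Hypothetical near-miss reports. In a real organization these would come
# from an incident-reporting system; the fields and values here are
# illustrative only.
near_miss_reports = [
    {"unit": "night shift", "precursor": "checklist skipped under time pressure"},
    {"unit": "day shift",   "precursor": "alarm silenced without follow-up"},
    {"unit": "night shift", "precursor": "checklist skipped under time pressure"},
    {"unit": "maintenance", "precursor": "work-around for faulty sensor"},
    {"unit": "day shift",   "precursor": "checklist skipped under time pressure"},
]

def recurring_precursors(reports, threshold=2):
    """Return precursor conditions reported at least `threshold` times."""
    counts = Counter(report["precursor"] for report in reports)
    return {precursor: n for precursor, n in counts.items() if n >= threshold}

if __name__ == "__main__":
    for precursor, n in recurring_precursors(near_miss_reports).items():
        # Recurring precursors are exactly the weak signals that tend to be
        # dismissed one report at a time but become visible in aggregate.
        print(f"{n}x  {precursor}")
```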

References

Aeppel, T., Ansberry, C., Geyelin, M., & Simison, R. L. (2000). Road Signs: How Ford, Firestone Let the Warnings Slide By As Debacle Developed. Wall Street Journal, Sep 6, 2000, p. A1.
A detailed analysis of the history of events that led to the Firestone tire recall.

Arkes, H. R., & Blumer, C. (1985). The Psychology of Sunk Cost. Organizational Behavior and Human Decision Processes, 35(1), 124-140.
Classic paper on how past investments of time and effort affect decision making.

Bank of England Board of Banking Supervision. (1995). Report of the Board of Banking Supervision Inquiry into the Circumstances of the Collapse of Barings. London, UK: Bank of England.
Formal investigation of the bankruptcy of Barings Bank.

Bazerman, M. H., & Watkins, M. D. (2004). Predictable Surprises: The Disasters You Should Have Seen Coming, and How to Prevent Them. Boston, MA: Harvard Business School Press.
Discusses cognitive, organizational and political biases that make it hard to recognize potential disasters.

Boin, A., & Lagadec, P. (2000). Preparing for the Future: Critical Challenges in Crisis Management. Journal of Contingencies & Crisis Management, 8(4), 185-191.
Looks at how the idea of crisis and crisis management has changed in today's environment.

Choo, C. W. (In press). The Knowing Organization: How Organizations Use Information to Construct Meaning, Create Knowledge, and Make Decisions (2nd ed.). New York: Oxford University Press.
Examines how organizations use (and fail to use) information in the context of organizational learning.

Flin, R. (1998). Safety Condition Monitoring: Lessons from Man-Made Disasters. Journal of Contingencies & Crisis Management, 6(2), 88-102.
Discusses the monitoring of disaster pre-conditions.

Gilovich, T., Griffin, D., & Kahneman, D. (Eds.). (2002). Heuristics and Biases: The Psychology of Intuitive Judgment. Cambridge, UK: Cambridge University Press.
Collection of influential research examining the role of heuristics and biases in human judgment.

Janis, I. (1982). Groupthink: Psychological Studies of Policy Decisions and Fiascoes. Boston, MA: Houghton Mifflin.
Seminal work on groupthink, with detailed case studies of major policy fiascoes.

Kahneman, D., & Tversky, A. (Eds.). (2000). Choices, Values and Frames. Cambridge, UK: Cambridge University Press.
Collection of classic research on "Prospect Theory," including framing effects on risk perception.

La Porte, T. R. (1996). High Reliability Organizations: Unlikely, Demanding and At Risk. Journal of Contingencies and Crisis Management, 4(2), 60-71.
Overview of the "High Reliability Organization" research program.

Lovallo, D., & Kahneman, D. (2003). Delusions of Success: How Optimism Undermines Executives' Decisions. Harvard Business Review, 81(7), 56-63.
Discusses the cognitive biases that lead executives to be over-confident and over-optimistic in their decision making.

MacIntosh-Murray, A., & Choo, C. W. (In press). Health Care Failures and Information Failures. To appear in B. Cronin (Ed.), Annual Review of Information Science and Technology (Vol. 40). Medford, NJ: Information Today Inc.
Survey of research on information behaviors, organizational culture, and patient safety.

Perrow, C. (1999). Normal Accidents: Living with High-Risk Technologies. Princeton, NJ: Princeton University Press.
Seminal work on "Normal Accident Theory."

Pidgeon, N. (1998). Shaking the Kaleidoscope of Disasters Research. Journal of Contingencies & Crisis Management, 6(2), 97-101.
Debates the question: Can organizational disasters be foreseen?

Rasmussen, J. (1997). Risk Management in a Dynamic Society: A Modeling Problem. Safety Science, 27(2/3), 183-213.
Classic paper presenting a socio-technical system model of accident causation.

Reason, J. T. (1997). Managing the Risks of Organizational Accidents. London, UK: Ashgate Publishing Company.
Classic work that introduces many influential concepts in disaster theory.

Roberts, K. H., & Bea, R. (2001). Must Accidents Happen? Lessons from High-Reliability Organizations. Academy of Management Executive, 15(3), 70-79.
Executive overview of high reliability organizations, with examples and advice.

Rosenthal, U., Boin, A., & Comfort, L. K. (Eds.). (2001). Managing Crises: Threats, Dilemmas, Opportunities. Springfield, IL: Charles C. Thomas, Publisher.
Collection of case studies and theoretical research on crisis management.

Ross, J., & Staw, B. M. (1993). Organizational Escalation and Exit: Lessons from the Shoreham Nuclear Power Plant. Academy of Management Journal, 36(4), 701-732.
Case study of escalation of commitment in an organization.

Staw, B. M., & Ross, J. (1987). Knowing When to Pull the Plug. Harvard Business Review, 65(2), 68-74.
Discusses escalation pitfalls in managerial decision making and how to avoid them.

Stoner, J. (1968). Risky and Cautious Shifts in Group Decisions: The Influence of Widely Held Values. Journal of Experimental Social Psychology, 4, 442-459.
Seminal paper on group polarization resulting in risky shifts in group decision making.

Sunstein, C. R. (2003). Why Societies Need Dissent. Cambridge, MA: Harvard University Press.
Recent synthesis and elaboration of the research on group polarization.

Topol, E. J. (2004). Failing the Public Health: Rofecoxib, Merck, and the FDA. New England Journal of Medicine, 351(17), 1707-1709.
Dr Topol of the Cleveland Clinic Foundation is a strong critic of the failures at Merck and the FDA that contributed to the Vioxx crisis.

Turner, B. A., & Pidgeon, N. F. (1997). Man-Made Disasters (2nd ed.). Oxford: Butterworth-Heinemann.
Seminal work on the incubation model of organizational disasters.

US Senate Report of the Permanent Subcommittee on Investigations. (2002). Report on The Role of The Board of Directors in Enron's Collapse. Washington, DC: US Government Printing Office. Report 107-70.
Formal investigation of the failures of the Enron Board in the company's collapse.

US Senate Select Committee on Intelligence. (2004). Report on the US Intelligence Community's Prewar Intelligence Assessments on Iraq. Washington, DC: US Government Printing Office.
Formal review of US intelligence on the existence of Iraq's WMD programs.

Vicente, K. (2003). The Human Factor: Revolutionizing the Way People Live with Technology. Toronto, Canada: Alfred A. Knopf Canada.
Discusses human factors engineering in relation to organizational accidents and crises.

Weick, K. E., & Sutcliffe, K. M. (2001). Managing the Unexpected: Assuring High Performance in an Age of Complexity. San Francisco: Jossey-Bass.
Explains the concept of "Collective Mindfulness" and the organizational skills needed to achieve it.

Westrum, R. (1992). Cultures with Requisite Imagination. In J. A. Wise, V. D. Hopkin & P. Stager (Eds.), Verification and Validation of Complex Systems: Human Factors Issues (pp. 401-416). Berlin: Springer-Verlag.
Discusses how different organizational cultures treat information.