J Environ Stud Sci (2014) 4:273–287 DOI 10.1007/s13412-014-0186-8

Development of risk assessment for nuclear power: insights from history John H. Perkins

Published online: 23 August 2014 # AESS 2014

Abstract Nuclear power plays an important role in the global energy economy, but its safety has been a contentious issue for over 50 years. Based on new designs of nuclear power plants, new methods of assessing risks, and calculations of cost efficiency, proponents of nuclear power see it as safe and necessary, but skeptics do not. How can people be so divided on a fundamental issue like safety? Part of the answer lies in the history of risk assessment’s invention, development, and deployment. The US Atomic Energy Commission (AEC) developed a form of risk assessment extensively used today: probabilistic risk assessment (PRA). The AEC originally wanted to strategically assure the public of nuclear power’s safety. Controversy greeted PRA’s debut, however, and the US Nuclear Regulatory Commission, AEC’s successor agency, changed PRA into a tactical tool. Scientific and ethical criticisms, political opposition to nuclear power, and accidents combined to force the transition. In contrast to PRA for nuclear power, other forms of risk assessment successfully entered the regulation of toxic chemicals. The safety of nuclear power still elicits sharp disagreements between opponents and proponents of the technology, which in turn leaves a cloud over the future of the technology.

Keywords: Nuclear power · Probabilistic risk assessment · Risk assessment · Safety · History

J. H. Perkins (*)
Member of the Faculty Emeritus, The Evergreen State College, 236 Cambridge Avenue, Kensington, CA 94708, USA
e-mail: [email protected]

Nuclear power has operated since the 1950s, and currently, it produces about 12 % of the world's electricity. Fossil fuels produce 68 % and hydropower 16 % (International Energy Agency 2013). Like many other technologies, nuclear power offers genuine benefits but poses a danger to people and the environment. Where does the balance lie? Should people accept the dangers to have the benefits? Benefits are genuine and easy to measure, usually in cents per kilowatt-hour. The dangers pose more challenges. The USA pioneered a new method, probabilistic risk assessment (PRA), in an attempt to assess the danger of nuclear power plants, but the new method sparked disagreements about its adequacy. Despite nuclear power's important roles in the US and global economies, the adequacy of its safety remains contentious. The following questions exemplify the issues of importance to environmental scholars seeking the balance between risks and benefits:

• Who developed the safety procedures and why did they do it?
• Does robust science underlie safety procedures?
• In what ways did political agendas affect the development of safety procedures?
• How do plant operators and government manage safety of nuclear power plants?
• How did economic costs interact with issues of safety in the growth of the nuclear power industry?
• How does risk analysis for nuclear power compare with risk analysis for other dangerous technologies, such as pesticides and other toxic chemicals?

This article explores these issues by examining the development of PRA for nuclear power. In 1974, the US Atomic Energy Commission (AEC) released a draft of the Reactor Safety Study, also known as the Rasmussen Report after its chairman, Norman C. Rasmussen, and as WASH-1400, its publication number. AEC's successor agency, the US Nuclear Regulatory Commission (NRC), released the final version in 1975 (U.S. Nuclear Regulatory Commission 1975a). In this paper, I will refer to it as "the Study," a massive multivolume


tome with a widely circulated Executive Summary. The Study developed PRA for calculating the probabilities of (a) melting of the fuel of a nuclear reactor, (b) release of radiation from the containment structures of a reactor, and (c) illness or death from radiation among people exposed to the radiation.

The paper argues four points. First, the AEC and congressional allies wanted the Study to show strategically that concerns about safety of nuclear power were improperly exaggerated and that safety was not a legitimate barrier to expansion of the technology. The strategic mission failed, and the NRC relegated PRA to the role of a tactical management tool for each plant individually, a diminished role that made no claim that nuclear power was safe enough for routine deployment. Second, the disputes about the Study and PRA demonstrated that subjective values were inextricably intertwined with the scientific dimensions of risk assessment and the political agendas surrounding nuclear power. Developers of PRA believed in the scientific validity of their numbers, but embedded value judgments shaped the outcomes and reception of PRA. Third, PRA dealt with different kinds of questions than risk assessment for hazardous chemicals. The latter entered legal regulatory standards, but PRA remained a set of guidelines, not legally enforceable standards. Fourth, selection of new energy technologies requires profiles of their respective strengths and weaknesses. This paper provides information relevant to building a profile of nuclear power.

Existing work on PRA's history celebrates the claims for its technical value but provides few insights into its evolution, shifting objectives, and failure to settle safety arguments (Carlisle 1997; Keller and Modarres 2005). The history of PRA recounted here highlights the contentious origins of PRA and the consequences of those debates. With safety unresolved, nuclear power will remain a controversial technology for mitigating environmental problems such as climate change and health problems, even if costs come down. We begin the story with the dawn of commercial nuclear power, the emergence of safety concerns, and the role of costs in the growth of the nuclear industry. Appendix 1 summarizes the chronology of important events.
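The chain of probabilities the Study computed can be shown in a minimal sketch. All numbers below are hypothetical placeholders rather than WASH-1400 values; the point is only the structure of the calculation, in which a core-melt probability is multiplied by conditional probabilities of containment failure and of a lethal exposure.

```python
# Illustrative sketch of the PRA decomposition described above.
# The probabilities are hypothetical placeholders, not WASH-1400 values.

p_core_melt_per_reactor_year = 1e-4   # P(a): the fuel melts
p_release_given_melt = 0.1            # P(b | a): containment fails, radiation escapes
p_fatality_given_release = 0.01       # P(c | b): an exposed individual dies

# Chained probability of the full accident sequence for one reactor in one year
p_fatal_sequence = (p_core_melt_per_reactor_year
                    * p_release_given_melt
                    * p_fatality_given_release)

print(f"P(core melt)                     = {p_core_melt_per_reactor_year:.1e}")
print(f"P(release | melt)                = {p_release_given_melt:.1e}")
print(f"P(fatality | release)            = {p_fatality_given_release:.1e}")
print(f"P(full sequence, per reactor-yr) = {p_fatal_sequence:.1e}")
```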

AEC, the birth of nuclear power, and the rise of safety analysis

The Atomic Energy Act of 1946 governed the exploitation of a new discovery: uranium fission. Weapons development occupied first priority, but Congress embraced the heat of fission as useful for civilian purposes. The 1954 rewriting of the Atomic Energy Act brought private companies into the development of nuclear power. The takeoff of commercial nuclear power, however, remained slow during the 1950s, and the USSR and UK rivaled the USA in this new technology.


Proponents of nuclear power in the AEC and Congress persuaded President Kennedy to ask the AEC for a plan to stimulate development of commercial nuclear power. The AEC’s 1962 report projected 90 % of electricity from uranium by 2020. At the time of the report, electricity demand was rising rapidly, and the AEC argued that only nuclear power from about 1,000 reactors could produce the amounts needed for an indefinite future (U.S. Atomic Energy Commission 1962). Congress embraced AEC’s vision (U.S. Congress Joint Committee on Atomic Energy 1963). Electricity consumption never rose as much as predicted, and nuclear power never came close to supplying 90 %. The nuclear industry began to stall about 1974, and no utility obtained a new construction permit between early 1978 and 2012. About 20 % of US electricity currently comes from 100 reactors. The collapse came from a number of factors: a drop in the growth rates of electricity consumption, substantial cost-overruns and time delays in construction, and difficulties in financing (Bupp and Darian 1981; Cohn 1997). The nuclear power industry has never achieved economic competitiveness in over 50 years (Bradford 2013). Economics stymied nuclear power, but concerns about safety—not costs—dominated public debate from the late 1960s to the 1970s. Even though electricity shortages created problems during this time (detailed below), concerns about safety and other environmental impacts put the AEC on the defensive. The agency believed that nuclear power was safe but also knew loss of public trust jeopardized the new technology. Safety posed a pivotal problem, because uranium’s fission releases enormous heat (desired) and a wide variety of radioactive by-products or waste (not desired but unavoidable). Faulty control could cause the reactor to melt or explode, release the waste products, and cause great damage. Scientists and engineers in the Manhattan project and the AEC recognized the unique dangers of fission (Teller and Latter 1958) and sought safety originally through structural designs and remote placement (Carlisle 1997). Safety concerns gained new prominence when the Atomic Energy Act of 1954 opened nuclear energy to private companies. If a reactor released radioactive materials, the private operator would face devastating liability. Congress resolved this issue in 1957 with the Price-Anderson Act: A company operating a nuclear reactor had to obtain liability insurance up to the maximum then obtainable, $60,000,000. The US government indemnified the company for an additional $500 million. During the debates about Price-Anderson, the Joint Committee on Atomic Energy asked AEC to study the consequences of a serious accident, which led to the agency’s first publication about the risks of nuclear power: Theoretical Possibilities and Consequences of Major Accidents in Large Nuclear Power Plants (WASH-740) (U.S. Atomic Energy Commission 1957; Mazuzan and Walker 1984; Johnson 1986; Balogh 1991).


WASH-740 asserted that “The probability of occurrence of publicly hazardous accidents in nuclear power reactor plants is exceedingly low. No one knows now or will ever know the exact magnitude of this low probability of a publicly hazardous reactor accident.” Accidents potentially could be catastrophic, however, with up to 3,400 people killed and 43,000 injured or up to $7 billion of property damage. AEC’s 1962 Report had mentioned safety issues, but AEC staff believed that they had them under control. Other factors also made concerns about hazardous technology widespread, for example, the radioactive fallout of nuclear weapons testing and the increasingly widespread uses of pesticides. Carson’s (1962) Silent spring and Barry Commoner’s study of fallout brought these concerns to a very wide audience and to the highest levels of government (Lutts 1985; Egan 2007). A diverse group of scientists and engineers responded by inventing a new scientific discipline, risk assessment. Risk assessors argued that quantitative knowledge allowed the public to enjoy benefits without undue concern. Prior to risk assessment, a response to a dangerous technology might simply be banning it, as Congress did in 1958 when it prohibited pesticides that caused cancer (Wargo 1996). AEC played an important role in arguing for the safety of nuclear power through two studies. Both will be discussed below, but one, the Study, developed PRA (U.S. Nuclear Regulatory Commission 1975a). Significantly, AEC prepared both studies during a time of rapid changes in nuclear power technology and in the political and environmental contexts in which it operated.

Changes in nuclear power technology

In 1966, Clifford Beck, Deputy Director for Regulation at AEC, identified the changes underway. Reactor manufacturers were building plants of much larger power. More fuel was in the reactors, so they contained higher amounts of radioactive wastes. The new reactors had longer times between refueling, further increasing the radioactive inventory. The increased power of the newer reactors also threatened explosions from steam and hydrogen (Beck 1966). Beck also noted that utilities wanted to locate reactors near cities to save on transmission costs. As Beck noted, however, remote siting was the first method for protecting the public. Despite the distance standards, nuclear engineers and AEC regulators in the mid-1960s were already installing and approving, respectively, engineering devices that enabled licenses for sites near cities (Culver 1966). Beck observed that such siting demanded that the engineered safety devices have extraordinarily high reliability. The primary concern was that the supplies of cooling water might fail, leaving the fuel to overheat and melt, a loss-of-coolant accident or LOCA (Nuclear Energy 2009). Melted fuel would release radioactive materials, but emergency


cooling water might prevent overheating and melting. AEC increasingly insisted upon emergency core cooling systems or ECCSs as an engineered safety feature. The major problem with ECCSs, fully recognized by AEC in the mid-1960s, lay in the fact that no empirical evidence indicated that they would work reliably. Experiments at small and medium scales, combined with budget reductions, left AEC unconvinced of ECCS reliability in larger reactors. In April 1971, the agency concluded that ECCSs were not as good as previously thought and that licensing delays might result. By June, however, the AEC had formulated Interim Acceptance Criteria for ECCSs to move forward with licensing and yet maintain purportedly acceptable levels of safety (Walker 1992).1

Changes in the environmental and political contexts of nuclear power

Changes in reactor technology altered the hazards. Changes in political and environmental conditions affected how the public viewed those hazards. From 1946, AEC had responsibilities for protecting public health from radiation. Through the 1950s and 1960s, however, the agency's credibility steadily eroded, especially for exposures to low-level radiation, e.g., from fallout or normal operations of nuclear power plants. Thus, AEC entered the 1970s with a controversial reputation on its ability to safeguard health (Walker 2000). Hazards from large releases of radiation, as depicted in WASH-740, however, were quite different from those of low-level radiation, because they left no doubt of their immediate, potentially fatal effects. The debates about LOCAs and ECCSs were about large catastrophic releases, and AEC quickly lost its credibility for this hazard, too. In July 1971, the debates on LOCAs and ECCSs moved into the public arena. The recently organized Union of Concerned Scientists (UCS) criticized ECCSs. Henry Kendall, professor of physics at MIT, and Daniel Ford, a recent Harvard graduate, led the charge. Starting in 1972, UCS and other environmental groups battered AEC engineers in public hearings to set standards for ECCSs. For the first time, the UCS marshaled scientific expertise to challenge

1 AEC establishes task force to study fuel cooling systems of nuclear power plants, Press Release, October 27, 1966; Harold L. Price to John T. Conway, October 20, 1967; Carroll W. Zabel to Glenn T. Seaborg, February 26, 1968; Glenn T. Seaborg to John O. Pastore, April 27, 1971; all in RG 128—Records of the Joint Committee on Atomic Energy 1946–1977, General Correspondence, Box 566, Folder: E.C.C.S.—Low as Practicable, National Archives (I), Washington, D.C. US Atomic Energy Commission, Regulatory Information Meeting 486, June 9, 1971, RG 431 Records of the Nuclear Regulatory Commission, Office of the Secretary, Regulatory Program General Correspondence Files, 1956–1972, Box 33, 1970–1972, Folder: Industrial Devel & Regulations-6Reg, Hazards Evaluations Vol. 2, National Archives (II), College Park, MD.


industry and governmental scientists (Kendall 2000; Walker 1992; Wellock 2012; Balogh 1991). July 1971 also brought another setback to the AEC on a distinct but related topic. The D.C. Circuit Court of Appeals ruled against AEC in a suit brought by environmental organizations seeking to block a nuclear power complex at Calvert Cliffs, MD (“Calvert Cliffs”). Plaintiffs focused on thermal pollution that would alter ecosystems in Chesapeake Bay and AEC’s lack of compliance with the National Environmental Policy Act (NEPA) (Walker 1992).2 The judges ruled that the Commission’s compliance with NEPA was inadequate. AEC’s loss in Calvert Cliffs lessened the agency’s credibility and immediately disrupted the agency’s mission of promoting nuclear power by forcing the writing of an Environmental Impact Statement for each license application (Del Sesto 1979; Rolph 1979; Walker 1992).3 For federal and utility industry analysts, this disruption potentially meant serious problems with electricity. Analysts worried that supply was falling behind rapidly rising demand. New York City, for example, suffered serious power shortages in 1969 due in part to Consolidated Edison’s inability to put its second nuclear power plant at Indian Point online. In February 1971, 5 months before the Calvert Cliffs decision, AEC had arranged to possibly curtail electric power consumption for uranium enrichment if major eastern cities needed power. Consolidated Edison’s Indian Point 2, still not licensed in late


1971, was one of the plants potentially affected by Calvert Cliffs (Federal Power Commission 1969; Development and Resources Corporation 1969).4 To complicate AEC's decision-making processes, the agency endured contradictory pressures to ensure safety but also to license as quickly as possible. Representatives and Senators wanted assurances that nuclear power plants would not produce catastrophic accidents. At the same time, they believed that projected shortages of electricity in Pennsylvania, New Jersey, and Maryland were serious. Elected and regulatory officials from many states expressed concern that delays in licensing would cause electricity shortages.5

During the debates of the late 1960s and early 1970s, Commissioners of the AEC acknowledged the importance of environmental protection and safety but emphasized the need for more electricity from nuclear plants. For example, in 1969, Commissioner James T. Ramey argued strenuously and repeatedly that increased needs for electricity made increased supplies of nuclear electricity mandatory. He railed against "stirrer-uppers"6 of dissent, who ignored or were ignorant of the science underlying nuclear power and the many efforts AEC took to ensure safety. The stirrer-uppers that so annoyed Commissioner Ramey came primarily from local organizations that had opposed particular proposed nuclear power plants. These widely

2

Calvert Cliffs Coordinating Committee, National Wildlife Federation, and The Sierra Club, Notice of Filing of Petition for Rule Making and Denial of Petition for Rule Making in Light of Pending Rule Making Proceeding, June 29, 1970; Anthony Z. Roisman to The Secretary, Atomic Energy Commission, June 29, 1970; In the Court of Appeals of Maryland, No. 41, September Term, 1970, People’s Counsel Public Service Commission, et al., v. Public Service Commission of Maryland, et al., October 23, 1970; Public Service Commission of Maryland, Order No. [illegible] re Case No. 6281, 4th November 1970; W. B. McCool, Secretary to the Commission, Note by the Secretary, November 13, 1970; Harold L. Price, Director of Regulation, to Berlin, Roisman, and Keasler, November 25, 1970; all in RG 431 Records of the Nuclear Regulatory Commission, Office of the Secretary, Regulatory Program General Correspondence Files, 1956–1972, Box 30, 1970–1972 NRC, Folder: Industrial Develop & Regulations 5 Reg. Baltimore Gas & Electric (Calvert Cliffs, Vol. 1 of 2; and Anthony Z. Roisman to U.S. Atomic Energy Commission, November 25, 1970, in RG 431 Records of the Nuclear Regulatory Commission, Office of the Secretary, Regulatory Program General Correspondence Files, 1956–1972, Box 30, 1970– 1972 NRC, Folder: I.D.R. 5 Reg Baltimore Cal Cliffs (handwritten label; spelling uncertain). 3 James R. Schlesinger, Chairman, to John O. Pastore, Chairman, Joint Committee on Atomic Energy, September 30, 1971, in RG 431 Records of the Nuclear Regulatory Commission, Office of the secretary, Regulatory Program General Correspondence Files, 1956–1972, Box 30, 1970–1972, NRC, Folder: Budget 2 Reg, 1972 and 1973. 4 G. F. Quinn, Assistant General Manager for Development and Production, Meeting of Joint Board for Fuel Supply and Fuel Transport, February 18, 1971, February 22, 1971, in RG 431 Records of the Nuclear Regulatory Commission, Office of the Secretary, Regulatory Program General Correspondence Files, 1956–1972, Box 30, 1970–9172 NRC, Folder: Industrial Devel. & Regulation—5 Licenses.

William O. Mills to Glenn T. Seaborg, June 7, 1971; William G. Milliken to James R. Schlesinger, October 4, 1971; H. T. Westcott to James R. Schlesinger, October 11, 1971; and John R. Verani to James R. Schlesinger, November 23, 1971; all in RG 431 Records of the Nuclear Regulatory Commission, Office of the Secretary, Regulatory Program General Correspondence Files, 1956–1972, Box 30, 1970–1972 NRC, Folder: Industrial Develop & Regulations 5 Reg. Baltimore Gas & Electric (Calvert Cliffs, Vol. 1 of 2. Harold L. Price to Daniel J. Flood, Sep 8, 1971; Harrison A. Williams, Jr., to James R. Schlesinger, September 9, 1971; Harrison A. Williams to Glenn T. Seaborg, June 24, 1971, all in RG 431 Records of the Nuclear Regulatory Commission, Office of the Secretary, Regulatory Program General Correspondence Files, 1956–1972, Box 30, 1970–9172 NRC, Folder: Industrial Devel & Regulations-6Reg, Hazards Evaluations Vol. 2. Interregional Review Subcommittee of the Technical Advisory Committee, “Impact of a 12month delay of new nuclear and fossil-fired steam generating units on the adequacy of electric power supply in the United States, a report,” (National Electric Reliability Council, Research Park, Princeton, New Jersey, February, 1972), 12 pp., in RG 431 Records of the Nuclear Regulatory Commission, Office of the Secretary, Regulatory Program General Correspondence Files, 1956–1972, Box 35, 1970–1972 NRC, Folder: O M–6–Reg; L. Manning Muntzing to the Commissioners, Status Report on Plants for which Full-Power Operation Is Possible During June, July, and August, 1972, May 12, 1972, in RG 431 Records of the Nuclear Regulatory Commission, Office of the Secretary, Regulatory Program General Correspondence Files, 1956–1972, Box 30, 1970– 1972 NRC, Folder: Industrial Develop & Regulations—5 Licenses. 6 James T. Ramey, Licensing and environmental considerations in atomic power development: a checklist, to Atomic Industrial Forum Workshop on Power Reactor Licensing, Glen Cove, NY, June 30, 1969, 18 pp., released July 10, 1969, John F. Kennedy Presidential Library Archives, James T. Ramey (#193) Personal Papers, Series 1. Writings, 1956–1973, Box 2, File: 1969: June–August.


scattered movements occurred in Bodega Bay (California), Eugene (Oregon), New York City, Ithaca (New York), and New Hampshire (Walker 1990; Pope 1990; Mazuzan 1986; Randall 1989; Nelkin 1971). In addition, a few authors had written critically about nuclear power. Novick’s (1968) The careless atom offered the most comprehensive critique for a general audience. No overarching national organization of opponents existed, however, and for the most part, the skeptics were not technical experts about nuclear power. Nevertheless, opposition at the local level had already delayed or stopped specific projects, and it was reasonable for proponents to fear the spread of political dissent. Converging changes in the technical, environmental, and political contexts surrounding nuclear power thus created by 1971 a stage of contradictions. AEC had lost credibility with critics over control of low-level radiation. Designers built larger, more dangerous reactors near cities to save money on transmission, but in some areas, local protest had blocked projects. AEC staff and commissioners promoted development of the technology to solve shortages of electricity. Congress reinforced the agency’s sense of urgency. At the same time, AEC and nuclear industry personnel failed to grasp the message of the new environmentalism, and the court’s chastisement further diminished the credibility of both AEC and the nuclear industry. If these conflicting pressures were not enough, scientists and engineers both inside and outside AEC believed that the engineered safety features justifying placement of large reactors near cities might not always work as planned. Moreover, internal bickering within AEC and between AEC and the national laboratories reflected considerable disagreement about the proper direction and management of safety research at AEC (Gillette 1972a, b, c, d). Consequently, environmental acceptability of nuclear power lay in doubt by mid-1971. Thermal pollution, the basis of Calvert Cliffs, had already caused postponement in 1969 of a reactor on Lake Cayuga in New York State (Nelkin 1971). Safety, however, was the real sticking point. Different cooling technologies could resolve the thermal pollution issue, but lack of confidence in engineered safety features threatened the entire program. Proponents believed reactors were “safe enough,” but critics did not agree. Contradictory pressures and calls for moratoria put the AEC squarely on the defensive by the second half of 1971. In August, AEC’s new chairman, James Schlesinger, knew that restoration of AEC’s credibility on safety and environmental compliance was his top priority (R. S. Lewis 1972).

The origins of the Reactor Safety Study (WASH-1400)

The Study arose as a defensive measure against the critics of nuclear power who advocated a slowing or stoppage of the


nuclear power industry. The events directly leading to the launch of the Study started when Saul Levine, a nuclear engineer on the AEC staff, spent a few weeks in July and August 1971, preparing recommendations for the Joint Committee on Atomic Energy of the US Congress. Levine suggested that AEC needed to organize a major study on reactor safety, including problems with ECCSs as well as other issues. He explained that some efforts were currently underway on these matters at AEC, but they were too slow. He also argued that the Interim Acceptance Criteria for ECCSs were not sufficient to carry the argument for safety and that the Joint Committee needed to establish a better public record on safety of ECCSs. Perhaps most significantly, Levine put in a plea for strong quantification of levels of safety: "The existence of such a report, plus a JCAE [Joint Committee on Atomic Energy] hearing record that presented quantitatively stated levels of safety being provided in connection with nuclear power and that compared the risks involved in the use of nuclear power with other low probability, large consequence risks in other aspects of our society might well dispel this type of action [criticisms of nuclear power]."7

Levine's recommendations found their way into a letter from Senator John Pastore, chair of the Joint Committee, to Schlesinger in October, suggesting that AEC perform a major study on safety. Schlesinger agreed,8 and AEC quickly ramped up production of what became the first of two safety reports, The safety of nuclear power reactors (light water-cooled) and related facilities (1973, WASH-1250). In May 1972, the AEC appointed Rasmussen to lead the Study (WASH-1400) (New York Times 1984; Keller and Modarres 2005).9 WASH-1250 appeared in final form in July 1973, and a draft of WASH-1400 was released in August 1974.

7 Saul Levine to Edward J. Bauser, July 28, 1971, RG 128—Records of the Joint Committee on Atomic Energy 1946–1977, General Correspondence, Box 629, Folder: Reactor Safety, National Archives (I), Washington, D.C.; Saul Levine to Edward J. Bauser, July 29, 1971, RG 128—Records of the Joint Committee on Atomic Energy 1946–1977, General Correspondence, Box 566, Folder: E.C.C.S.—Low as Practicable, National Archives (I), Washington, D.C.; and Saul Levine to Edward J. Bauser, August 13, 1971, RG 128—Records of the Joint Committee on Atomic Energy 1946–1977, General Correspondence, Box 629, Folder: Reactor Safety Comprehensive Study, National Archives (I), Washington, D.C. Quote is from memo of August 13. All memos were prepared on letterhead of the Joint Committee, not AEC, which suggests that Levine spent a few weeks at the Joint Committee.

8 John O. Pastore to James R. Schlesinger, Oct 7, 1971; James R. Schlesinger to John O. Pastore, October 14, 1971; RG 128—Records of the Joint Committee on Atomic Energy 1946–1977, General Correspondence, Box 629, Folder: Reactor Safety Comprehensive Study, National Archives (I), Washington, D.C.

9 Saul Levine, Testimony to Science and Technology Committee, New Hampshire House of Representatives, February 10, 1982, in Norman Rasmussen Papers, MC 542, Box 2, folder 3, Levine, Saul, 1981–1985, MIT Archives, Cambridge, MA. Norman C. Rasmussen, Resume, January 22, 1981, Norman Rasmussen Papers, MC 542, MIT Archives, Box 2, Folder 2, Levine, Saul, 1979–1981.


The differences in these two reports clearly demonstrated the radical innovations that the Study proposed for the concepts and measurement of safety of nuclear power. Consider first the concept and measurement of safety described in WASH-1250. This report outlined "defense in depth" and "design basis accident/maximum credible accident" as the two fundamental principles for safety management. In this framework, engineers used redundant safety features and designed for maximum safety during normal operations and maximum tolerance for malfunctions. Even with the best of designs, problems would emerge, but engineered safety features such as ECCSs would prevent harm to the plant operators and the public. Engineers then outlined severe hypothetical accidents that were imaginable but improbable. These "design basis accidents" led to additional features to protect the public. Based on design and engineered safety features, AEC staff calculated the radiation doses expected (a) within the "exclusion boundary," i.e., the fence line around the plant, and (b) outside the plant fence line. The AEC considered the reactor safe enough so long as the projected doses any person received fell below 300 rem for the thyroid gland and 25 rem for whole body irradiation.10 If a projected accident generated the release of radiation above the allowable levels, then permission to construct and operate became contingent upon engineered safety features projected to reduce the radiation released. Alternatively, the plant could be located in a remote location. The guidelines for maximum allowable exposures following an accident were considerably higher than exposure limits mandated during normal operations (Barry 1970; U.S. Atomic Energy Commission 1973). The fact that AEC issued licenses to operate reactors that—if a big accident happened—might result in the larger exposures, however, meant that the limits of 300 and 25 rem, respectively, were "acceptable" in emergencies.

WASH-1250 conceived of harm in a holistic fashion: radiation has multiple effects. WASH-1250 did not try to quantify the occurrences of any specific illness or death. Instead, the standard sought to quantify an amount of radiation above which no individual was ever to be exposed. The Reactor Safety Study sought a new way to determine safe enough by calculating probabilities of harm from specific diseases or death. AEC had long wanted estimates of risk, or probability of harm, as a criterion of safety, but only in the 1960s did the dream of calculating such probabilities emerge.
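Read as a procedure, the WASH-1250-era test amounts to a pass/fail comparison of projected accident doses against the 300 rem thyroid and 25 rem whole-body guidelines described above. The sketch below assumes hypothetical projected doses; the function and its inputs are illustrative, not drawn from any actual licensing review.

```python
# Sketch of the WASH-1250-era dose test: a design basis accident is acceptable
# only if projected doses stay under the guideline values quoted in the text
# (300 rem thyroid, 25 rem whole body). The projected doses are hypothetical.

THYROID_LIMIT_REM = 300.0
WHOLE_BODY_LIMIT_REM = 25.0

def design_basis_accident_acceptable(projected_thyroid_rem: float,
                                     projected_whole_body_rem: float) -> bool:
    """Return True if projected doses fall below both guideline values."""
    return (projected_thyroid_rem < THYROID_LIMIT_REM
            and projected_whole_body_rem < WHOLE_BODY_LIMIT_REM)

# Hypothetical example: a postulated accident evaluated at the exclusion boundary
if design_basis_accident_acceptable(projected_thyroid_rem=120.0,
                                    projected_whole_body_rem=8.0):
    print("Projected doses within guidelines: no additional features required")
else:
    print("Guidelines exceeded: add engineered safety features or site remotely")
```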


These new techniques had first appeared in preliminary form in work on the safety and reliability of aerospace weapons, in studies of nuclear reactors, and in siting studies in the UK (Mazuzan and Walker 1984; Balogh 1991; Umlauf 1965; Means 1965; Hassl 1965; Mulvihill 1966; Farmer 1967, 1975). They departed sharply from existing practices by basing criteria of safety on probability of harm, i.e., "risk." This was a new concept for safety. In the late 1960s, AEC staff scientists were clearly interested in the idea that risk calculations might inform decision-making, but they could not see a clear way to perform the calculations or interpret them.11

In 1969, Chauncey Starr, a pioneer in reactor technology, developed the philosophical foundation for using probability to judge safety (Starr 1969). Starr wanted to know, quantitatively, how safe was safe enough? He recognized that people might make different decisions based on whether a choice was voluntary or involuntary. Starr believed that a calculation of risk, i.e., a probability of harm, had no meaning unless the analyst could compare it to other such calculations. Starr developed a large range of accident statistics to measure the risks that people—in the US at that time—actually accepted. He could then see whether accepted voluntary risks differed from involuntary ones. They did, and he classified acceptance of electric power as an involuntary risk, i.e., decisions about electric power generation are made politically and socially, not individually. He then turned to a case study of nuclear power and concluded that for an involuntary activity, the risks of nuclear power were at a level already accepted by society. He also argued that the level of risk accepted by society was in fact higher than that which would be demanded by owners of the plants, who would lose a great deal of money if their plant failed and was destroyed.

Starr's paper provided the conceptual foundation for moving away from safety regulations expressed in WASH-1250. He argued that his methodology—based on revealed, accepted probabilities of harm—could be used to rank risks numerically and make decisions on new technology, like nuclear power. His paper in 1969 opened the door to the development of risk analysis as a method for government's management of dangerous technologies. Significantly, the Commissioners of the AEC discussed his article just after they had launched the Rasmussen Study. Saul Levine served as the Study's Staff Director. The charge to the project was "to try to reach some meaningful conclusions about the risks of nuclear accidents using current technology [as] an important first step in the development of quantitative

10 The numerical guidelines for maximum exposure came from 10CFR100.11. The figure of 25 rem was at the time considered by the National Committee on Radiation Protection to be a "once in a lifetime accidental or emergency dose for radiation workers, which may be disregarded in the determination of their radiation exposure status." The figure of 300 rem for thyroid exposure was not further explained in either 10CFR100 or in WASH-1250.

11 Stephen H. Hanauer to Trevor Griffiths, May 8, 1969, Public Documents Room, U.S. Nuclear Regulatory Commission, Bethesda, MD, Microform address: 41799:074–41799:075; Peter A. Morris to A. Philip Bray, July 28, 1969, Public Documents Room, U.S. Nuclear Regulatory Commission, Bethesda, MD, Microform address: 41724:139–41724:139.


risk analysis methods.” Ultimately, the study took 3 years and involved over 60 people from multiple organizations.12 AEC issued the draft version in 1974. A year later, NRC, AEC’s replacement agency, released the final version. Starr’s probabilistic conception and measurement of hazards had entered AEC’s and NRC’s work.
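Starr's comparison logic can be caricatured in a few lines: compute the probability of harm for a proposed technology and ask whether it falls at or below involuntary risks society already accepts. The risk values below are invented placeholders, not Starr's published statistics; only the shape of the argument is intended.

```python
# Toy version of Starr's (1969) revealed-preference comparison.
# All risk values are hypothetical placeholders (probability of death per
# exposed person per year), not Starr's data.

accepted_involuntary_risks = {
    "motor vehicle travel": 2.5e-4,
    "electric power (existing mix)": 2.0e-6,
    "natural disasters": 1.0e-6,
}

proposed = {"nuclear power (calculated)": 5.0e-7}

# Compare the proposed technology against the lowest risk already accepted
lowest_accepted = min(accepted_involuntary_risks.values())

for name, risk in proposed.items():
    verdict = "at or below" if risk <= lowest_accepted else "above"
    print(f"{name}: {risk:.1e} per person-year, {verdict} the lowest "
          f"already-accepted involuntary risk ({lowest_accepted:.1e})")
```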

The findings of the Study: a new concept of safety

The Study claimed important insights: melting of the reactor core in a LOCA does not always result in large consequences to the public; for most accidents involving melting of the core, expected fatalities are smaller than in other commonly occurring accidents; previous analyses of accidents (WASH-740) assumed poor atmospheric conditions and thus high casualties, but most of the time such conditions would not prevail. The Study stated that it offered no judgment about acceptability of risks, but it emphasized that the calculated risks were much lower than those of other dangerous technologies already in use.13

Despite the avowal of not judging acceptability, the Study produced a plethora of numbers claimed to be objective. These numbers make it appear that nuclear power is not a particularly dangerous technology, especially compared to other risks routinely accepted. For example, the estimated probability that the fuel would melt was 1 in 20,000 per reactor per year, which would likely result in fewer than one early fatality. Fifteen million people lived within 25 mi of a nuclear power plant, and an estimated 4,200 of them would die annually from auto accidents. The average person endured a risk of death from a fleet of nuclear power plants of one in five billion.14

Most importantly, the Study switched the object of attention of the safety experts. As noted earlier, the focus of analysts at the time was on the calculated dose that might be incurred by an individual in the event of a rare but imaginable accident. The maximum doses considered tolerable were 300 rem to the thyroid or 25 rem to the whole body. In contrast, the Study focused on limiting the probability that a conceptual statistical

W. B. McCool to Peter A. Morris and John A. Harris, SECY-R-463, May 22, 1972; W. B. McCool to File, Study of Risks Due to Accidents– Nuclear Power Reactors, SECY-R 432, May 15, 1972; Norman C. Rasmussen and Manson Benedict to Stephen Hanauer, March 17, 1972; all in RG 431, Records of the Nuclear Regulatory Commission, Office of the Secretary, Regulatory Program General Correspondence Files, 1956– 1972, Box 33, 1970–1972 NRC, Folder: Industrial Devel & Regulations6Reg, Hazards Evaluations, Volume 4, National Archives (II), College Park, MD; U. S. Nuclear Regulatory Commission, Reactor Safety Study (Washington, D. C.: U. S. Nuclear Regulatory Commission), Main Report, Chapter 1, 1; Executive Summary, 5. 13 Nuclear Regulatory Commission, Reactor Safety Study, Main Report, Chapter 1, 6–7. 14 Nuclear Regulatory Commission, Reactor Safety Study, Main Report, Table 5–4, 83, Table 5–6, 84, Table 6–6, 114, Executive Summary, Table 1–1, 3.


individual, living within specified distances of a nuclear power plant, would receive a dose that would cause an unacceptable biological effect. The focus was no longer on any real individual. Although the Study stated that it did not address the question of acceptable level of risk, Rasmussen believed—before the Study even began—that it was no longer worth worrying about a risk once its probability dropped below 1 in 100,000 (Barry 1970; Rasmussen 1972). The Study specifically disavowed that PRA was ready for use in decision-making. Moreover, it stated that new research on PRA might be useful but no urgency attended the matter, as nuclear power was already less risky than other hazards.15

Considerable fanfare attended the AEC's release of the draft Study on 20 August 1974. The Press Release emphasized the vastness and thoroughness of the effort and highlighted the overall message the Commission wished to deliver: "the risks to the public from potential accidents in nuclear power plants are very small. The consequences are no larger, and in most cases, are much smaller than people have been led to believe by previous studies. The likelihood of reactor accidents is much smaller than many non-nuclear accidents having similar consequences." (U.S. Atomic Energy Commission 1974) At the press conference, AEC chairman Dixy Lee Ray noted, as quoted in the New York Times, "there is no question that the nuclear industry comes off very well. But there is no such thing as zero risk." Despite the caveat about zero risk, the Times article emphasized the exceptionally infrequent predictions of rates of accidents and personal injuries (Lyons 1974). Release of the final version by the NRC a year later repeated the same enthusiasm shown by AEC (U.S. Nuclear Regulatory Commission 1975b).

The Study was not intended to improve decision-making on site selection or any other regulatory matter; its purpose was strategic: to show skeptics that the technology was safe. The Study pioneered a radically new method for analyzing accident potentials in a nuclear power plant, the first highly visible effort of the federal government in risk analysis, formally sponsored and endorsed by a major agency.
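The arithmetic behind the Study's reassuring comparisons can be restated directly from the figures quoted above. The sketch below assumes, purely for illustration, a fleet of 100 reactors (an assumption, not a figure given in this passage) and simply places the quoted numbers side by side.

```python
# Restating the Study's comparison figures quoted in the text.
# The 100-reactor fleet size is an assumption for illustration only.

p_core_melt_per_reactor_year = 1 / 20_000      # Study estimate quoted above
fleet_size = 100                               # assumed fleet size (illustrative)
expected_core_melts_per_year = fleet_size * p_core_melt_per_reactor_year

people_within_25_miles = 15_000_000            # quoted above
annual_auto_deaths_in_group = 4_200            # quoted above
nuclear_risk_per_person = 1 / 5_000_000_000    # "one in five billion", quoted above

auto_risk_per_person_per_year = annual_auto_deaths_in_group / people_within_25_miles

print(f"Expected core melts per year for {fleet_size} reactors: "
      f"{expected_core_melts_per_year:.3f} (about 1 in {1 / expected_core_melts_per_year:.0f} years)")
print(f"Auto-accident death risk per nearby resident per year: {auto_risk_per_person_per_year:.1e}")
print(f"Quoted nuclear-accident death risk per average person: {nuclear_risk_per_person:.1e}")
```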

Reception of the Study and the fate of PRA

The nuclear industry and many scientists enthusiastically greeted the Study. In March 1975, shortly after the release of the draft Study, a group of 32 physicists and engineers, one third of them Nobel laureates, stated that nuclear power was essential and that benefits far outweighed risks (Bethe et al. 1975).

15 Nuclear Regulatory Commission, Reactor Safety Study, Main Report, Chapter 7, 131, 139.


Generally, many scientists felt that PRA finally demonstrated with scientific objectivity what they had always believed, namely that nuclear power was safe enough (U.S. Congress House 1976). Others greeted the Study with skepticism and sometimes scorn and ridicule. Four major critiques had a substantial impact.

Professor Harold Lewis (University of California, Santa Barbara) led two of the assessments. The first, for the American Physical Society, provided a partial, informal review of the Study. This first Lewis report concluded that PRA was useful and should be refined, but it also said that the calculated risks were likely to be relative, not absolute (i.e., not accurate in predicting true rates of accidents). It also stated that there was no quantitative basis for an accurate assessment of ECCSs and that the long-term health effects were probably larger than calculated. The reviewers also felt that the Study's decision to set aside sabotage, because of the uncertainties of its probability, left a serious gap (Lewis et al. 1975).

Second, the House Committee on Interior and Insular Affairs, Subcommittee on Energy and the Environment, held hearings in 1976 on the final version of the Study, which had been released the previous October and cited by the Joint Committee as support for the adequacy of Price-Anderson's liability limits of $560 million. Subcommittee chairman Morris Udall argued that Congress had not known how the scientific community regarded the Study and had requested a delay in the vote on Price-Anderson until after his subcommittee vetted the Study. Congress, however, renewed Price-Anderson in 1975 without the subcommittee's hearings (U.S. Congress House 1977; U.S. Congress House 1986). Nevertheless, Udall proceeded with hearings. The resulting report in 1977 said that the Study was misleading about its comprehensiveness and calculations, and it noted that proponents had used the document to argue that nuclear power was safe enough, despite the Study's claims not to make such conclusions. The subcommittee said that the Study provided only part of the information needed for judging acceptability. They also stated that NRC had hastened the release date and compromised peer review in order to promote renewal of Price-Anderson (U.S. Congress House 1977). The subcommittee wanted the NRC to arrange for an independent group, including skeptics of nuclear power, to rewrite the Executive Summary and report directly to the Commissioners. NRC, however, asked Professor Lewis to perform a new review of the Study.16

Third, the UCS developed a harsh critique, released in 1977. It focused on what the Union considered faulty or incomplete technical information, and they urged NRC to

16 Morris K. Udall to Marcus A. Rowden, March 14, 1977, Marcus A. Rowden to Morris K. Udall, June 17, 1977, in Victor Gilinsky Papers, Folder: Reactor Safety—Research & Policy, Risk Assessment Program, 1977 June–Nov., Box 315, Hoover Institution Archives, Stanford University, Stanford, California.


withdraw the entire study, reassess plans to expand nuclear power, and subject plants then operating to a new review. Like the first Lewis committee, the Union said that the risk numbers calculated were only relative risks, not absolute. They presented evidence that use of PRA in aerospace, its first application, had underestimated risks, as later experience showed. Moreover, the accident sequences were incomplete, as were data on component reliability. In addition, the Study overlooked design deficiencies, aging, earthquakes, sabotage, and groundwater contamination with strontium-90. Long-term cancers were underestimated. The peer review was inadequate. The nuclear industry had misused the Study in efforts to assert safety (Union of Concerned Scientists 1977).

Perhaps most importantly, UCS noted that the Study could not help people choose among energy technologies, a key question. UCS thus joined the conclusions of the Ford Foundation's A time to choose, which assessed the US energy economy. Prior to A time to choose, US energy policy focused on assuring adequate energy supplies. The foundation, in contrast, emphasized the equal or more important task of reducing demand by increasing energy efficiency. UCS founders strongly advocated extensive development of solar energy, and the failure of the Study to address demand or fuel choice was, in their eyes, a fatal flaw (Energy Policy Project of the Ford Foundation 1974; Kendall and Nadis 1980).

Professor Lewis' second report in 1978 stated that the Study was a "conscientious and honest effort," but it had problems. Some of the assumptions used in calculating probabilities of failure and risks were conservative, but others were not. Therefore, the uncertainties were understated. PRA was a useful tool, but the critique also described the Study as "inscrutable" and difficult to follow. It lacked adequate peer review, and its inadequate Executive Summary led to misuse (Risk Assessment Review Group 1978).

In addition to the specific critiques, other events created additional concerns between 1974 and 1976. Perhaps of most importance, consumer-advocate Ralph Nader forged a national organization of citizen opposition to nuclear power in 1974. In November of that year, shortly after the release of WASH-1400, Nader's national conference, Critical Mass 74, assembled 650 people from 165 citizen groups from 39 states plus Britain, France, and Japan. Eight scientists in attendance, including Henry Kendall, presented a petition to Congress at the conference. They asked for resolution of ten serious problems, including the possibilities of a catastrophic accident. Specifically, the petition was not to go to the Joint Committee on Atomic Energy, as that committee was the bastion of nuclear power's political champions in Congress (Burnham 1974; Nader 1974). Political opposition to nuclear power had taken a major step upward, which impinged directly on the reception of the Study.

Accidents at nuclear reactors had occurred before the Study's release, but these events tended to be at small reactors,



mostly for weapons production or research, and little publicity attended them. That, too, changed on March 22, 1975. Workers at the Browns Ferry nuclear power plant inadvertently started a fire in the electrical cables linking the reactor with the control room. Destruction of electrical cables incapacitated the normal and the emergency cooling system pumps plus valves controlling pressure within the reactor vessel. The fire also affected the control systems of a second reactor. Operators kept control of the reactors only by rigging alternative measures. This fire could have led to catastrophic releases of radioactive materials, and major media outlets publicized the event (Burnham 1975a, b; Gwynne and Bishop 1975).17

In February 1976, three nuclear engineers from General Electric publicly resigned their jobs over concerns about safety. Later that same month, a reactor engineer at the NRC resigned publicly for the same reason. The uproar in the media led the Joint Committee to hold hearings in late February and early March (U.S. Congress Joint Committee 1976; Burnham 1976). After the Study's release, protests about nuclear power moved from calls for moratoria to ballot propositions to civil disobedience and arrests. Opponents of nuclear power in California imposed restrictions on construction of new facilities (Schils 2011; Carter 1976). Activities in California and elsewhere indicated that many citizens found no comfort in assurances of "low risk."

Whatever praise came to the Study, the critiques, Nader's national movement, the Browns Ferry fire, the resignations, and visible protests made it impossible for NRC to proclaim that the safety of nuclear power was beyond dispute. NRC withdrew its embrace of the Executive Summary and many of the Study's findings in January 1979 (U.S. Congress House 1979). This ended the story of PRA, in its original intention, as a strategic ploy to use numbers to convince skeptics of the safety of nuclear power.

Despite its troubled origins, PRA is now extensively used tactically to manage nuclear power plants. The accident at Three Mile Island happened just 2 months after NRC distanced itself from aspects of the Study, and ironically, the accident may have provided the stimulus to revise PRA's purposes, because the Study contained an accident sequence

much as happened at Three Mile Island, which demonstrated that small incidents could cause serious accidents, an important new insight revealed by the PRA models (U.S. Nuclear Regulatory Commission 1982; Levine and Stetson 1982; Levine and Rasmussen 1984).18 Today, NRC continues to use legal standards similar to those in WASH-1250: major accidents shall not result in an exposure of any individual to more than 25 rem. Beyond that, NRC expects licensees to analyze their systems with PRA and show an “extremely low probability” of releasing radiation in the event of an accident. Expectations focus on limits to the number of early fatalities and of cancer deaths and limits to accidents damaging the core and releasing radiation.19 NRC does not embed PRA in law, mandate exact compliance of PRA results with expectations, or ask for comparisons of PRA results with estimated risks of other hazards. PRA became a tactical, not strategic, tool to frame expectations, not requirements.

17 John G. Davis to [Directors of Regional Offices of NRC], March 24, 1975, and April 3, 1975, http://www.nrc.gov/reading-rm/doc-collections/gen-comm/bulletins/1975/bl75004.html and http://www.nrc.gov/reading-rm/doc-collections/gen-comm/bulletins/1975/bl75004a.html, 8 May 2011; US Nuclear Regulatory Commission, The Browns Ferry Nuclear Plant Fire of 1975 and the History of NRC Fire Regulations (Washington, D.C.: 2009), 1–8, NUREG/BR-0361; Nuclear Information and Resource Services, "Safety deficiencies at Browns Ferry Nuclear Power Complex," June, 2007, 1–2, http://www.nirs.org/factsheets/brownsferryfactsheet.pdf, 8 May 2011; M. Ragheb and Jim Kolodziej, "Browns Ferry Fire," January 11, 2011, 1–7, https://netfiles.uiuc.edu/mragheb/www/NPRE%20457%20CSE%20462%20Safety%20Analysis%20of%20Nuclear%20Reactor%20Systems/Browns%20Ferry%20Fire.pdf, 8 May 2011.

18 S. Levine, "Safety goals for nuclear power plants," draft, 6/4/80, in Norman Rasmussen papers, MC 542, MIT Archives, Box 2, Folder 2, Levine, Saul, 1979–1981; Milton S. Plesset to John F. Ahearne, October 31, 1980, in Norman Rasmussen Papers, MC 542, MIT Archives, Box 2, Folder 28, Pate, Zack T., 1979–1989; Joseph M. Hendrie to Commissioners Gilinsky, Bradford, Ahearne, June 9, 1981, in Norman C. Rasmussen Papers, MC 542, MIT Archives, Box 1, Folder 48, Hendrie, Joseph M., 1981–1983.

19 10 CFR 50: (a) (1) (ii) (D) (1); 10 CFR 50:34 (a) (1) (ii); 51 FR 30028, August 21, 1986; U.S. Nuclear Regulatory Commission, Office of Nuclear Reactor Regulation, "Standard Review Plan for the Review of Safety Analysis Reports for Nuclear Power Plants: LWR Edition—Severe Accidents," (NUREG-0800, Chapter 19), http://www.nrc.gov/reading-rm/doc-collections/nuregs/staff/sr0800/ch19/, 17 January 2013.

Contesting the new concept of safety

Starr (1969) had opened the door for using probabilities and risk assessment to characterize "dangers" and "safety." For Starr, safety of any technology, and specifically nuclear power, lay in a probability of illness or death that was lower than involuntary risks already accepted. Three points stood out in this new concept of safety. First, safety became a matter with only one or two dimensions, usually immediate mortality and long-term mortality from cancer. Second, all technologies could be compared meaningfully on the basis of one ratio, which allowed contrasting a proposed technology with the already accepted risks of others. Third, and perhaps most important, Starr saw his probabilistic models as extensions of benefit-cost models that were already routine in engineering practice. Benefit-cost analysis itself stemmed from economics that divorced considerations of politics, morality, and justice from business and money. A company seeking profits simply had to show that benefits were sufficiently greater than costs to justify the investment.


Safety, seen as ratios of death, enriched benefit-cost analysis and placed safety into a context intended to identify economically profitable technology. The process ignored factors that people outside the enterprise might see as relevant. NRC in 1979, before the accident at Three Mile Island, moved away from the Study and PRA as a strategic tool because of technical criticisms, the inability to claim undisputed safety, and political unrest. After the accident, however, a steady stream of scholarship from the humanities and social sciences began a more thorough critique of PRA and its concept of safety, and these scholars found it wanting as a tool.

Perhaps the most significant was sociologist Perrow's (1984) Normal accidents, prepared in the aftermath of the accident at Three Mile Island. Perrow argued that the Study and PRA exemplified "absolute rationality," which calculates numerical values for risks and ranks them. In this framework, it is "rational" to pick options with lower calculated risks and "irrational" to pick options with higher risk numbers. Only the numbers have meaning, and the context in which risks appear matters not at all. For example, if benefits flowed to one group of people and risks to another, this would not affect the risk numbers. Distribution of risks compared to benefits does not matter in risk assessment. As Perrow noted, experts use absolute rationality and consider the public that does not use or understand it to be mistaken or wrong. Implied, he said, was the sentiment that experts should calculate and make decisions rather than allow an ignorant public to participate. Perrow's book, in other words, proposed a different conceptual scheme for safety and the management of risks; he took account of human concerns for family, community, places to live, and justice, none of which appear in PRA (Perrow 1984).

After Perrow, a steady stream of scholars enlarged on his themes. For example, philosopher Kristen Shrader-Frechette argued that low risk does not equate easily with a social judgment of safe enough, i.e., that a person or community will voluntarily judge the risk acceptable. Sociologists Ulrich Beck, Andrew Sterling, and Brian Wynne argued that risk is not an independent, objective entity easy to quantify. For Beck, risk became part of science, and science became part of politics. Historian Jacob Hamblin argued that risk assessment can deflect or obfuscate accountability for damages if a technological device malfunctions. For these scholars, ethics and political processes played a large role in the acceptability of risk, and PRA did not capture these factors (Shrader-Frechette 1983, 1991; Beck 1992; Sterling 1998; Wynne 2005; Hamblin 2012).

Recently, natural scientists returned to technical critiques of probabilistic assessments of risk and, consistent with doubts raised by their peers in the late 1970s, found that calculations performed for PRA may significantly underestimate now-


known accident rates.20 In 2011, the world had 14,400 reactor-years of experience.21 Scientists at the Natural Resources Defense Council concluded that the actual frequency of core damage was about one in 1,000 reactor-years. The NRC, however, currently expects one in 10,000 reactor-years, although the Study had predicted one in 20,000 reactor-years. Experience thus suggested that the global frequency was about 10 times greater than current official expectations and 20 times greater than predicted in the Study. Reactors of US design at Fukushima were especially dangerous, with a core-melt frequency of about one in 630 reactor-years. In addition, the accident at Fukushima did not violate the NRC's expectations of limits to prompt fatalities and cancer deaths, even though little disagreement exists about the severity of the events in Japan (Cochran and McKinzie 2011).
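The comparison in this paragraph is simple arithmetic on the quoted frequencies, which the short sketch below recomputes; no figures beyond those cited in the text are assumed.

```python
# Recompute the ratios stated in the text from the quoted core-damage
# frequencies (events per reactor-year).

observed_frequency = 1 / 1_000    # NRDC estimate from operating experience (quoted)
nrc_expectation    = 1 / 10_000   # current NRC expectation (quoted)
study_prediction   = 1 / 20_000   # WASH-1400 prediction (quoted)
fukushima_design   = 1 / 630      # frequency attributed to the Fukushima units (quoted)

print(f"Observed vs NRC expectation:  {observed_frequency / nrc_expectation:.0f}x higher")
print(f"Observed vs Study prediction: {observed_frequency / study_prediction:.0f}x higher")
print(f"Fukushima design vs observed: {fukushima_design / observed_frequency:.1f}x higher")
```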

Risk assessment beyond PRA

As noted earlier, the Study was the first comprehensive effort to develop risk assessment as a new way of dealing with dangerous technologies. PRA for nuclear power, however, fit into a larger arena in which government agencies sought legitimacy for their actions and projects through numbers. Porter's (1995) Trust in numbers argued that scientists and engineers turn to numbers when communities of experts have low internal social coherence and trust, when multiple communities address the same subject with different methods and different results, and when nonscientists exert pressure or criticism threatening the conclusions and "truths" believed by a group of experts.

Henry Kendall's leadership of the UCS's attack on the safety of ECCSs was, in Porter's terms, an example of the technical community's internal diversity leading to a breakdown of trust. Kendall was a physicist and Rasmussen an engineer. Both relied heavily on the physical sciences, and both were professors at MIT. They were not, however, in the same "community" that could operate on trust and social relationships. Senator Pastore's letter to Chairman Schlesinger was, in Porter's terms, pressure exercised by nontechnical outsiders who needed something besides the "trust us" mantra of AEC staff in the face of mounting public criticism. Pastore, the Joint Committee, and AEC staff all sought refuge in the purported objectivity of calculated low probabilities of harm as a defense of what they considered a good and necessary technology. Proponents, along with Professor Rasmussen, believed—before the Study even started—that the calculated risks would be low, because they already believed that nuclear power was safe enough. Absent carping critics, Rasmussen, Levine, and other proponents never would have seen the necessity of developing a method to calculate the presumed low probabilities of accidents and harm.

Balogh's (1991) Chain reaction had anticipated Porter's argument a few years earlier. He attributed the breakdown of coherence and trust within the nuclear technical community to the movement away from the secretive weapons and wartime community of the Manhattan Project and early AEC to the commercial nuclear power industry. The latter had to win the trust of highly diverse technical and nontechnical communities, and it could not rely on the exigencies of military emergency to smother concerns about safety.

Porter does not dwell on risk analysis (RA), but he notes that RA has close relationships to cost-benefit analysis. He observes that both risk analysts and cost-benefit analysts claim not to seek the elimination of political judgment in decision-making, but both nevertheless want to curtail such judgment. Technical and political criticism thwarted the efforts of AEC and NRC to use the numbers of PRA to quell disagreements about nuclear power. Significantly, however, the US Environmental Protection Agency (EPA) had a remarkably different experience when it brought the numbers of risk analysis into the regulation of pesticides and toxic chemicals (Albert et al. 1977; Cohrssen and Covello 1989). EPA successfully incorporated calculations of the lifetime probability of contracting cancer into, for example, the tolerances for the minute amounts of pesticide residues that could legally remain on crops going to market.
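To make that contrast concrete, the following is a purely illustrative sketch of the kind of linear low-dose extrapolation described here. The numbers, the variable names, and the simple lifetime-dose formula are hypothetical assumptions for illustration only, not EPA's actual tolerance-setting procedure for any real pesticide.

# Illustrative sketch only: generic linear low-dose extrapolation of cancer risk
# from a small pesticide residue. All values below are hypothetical assumptions.

residue_mg_per_kg_food = 0.05   # hypothetical residue concentration on a crop (mg per kg of food)
daily_intake_kg_food = 0.2      # hypothetical daily consumption of that crop (kg per day)
body_weight_kg = 70.0           # conventional adult body-weight assumption
slope_factor = 1e-3             # hypothetical cancer slope factor, per (mg/kg-day), from animal studies

# Lifetime average daily dose, assuming the same exposure every day of a lifetime
dose_mg_per_kg_day = residue_mg_per_kg_food * daily_intake_kg_food / body_weight_kg

# Linear (no-threshold) extrapolation: excess lifetime cancer risk ~ slope_factor * dose
excess_lifetime_risk = slope_factor * dose_mg_per_kg_day

print(f"Lifetime average daily dose: {dose_mg_per_kg_day:.2e} mg/kg-day")
print(f"Excess lifetime cancer risk: {excess_lifetime_risk:.1e}")

An estimate of this kind would then be compared with an agency's target risk level when a tolerance is set; the point here is only that the calculation concerns a rare statistical individual rather than a regional catastrophe.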

Conclusions

Chauncey Starr, the AEC, congressional supporters of nuclear power, and the nuclear industry all wanted a simple demonstration, by objective, scientific means, that nuclear power was a reasonably safe technology. The AEC's 1962 report, Civilian Nuclear Power, pointed to a future dominated by nuclear electricity. The Study was supposed to calm the unrest and agitation about nuclear power. What happened?

First, the history of the launching of the Study, its reception, and events following its release disrupted the strategic mission the AEC had planned. Events from start to finish indicated that the Study's purpose was strategic: to demonstrate that nuclear power was safe enough for routine deployment. Saul Levine had emphasized this need in his original recommendations to the Joint Committee. The Commission's attention to Chauncey Starr's article and its charge to Rasmussen's group implemented Levine's recommendations. Rasmussen's comments about the size of risks worth worrying about—issued before the project began—suggest that his concerns lay in portraying nuclear power as sufficiently benign to use. Dixy Lee Ray's comments at the press conference on the Study's release reinforce the sense that AEC wanted to show that nuclear power was safe enough, and the same message came again a year later when NRC released the final version of the Study. These hopes fell by the wayside after the report's release. Scientific and congressional critics found holes and overblown optimism. Ralph Nader's organization of a national protest movement, an accident at Browns Ferry, and resignations of engineers from General Electric and the NRC indicated that the Study warranted skepticism. PRA ultimately became a tool for modeling and assessing the dangers of nuclear power plants, taken one at a time, and NRC uses it for that purpose today. That utility, however, falls far short of the original hopes for a strategic proof of safety for society. Failure of the Study's strategic mission dealt a severe blow to the legitimacy and attractiveness of nuclear power. Lack of consensus on safety left each nuclear power plant open to attack by local citizens based on the individual plant's location, construction, and operation.

Second, scholarship in the natural and social sciences and in ethics argued that risk had multiple dimensions, difficult or impossible to capture on a simple linear scale. Possibly of most importance, demonstrated by Three Mile Island, Chernobyl [22], and Fukushima, was the power of accidents to force evacuations and long-term loss of land, homes, and communities. The Study focused on the chances of getting cancer and dying, as if the individual chances of poor health were all that worried citizens about nuclear power. Loss of family, friends, and community, however, probably looms as large as or larger than individual health for most people. PRA at its inception and today finds it difficult to assess these dimensions of safety.

[22] The accident at Chernobyl, a reactor in the former USSR and now Ukraine, happened in 1986, after the main events in the development of PRA described in this article. PRA as developed in the USA did not include RBMK reactor designs like Chernobyl. American nuclear engineers, after the Chernobyl catastrophe, immediately pointed to the RBMK's positive void coefficient and the Soviet decision not to put a containment building around the reactor. No PRA was needed to identify these two features as serious safety problems. In addition, irregularities in operating procedures and a lax safety culture also contributed. Chernobyl proved that bad accidents really could be catastrophic, but that story had no direct relationship to the history of PRA in the USA.

Third, why did risk assessment succeed in EPA's mission and fail the strategic mission of PRA for AEC and NRC? A full answer to this question is beyond the scope of this paper, but one key difference clearly distinguishes the two uses of risk assessment. For pesticides, the amount of residue left on the crop is relatively easy to measure with high precision, and if it is on the crop, then the consumer will endure exposure. The amounts of residue are small, and toxicological experiments provide data on cancer induction, which can be extrapolated to the low exposure doses. EPA made no strategic claims for safety. Instead, an exceptionally rare, unknown, statistical individual might suffer cancer; no catastrophe would engulf entire regions. PRA, in contrast, has to estimate from models the amount of exposure to the radioactive debris released by an accident. Victims may suffer heavy exposures, and whole regions and communities may be devastated. In other words, the scale of the consequences that PRA grappled with was many orders of magnitude greater than the scale EPA confronted. In the two decades after EPA brought risk assessment into the agency, disputes arose, but they focused on whether children were adequately protected by the tolerances adopted, which in turn relied on toxicity tests with adults or rodents (Wargo 1996). Criticisms faced by EPA did not include the legitimacy of using the probabilities of risk as the conceptual foundation of safety.

More recently, numerous scholars have sought further deployment of risk assessment in such fields as hydraulic fracturing for natural gas (for example, Soeder et al. 2014; McKenzie et al. 2012), marine drilling for oil (for example, Agwa et al. 2013), and climate change (for example, Lin et al. 2014). Interestingly, these efforts to use risk assessment deal with dangers that can be more catastrophic than those posed by pesticides and toxic chemicals. It remains to be seen if controversy will surround these applications.

Finally, does this history provide useful insights into the potential roles that nuclear power might play today in grappling with future choices of energy technology? Yes, it does. It demonstrated that a major tool (PRA), envisioned to persuade the public of the safety of uranium as a fuel, could not achieve its strategic mission. In fact, it was not even taken into the law as a clear legal standard for safety. That failure, combined with the catastrophic accidents that have happened, leaves the adequacy of nuclear power's safety at best ambiguous, with at least a significant minority vociferously opposed. Doubts about safety, combined with high capital costs, lead skeptics of nuclear power to promote efficiency and renewable energy as better pathways (for example, Lovins 2011; Sovacool 2011; Jacobson et al. 2013). Proponents of nuclear power reach the opposite conclusion (for example, National Research Council 2009; Richter 2010; Stone 2013).


Acknowledgments I'm very grateful for helpful comments received from a number of people on drafts of this paper: Peter Bradford, Robert J. Budnitz, George Irwin, Natalie Kopytko, Cheri Lucas Jennings, Chris Jones, Carolyn Merchant, Ralph Murphy, Laura Nader, Richard Muller, Barbara Bridgman Perkins, Kathleen Saul, and anonymous reviewers. I also thank the staff at the National Archives in Washington, DC, and College Park, MD; the Public Documents Room of the Nuclear Regulatory Commission in Rockville, MD; the MIT Archives in Cambridge, MA; the John F. Kennedy Presidential Library in Boston, MA; and the Hoover Institution Archives at Stanford University, Palo Alto, CA. I thank Susan Jenkins, Energy Biosciences Institute, University of California, Berkeley, for support of my efforts in energy education. None of these people are accountable for the views expressed here, and I remain responsible for all errors.

Appendix 1 Chronology of important events

Date            Event
1946            US Atomic Energy Commission established by Atomic Energy Act
1954            Atomic Energy Act revisions opened nuclear energy to private companies
March 1957      AEC released Theoretical possibilities and consequences, WASH-740
September 1957  Price-Anderson Act indemnified nuclear power plants from liability
December 1957   First commercial US nuclear power plant opened at Shippingport, PA
1962            AEC released Civilian nuclear power: a report to the President
1962            Rachel Carson's Silent Spring intensified environmental concerns
1950s–1970s     Local opposition to nuclear power projects thwarted and delayed them
1969            AEC Commissioners defended nuclear power
1969            Studies indicated New York City had inadequate electricity generation
1969            Critics blocked nuclear power plant on Lake Cayuga: thermal pollution
1969            Chauncey Starr proposed new safety concept based on risk
April 1971      AEC concluded ECCSs were not as reliable as previously thought
June 1971       AEC released new, interim criteria for ECCSs
July 1971       Court ruled AEC must comply with National Environmental Policy Act
August 1971     AEC engineer Saul Levine recommended to Congress a new safety study
October 1971    Senator John Pastore recommended a new study be done by AEC
May 1972        AEC appointed Rasmussen to head Reactor Safety Study
July 1973       AEC released WASH-1250, The safety of nuclear power reactors
1974            Ford Foundation released A time to choose
August 1974     AEC released draft of WASH-1400, Reactor Safety Study
November 1974   Ralph Nader held first national conference to oppose nuclear power
January 1975    US Nuclear Regulatory Commission replaced AEC
March 1975      32 prominent scientists said nuclear power was essential and worth risks
March 1975      Fire disabled emergency equipment, Browns Ferry nuclear plant
Summer 1975     First Lewis report criticized Reactor Safety Study
October 1975    NRC released final draft of Reactor Safety Study
February 1976   Three engineers resigned from General Electric over safety issues
February 1976   Engineer resigned from NRC over safety issues
March 1976      Joint Committee on Atomic Energy held hearings on resignations
June 1976       CA voters rejected curb on nuclear power but stiffer conditions followed
June 1976       US House Subcommittee held hearings on Reactor Safety Study
March 1977      US House Subcommittee report criticized Reactor Safety Study
June 1977       NRC appointed Lewis to head second study of Reactor Safety Study
August 1977     Union of Concerned Scientists criticized Reactor Safety Study
August 1977     Hundreds arrested for sit-in at California nuclear power plant
1978            Second Lewis report praised and criticized Reactor Safety Study
January 1979    NRC withdrew support of Executive Summary of Reactor Safety Study
March 1979      Accident destroyed nuclear power plant at Three Mile Island
1980s–present   US Nuclear Regulatory Commission made PRA a tactical tool
1984            Charles Perrow's Normal Accidents proposed alternatives to PRA
April 1986      Catastrophic explosion at Chernobyl contaminated huge area
March 2011      Catastrophic explosions at Fukushima contaminated huge area
1980s–present   Continued debates on safety clouded future of nuclear power

References

Agwa A, Leheta H, Salem A, Sadiq R (2013) Fate of drilling waste discharges and ecological risk assessment in the Egyptian Red Sea: an aquivalence-based fuzzy analysis. Stoch Env Res Risk A 27(1):169–181
Albert RE, Train RE, Anderson E (1977) Rationale developed by the Environmental Protection Agency for the assessment of carcinogenic risks, 58(5):1537–1541
Balogh B (1991) Chain reaction: expert debate and public participation in American commercial nuclear power. Cambridge University Press, New York
Barry PJ (1970) The siting and safety of civilian nuclear power plants, CRC Critical Reviews in Environmental Control, June
Beck CK (1966) Current trends & perspectives in reactor location and safety requirements. Nucl Saf 8:12–16

Beck U (1992) Risk society: towards a new modernity. Sage Publications, London
Bethe H et al (1975) No alternative to nuclear power, Bulletin of the Atomic Scientists (March):4–5
Bradford PA (2013) How to close the US nuclear industry: do nothing. Bull At Sci 69:12–21
Burnham (1974) "Inquiry on impact of A-power urged," New York Times, November 17
Burnham D (1975a) "Fire raises issue of safe reactors," New York Times, March 26
Burnham D (1975b) "Hope for cheap power from atom is fading," New York Times, November 16
Burnham D (1976) 3 engineers quit G.E. reactor division and volunteer in antinuclear movement, New York Times, February 3
Bupp IC, Darian J (1981) The failed promise of nuclear power: the story of light water. Basic Books, New York
Carlisle RP (1997) Probabilistic risk assessment in nuclear reactors: engineering success, public relations failure. Technol Cult 38:920–941
Carson R (1962) Silent Spring. Houghton Mifflin Company, Boston
Carter LJ (1976) Nuclear initiative: Californians vote "no", but legislature acts. Science 192:1317–1319
Cochran TB, McKinzie MG (2011) Global implications of the Fukushima disaster for nuclear power, paper delivered to World Federation of Scientists' International Seminars on Planetary Emergencies, Erice, Sicily, August 19–25, http://docs.nrdc.org/nuclear/files/nuc_11102801a.pdf, 18 January 2013
Cohn SM (1997) Too cheap to meter: an economic and philosophical analysis of the nuclear dream. State University of New York Press, Albany
Cohrssen JH, Covello VT (1989) Risk analysis: a guide to principles and methods for analyzing health and environmental risks. Council on Environmental Quality, Washington
Culver HN (1966) Effect of engineered safeguards on reactor siting. Nucl Saf 7:342–346
Del Sesto SL (1979) Science, politics, and controversy: civilian nuclear power in the United States, 1946–1974. Westview Press, Boulder
Development and Resources Corporation (1969) New York City's power supply. Development and Resources Corporation, New York
Development and Resources Corporation (1977) The risks of nuclear power reactors: a review of the NRC Reactor Safety Study, WASH-1400 (NUREG-75/015). Union of Concerned Scientists, Cambridge
Egan M (2007) Barry Commoner and the science of survival: the remaking of American environmentalism. MIT Press, Cambridge
Energy Policy Project of the Ford Foundation (1974) A time to choose: America's energy future. Ballinger Publishing Co., Cambridge
Farmer FR (1967) Siting criteria—a new approach, in International Atomic Energy Agency, Containment and Siting of Nuclear Power Plants, Proceedings of a Symposium on the Containment and Siting of Nuclear Power Plants, 3–7 April 19. International Atomic Energy Agency, Vienna
Farmer FR (1975) Advances in the reliability assessment of reactor systems. Atom 230:218–226
Federal Power Commission (1969) A review of consolidated Edison Company 1969 power supply problems and ten-year expansion plans. Federal Power Commission, Washington
Gillette R (1972a) Nuclear safety (I): the roots of dissent. Science 177:771–776
Gillette R (1972b) Nuclear safety (II): the years of delay. Science 177:867–871
Gillette R (1972c) Nuclear safety (III): critics charge conflicts of interest. Science 177:970–975
Gillette R (1972d) Nuclear safety (IV): barriers to communication. Science 177:1080–1082
Gwynne P, Bishop J Jr (1975) "Incident at Browns Ferry," Newsweek, October 20
Hamblin JD (2012) Fukushima and the motifs of nuclear history. Environ Hist 17:285–299

Hassl DF (1965) Advanced concepts in fault tree analysis, System Safety Symposium. Boeing Company and University of Washington, Seattle
International Energy Agency (2013) Electricity information, 2013. International Energy Agency, Paris, Part III, Table 1.2, percent calculated by author
Jacobson MZ et al (2013) Examining the feasibility of converting New York State's all-purpose energy infrastructure to one using wind, water, and sunlight. Energy Policy 57:585–601
Johnson JW (1986) Insuring against disaster: the nuclear industry on trial. Mercer University Press, Macon
Keller W, Modarres M (2005) A historical overview of probabilistic risk assessment development and its use in the nuclear power industry: a tribute to the late Professor Norman Carl Rasmussen. Reliab Eng Syst Saf 89:271–285
Kendall HW (2000) A distant light: scientists and public policy. AIP Press, New York
Kendall HW, Nadis SJ (eds) (1980) Energy strategies: toward a solar future: a report of the union of concerned scientists. Ballinger Pub. Co., Cambridge
Levine S, Rasmussen NC (1984) Nuclear plant PRA: how far has it come? Risk Anal 4:247–254
Levine S, Stetson F (1982) How PRA is being used in the USA. Nucl Eng Int: June, 35–38
Lewis HW et al (1975) Report to the American Physical Society. Rev Mod Phys 47(Supplement 1):S1–S124, Summer
Lewis RS (1972) From Seaborg to Schlesinger: a bird watcher on the AEC. Bull At Sci 28:44–45
Lin BB, Yong BK, Matthew I, Chi-Hsiang W, Sorada T, Xiaoming W (2014) Assessing inundation damage and timing of adaptation: sea level rise and the complexities of land use in coastal communities. Mitig Adapt Strateg Glob Chang 19(5):551–568
Lovins AB (2011) Reinventing fire: bold business solutions for the new energy era. Chelsea Green Publishing, White River Junction, VT
Lutts RH (1985) Chemical fallout: Rachel Carson's Silent Spring, radioactive fallout, and the environmental movement. Environ Rev 9:210–225
Lyons RD (1974) A.E.C. study finds hazards of reactors very slight, New York Times, August 21
Mazuzan GT (1986) "Very risky business": a power reactor for New York City. Technol Cult 27(2):262–284
Mazuzan GT, Walker JS (1984) Controlling the atom: the beginnings of nuclear regulation. University of California Press, Berkeley
McKenzie LM, Witter RZ, Newman LS, Adgate JL (2012) Human health risk assessment of air emissions from development of unconventional natural gas resources. Sci Total Environ 424:79–87
Means AB (1965) Fault tree analysis: the study of unlikely events in complex systems, system safety symposium. Boeing Company and University of Washington, Seattle
Mulvihill RJ (1966) A probabilistic methodology for the safety analysis of nuclear power reactors, SAN-570-2. Planning Research Corporation, Los Angeles
Nader R (1974) "Agenda for Critical Mass 74, conference on nuclear power," CSHL Archives Repository, Reference JDW/2/2/1259/2, accessed June 25, 2014, http://libgallery.cshl.edu/items/show/45990
National Research Council (2009) America's energy future. National Academies Press, Washington
Nelkin D (1971) Nuclear power and its critics: the Cayuga Lake controversy. Cornell University Press, Ithaca
New York Times (1984) Obituary, "Saul Levine dies at 61; nuclear safety expert," October 28
Novick S (1968) The careless atom. Houghton Mifflin Company, Boston
Nuclear Energy Agency (2009) Nuclear fuel behaviour in loss-of-coolant accident (LOCA) conditions, NEA-No. 6846. Organisation for Economic Cooperation and Development, http://www.oecd-nea.org/nsd/reports/2009/nea6846_LOCA.pdf, accessed 18 February 2012
Perrow C (1984) Normal accidents: living with high-risk technologies. Basic Books, New York

Pope D (1990) "We can wait. We should wait." Eugene's nuclear power controversy, 1968–1970. Pac Hist Rev 59(3):349–373
Porter TM (1995) Trust in numbers: the pursuit of objectivity in science and public life. Princeton University Press, Princeton
Randall PE (1989) Hampton: a century of town and beach, 1888–1988. Peter E. Randall Publisher, Hampton. Found at http://www.hampton.lib.nh.us/hampton/history/randall/chap18/randall18_4.htm, 30 June 2008
Rasmussen NC (1972) Nuclear reactor safety—an opinion, Nuclear News, January, 35–40
Richter B (2010) Beyond smoke and mirrors: climate change and energy in the 21st century. Cambridge University Press, New York
Risk Assessment Review Group (1978) Risk assessment review group report to the U.S. nuclear regulatory commission, NUREG/CR-0400. U.S. Nuclear Regulatory Commission, Washington
Rolph ES (1979) Nuclear power and the public safety: a study in regulation. Lexington Books D.C. Heath and Company, Lexington
Schils N (2011) "Abalone Alliance campaign against Diablo Canyon Nuclear Plant, California, 1976–1984," Global Nonviolent Action Database, 2011, 1–7, http://nvdatabase.swarthmore.edu/content/abalone-alliance-campaigns-against-diablo-canyonnuclear-plant-california-1976-1984, 29 February 2012
Shrader-Frechette KS (1983) Nuclear power and public policy: the social and ethical problems of fission technology, 2nd edn. D. Reidel Publishing Company, Dordrecht
Shrader-Frechette KS (1991) Risk and rationality: philosophical foundations for populist reforms. University of California Press, Berkeley
Soeder DJ, Sharma S, Pekney N, Hopkinson L, Dilmore R, Kutchko B, Stewart B, Carter K, Hakala A, Capo R (2014) An approach for assessing engineering risk from shale gas wells in the United States. Int J Coal Geol 126:4–19
Sovacool BK (2011) Contesting the future of nuclear power: a critical global assessment of atomic energy. World Scientific Publishing Co. Pte. Ltd., Singapore
Starr C (1969) Social benefits versus technological risk. Science 165:1232–1238
Sterling A (1998) Risk at a turning point? J Risk Res 1:97–109
Stone R (2013) Pandora's Promise. CNN Films, Atlanta
Teller E, Latter AL (1958) Our nuclear future: facts dangers and opportunities. Criterion Books, New York
Umlauf JL (1965) Case history/minuteman/for weapon safety system, system safety symposium. Boeing Company and University of Washington, Seattle
U.S. Atomic Energy Commission (1957) Theoretical possibilities and consequences of major accidents in large nuclear power plants, WASH-740. Government Printing Office, Washington
U.S. Atomic Energy Commission (1962) Civilian nuclear power: a report to the President, 1962. U.S. Atomic Energy Commission, Oak Ridge
U.S. Atomic Energy Commission (1973) The safety of nuclear power reactors (light water-cooled) and related facilities, WASH-1250. U.S. Atomic Energy Commission, Washington
U.S. Atomic Energy Commission (1974) Risks to public from nuclear power plants very small, study concludes. U.S. Atomic Energy Commission, Washington, August 20, Press Release
U.S. Congress House (1976) Committee on Interior & Insular Affairs, Subcommittee on Energy & Environment, Hearings, Reactor Safety Study (Rasmussen Report). Government Printing Office, Washington
U.S. Congress, House (1977) Committee on Interior and Insular Affairs, Subcommittee on Energy and the Environment, Observations on the Reactor Safety Study: a report, Committee Print No. 1. Government Printing Office, Washington
U.S. Congress House (1979) Committee on Interior & Insular Affairs, Subcommittee on Energy & the Environment, Hearings, Reactor Safety Study review. Government Printing Office, Washington
U.S. Congress House (1986) Committee on science and technology, subcommittee on energy and energy research and production, legislative inquiry on the Price-Anderson Act. Government Printing Office, Washington
U.S. Congress Joint Committee on Atomic Energy (1963) Hearings, development, growth, and state of the atomic energy industry, part i. Government Printing Office, Washington
U.S. Congress Joint Committee on Atomic Energy (1976) Hearings, investigation of charges relating to nuclear reactor safety. Government Printing Office, Washington
U.S. Nuclear Regulatory Commission (1975a) Reactor Safety Study: an assessment of accident risks in U.S. commercial nuclear power plants, NUREG-75/014. U.S. Nuclear Regulatory Commission, Washington
U.S. Nuclear Regulatory Commission (1975b) Final report of Reactor Safety Study completed. U.S. Nuclear Regulatory Commission, Washington, October 10, Press Release
U.S. Nuclear Regulatory Commission (1982) Safety goals for nuclear power plants: a discussion paper, NUREG-0880. U.S. Nuclear Regulatory Commission, Washington

Walker JS (1990) Reactor at the fault: the Bodega Bay nuclear plant controversy, 1958–1964: a case study in the politics of technology. Pac Hist Rev 59(3):323–348
Walker JS (1992) Containing the atom: nuclear regulation in a changing environment, 1963–1971. University of California Press, Berkeley
Walker JS (2000) Permissible dose: a history of radiation protection in the twentieth century. University of California Press, Berkeley
Wargo J (1996) Our children's toxic legacy: how science and law fail to protect us from pesticides. Yale University Press, New Haven
Wellock TR (2012) Engineering uncertainty and bureaucratic crisis at the Atomic Energy Commission, 1964–1973. Technol Cult 53:846–884
Wynne B (2005) Risk as globalizing 'democratic' discourse? Framing subjects and citizens. In: Leach M, Scoones I, Wynne B (eds) Science and citizens: globalization and the challenge of engagement. Zed Books, London