The gross misperception of risk that informs so much anti-nuclear fear




1AC

plan



The United States Federal Government should obtain, through alternative financing, electricity from small modular reactors for military facilities in the United States.



1ac – adv



Contention one is climate



SMR-based nuclear power is safe and solves warming—it’s key to a global nuclear renaissance.


Michael Shellenberger 12, founder of the Breakthrough Institute, graduate of Earlham College who holds a master's degree in cultural anthropology from the University of California, Santa Cruz, "New Nukes: Why We Need Radical Innovation to Make New Nuclear Energy Cheap", September 11, http://thebreakthrough.org/index.php/programs/energy-and-climate/new-nukes/

Arguably, the biggest impact of Fukushima on the nuclear debate, ironically, has been to force a growing number of pro-nuclear environmentalists out of the closet, including us. The reaction to the accident by anti-nuclear campaigners and many Western publics put a fine point on the gross misperception of risk that informs so much anti-nuclear fear. Nuclear remains the only proven technology capable of reliably generating zero-carbon energy at a scale that can have any impact on global warming. Climate change -- and, for that matter, the enormous present-day health risks associated with burning coal, oil, and gas -- simply dwarf any legitimate risk associated with the operation of nuclear power plants. About 100,000 people die every year due to exposure to air pollutants from the burning of coal. By contrast, about 4,000 people have died from nuclear energy -- ever -- almost entirely due to Chernobyl. But rather than simply lecturing our fellow environmentalists about their misplaced priorities, and how profoundly inadequate present-day renewables are as substitutes for fossil energy, we would do better to take seriously the real obstacles standing in the way of a serious nuclear renaissance. Many of these obstacles have nothing to do with the fear-mongering of the anti-nuclear movement or, for that matter, the regulatory hurdles imposed by the U.S. Nuclear Regulatory Commission and similar agencies around the world. As long as nuclear technology is characterized by enormous upfront capital costs, it is likely to remain just a hedge against overdependence on lower-cost coal and gas, not the wholesale replacement it needs to be to make a serious dent in climate change. Developing countries need large plants capable of bringing large amounts of new power to their fast-growing economies. But they also need power to be cheap. So long as coal remains the cheapest source of electricity in the developing world, it is likely to remain king. The most worrying threat to the future of nuclear isn't the political fallout from Fukushima -- it's economic reality. Even as new nuclear plants are built in the developing world, old plants are being retired in the developed world. For example, Germany's plan to phase-out nuclear simply relies on allowing existing plants to be shut down when they reach the ends of their lifetime. Given the size and cost of new conventional plants today, those plants are unlikely to be replaced with new ones. As such, the combined political and economic constraints associated with current nuclear energy technologies mean that nuclear energy's share of global energy generation is unlikely to grow in the coming decades, as global energy demand is likely to increase faster than new plants can be deployed. To move the needle on nuclear energy to the point that it might actually be capable of displacing fossil fuels, we'll need new nuclear technologies that are cheaper and smaller. Today, there are a range of nascent, smaller nuclear power plant designs, some of them modifications of the current light-water reactor technologies used on submarines, and others, like thorium fuel and fast breeder reactors, which are based on entirely different nuclear fission technologies. Smaller, modular reactors can be built much faster and cheaper than traditional large-scale nuclear power plants. 
Next-generation nuclear reactors are designed to be incapable of melting down, produce drastically less radioactive waste, make it very difficult or impossible to produce weapons grade material, use less water, and require less maintenance. Most of these designs still face substantial technical hurdles before they will be ready for commercial demonstration. That means a great deal of research and innovation will be necessary to make these next generation plants viable and capable of displacing coal and gas. The United States could be a leader on developing these technologies, but unfortunately U.S. nuclear policy remains mostly stuck in the past. Rather than creating new solutions, efforts to restart the U.S. nuclear industry have mostly focused on encouraging utilities to build the next generation of large, light-water reactors with loan guarantees and various other subsidies and regulatory fixes. With a few exceptions, this is largely true elsewhere around the world as well. Nuclear has enjoyed bipartisan support in Congress for more than 60 years, but the enthusiasm is running out. The Obama administration deserves credit for authorizing funding for two small modular reactors, which will be built at the Savannah River site in South Carolina. But a much more sweeping reform of U.S. nuclear energy policy is required. At present, the Nuclear Regulatory Commission has little institutional knowledge of anything other than light-water reactors and virtually no capability to review or regulate alternative designs. This affects nuclear innovation in other countries as well, since the NRC remains, despite its many critics, the global gold standard for thorough regulation of nuclear energy. Most other countries follow the NRC's lead when it comes to establishing new technical and operational standards for the design, construction, and operation of nuclear plants. What's needed now is a new national commitment to the development, testing, demonstration, and early stage commercialization of a broad range of new nuclear technologies -- from much smaller light-water reactors to next generation ones -- in search of a few designs that can be mass produced and deployed at a significantly lower cost than current designs. This will require both greater public support for nuclear innovation and an entirely different regulatory framework to review and approve new commercial designs. In the meantime, developing countries will continue to build traditional, large nuclear power plants. But time is of the essence. With the lion's share of future carbon emissions coming from those emerging economic powerhouses, the need to develop smaller and cheaper designs that can scale faster is all the more important. A true nuclear renaissance can't happen overnight. And it won't happen so long as large and expensive light-water reactors remain our only option. But in the end, there is no credible path to mitigating climate change without a massive global expansion of nuclear energy. If you care about climate change, nothing is more important than developing the nuclear technologies we will need to get that job done.

Plan results in global SMR exports – massively reduces emissions.


Rosner 11

Robert Rosner, Stephen Goldberg, Energy Policy Institute at Chicago, The Harris School of Public Policy Studies, November 2011, SMALL MODULAR REACTORS – KEY TO FUTURE NUCLEAR POWER GENERATION IN THE U.S., https://epic.sites.uchicago.edu/sites/epic.uchicago.edu/files/uploads/EPICSMRWhitePaperFinalcopy.pdf



As stated earlier, SMRs have the potential to achieve significant greenhouse gas emission reductions. They could provide alternative baseload power generation to facilitate the retirement of older, smaller, and less efficient coal generation plants that would, otherwise, not be good candidates for retrofitting carbon capture and storage technology. They could be deployed in regions of the U.S. and the world that have less potential for other forms of carbon-free electricity, such as solar or wind energy. There may be technical or market constraints, such as projected electricity demand growth and transmission capacity, which would support SMR deployment but not GW-scale LWRs. From the on-shore manufacturing perspective, a key point is that the manufacturing base needed for SMRs can be developed domestically. Thus, while the large commercial LWR industry is seeking to transplant portions of its supply chain from current foreign sources to the U.S., the SMR industry offers the potential to establish a large domestic manufacturing base building upon already existing U.S. manufacturing infrastructure and capability, including the Naval shipbuilding and underutilized domestic nuclear component and equipment plants. The study team learned that a number of sustainable domestic jobs could be created – that is, the full panoply of design, manufacturing, supplier, and construction activities – if the U.S. can establish itself as a credible and substantial designer and manufacturer of SMRs. While many SMR technologies are being studied around the world, a strong U.S. commercialization program can enable U.S. industry to be first to market SMRs, thereby serving as a fulcrum for export growth as well as a lever in influencing international decisions on deploying both nuclear reactor and nuclear fuel cycle technology. A viable U.S.-centric SMR industry would enable the U.S. to recapture technological leadership in commercial nuclear technology, which has been lost to suppliers in France, Japan, Korea, Russia, and, now rapidly emerging, China.

DOD is critical to mass adoption and commercialization


Andres and Breetz 11

Richard Andres, Professor of National Security Strategy at the National War College and a Senior Fellow and Energy and Environmental Security and Policy Chair in the Center for Strategic Research, Institute for National Strategic Studies, at the National Defense University, and Hanna Breetz, doctoral candidate in the Department of Political Science at The Massachusetts Institute of Technology, Small Nuclear Reactors for Military Installations: Capabilities, Costs, and Technological Implications, www.ndu.edu/press/lib/pdf/StrForum/SF-262.pdf



Thus far, this paper has reviewed two of DOD’s most pressing energy vulnerabilities—grid insecurity and fuel convoys—and explored how they could be addressed by small reactors. We acknowledge that there are many uncertainties and risks associated with these reactors. On the other hand, failing to pursue these technologies raises its own set of risks for DOD, which we review in this section: first, small reactors may fail to be commercialized in the United States; second, the designs that get locked in by the private market may not be optimal for DOD’s needs; and third, expertise on small reactors may become concentrated in foreign countries. By taking an early “first mover” role in the small reactor market, DOD could mitigate these risks and secure the long-term availability and appropriateness of these technologies for U.S. military applications. The “Valley of Death.” Given the promise that small reactors hold for military installations and mobility, DOD has a compelling interest in ensuring that they make the leap from paper to production. However, if DOD does not provide an initial demonstration and market, there is a chance that the U.S. small reactor industry may never get off the ground. The leap from the laboratory to the marketplace is so difficult to bridge that it is widely referred to as the “Valley of Death.” Many promising technologies are never commercialized due to a variety of market failures—including technical and financial uncertainties, information asymmetries, capital market imperfections, transaction costs, and environmental and security externalities—that impede financing and early adoption and can lock innovative technologies out of the marketplace.28 In such cases, the Government can help a worthy technology to bridge the Valley of Death by accepting the first mover costs and demonstrating the technology’s scientific and economic viability.29 [FOOTNOTE 29: There are numerous actions that the Federal Government could take, such as conducting or funding research and development, stimulating private investment, demonstrating technology, mandating adoption, and guaranteeing markets. Military procurement is thus only one option, but it has often played a decisive role in technology development and is likely to be the catalyst for the U.S. small reactor industry. See Vernon W. Ruttan, Is War Necessary for Economic Growth? (New York: Oxford University Press, 2006); Kira R. Fabrizio and David C. Mowery, “The Federal Role in Financing Major Inventions: Information Technology during the Postwar Period,” in Financing Innovation in the United States, 1870 to the Present, ed. Naomi R. Lamoreaux and Kenneth L. Sokoloff (Cambridge, MA: The MIT Press, 2007), 283–316.] Historically, nuclear power has been “the most clear-cut example . . . of an important general-purpose technology that in the absence of military and defense related procurement would not have been developed at all.”30 Government involvement is likely to be crucial for innovative, next-generation nuclear technology as well. 
Despite the widespread revival of interest in nuclear energy, Daniel Ingersoll has argued that radically innovative designs face an uphill battle, as “the high capital cost of nuclear plants and the painful lessons learned during the first nuclear era have created a prevailing fear of first-of-a-kind designs.”31 In addition, Massachusetts Institute of Technology reports on the Future of Nuclear Power called for the Government to provide modest “first mover” assistance to the private sector due to several barriers that have hindered the nuclear renaissance, such as securing high up-front costs of site-banking, gaining NRC certification for new technologies, and demonstrating technical viability.32 It is possible, of course, that small reactors will achieve commercialization without DOD assistance. As discussed above, they have garnered increasing attention in the energy community. Several analysts have even argued that small reactors could play a key role in the second nuclear era, given that they may be the only reactors within the means of many U.S. utilities and developing countries.33 However, given the tremendous regulatory hurdles and technical and financial uncertainties, it appears far from certain that the U.S. small reactor industry will take off. If DOD wants to ensure that small reactors are available in the future, then it should pursue a leadership role now. Technological Lock-in. A second risk is that if small reactors do reach the market without DOD assistance, the designs that succeed may not be optimal for DOD’s applications. Due to a variety of positive feedback and increasing returns to adoption (including demonstration effects, technological interdependence, network and learning effects, and economies of scale), the designs that are initially developed can become “locked in.”34 Competing designs—even if they are superior in some respects or better for certain market segments— can face barriers to entry that lock them out of the market. If DOD wants to ensure that its preferred designs are not locked out, then it should take a first mover role on small reactors. It is far too early to gauge whether the private market and DOD have aligned interests in reactor designs. On one hand, Matthew Bunn and Martin Malin argue that what the world needs is cheaper, safer, more secure, and more proliferation-resistant nuclear reactors; presumably, many of the same broad qualities would be favored by DOD.35 There are many varied market niches that could be filled by small reactors, because there are many different applications and settings in which they can be used, and it is quite possible that some of those niches will be compatible with DOD’s interests.36 On the other hand, DOD may have specific needs (transportability, for instance) that would not be a high priority for any other market segment. Moreover, while DOD has unique technical and organizational capabilities that could enable it to pursue more radically innovative reactor lines, DOE has indicated that it will focus its initial small reactor deployment efforts on LWR designs.37 If DOD wants to ensure that its preferred reactors are developed and available in the future, it should take a leadership role now. Taking a first mover role does not necessarily mean that DOD would be “picking a winner” among small reactors, as the market will probably pursue multiple types of small reactors. Nevertheless, DOD leadership would likely have a profound effect on the industry’s timeline and trajectory. Domestic Nuclear Expertise. 
From the perspective of larger national security issues, if DOD does not catalyze the small reactor industry, there is a risk that expertise in small reactors could become dominated by foreign companies. A 2008 Defense Intelligence Agency report warned that the United States will become totally dependent on foreign governments for future commercial nuclear power unless the military acts as the prime mover to reinvigorate this critical energy technology with small, distributed power reactors.38 Several of the most prominent small reactor concepts rely on technologies perfected at Federally funded laboratories and research programs, including the Hyperion Power Module (Los Alamos National Laboratory), NuScale (DOE-sponsored research at Oregon State University), IRIS (initiated as a DOE-sponsored project), Small and Transportable Reactor (Lawrence Livermore National Laboratory), and Small, Sealed, Transportable, Autonomous Reactor (developed by a team including the Argonne, Lawrence Livermore, and Los Alamos National Laboratories). However, there are scores of competing designs under development from over a dozen countries. If DOD does not act early to support the U.S. small reactor industry, there is a chance that the industry could be dominated by foreign companies. Along with other negative consequences, the decline of the U.S. nuclear industry decreases the NRC’s influence on the technology that supplies the world’s rapidly expanding demand for nuclear energy. Unless U.S. companies begin to retake global market share, in coming decades France, China, South Korea, and Russia will dictate standards on nuclear reactor reliability, performance, and proliferation resistance.

Warming is real, anthropogenic, and causes mass death


Deibel 7 (Terry L., Professor of IR @ National War College, “Foreign Affairs Strategy: Logic for American Statecraft”, Conclusion: American Foreign Affairs Strategy Today)

Finally, there is one major existential threat to American security (as well as prosperity) of a nonviolent nature, which, though far in the future, demands urgent action. It is the threat of global warming to the stability of the climate upon which all earthly life depends. Scientists worldwide have been observing the gathering of this threat for three decades now, and what was once a mere possibility has passed through probability to near certainty. Indeed not one of more than 900 articles on climate change published in refereed scientific journals from 1993 to 2003 doubted that anthropogenic warming is occurring. “In legitimate scientific circles,” writes Elizabeth Kolbert, “it is virtually impossible to find evidence of disagreement over the fundamentals of global warming.” Evidence from a vast international scientific monitoring effort accumulates almost weekly, as this sample of newspaper reports shows: an international panel predicts “brutal droughts, floods and violent storms across the planet over the next century”; climate change could “literally alter ocean currents, wipe away huge portions of Alpine Snowcaps and aid the spread of cholera and malaria”; “glaciers in the Antarctic and in Greenland are melting much faster than expected, and…worldwide, plants are blooming several days earlier than a decade ago”; “rising sea temperatures have been accompanied by a significant global increase in the most destructive hurricanes”; “NASA scientists have concluded from direct temperature measurements that 2005 was the hottest year on record, with 1998 a close second”; “Earth’s warming climate is estimated to contribute to more than 150,000 deaths and 5 million illnesses each year” as disease spreads; “widespread bleaching from Texas to Trinidad…killed broad swaths of corals” due to a 2-degree rise in sea temperatures. “The world is slowly disintegrating,” concluded Inuit hunter Noah Metuq, who lives 30 miles from the Arctic Circle. “They call it climate change…but we just call it breaking up.” From the founding of the first cities some 6,000 years ago until the beginning of the industrial revolution, carbon dioxide levels in the atmosphere remained relatively constant at about 280 parts per million (ppm). At present they are accelerating toward 400 ppm, and by 2050 they will reach 500 ppm, about double pre-industrial levels. Unfortunately, atmospheric CO2 lasts about a century, so there is no way immediately to reduce levels, only to slow their increase. We are thus in for significant global warming; the only debate is how much and how serious the effects will be. As the newspaper stories quoted above show, we are already experiencing the effects of 1-2 degree warming in more violent storms, spread of disease, mass die-offs of plants and animals, species extinction, and threatened inundation of low-lying countries like the Pacific nation of Kiribati and the Netherlands. At a warming of 5 degrees or less the Greenland and West Antarctic ice sheets could disintegrate, leading to a sea level rise of 20 feet that would cover North Carolina’s outer banks, swamp the southern third of Florida, and inundate Manhattan up to the middle of Greenwich Village. Another catastrophic effect would be the collapse of the Atlantic thermohaline circulation that keeps the winter weather in Europe far warmer than its latitude would otherwise allow. 
Economist William Cline once estimated the damage to the United States alone from moderate levels of warming at 1-6 percent of GDP annually; severe warming could cost 13-26 percent of GDP. But the most frightening scenario is runaway greenhouse warming, based on positive feedback from the buildup of water vapor in the atmosphere that is both caused by and causes hotter surface temperatures. Past ice age transitions, associated with only 5-10 degree changes in average global temperatures, took place in just decades, even though no one was then pouring ever-increasing amounts of carbon into the atmosphere. Faced with this specter, the best one can conclude is that “humankind’s continuing enhancement of the natural greenhouse effect is akin to playing Russian roulette with the earth’s climate and humanity’s life support system.” At worst, says physics professor Marty Hoffert of New York University, “we’re just going to burn everything up; we’re going to heat the atmosphere to the temperature it was in the Cretaceous when there were crocodiles at the poles, and then everything will collapse.” During the Cold War, astronomer Carl Sagan popularized a theory of nuclear winter to describe how a thermonuclear war between the United States and the Soviet Union would not only destroy both countries but possibly end life on this planet. Global warming is the post-Cold War era’s equivalent of nuclear winter, at least as serious and considerably better supported scientifically. Over the long run it puts dangers from terrorism and traditional military challenges to shame. It is a threat not only to the security and prosperity of the United States, but potentially to the continued existence of life on this planet.

Positive feedbacks will overwhelm natural carbon sinks


James W. Kirchner 2, professor of Earth and Planetary Science at Berkeley, “The Gaia Hypothesis: Fact, Theory, And Wishful Thinking”, http://seismo.berkeley.edu/~kirchner/reprints/2002_55_Kirchner_gaia.pdf

Do biological feedbacks stabilize, or destabilize, the global environment? That is, is the ‘Homeostatic Gaia’ hypothesis correct? This is not just a matter for theoretical speculation; there is a large and growing body of information that provides an empirical basis for evaluating this question. Biogeochemists have documented and quantified many important atmosphere-biosphere linkages (particularly those associated with greenhouse gas emissions and global warming), with the result that one can estimate the sign, and sometimes the magnitude, of the resulting feedbacks. Such analyses are based on three types of evidence: biological responses to plot-scale experimental manipulations of temperature and/or CO2 concentrations (e.g., Saleska et al., 1999), computer simulations of changes in vegetation community structure (e.g., Woodward et al., 1998), and correlations between temperature and atmospheric concentrations in long-term data sets (e.g., Tans et al., 1990; Keeling et al., 1996a; Petit et al., 1999). Below, I briefly summarize some of the relevant biological feedbacks affecting global warming; more detailed explanations, with references to the primary literature, can be found in reviews by Lashof (1989), Lashof et al. (1997) and Woodwell et al. (1998): Increased atmospheric CO2 concentrations stimulate increased photosynthesis, leading to carbon sequestration in biomass (negative feedback). Warmer temperatures increase soil respiration rates, releasing organic carbon stored in soils (positive feedback). Warmer temperatures increase fire frequency, leading to net replacement of older, larger trees with younger, smaller ones, resulting in net release of carbon from forest biomass (positive feedback). Warming may lead to drying, and thus sparser vegetation and increased desertification, in mid-latitudes, increasing planetary albedo and atmospheric dust concentrations (negative feedback). Conversely, higher atmospheric CO2 concentrations may increase drought tolerance in plants, potentially leading to expansion of shrublands into deserts, thus reducing planetary albedo and atmospheric dust concentrations (positive feedback). Warming leads to replacement of tundra by boreal forest, decreasing planetary albedo (positive feedback). Warming of soils accelerates methane production more than methane consumption, leading to net methane release (positive feedback). Warming of soils accelerates N2O production rates (positive feedback). Warmer temperatures lead to release of CO2 and methane from high-latitude peatlands (positive, potentially large, feedback). This list of feedbacks is not comprehensive, but I think it is sufficient to cast considerable doubt on the notion that biologically mediated feedbacks are necessarily (or even typically) stabilizing. As Lashof et al. (1997) conclude, ‘While the processes involved are complex and there are both positive and negative feedback loops, it appears likely that the combined effect of the feedback mechanisms reviewed here will be to amplify climate change relative to current projections, perhaps substantially. . . The risk that biogeochemical feedbacks could substantially amplify global warming has not been adequately considered by the scientific or the policymaking communities’. Most of the work to date on biological climate feedbacks has focused on terrestrial ecosystems and soils; less is known about potential biological feedbacks in the oceans. 
One outgrowth of the Gaia hypothesis has been the suggestion that oceanic phytoplankton might serve as a planetary thermostat by producing dimethyl sulfide (DMS), a precursor for cloud condensation nuclei, in response to warming (Charlson et al., 1987). Contrary to this hypothesis, paleoclimate data now indicate that to the extent that there is such a marine biological thermostat, it is hooked up backwards, making the planet colder when it is cold and warmer when it is warm (Legrand et al., 1988; Kirchner, 1990; Legrand et al., 1991). It now appears that DMS production in the Southern Ocean may be controlled by atmospheric dust, which supplies iron, a limiting nutrient (Watson et al., 2000). The Antarctic ice core record is consistent with this view, showing greater deposition of atmospheric dust during glacial periods, along with higher levels of DMS proxy compounds, lower concentrations of CO2 and CH4, and lower temperatures (Figure 1). Watson and Liss (1998) conclude, ‘It therefore seems very likely that, both with respect to CO2 and DMS interactions, the marine biological changes which occurred across the last glacial-interglacial transition were both positive feedbacks’. This example illustrates how the Gaia hypothesis can motivate interesting and potentially important research, even if, as in this case, the hypothesis itself turns out to be incorrect. But it is one thing to view Gaia as just a stimulus for research, and quite another to view it as ‘the essential theoretical basis for the putative profession of planetary medicine’ (Lovelock, 1986). To the extent that the Gaia hypothesis posits that biological feedbacks are typically stabilizing, it is contradicted both by the ice core records and by the great majority of the climate feedback research summarized above. Given the available evidence that biological feedbacks are often destabilizing, it would be scientifically unsound – and unwise as a matter of public policy – to assume that biological feedbacks will limit the impact of anthropogenic climate change. As Woodwell et al. (1998) have put it, ‘The biotic feedback issue, critical as it is, has been commonly assumed to reduce the accumulation of heat trapping gases in the atmosphere, not to amplify the trend. The assumption is serious in that the margin of safety in allowing a warming to proceed may be substantially less than has been widely believed’.

Adopting a mindset of scientific inquiry for climate change makes sense because it’s a phenomenon uniquely suited to an empiricist methodology


Jean Bricmont 1, professor of theoretical physics at the University of Louvain, “Defense of a Modest Scientific Realism”, September 23, http://www.physics.nyu.edu/faculty/sokal/bielefeld_final.pdf

Given that instrumentalism is not defensible when it is formulated as a rigid doctrine, and since redefining truth leads us from bad to worse, what should one do? A hint of one sensible response is provided by the following comment of Einstein: Science without epistemology is—insofar as it is thinkable at all—primitive and muddled. However, no sooner has the epistemologist, who is seeking a clear system, fought his way through such a system, than he is inclined to interpret the thought-content of science in the sense of his system and to reject whatever does not fit into his system. The scientist, however, cannot afford to carry his striving for epistemological systematic that far. ... He therefore must appear to the systematic epistemologist as an unscrupulous opportunist. So let us try epistemological opportunism. We are, in some sense, "screened" from reality (we have no immediate access to it, radical skepticism cannot be refuted, etc.). There are no absolutely secure foundations on which to base our knowledge. Nevertheless, we all assume implicitly that we can obtain some reasonably reliable knowledge of reality, at least in everyday life. Let us try to go farther, putting to work all the resources of our fallible and finite minds: observations, experiments, reasoning. And then let us see how far we can go. In fact, the most surprising thing, shown by the development of modern science, is how far we seem to be able to go. Unless one is a solipsist or a radical skeptic—which nobody really is—one has to be a realist about something: about objects in everyday life, or about the past, dinosaurs, stars, viruses, whatever. But there is no natural border where one could somehow radically change one's basic attitude and become thoroughly instrumentalist or pragmatist (say, about atoms or quarks or whatever). There are many differences between quarks and chairs, both in the nature of the evidence supporting their existence and in the way we give meaning to those words, but they are basically differences of degree. Instrumentalists are right to point out that the meaning of statements involving unobservable entities (like "quark") is in part related to the implications of such statements for direct observations. But only in part: though it is difficult to say exactly how we give meaning to scientific expressions, it seems plausible that we do it by combining direct observations with mental pictures and mathematical formulations, and there is no good reason to restrict oneself to only one of these. Likewise, conventionalists like Poincaré are right to observe that some scientific "choices", like the preference for inertial over noninertial reference frames, are made for pragmatic rather than objective reasons. In all these senses, we have to be epistemological "opportunists". But a problem worse than the disease arises when any of these ideas are taken as rigid doctrines replacing "realism". A friend of ours once said: "I am a naive realist. But I admit that knowledge is difficult." This is the root of the problem. Knowing how things really are is the goal of science; this goal is difficult to reach, but not impossible (at least for some parts of reality and to some degrees of approximation). If we change the goal—if, for example, we seek instead a consensus, or (less radically) aim only at empirical adequacy—then of course things become much easier; but as Bertrand Russell observed in a similar context, this has all the advantages of theft over honest toil. 
Moreover, the underdetermination thesis, far from undermining scientific objectivity, actually makes the success of science all the more remarkable. Indeed, what is difficult is not to find a story that "fits the data", but to find even one non-crazy such story. How does one know that it is non-crazy? A combination of factors: its predictive power, its explanatory value, its breadth and simplicity, etc. Nothing in the (Quinean) underdetermination thesis tells us how to find inequivalent theories with some or all of these properties. In fact, there are vast domains in physics, chemistry and biology where there is only one18 known non-crazy theory that accounts for the known facts and where many alternative theories have been tried and failed because their predictions contradicted experiments. In those domains, one can reasonably think that our present-day theories are at least approximately true, in some sense or other. An important (and difficult) problem for the philosophy of science is to clarify the meaning of "approximately true" and its implications for the ontological status of unobservable theoretical entities. We do not claim to have a solution to this problem, but we would like to offer a few ideas that might prove useful.

We are not science, we use science – our method is the same one everyone inevitably uses on a day-to-day basis, just more rigorous


Jean Bricmont 1, professor of theoretical physics at the University of Louvain, “Defense of a Modest Scientific Realism”, September 23, http://www.physics.nyu.edu/faculty/sokal/bielefeld_final.pdf

So, how does one obtain evidence concerning the truth or falsity of scientific assertions? By the same imperfect methods that we use to obtain evidence about empirical assertions generally. Modern science, in our view, is nothing more or less than the deepest (to date) refinement of the rational attitude toward investigating any question about the world, be it atomic spectra, the etiology of smallpox, or the Bielefeld bus routes. Historians, detectives and plumbers—indeed, all human beings—use the same basic methods of induction, deduction and assessment of evidence as do physicists or biochemists.18 Modern science tries to carry out these operations in a more careful and systematic way, by using controls and statistical tests, insisting on replication, and so forth. Moreover, scientific measurements are often much more precise than everyday observations; they allow us to discover hitherto unknown phenomena; and scientific theories often conflict with "common sense". But the conflict is at the level of conclusions, not the basic approach. As Susan Haack lucidly observes: Our standards of what constitutes good, honest, thorough inquiry and what constitutes good, strong, supportive evidence are not internal to science. In judging where science has succeeded and where it has failed, in what areas and at what times it has done better and in what worse, we are appealing to the standards by which we judge the solidity of empirical beliefs, or the rigor and thoroughness of empirical inquiry, generally.19 Scientists' spontaneous epistemology—the one that animates their work, regardless of what they may say when philosophizing—is thus a rough-and-ready realism: the goal of science is to discover (some aspects of) how things really are. More precisely: The aim of science is to give a true (or approximately true) description of reality. This goal is realizable, because: 1. Scientific theories are either true or false. Their truth (or falsity) is literal, not metaphorical; it does not depend in any way on us, or on how we test those theories, or on the structure of our minds, or on the society within which we live, and so on. 2. It is possible to have evidence for the truth (or falsity) of a theory. (It remains possible, however, that all the evidence supports some theory T, yet T is false.)20 The most powerful objections to the viability of scientific realism consist in various theses showing that theories are underdetermined by data.21 In its most common formulation, the underdetermination thesis says that, for any finite (or even infinite) set of data, there are infinitely many mutually incompatible theories that are "compatible" with those data. This thesis, if not properly understood22, can easily lead to radical conclusions. The biologist who believes that a disease is caused by a virus presumably does so on the basis of some "evidence" or some "data". Saying that a disease is caused by a virus presumably counts as a "theory" (e.g. it involves, implicitly, many counterfactual statements). But if there are really infinitely many distinct theories that are compatible with those "data", then we may legitimately wonder on what basis one can rationally choose between those theories. In order to clarify the situation, it is important to understand how the underdetermination thesis is established; then its meaning and its limitations become much clearer. 
Here are some examples of how underdetermination works; one may claim that: The past did not exist: the universe was created five minutes ago along with all the documents and all our memories referring to the alleged past in their present state. Alternatively, it could have been created 100 or 1000 years ago. The stars do not exist: instead, there are spots on a distant sky that emit exactly the same signals as those we receive. All criminals ever put in jail were innocent. For each alleged criminal, explain away all testimony by a deliberate desire to harm the accused; declare that all evidence was fabricated by the police and that all confessions were obtained by force.23 Of course, all these "theses" may have to be elaborated, but the basic idea is clear: given any set of facts, just make up a story, no matter how ad hoc, to "account" for the facts without running into contradictions.24 It is important to realize that this is all there is to the general (Quinean) underdetermination thesis. Moreover, this thesis, although it played an important role in the refutation of the most extreme versions of logical positivism, is not very different from the observation that radical skepticism or even solipsism cannot be refuted: all our knowledge about the world is based on some sort of inference from the observed to the unobserved, and no such inference can be justified by deductive logic alone. However, it is clear that, in practice, nobody ever takes seriously such "theories" as those mentioned above, any more than they take seriously solipsism or radical skepticism. Let us call these "crazy theories"25 (of course, it is not easy to say exactly what it means for a theory to be non-crazy). Note that these theories require no work: they can be formulated entirely a priori. On the other hand, the difficult problem, given some set of data, is to find even one non-crazy theory that accounts for them. Consider, for example, a police enquiry about some crime: it is easy enough to invent a story that "accounts for the facts" in an ad hoc fashion (sometimes lawyers do just that); what is hard is to discover who really committed the crime and to obtain evidence demonstrating that beyond a reasonable doubt. Reflecting on this elementary example clarifies the meaning of the underdetermination thesis. Despite the existence of innumerable "crazy theories" concerning any given crime, it sometimes happens in practice that there is a unique theory (i.e. a unique story about who committed the crime and how) that is plausible and compatible with the known facts; in that case, one will say that the criminal has been discovered (with a high degree of confidence, albeit not with certainty). It may also happen that no plausible theory is found, or that we are unable to decide which one among several suspects is really guilty: in these cases, the underdetermination is real.26 One might next ask whether there exist more subtle forms of underdetermination than the one revealed by a Duhem-Quine type of argument. In order to analyze this question, let us consider the example of classical electromagnetism. This is a theory that describes how particles possessing a quantifiable property called "electric charge" produce "electromagnetic fields" that "propagate in vacuum" in a certain precise fashion and then "guide" the motion of charged particles when they encounter them.27 Of course, no one ever "sees" directly an electromagnetic field or an electric charge. 
So, should one interpret this theory "realistically", and if so, what should it be taken to mean? Classical electromagnetic theory is immensely well supported by precise experiments and forms the basis for a large part of modern technology. It is "confirmed" every time one of us switches on his or her computer and finds that it works as designed.28 Does this overwhelming empirical support imply that there are "really" electric and magnetic fields propagating in vacuum? In support of the idea that there are, one could argue that electromagnetic theory postulates the existence of those fields and that there is no known non-crazy theory that accounts equally well for the same data; therefore it is reasonable to believe that electric and magnetic fields really exist. But is it in fact true that there are no alternative non-crazy theories? Here is one possibility: Let us claim that there are no fields propagating "in vacuum", but that, rather, there are only "forces" acting directly between charged particles.29 Of course, in order to preserve the empirical adequacy of the theory, one has to use exactly the same Maxwell-Lorentz system of equations as before (or a mathematically equivalent system). But one may interpret the fields as a mere "calculational device" allowing us to compute more easily the net effect of the "real" forces acting between charged particles.30 Almost every physicist reading these lines will say that this is some kind of metaphysics, or maybe even a play on words—that this "alternative theory" is really just standard electromagnetic theory in disguise. Now, although the precise meaning of "metaphysics" is hard to pin down,31 there is a vague sense in which, if we use exactly the same equations (or a mathematically equivalent set of equations) and make exactly the same predictions in the two theories, then they are really the same theory as far as "physics" is concerned, and the distinction between the two—if any—lies outside of its scope. The same kind of observation can be made about most physical theories: In classical mechanics, are there really forces acting on particles, or are the particles instead following trajectories defined by variational principles? In general relativity, is space-time really curved, or are there, rather, fields that cause particles to move as if space-time were curved?32 Let us call this kind of underdetermination "genuine", as opposed to the "crazy" underdeterminations of the usual Duhem-Quine thesis. By "genuine", we do not mean that these underdeterminations are necessarily worth losing sleep over, but simply that there is no rational way to choose (at least on empirical grounds alone) between the alternative theories—if indeed they should be regarded as different theories.

Reality exists independent of signifiers


Wendt 99

Alexander Wendt, Professor of International Security at Ohio State University, 1999, “Social theory of international politics,” gbooks



The effects of holding a relational theory of meaning on theorizing about world politics are apparent in David Campbell's provocative study of US foreign policy, which shows how the threats posed by the Soviets, immigration, drugs, and so on, were constructed out of US national security discourse.29 The book clearly shows that material things in the world did not force US decision-makers to have particular representations of them - the picture theory of reference does not hold. In so doing it highlights the discursive aspects of truth and reference, the sense in which objects are relationally "constructed."30 On the other hand, while emphasizing several times that he is not denying the reality of, for example, Soviet actions, he specifically eschews (p. 4) any attempt to assess the extent to which they caused US representations. Thus he cannot address the extent to which US representations of the Soviet threat were accurate or true (questions of correspondence). He can only focus on the nature and consequences of the representations.31 Of course, there is nothing in the social science rule book which requires an interest in causal questions, and the nature and consequences of representations are important questions. In the terms discussed below he is engaging in a constitutive rather than causal inquiry. However, I suspect Campbell thinks that any attempt to assess the correspondence of discourse to reality is inherently pointless. According to the relational theory of reference we simply have no access to what the Soviet threat "really" was, and as such its truth is established entirely within discourse, not by the latter's correspondence to an extra-discursive reality.32 The main problem with the relational theory of reference is that it cannot account for the resistance of the world to certain representations, and thus for representational failures or misinterpretations. Worldly resistance is most obvious in nature: whether our discourse says so or not, pigs can't fly. But examples abound in society too. In 1519 Montezuma faced the same kind of epistemological problem facing social scientists today: how to refer to people who, in his case, called themselves Spaniards. Many representations were conceivable, and no doubt the one he chose - that they were gods - drew on the discursive materials available to him. So why was he killed and his empire destroyed by an army hundreds of times smaller than his own? The realist answer is that Montezuma was simply wrong: the Spaniards were not gods, and had come instead to conquer his empire. Had Montezuma adopted this alternative representation of what the Spanish were, he might have prevented this outcome because that representation would have corresponded more to reality. The reality of the conquistadores did not force him to have a true representation, as the picture theory of reference would claim, but it did have certain effects - whether his discourse allowed them or not. The external world to which we ostensibly lack access, in other words, often frustrates or penalizes representations. Postmodernism gives us no insight into why this is so, and indeed, rejects the question altogether.33 The description theory of reference favored by empiricists focuses on sense-data in the mind while the relational theory of the postmoderns emphasizes relations among words, but they are similar in at least one crucial respect: neither grounds meaning and truth in an external world that regulates their content.34 Both privilege epistemology over ontology. 
What is needed is a theory of reference that takes account of the contribution of mind and language yet is anchored to external reality. The realist answer is the causal theory of reference. According to the causal theory the meaning of terms is determined by a two-stage process.35 First there is a "baptism," in which some new referent in the environment (say, a previously unknown animal) is given a name; then this connection of thing-to-term is handed down a chain of speakers to contemporary speakers. Both stages are causal, the first because the referent impressed itself upon someone's senses in such a way that they were induced to give it a name, the second because the handing down of meanings is a causal process of imitation and social learning. Both stages allow discourse to affect meaning, and as such do not preclude a role for "difference" as posited by the relational theory. Theory is underdetermined by reality, and as such the causal theory is not a picture theory of reference. However, conceding these points does not mean that meaning is entirely socially or mentally constructed. In the realist view beliefs are determined by discourse and nature.36 This solves the key problems of the description and relational theories: our ability to refer to the same object even if our descriptions are different or change, and the resistance of the world to certain representations. Mind and language help determine meaning, but meaning is also regulated by a mind-independent, extra-linguistic world.
