Requiring Presidential consultation with Congress creates clarity and pragmatic oversight of the introduction of armed forces into hostilities
James A. Baker 11, former Secretary of State, and Lee H. Hamilton, former Democratic representative from Indiana who chaired the House Committee on Foreign Affairs, "Breaking the war powers stalemate", June 9, www.washingtonpost.com/opinions/breaking-the-war-powers-stalemate/2011/06/08/AGX0CrNH_story.html
There is, unfortunately, no clear legal answer about which side is correct. Some argue for the presidency, saying that the Constitution assigns it the job of “Commander in Chief.” Others argue for Congress, saying that the Constitution gives it the “power to . . . declare war.” But the Supreme Court has been unwilling to resolve the matter, declining to take sides in what many consider a political dispute between the other branches of government.¶ We believe there is a better way than wasting time disputing who is responsible for initiating or continuing war.¶ Almost three years ago, we were members of the Miller Center’s bipartisan National War Powers Commission, which proposed a pragmatic framework for consultation between the president and Congress. Co-chaired by one of us and the late Warren Christopher, the commission could not resolve the legal question of which branch has the ultimate authority. Only the court system can do that. Instead, the commission strove to foster interaction and consultation, and reduce unnecessary political friction. The commission — which represented a broad spectrum of views, from Abner Mikva on the liberal end to Edwin Meese on the conservative end — made a unanimous recommendation to the president and Congress in 2008.¶ The commission’s proposed legislation would repeal and replace the War Powers Resolution. Passed over a presidential veto and in response to the Vietnam War, the 1973 resolution was designed to give Congress the ability to end a conflict and force the president to consult more actively with the legislative branch before engaging in military action. The resolution, a hasty compromise between competing House and Senate plans, stated that the president must terminate a conflict within 90 days if Congress has not authorized it. But no president has ever accepted the statute’s constitutionality, Congress has never enforced it and even the bill’s original sponsors were unhappy with the end product. 
In reality, the resolution has only further complicated the issue of war powers.¶ Our proposed War Powers Consultation Act offers clarity. It creates a consultation process, defines what constitutes “significant armed conflict” and identifies specific actions that both the president and Congress must take.¶ On the executive side, the president would be required to confer with a specific group of congressional leaders before committing to combat operations that last or are expected to last more than a week. Reasonable exemptions exist, including training exercises, covert operations or missions to protect and rescue Americans abroad. Likewise, if an emergency precedes engagement, or secrecy is required that precludes prior consultation, then consultation can follow within three days. Under this proposal, the strike on Osama bin Laden would plainly fall within the president’s prerogative, while an action such as our current engagement in Libya would require advance consultation and congressional action at the appropriate time.¶ On the legislative side, Congress would have to vote on a resolution of approval no later than 30 days after the president had consulted lawmakers. If Congress refused to vote yea or nay, it would do so in the face of a clear requirement to the contrary. Inaction would no longer be a realistic option.¶ Given the Constitution’s ambiguity, no solution is perfect. But Congress and the White House should view the War Powers Consultation Act as a way out of the impasse. It is what the American people want when their leaders confront the serious questions of war and peace.
A pragmatic approach to politics is optimal---argumentation should start from empirical method using a reasoned process to avoid nihilism
Robert Rowland 95, Professor of Communication at the University of Kansas, "In Defense of Rational Argument: A Pragmatic Justification of Argumentation Theory and Response to the Postmodern Critique," Philosophy & Rhetoric Vol. 28, No. 4, Oct 1, 1995, EBSCO
A pragmatic theory of argument¶ The first step in developing a justifiable theory of rational argument that can account for the epistemological and axiological attacks is to recognize the performative contradiction at the heart of the postmodern critique. Postmodernists rely on rational argument in order to attack rational argument and they consistently claim that their positions are in some way superior to those of their modernist opponents. Writing of post-structuralism, Amanda Anderson notes "the incommensurability between its epistemological stance and its political aims, between its descriptions and its prescriptions, between the pessimism of its intellect and, if not the optimism, at least the intrusiveness of its moral and political will" (1992, 64).¶ The performative contradiction at the heart of postmodernism is nowhere more evident than in the epistemological critique of modernism. The two most important points made by postmodernists in relation to epistemology are that humans can understand the world only through their symbols and that there is no means of using "reality" to test a symbolic description. Advocates of traditional approaches to rationality have not been able to satisfactorily answer these positions, precisely because they seem to be "true" in some sense. This "truth," however, suggests that a theory of rational argument may be salvageable. If postmodernists can defend their views as in some sense "truer" than those of their modernist opponents, then there must be some standard for judging "truth" that can withstand the postmodern indictment. That standard is pragmatic efficacy in fulfilling a purpose in relation to a given problem.¶ Both modernists and postmodernists generally assume that truth and fact are equivalent terms. Thus, a "true" statement is one that is factually correct in all circumstances. By this standard, of course, there are no totally "true" statements. 
However, if no statement can be proved factually true, then a focus on facts is an inappropriate standard for judging truth.¶ I suggest that knowledge and truth should be understood not as factual statements that are certain, but as symbolic statements that function as useful problem-solving tools. When we say that a view is true, we really mean that a given symbolic description consistently solves a particular problem. Thus, the statement "the sun will come up tomorrow" can be considered "true," despite ambiguities that a postmodernist might point to in regard to the meaning of sun or tomorrow, because it usefully and consistently solves a particular epistemic problem.¶ The standard for "truth" is pragmatic utility in fulfilling a purpose in relation to a particular problem. A true statement is one that "works" to solve the problem. Both the nature of the problem and the arguer's purpose in relation to that problem influence whether a given statement is viewed as true knowledge. This explains why biological researchers and physicians often seem to have different definitions of truth in regard to medical practice. The researcher is concerned with fully understanding the way that the body works. His or her purpose dictates application of rigorous standards for evaluating evidence and causation. By contrast, the physician is concerned with treating patients and therefore may apply a much lower standard for evaluating new treatments. The pragmatic theory of argument I am defending draws heavily on the work of William James, who believed that "the only test of probable truth is what works" (1982, 225). Alan Brinton explains that for James "the ultimate question of truth is a question about the concepts and their fruitfulness in serving the purposes for which they were created and imposed. Ideas are true insofar as they serve these purposes, and false insofar as they fail to do so" (1982, 163). Some contemporary pragmatists take a similar view.
For example, Nicholas Rescher writes in relation to methodology that "the proper test for the correctness or appropriateness of anything methodological in nature is plainly and obviously posed by the paradigmatically pragmatic questions: Does it work? Does it attain its intended purposes?" (1977, 3). Similarly, Celeste Condit Railsback argues that "truth is . . . relative to the language and purposes of the persons who are using it" (1983, 358-59). At this point, someone like Derrida might argue that while the pragmatic approach accounts for the symbolic nature of truth, it does not deal with the inability of humans to get at reality directly. Although the postmodern critique denies that humans can directly experience "the facts," it does not deny that a real world exists.¶ Thus, a pragmatist endorses a given scientific theory because the symbolic description present in that theory does a better job than its competitors of fulfilling a set of purposes in a given context. Because it fulfills those purposes, we call the theory "true." We cannot attain knowledge about "the facts," but we can test the relative adequacy of competing problem-solving statements against those facts. Michael Redhead, a professor of history and philosophy of science at Cambridge University, notes that "we can always conjecture, but there is some control. The world kicks back" (in Peterson 1992, 175; emphasis added). Knowledge is not about "facts." It is about finding symbolic descriptions of the world that work, that is, avoiding nature's kicks in fulfilling a given purpose.¶ The foregoing suggests that a principled pragmatic theory of argument sidesteps the postmodern critique. Argumentation theory should be understood as a set of pragmatic rules of thumb about the kinds of symbolic statements that effectively solve problems. These statements exist at varying levels of generality.
A consistency principle, for example, is really a rule of thumb stating something like "All other things being equal, consistent symbolic descriptions are more likely to prove useful for solving a particular problem in relation to a given purpose than are inconsistent descriptions." Other principles are linked to narrower purposes in more specific contexts. Thus, the standards for evaluating arguments in a subfield of physics will be tied to the particular purposes and problems found in that subfield. The key point is that all aspects of a theory of argument can be justified pragmatically, based on their value for producing useful solutions to problems.¶ A pragmatic theory of argument can be understood as operating at three levels, all of which are tied to functionality. At the first or definitional level, argument is best understood as a kind of discourse or interaction in which reasons and evidence are presented in support of a claim. Argument as a symbolic form is valued based on its ability to deal with problems; the business of argument is problem solving. At a second or theoretical level, what Toulmin would call field-invariant, general principles of rational argument are justified pragmatically based on their capacity to solve problems. Thus, tests of evidence, general rules for describing argument, standards relating to burden of proof or presumption, and fallacies, all can be justified pragmatically based on the general problem-solving purpose served by all argument. For example, the requirement that claims must be supported with evidence can be justified as a general rule of thumb for distinguishing between strong and weak (that is, useful and useless) arguments. Certainly, there are cases in which unsupported assertions are "true" in some sense.
However, the principle that any claim on belief should be supported with evidence of some type is a functional one for distinguishing between claims that are likely to be useful and those that are less likely to be useful.¶ At a third level, that of specific fields or subfields, principles of argumentation are linked to pragmatic success in solving problems in the particular area (see Rowland 1982). Thus, for instance, the rules of evidence found in the law are linked directly to the purposes served by legal argument. This explains why the burden of proof in a criminal trial is very different from that found in the civil law. The purpose of protecting the innocent from potential conviction requires that a higher standard of proof be applied in this area than elsewhere.¶ The pragmatic perspective I have described is quite different from that of interpretive pragmatists such as Richard Rorty (1979, 1982, 1985, 1987) and Stanley Fish (1980, 1989a, 1989b). Rorty, while denying the existence of legitimate formal or content-based standards for "proof" (1982, 277), endorses a processual epistemology based on "the idea of [substituting] 'unforced agreement' for that of 'objectivity' " (41-42). Janet Horne summarizes Rorty's views, noting that "the difference between 'certified knowledge' and 'mere belief' is based upon intersubjective agreement rather than correspondence" (1989, 249). By contrast, Fish grounds reason in the practices of particular "interpretive communities" (1989b, 98). In this view, "Particular facts are firm or in question insofar as the perspective . . . within which they emerge is firmly in place, settled" (Fish 1989a, 308).¶ Unfortunately, a theory of argumentation cannot be salvaged merely by grounding reason in conversational practice or community assent. If there are no agreed-upon standards, then how does one "rationally" test a claim intersubjectively or in process?
Fish and Rorty beg the question when they ground reason in community and conversational process. Unlike Rorty and Fish, who reject the ideas of "truth" and "knowledge," I argue that those concepts must be redefined in relation to problem solving.¶ The pragmatic theory of argument that I have advanced provides a principled means of choosing among competing alternatives, regardless of the context. One always should ask whether or not a particular symbolic description of the world fulfills its purposes. In so doing, methodological principles for testing knowledge claims, such as tests of evidence, fallacies, and more precise field standards, can be justified, and then they can be applied within the conversation or by the community. The approach, therefore, provides standards to be applied in Rorty's process or by Fish's community and avoids the tautology that otherwise confronts those approaches. The perspective neatly avoids the problems associated with modernism, but also provides a principled approach to argument that does not lead to relativism.¶ In defense of rational argument¶ When argument is viewed as a pragmatic problem-solving tool, the power of the postmodern critique largely dissipates. At the most basic level, a pragmatic theory of argument is based on premises such as the following:¶ 'Statements supported by evidence and reasoning are more likely to be useful for satisfactorily solving a problem than ones that lack that support.¶ 'Consistent arguments are more likely to be generalizable than inconsistent ones.¶ 'Experts are more likely to have useful viewpoints about technical questions tied to a particular field than nonexperts. These statements are not "true" in the factual sense, but they are universally recognized as useful, a point that is emphasized in the work of even the most committed postmodernist. Even someone like Derrida demands that his opponents support their claims with evidence and consistent reasoning.
In so doing, Derrida clearly recognizes the functional utility of general standards for testing argument form and process.¶ Arguing should be understood as a pragmatic process for locating solutions to problems. The ultimate justification of argument as a discipline is that it produces useful solutions. Of course, not all arguments lead to successful solutions because the world is a complex place and the people who utilize the form/process are flawed. However, the general functional utility of argument as a method of invention or discovery and the method of justification is undisputed. The pragmatic approach to argument also provides a means of answering the axiological objections to traditional reason. Initially, the view that argument is often a means of enslaving or disempowering people is based on a misunderstanding of how argument as a form of discourse functions. In fact, the danger of symbolic oppression is less applicable to argument as a type of symbol use than to other forms. Argument tells us how to solve problems. It can be a force for enslavement only to the degree that a successful problem-solution is enslaving. This is a rare event in any society grounded in democratic ethics.¶ Additionally, argument as a form and process is inherently person-respecting because in argument it is not status or force that matters, but only the reasoning (see Brockriede 1972). In a pure argumentative encounter, it does not matter whether you are President of the United States or a college junior; all that is relevant is what you have to say. Of course, this ideal is rarely realized, but the principle that humans should test their claims against standards of argumentation theory that are tied to pragmatic problem solving (and not base conclusions on power) is one that recognizes the fundamental humanity in all people.¶ Furthermore, argument is one of the most important means of protecting society from symbolic oppression.
Argument as an internal process within an individual and external process within society provides a method of testing the claims of potential oppressors. Therefore, training in argument should be understood as a means of providing pragmatic tools for breaking out of terministic or disciplinary prisons.¶ Against this view, it could be argued that pragmatism, because of its "practical" bent, inevitably degenerates into "hegemonic instrumental reason" in which technocratic experts control society. In Eclipse of Reason, Max Horkheimer takes the position that "in its instrumental aspect, stressed by pragmatism," reason "has become completely harnessed to the social process. Its operational value, its role in the domination of men and nations has been made the sole criterion" (1947, 21). Later, he notes that "pragmatism is the counterpart of modern industrialism for which the factory is the prototype of human existence" (50).¶ The claims that pragmatism reduces reason to a mere instrument of production or leads to undemocratic technocratic control of society are, however, misguided. Initially, it is worth noting that Horkheimer's aim is not to indict rationality per se, but to focus on the inadequacy of a purely instrumental form of rationality, which he labels "subjective reason." Near the conclusion of Eclipse of Reason, Horkheimer defends "objective reason": "This concept of truth—the adequation of name and thing—inherent in every genuine philosophy, enables thought to withstand if not to overcome the demoralizing and mutilating effects of formalized reason" (1947, 180). The goal of this essay, to develop a theory of rational argument that can withstand the postmodern indictment, is quite consistent with Horkheimer's view that humans need "objective reason" in order to "unshackle . . . independent thought" and oppose "cynical nihilism" (127, 174).
While there can be no purely "objective reason," field-invariant and field-dependent principles of argumentation can be justified pragmatically to serve the aims that Horkheimer assigns to that form.¶ Moreover, a pragmatic theory of argument should not be confused with a decision-making approach based on mere practicality or self-interest. Principles of argument are justified pragmatically, that is, because they work consistently to solve problems. But after justification, the invariant and relevant field-dependent principles may be used to test the worth of any argument and are not tied to a simple utilitarian benefit/loss calculus. The misconception that a pragmatic theory of truth is tied to a simplistic instrumentalism is a common one. John Dewey notes, for instance, that William James's reference to the "cash value" of reasoning was misinterpreted by some "to mean that the consequences themselves of our rational conceptions must be narrowly limited by their pecuniary value" (1982, 33). In fact, pragmatism "concerns not the nature of consequences but the nature of knowing" (Dewey 1960, 331). Or as James himself put it, "The possession of true thoughts means everywhere the possession of invaluable instruments of action" (1948, 161). Pragmatism "is a method only," which "does not stand for any special result" (James 1982, 213), but that method can be used to justify principles of argument that in turn can be used to check the excesses of instrumental reason. Moreover, a pragmatic approach to argument is self-correcting. According to James, pragmatism "means the open air and possibilities of nature, as against dogma, artificiality and the pretense of finality in truth" (213). Dewey makes the same point when he claims that pragmatic theory involves "the use of intelligence to liberate and liberalize action" (1917, 63).
Nor does pragmatism necessarily lead to expert domination. A pragmatic argumentation theory endorses deference to the opinion of experts only on questions for which the expert possesses special knowledge relevant to a particular problem. And even on such issues, the views of the expert would be subject to rigorous testing. It would be quite unpragmatic to defer to expert opinion, absent good reasons and strong evidence.¶ The previous analysis in no way denies the risks associated with technical reason. It is, however, precisely because of such risks that a principled pragmatic theory of argument is needed. Given that we live in an advanced technological society, it is inevitable that technical reason will play a role. Postmodernism points to the dangers of technical reason, but provides no means of avoiding those risks. A pragmatic theory of argument, by contrast, justifies principles of rationality that can be used to protect society from the nihilistic excesses of a purely instrumental reason.
This approach to politics is necessary for effective progress in situations of uncertainty---avoids instrumentalism through inter-subjective understanding
Friedrich Kratochwil 8, Assistant Professor of International Relations at Columbia University, Pragmatism in International Relations, "Ten points to ponder about pragmatism," p. 11-25
First, a pragmatic approach does not begin with objects or ‘things’ (ontology), or with reason and method (epistemology), but with ‘acting’ (prattein), thereby preventing some false starts. Since, as historical beings placed in specific situations, we do not have the luxury of deferring decisions until we have found the ‘truth’, we have to act and must do so always under time pressures and in the face of incomplete information. Precisely because the social world is characterized by strategic interactions, what a situation ‘is’, is hardly ever clear ex ante, since it is being ‘produced’ by the actors and their interactions, and the multiple possibilities are rife with incentives for (dis)information. This puts a premium on quick diagnostic and cognitive shortcuts informing actors about the relevant features of the situation, and on leaving an alternative open (‘plan B’) in case of unexpected difficulties. Instead of relying on certainty and universal validity gained through abstraction and controlled experiments, we know that completeness and attentiveness to detail, rather than to generality, matter. To that extent, likening practical choices to simple ‘discoveries’ of an already independently existing ‘reality’ disclosing itself to an ‘observer’–or relying on optimal strategies – is somewhat heroic. These points have been made vividly by ‘realists’ such as Clausewitz in his controversy with von Buelow, in which he criticized the latter’s obsession with a strategic ‘science’ (Paret et al. 1986). While Clausewitz has become an icon for realists, a few of them (usually dubbed ‘old’ realists) have taken seriously his warnings against the misplaced belief in the reliability and usefulness of a ‘scientific’ study of strategy. Instead, most of them, especially ‘neorealists’ of various stripes, have embraced the ‘theory’-building based on the epistemological project as the via regia to the creation of knowledge.
A pragmatist orientation would most certainly not endorse such a position. Second, since acting in the social world often involves acting ‘for’ someone, special responsibilities arise that aggravate both the incompleteness of knowledge as well as its generality problem. Since we owe special care to those entrusted to us, for example, as teachers, doctors or lawyers, we cannot just rely on what is generally true, but have to pay special attention to the particular case. Aside from avoiding the foreclosure of options, we cannot refuse to act on the basis of incomplete information or insufficient knowledge, and the necessary diagnostic will involve typification and comparison, reasoning by analogy rather than generalization or deduction. Leaving out the particularities of a case, be it a legal or medical one, in a mistaken effort to become ‘scientific’ would be a fatal flaw. Moreover, there still remains the crucial element of ‘timing’ – of knowing when to act. Students of crises have always pointed out the importance of this factor but, in attempts at building a general ‘theory’ of international politics analogously to the natural sciences, such elements are neglected on the basis of the ‘continuity of nature’ and the ‘large number’ assumptions. Besides, ‘timing’ seems to be quite recalcitrant to analytical treatment. Third, the cure for anxiety induced by Cartesian radical doubt does not consist in the discovery of a ‘foundation’ guaranteeing absolute certainty. This is a phantasmagorical undertaking engendered by a fantastic starting point, since nobody begins with universal doubt! (Peirce 1868). Rather, the remedy for this anxiety consists in the recognition of the unproductive nature of universal doubt on the one hand, and of the fetishization of ‘rigour’ on the other. 
Letting go of unrealizable plans and notions that lead us to delusional projects, and acquiring instead the ability to ‘go on’ despite uncertainties and the unknown, is probably the most valuable lesson to learn. Beginning somewhere, and reflecting critically on the limitations of the starting point and the perspective it opened, is likely to lead to a more fruitful research agenda than starting with some preconceived notions of the nature of things, or of ‘science’, and then testing the presumably different (but usually quite similar) theories (such as liberalism and realism). After all, ‘progress’ in the sciences occurred only after practitioners had finally given up on the idea that in order to say something about the phenomena of the world (ta onta), one had to grasp first ‘being’ itself (to ontos on). Fourth, by giving up on the idea that warranted knowledge is generated either through logical demonstration or through the representation of the world ‘out there’, a pragmatic starting point not only takes seriously the always preliminary character of knowledge, it also promises that we will learn to follow a course of action that represents a good bet.7 Thus, it accounts for changes in knowledge in a more coherent fashion. If the world were ‘out there’, ready-made, only to be discovered, scientific knowledge would have to be a simple accumulation of more and more true facts, leading us virtually automatically closer and closer to ‘the TRUTH’. Yet, if we have learned anything from the studies of various disciplines, it is the fact that progress consists in being able to formulate new questions that could not even be asked previously. Hence, whatever we think of Kuhn’s argument about ‘paradigms’, we have to recognize that in times of revolutionary change the bounds of sense are being redrawn, and thus the newly generated knowledge is not simply a larger sector of the encircled area (Kratochwil 2000).
Fifth, pragmatism recognizes that science is social practice, which is determined by rules and in which scientists not only are constitutive for the definitions of problems (rather than simply lifting the veil from nature), but they also debate seemingly ‘undecidable’ questions and weigh the evidence, instead of relying on the bivalence principle of logic as an automatic truth-finder (Ziman 1991; Kratochwil 2007a). To that extent, the critical element of the epistemological project is retained, but the ‘court’, which Kant believed to be reason itself, now consists of the practitioners themselves. Instead of applying free-standing epistemological standards, each science provides its own court, judging the appropriateness of its methods and practices. Staying with the metaphor of a court, we also have to correct an implausible Kantian interpretation of law – that it has to yield determinate and unique decisions. We know from jurisprudence and case law that cases can be decided quite differently without justifying the inference that this proves the arbitrariness of law. Determinacy need not coincide with uniqueness, either in logic (multiple equilibria), science (equifinality) or law – Ronald Dworkin (1978) notwithstanding! Sixth, despite the fact that it is no longer a function of bivalent truth conditions, and is anchored neither in the things themselves (as in classical ontology) nor in reason itself, ‘truth’ has not been abolished or supplanted by an ‘anything goes’ attitude. Rather, it has become a procedural notion of rule-following according to community practices, since nobody can simply make the rules as she or he goes along. These rules do not ‘determine’ outcomes, as the classical logic of deductions or truth conditions suggests, but they do constrain and enable us in our activities.
Furthermore, since rule-following does not simply result in producing multiple copies of a fixed template, rules provide orientation in new situations, allowing us to ‘go on’, making for both consistency and change. Validity no longer assumes historical universality, and change is no more conceived of as temporal reversibility, as in differential equations, where time can be added and multiplied, compared with infinity, and run towards the past or the future. Thus ‘History’ is able to enter the picture, and it matters because, differently from the old ontology, change can now be conceived of as a ‘path-dependent’ development, as a (cognitive) evolution or even as radical historicity, instead of contingency or decay impairing true knowledge. Consequently, time-bound rather than universal generalizations figure prominently in social analysis, and as Diesing, a philosopher of science, reminds us, this is no embarrassment. Being critical of the logical positivists’ search for ‘laws’ does not mean that only single cases exist and that no general statements are possible. It does mean, however, that in research: there are other goals as well and that generality is a matter of degree. Generalizations about US voting behaviour can be valid though they apply only between 1948–72 and only to Americans. Truth does not have to be timeless. Logical empiricists have a derogatory name for such changing truths (relativism); but such truths are real, while the absolute, fully axiomatized truth is imaginary. (Diesing 1991:91) Seventh, the above points show their importance when applied not only to the practices of knowledge generation but also to the larger problem of the reproduction of the social world. Luhmann (1983) suggested how rule-following solves the problem of the ‘double contingency’ of choices that allows interacting parties to relate their actions meaningfully to each other. 
‘Learning’ from past experience on the basis of a ‘tit for tat’ strategy represents one possibility for solving what, since Parsons, has been called the ‘Hobbesian problem of order’. This solution, however, is highly unstable, and thus it cannot account for institutionalized behaviour. The alternative to learning is to forgo ‘learning’. Actors must abstract from their own experiences by trusting in a ‘system of expectations’ which is held to be counterfactually valid. ‘Institutionalization’ occurs in this way, especially when dispute-settling instances emerge that are based on shared expectations about the system of expectations. Thus, people must form expectations about what types of arguments and reasons are upheld by ‘courts’ in case of a conflict (Luhmann 1983). Eighth, a pragmatic approach, although sensitive to the social conditions of cognition, is not simply another version of the old ‘sociology of knowledge’, let alone a version of utilitarianism that simply accepts ‘what works’ or what seems reasonable to most people. It differs from the old sociology of knowledge that hinged on the cui bono question of knowledge (Mannheim 1936), since no argument about a link between social stratification and knowledge is implied, not to mention the further-reaching Marxist claims of false consciousness. A pragmatist approach, however, is compatible with such approaches as Bourdieu’s (1977) or more constructivist accounts of knowledge production, such as Fuller’s (1991) social epistemology, because it highlights the interdependence of semantics and social structures. Ninth, as the brief discussion of ‘science studies’ above has shown, it is problematic to limit the problem of knowledge production to ‘demonstrations’ (even if loosely understood in terms of the arguments within the scientific community), thus neglecting the factors that are conducive to (or inhibitive of) innovation in the definition of problems. 
To start with, antecedent to any demonstration, there has to be the step of ‘invention’, as the classical tradition already suggested. In addition, although it might well be true that ‘invention’ does not follow the same ‘logic’ as ‘testing’ or demonstrating, this does not mean that these considerations are irrelevant or can be left outside the reflection on how knowledge is generated. To attribute originality solely to a residual category of a rather naively conceived individual ‘psychology of discovery’, as logical positivism does, will simply not do. After all, ‘ideas’ are not representations and properties of the individual mind, but do their work because they are shared; innovation is crucially influenced by the formal and informal channels of communication within a (scientific) community. While the logical form of refutability in principle is, for logical positivists, a necessary element of their ‘theoretical’ enterprise, it does not address issues of creativity and innovation, which are a crucial part of the search for knowledge. Corroborating what we already suspected is interesting only if such inquiries also lead to novel discoveries, since nobody is served by ‘true’ but trivial results. Quite clearly, the traditional epistemological focus is much too narrow to account for and direct innovative research, while pragmatic approaches have notoriously emphasized the creativity of action (Rochberg-Halton 1986). Tenth, the above discussion should have demonstrated that a pragmatic approach to knowledge generation is not some form of ‘instrumentalism’ à la Friedman (1968), perhaps at basement prices, or that it endorses old wives’ tales if they generated ‘useful predictions’, even though for rather unexplainable reasons. 
Thus, buying several lottery tickets on the advice of an acquaintance to rid oneself of debts and subsequently hitting the jackpot neither qualifies as a pragmatically generated solution to a problem nor does it make the acquaintance a financial advisor. Although ‘usefulness’ is a pragmatic standard, not every employment of it satisfies the exacting criteria of knowledge production. As suggested throughout this chapter, a coherent pragmatic approach emphasizes the intersubjective and critical nature of knowledge generation based on rules, and it cannot be reduced to the de facto existing (or fabricated) consensus of a concrete group of scientists or to the utility of results, the presuppositions of which are obscure because they remained unexamined. Conclusions No long summary of argument is necessary here. Simply, a pragmatic turn shows itself to be consistent with the trajectory of a number of debates in the epistemology of social sciences; it also ties in with and feeds into the linguistic, constructivist and ‘historical’ turns that preceded it; and finally, it is promising for the ten reasons listed above. While these insights might be useful correctives, they do not by themselves generate viable research projects. This gain might have been the false promise of the epistemological project and its claim that simply following the path of a ‘method’ will inevitably lead to secure knowledge. Disabusing us of this idea might be useful in itself because it would redirect our efforts at formulating and conceptualizing problems that are antecedent to any ‘operationalization’ of our crucial terms (Sartori 1970), or of any ‘tests’ concerning which ‘theory’ allegedly best explains a phenomenon under investigation.
Political deliberation about war powers promotes agency and decision-making---reciprocity and public debate facilitate mutual respect that lays the groundwork for cooperation on other issues
Dr. Amy Gutmann 4, President and Christopher H. Browne Distinguished Professor of Political Science in the School of Arts and Sciences and Professor of Communication in the Annenberg School for Communication, University of Pennsylvania, AND Dennis Thompson, Alfred North Whitehead Professor of Political Philosophy in the Faculty of Arts and Sciences and in the John F. Kennedy School of Government, Emeritus, Political Theory, "Why Deliberative Democracy?" press.princeton.edu/chapters/s7869.html
WHAT DELIBERATIVE DEMOCRACY MEANS¶ To go to war is the most consequential decision a nation can make. Yet most nations, even most democracies, have ceded much of the power to make that decision to their chief executives--to their presidents and prime ministers. Legislators are rarely asked or permitted to issue declarations of war. The decision to go to war, it would seem, is unfriendly territory for pursuing the kind of reasoned argument that characterizes political deliberation.¶ Yet when President George W. Bush announced that the United States would soon take military action against Saddam Hussein, he and his advisors recognized the need to justify the decision not only to the American people but also to the world community. Beginning in October 2002, the administration found itself engaged in argument with the U.S. Congress and, later, with the United Nations. During the months of preparation for the war, Bush and his colleagues, in many different forums and at many different times, sought to make the case for a preventive war against Iraq.1 Saddam Hussein, they said, was a threat to the United States because he had or could soon have weapons of mass destruction, and had supported terrorists who might have struck again against the United States. Further, he had tyrannized his own people and destabilized the Middle East.¶ In Congress and in the United Nations, critics responded, concurring with the judgment that Hussein was a terrible tyrant but challenging the administration on all its arguments in favor of going to war before exhausting the nonmilitary actions that might have controlled the threat. 
As the debate proceeded, it became clear that almost no one disagreed with the view that the world would be better off if Saddam Hussein no longer ruled in Iraq, but many doubted that he posed an imminent threat, and many questioned whether he actually supported the terrorists who had attacked or were likely to attack the United States.¶ This debate did not represent the kind of discussion that deliberative democrats hope for, and the deliberation was cut short once U.S. troops began their invasion in March 2003. Defenders and critics of the war seriously questioned one another's motives and deeply suspected that the reasons offered were really rationalizations for partisan politics. The administration, for its part, declined to wait until nonmilitary options had been exhausted, when a greater moral consensus might have been reached. But the remarkable fact is that even under the circumstances of war, and in the face of an alleged imminent threat, the government persisted in attempting to justify its decision, and opponents persevered in responding with reasoned critiques of a preventive war.¶ The critics are probably right that no amount of deliberation would have prevented the war, and the supporters are probably right that some critics would never have defended going to war even if other nonmilitary sanctions had ultimately failed. Yet the deliberation that did occur laid the foundation for a more sustained and more informative debate after the U.S. military victory than would otherwise have taken place. Because the administration had given reasons (such as the threat of the weapons of mass destruction) for taking action, critics had more basis to continue to dispute the original decision, and to challenge the administration's judgment. 
The imperfect deliberation that preceded the war prepared the ground for the less imperfect deliberation that followed.¶ Thus even in a less than friendly environment, deliberative democracy makes an appearance, and with some effect. Both the advocates and the foes of the war acted as if they recognized an obligation to justify their views to their fellow citizens. (That their motives were political or partisan is less important than that their actions were responsive to this obligation.) This problematic episode can help us discern the defining characteristics of deliberative democracy if we attend to both the presence and the absence of those characteristics in the debate about the war.¶ What Is Deliberative Democracy?¶ Most fundamentally, deliberative democracy affirms the need to justify decisions made by citizens and their representatives. Both are expected to justify the laws they would impose on one another. In a democracy, leaders should therefore give reasons for their decisions, and respond to the reasons that citizens give in return. But not all issues, all the time, require deliberation. Deliberative democracy makes room for many other forms of decision-making (including bargaining among groups, and secret operations ordered by executives), as long as the use of these forms themselves is justified at some point in a deliberative process. Its first and most important characteristic, then, is its reason-giving requirement.¶ The reasons that deliberative democracy asks citizens and their representatives to give should appeal to principles that individuals who are trying to find fair terms of cooperation cannot reasonably reject. The reasons are neither merely procedural ("because the majority favors the war") nor purely substantive ("because the war promotes the national interest or world peace"). 
They are reasons that should be accepted by free and equal persons seeking fair terms of cooperation.¶ The moral basis for this reason-giving process is common to many conceptions of democracy. Persons should be treated not merely as objects of legislation, as passive subjects to be ruled, but as autonomous agents who take part in the governance of their own society, directly or through their representatives. In deliberative democracy an important way these agents take part is by presenting and responding to reasons, or by demanding that their representatives do so, with the aim of justifying the laws under which they must live together. The reasons are meant both to produce a justifiable decision and to express the value of mutual respect. It is not enough that citizens assert their power through interest-group bargaining, or by voting in elections. No one seriously suggested that the decision to go to war should be determined by logrolling, or that it should be subject to a referendum. Assertions of power and expressions of will, though obviously a key part of democratic politics, still need to be justified by reason. When a primary reason offered by the government for going to war turns out to be false, or worse still deceptive, then not only is the government's justification for the war called into question, so also is its respect for citizens.¶ A second characteristic of deliberative democracy is that the reasons given in this process should be accessible to all the citizens to whom they are addressed. To justify imposing their will on you, your fellow citizens must give reasons that are comprehensible to you. If you seek to impose your will on them, you owe them no less. This form of reciprocity means that the reasons must be public in two senses. First, the deliberation itself must take place in public, not merely in the privacy of one's mind. 
In this respect deliberative democracy stands in contrast to Rousseau's conception of democracy, in which individuals reflect on their own on what is right for the society as a whole, and then come to the assembly and vote in accordance with the general will.2¶ The other sense in which the reasons must be public concerns their content. A deliberative justification does not even get started if those to whom it is addressed cannot understand its essential content. It would not be acceptable, for example, to appeal only to the authority of revelation, whether divine or secular in nature. Most of the arguments for going to war against Iraq appealed to evidence and beliefs that almost anyone could assess. Although President Bush implied that he thought God was on his side, he did not rest his argument on any special instructions from his heavenly ally (who may or may not have joined the coalition of the willing).¶ Admittedly, some of the evidence on both sides of the debate was technical (for example, the reports of the U.N. inspectors). But this is a common occurrence in modern government. Citizens often have to rely on experts. This does not mean that the reasons, or the bases of the reasons, are inaccessible. Citizens are justified in relying on experts if they describe the basis for their conclusions in ways that citizens can understand; and if the citizens have some independent basis for believing the experts to be trustworthy (such as a past record of reliable judgments, or a decision-making structure that contains checks and balances by experts who have reason to exercise critical scrutiny over one another).¶ To be sure, the Bush administration relied to some extent on secret intelligence to defend its decision. Citizens were not able at the time to assess the validity of this intelligence, and therefore its role in the administration's justification for the decision. 
In principle, using this kind of evidence does not necessarily violate the requirement of accessibility if good reasons can be given for the secrecy, and if opportunities for challenging the evidence later are provided. As it turned out in this case, the reasons were indeed challenged later, and found to be wanting. Deliberative democracy would of course have been better served if the reasons could have been challenged earlier.¶ The third characteristic of deliberative democracy is that its process aims at producing a decision that is binding for some period of time. In this respect the deliberative process is not like a talk show or an academic seminar. The participants do not argue for argument's sake; they do not argue even for truth's own sake (although the truthfulness of their arguments is a deliberative virtue because it is a necessary aim in justifying their decision). They intend their discussion to influence a decision the government will make, or a process that will affect how future decisions are made. At some point, the deliberation temporarily ceases, and the leaders make a decision. The president orders troops into battle, the legislature passes the law, or citizens vote for their representatives. Deliberation about the decision to go to war in Iraq went on for a long period of time, longer than most preparations for war. Some believed that it should have gone on longer (to give the U.N. inspectors time to complete their task). But at some point the president had to decide whether to proceed or not. Once he decided, deliberation about the question of whether to go to war ceased.¶ Yet deliberation about a seemingly similar but significantly different question continued: was the original decision justified? Those who challenged the justification for the war of course did not think they could undo the original decision. They were trying to cast doubt on the competence or judgment of the current administration. 
They were also trying to influence future decisions--to press for involving the United Nations and other nations in the reconstruction effort, or simply to weaken Bush's prospects for reelection.¶ This continuation of debate illustrates the fourth characteristic of deliberative democracy--its process is dynamic. Although deliberation aims at a justifiable decision, it does not presuppose that the decision at hand will in fact be justified, let alone that a justification today will suffice for the indefinite future. It keeps open the possibility of a continuing dialogue, one in which citizens can criticize previous decisions and move ahead on the basis of that criticism. Although a decision must stand for some period of time, it is provisional in the sense that it must be open to challenge at some point in the future. This characteristic of deliberative democracy is neglected even by most of its proponents. (We discuss it further below in examining the concept of provisionality.)¶ Deliberative democrats care as much about what happens after a decision is made as about what happens before. Keeping the decision-making process open in this way--recognizing that its results are provisional--is important for two reasons. First, in politics as in much of practical life, decision-making processes and the human understanding upon which they depend are imperfect. We therefore cannot be sure that the decisions we make today will be correct tomorrow, and even the decisions that appear most sound at the time may appear less justifiable in light of later evidence. Even in the case of those that are irreversible, like the decision to attack Iraq, reappraisals can lead to different choices later than were planned initially. Second, in politics most decisions are not consensual. Those citizens and representatives who disagreed with the original decision are more likely to accept it if they believe they have a chance to reverse or modify it in the future. 
And they are more likely to be able to do so if they have a chance to keep making arguments.¶ One important implication of this dynamic feature of deliberative democracy is that the continuing debate it requires should observe what we call the principle of the economy of moral disagreement. In giving reasons for their decisions, citizens and their representatives should try to find justifications that minimize their differences with their opponents. Deliberative democrats do not expect deliberation always or even usually to yield agreement. How citizens deal with the disagreement that is endemic in political life should therefore be a central question in any democracy. Practicing the economy of moral disagreement promotes the value of mutual respect (which is at the core of deliberative democracy). By economizing on their disagreements, citizens and their representatives can continue to work together to find common ground, if not on the policies that produced the disagreement, then on related policies about which they stand a greater chance of finding agreement. Cooperation on the reconstruction of Iraq does not require that the parties at home and abroad agree about the correctness of the original decision to go to war. Questioning the patriotism of critics of the war, or opposing the defense expenditures that are necessary to support the troops, does not promote an economy of moral disagreement.¶ Combining these four characteristics, we can define deliberative democracy as a form of government in which free and equal citizens (and their representatives) justify decisions in a process in which they give one another reasons that are mutually acceptable and generally accessible, with the aim of reaching conclusions that are binding in the present on all citizens but open to challenge in the future.3 This definition obviously leaves open a number of questions. 
We can further refine its meaning and defend its claims by considering to what extent deliberative democracy is democratic; what purposes it serves; why it is better than the alternatives; what kinds of deliberative democracy are justifiable; and how its critics can be answered.