Peer disagreement and two principles of rational belief
2. Two principles of rational belief

The problem seems to turn on which of these two attitudes, humility or self-assurance, we take rationality to demand in cases of peer disagreement. Philosophers seem to be about evenly split on this question, with some arguing that rationality requires suspension of belief (or 'conciliation', as much of the literature has it) in all such cases, and others arguing that sticking to our guns (or 'dogmatism') is what is rational at least some of the time. But should we try to make a single sort of judgment as to what is rational in cases like this? Is rationality even a univocal concept? In general, it seems to mean something like this: reasoning in a way that leads reliably to true beliefs. But which true beliefs are we talking about, exactly? True beliefs for whom, and when, and under what conditions? Different answers to these questions could yield many conflicting judgments as to what is rational in this or that particular case. I believe that there are two philosophically essential ways to answer these questions, and that the difference between them, inconsequential in many cases but not in all, accounts for our conflicting intuitions about peer disagreement. So, let me propose two principles of rational belief, each of which I think defines one sense or sub-sense of the epistemic 'ought'.


The principle of probability is that you ought to believe whatever is most likely to be true, given your total pool of evidence. More precisely, you should believe with greater confidence whatever is more likely to be true, given the total evidence available to you, and adjust that confidence accordingly whenever new evidence appears.
The principle of autonomy is that you ought to base your beliefs (or degrees of belief) solely on objective evidence, using your own best reasoning and judgment. You should consider the arguments of others on their merits, but you should not allow the simple probability that they are right to influence your thinking.
These principles determine different epistemic 'oughts' because they reflect different fundamental epistemic interests. The principle of probability reflects the goal of maximally justified belief at any moment, which is primarily a goal of individuals who need to act. When thinking only of your own immediate probability of being right on any issue, you should consider all of the evidence available to you, including testimony from a peer or any other source that you have reason to consider somewhat reliable. There is no reason to rule out any evidence at all, if the only thing you care about is the most probable truth right now. So, if you are being forced to bet your life, say, on some unestablished fact, then you should typically weigh the testimony of your epistemic peers more or less equally with your own prior opinion on the matter, and you should weigh more heavily the testimony of your epistemic betters, if you have any, even in your own areas of expertise. Thus, in medical decision-making where lives may be at stake, doctors are ordinarily expected to follow protocols or 'standards of care' that the consensus of their peers says yield the highest probability of good results, rather than their own, perhaps eccentric, theories.7
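To make the principle of probability concrete, one familiar formalization, offered here only as an illustrative gloss on which nothing in the argument depends, is Bayesian conditionalization: on learning new evidence E, your confidence in a hypothesis H should move from P(H) to P(H | E) = P(E | H) × P(H) / P(E), with a peer's testimony counting as evidence like any other. On an equal-weight reading of the previous paragraph, for example, if your prior confidence in some claim is 0.8 and a recognized peer announces a confidence of 0.4, the principle recommends something close to the average, (0.8 + 0.4) / 2 = 0.6, until further evidence comes in.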

But we have other epistemic goals as well. We do not just want to place bets on which existing ideas are most probably right, but also to produce, defend, and criticize ideas and arguments in ways that ultimately lead to greater knowledge for ourselves and our societies. When faced with intellectual problems and puzzles, we try to solve them, not just to guess at what theories will turn out to be true. There are two connected reasons for this. One reason is that we desire as individuals to understand the world, not just to play the market, as it were, of probabilities. There is no knowledge worthy of the name without at least a fair degree of understanding. For example, I can say that I believe in quantum mechanics because physicists tell me that this is the best-established theory in their field, and I have reason to suppose that they are probably telling the truth. But I have only the wispiest notions of wave-particle duality and other concepts integral to quantum mechanics, hardly enough to say that I believe anything about quantum mechanics itself, as opposed to just believing that there is a theory called 'quantum mechanics' that is probably true. If I have any real interest in physics, such degenerate beliefs are of essentially no use to me. Even where I can clearly grasp the major claims involved (as with the thesis of anthropogenic global warming, say), I still can't claim to understand the issue as a whole, let alone know whether the thesis is true, without examining the arguments objectively. (For me even to know what I am talking about, as it is commonly put, means at a minimum that my statements must make sense to me along objective lines of reasoning.) Whether my peers agree or disagree with me has little bearing on the matter, except as it gets me to notice their ideas and arguments, which I can then evaluate strictly according to their merits. To the extent that rational belief aims at real knowledge, then, as opposed to mere successful bets on propositions, the principle of autonomy would seem to trump the principle of probability.

Our other essential reason for thinking autonomously is, ironically, that our deepest intellectual problems are typically too subtle and complex for one person to solve.8 We must think for ourselves in order that as many plausible theories as possible can be criticized and tested by other thinkers, also acting independently, in the expectation that the truth will someday emerge from this collective competition. No doubt, some philosophers or scientists are better at constructing theories than others, in the sense of being more likely to be proven right over the long run. There might even be some one philosopher superior to all the rest of us, so that if we had to bet serious money on any particular theory being true, it would be rational for us to bet on that person’s theory rather than one of our own. But why should we think we have to make such bets? We are not in this business just to gamble on which theories will turn out to be right. We are in it to work on solving hard problems over a long time, both as individual philosophers and collectively, as members of the philosophical profession. It would be absurd for us to leave the whole business to a single most-probably-correct philosopher, because even the best of us makes plenty of mistakes, and even the least of us is capable of contributing useful ideas to the ongoing discussion. For the same reason, we should not be discouraged when it turns out that our epistemic peers disagree with us. Of course they do, because it is part of our very job to come up with new ideas and new objective arguments to back them up. As philosophers, we are producers and critics of ideas, not just consumers, so if we do not think for ourselves, then we are not being responsible, effective members of our community.

Much of this creative sort of thinking can in fact be done in an entirely hypothetical spirit, with no violation of the principle of probability. In working on difficult problems we can, and often do, experiment with theories we consider unlikely to be true and see what develops, in the confidence that peers are working on other (and perhaps more plausible) conjectures. There is no reason in principle that we should believe in any of these theories to a degree beyond what all the evidence, including testimony from our peers, entails. In fact, if we are all completely rational and fully informed of each other's evidence and reasoning, we ought ideally to be able to agree on a single, shared subjective likelihood for each hypothesis that we consider, and this would not prevent us from continuing to work towards a more permanent consensus. Much current scientific practice is already like this, more or less. In industry, for example, the scientist is someone with a job; his employers give him a project to work on, and whether he personally thinks the project will succeed is hardly relevant to what he has to do. And in medicine, researchers are particularly conscious of the complex social nature of their work, accepting that unlikely possibilities need to be carefully ruled out for the sake of completeness in their broad-based investigations.

In philosophy, though, and in more revolutionary science, we have four strong reasons to adhere to the principle of autonomy, working our own theories out as individuals regardless of the level of agreement from our peers. First, though theoretical diversity can in principle be maintained by people working with hypotheses they don't believe, philosophers are not just motivated to be helpful in communal projects; we also seek truth and understanding for ourselves. So, it is natural for us to focus our attention on hypotheses that strike us independently as the most probably true, rather than work against our own epistemic interests on theories we consider less likely to pan out. Second, as philosophers we are expected not just to produce new theories, but also to promote them and defend them in the public arguments that constitute our testing system. We are poor actors, most of us, so a good measure of sincere belief is usually needed for us to be effective advocates, especially for complex theories that demand years of debate. Third, we are also philosophically more competent defending our own theories sincerely than our opponents' hypothetically, because our own theories articulate perceptions of the way things really are, while our opponents' typically appear to us as sets of propositions that are false at best, and that at worst don't even make sense. And fourth, to develop and promote dissenting theories in particular demands persistence in the face of not just widespread disagreement, but often also ridicule, rejection, and even persecution from our peers, as witness Socrates or Galileo, and this is almost impossible absent the conviction that we are at least probably right.9 Not as a matter of ideal epistemology, then, perhaps, but psychologically, at least, it seems that we must believe in what we say in order to say it maximally well, to persist in its autonomous development over a long career, and to withstand the consequences of upsetting other interested parties.

Here, then, is my preliminary solution to the problem of peer disagreement. We have two different, equally useful principles that govern rational belief formation, and these define two corresponding uses of the epistemic 'ought'. Both employ the same inductive and deductive methods, so there is no difference in the rationality per se of these two principles; the only essential difference lies in what gets counted as appropriate evidential 'input' to the rational machinery.10 One principle takes in all available evidence, including testimony from reliable sources, and produces probabilized bets on facts. The other excludes evidence derived solely from testimony, and produces arguments and theories necessary for both understanding and objective progress in philosophy and much of science. Qua mere consumers of theories, then, we ought to suspend belief on probabilistic grounds when confronted with disagreement from people as likely as ourselves to turn out to be right. Qua producers and defenders of theories and qua seekers of understanding, we ought to stand by our own beliefs until we are convinced to yield them on objective grounds.



