The only possible alternative approach to these four is to give up our assumption that beliefs must be consistent to be rational. This looks like nonsense on its face if we think of belief in the normal way, that is, univocally, so that for every proposition you consider, either you believe it or you don't. But let me try to distinguish between two sorts of belief or ways of believing, one for each of the two rational principles to govern exclusively. If a theory of this sort could be developed in a reasonable way, both principles could then be followed in peer disagreements without yielding contradictory beliefs of either of the two types.
There is evidence for just the right sort of ambiguity in the way that we distinguish what we call opinions from other beliefs. When we find ourselves challenged by peers who disagree with something we have said, we often retreat to some extent by saying things like, 'Well, I was only stating an opinion.' This suggests that the beliefs we call opinions leave room for debate and doubt in a way that other beliefs do not. In fact, unless we are acknowledging or anticipating disagreement of some sort, it seems to me we never call a belief an opinion. So we might think of something like my belief that Canada will someday rule the world as just a belief, not an opinion, until we discover that there are peers of ours who disagree. At that point, the belief in question either survives as an opinion or it stops being a belief at all – depending on whether or not we desire to maintain it in the face of disagreement from our peers. Let me extend the ordinary meaning of the word 'opinion' to include such potential as well as actual cases, so that any belief that you would tend to retain in the face of disagreement will count as an opinion in the sense I mean. And let me say that to believe something in this sense is to hold it as an opinion, or simply to hold it, so as to avoid the clumsy word 'opine'. So, I will say that I hold that Canada will someday rule the world, regardless of whether anybody disagrees. All beliefs are subject to disagreement, of course, but it is only beliefs we have worked out for ourselves to some extent that we are liable to maintain when faced with disagreeing peers; otherwise, we would have no concrete arguments to make. Opinions in my extended sense may be conceived, then, simply as beliefs derived according to the principle of autonomy.
Another sense of the word 'belief' respects the way that we perceive the world after all evidence has been considered. As I have said, beliefs in this sense are sometimes only probabilized repetitions of things that we have been told, with little understanding or autonomous justification. To reprise my earlier example: I believe that there is such a thing as wave-particle duality, based only on testimony from experts. If a peer were to challenge this belief of mine, I could hardly retain it as an opinion since I have no independent grounds at all for arguing the point. All I could reasonably do is lower the subjective probability I assign to the statement that wave-particle duality exists in light of the contrary testimony from a peer. Nevertheless, in the absence of actual peer contradiction, I believe that wave-particle duality exists, just in that it forms a part, however poorly integrated, of my probabilized model of the world. So, if I had to bet for or against the proposition that there is such a thing as wave-particle duality, I would bet for it – which is really all that believing something in this way amounts to. For want of a better single word, let me label all such beliefs perceptions.
Perceptions and opinions are best understood as things of the same intrinsic type, differing proximately in their being derived from unrestricted and restricted sets of evidence, respectively, and ultimately in their serving different epistemic functions.14 They should not be seen as mutually exclusive classes. In fact, in most cases most of the time, there is no concrete difference at all between what we perceive and what we hold as an opinion. The two ways of believing only tend to come apart under the stress of peer disagreement, when we need to separate the probable from the productive and well-understood. Otherwise, beliefs are just beliefs.
For clarity's sake, here is an outline of a theory about peer disagreement that connects this distinction between types of belief to our original distinction between principles of rational belief.
(1) There are two principles of rational belief: the principle of probability and the principle of autonomy.
(2) In ordinary reasoning, the two principles agree, recommending the same beliefs.
(3) In cases of peer disagreement, however, the two principles tend to conflict, producing what seem to be contradictory beliefs.
(4) But there are also two kinds of beliefs: perceptions, which are rationally governed by the principle of probability, and opinions, which are rationally governed by the principle of autonomy.
(5) In ordinary reasoning, there is only one relevant kind of belief, since the two governing principles produce the same results. That is, perceptions and opinions are ordinarily the same things, which we simply call beliefs.
(6) In cases of peer disagreement, however, the two sorts of belief come apart. We perceive one thing according to the principle of probability, and we hold something else according to the principle of autonomy.
(7) In such cases there is no one thing that we believe simpliciter. Rather, we believe two different things in different senses of the word 'believe'.
(8) There is no reason to favour one principle or one sort of belief over the other as uniquely rational in situations of peer disagreement.
I believe that this points to a satisfactory solution to the problem of reconciling the two principles of rational belief in situations of peer disagreement, hence to the problem of peer disagreement itself. It should be obvious that it works technically, that is, that it provides at least a superficially coherent way of structuring the necessary concepts. It also seems to me that it can serve to validate both of our contrary intuitions in peer disagreements without doing too much damage to the ordinary meanings of the words involved, and without forcing unnecessary choices about what to believe. It allows us to make sense of why we tend to suspend belief when faced with disagreement in matters like the accuracy of square root calculations, since there is nothing at stake for us here in the relevant sense: we have no reason to persist in working out or testing or asserting a personal position on which number is correct, hence no actual opinion on the matter. At the same time, we can understand why we tend to stick to our guns in things like philosophical disagreements, for here we do have a personal epistemic interest in constructing an integrated understanding of the point in question, and a further social interest in contributing autonomous ideas and arguments to the more general discussion. Thus, we can acknowledge in an abstract way that we are likely to be wrong, all things considered, while still sincerely urging that our own opinion is correct. As to which of these things we believe, the answer is that we believe both things in different ways. As to which one we really believe, there is no answer and no need for one.
Note that this theory resolves the problem with saying that we can believe what I am now calling opinions despite their likelihood of being wrong, namely that by definition, to believe something entails believing that it is at least probably true. In this theory, both types of belief conform to that principle: if we perceive something then we perceive that it is probably true, and if we hold something then we hold that it is probably true. The problematic inference from believing (in the sense of holding) something to believing (in the sense of perceiving) that it is probably true is now blocked.
The theory also suggests a neat solution to the problem of believing that we are right in each case of peer disagreement separately, but wrong in many of these cases when they are viewed as a group. Considering our disagreements with our peers in general, we tend to agree that we are just as likely to be wrong as they are, for this is a simple statistical perception governed by the principle of probability. At the same time, though, when each particular disagreement occurs we tend to insist that we are right about the point in question, for these are matters of opinion governed by the principle of autonomy. Both attitudes are rational; no paradox results as long as we discriminate between the two types of belief.
There is an intuitive cost to all of this, of course, in that it makes the concept of belief equivocal, permitting more than one doxastic attitude towards a single proposition. Do we really want to say that someone can both believe and not believe the same thing at the same time at all, let alone rationally? On balance, yes, I think we do. But does this not entail abandoning the epistemic principle of non-contradiction? I don't think so. If our beliefs do form two overlapping sets or systems, and neither one contains internal inconsistencies, then this should suffice to satisfy the principle. It is, admittedly, intuitively odd to say that each of us has two sets of beliefs that sometimes conflict. But if we fully appreciate the different functions of perceptions and opinions in our individual and social epistemic lives, and if we value a clean-cut solution to the problem of peer disagreement, then we can probably accept this as a price worth paying. This is, at any rate, what I believe at present. If I discover that peers disagree with me on this, then I'll perceive that I am likely to be wrong – but I will need to be shown how I am wrong before I give up my opinion.15