There is something unsatisfactory about a theory that posits two contradictory things that rationally ought to be believed, without saying which is finally to be preferred. In cases of peer disagreement, even allowing that opposed beliefs are rational in different ways (or that we ought to believe them in two different senses of the word 'ought'), the question remains: what, after all, on balance, should we really believe? It looks like the original problem of peer disagreement must now be recapitulated, not directly in terms of which belief is rational according to common sense (for they both are), but indirectly, in terms of which standard of rational belief takes precedence when they conflict. Taking for granted that we cannot rationally hold two contradictory beliefs at once, there are four possible coherent answers to this latter question: probability, autonomy, neither, and both.
The first option is to say that in cases of peer disagreement, the only rational thing for us to do is to follow the principle of probability inwardly, respecting what we really believe, even while following the principle of autonomy outwardly in our debates with peers. On this approach, we ought to view our independently-developed arguments and theories with sceptical detachment, accepting that we are likely to be wrong while continuing to work on making concrete sense of the matter for ourselves and others. So, the achievements of Einstein and Wittgenstein are great, and may well have depended causally on an autonomous approach; nevertheless, these thinkers had no rational warrant to believe that they were right. Thus, in an epistemic if not in a practical or moral sense, they ought not to have believed in their own theories.
The second approach is to say that our real beliefs are our autonomous beliefs, and that mere probabilized statements that derive from testimony ought not to count. We will have practical reasons for considering peer disagreement when we need to guess at the truth for purposes of action. In philosophical disagreements, though, where no real decisions are required, the principle of probability has no force at all. We will observe that other people just as sharp and well-informed as we are see things differently, but we will have no strictly epistemic need to reconcile their different views with ours in terms of probabilities. Odd as it sounds, then, we can in fact believe something without believing that it is probably true.
The third possible solution is to claim that there is no fact of the matter as to which principle is more important, so that neither of the first two approaches is correct. Instead, belief can be determined in cases of peer disagreement only by the interests of the believer. If the believer seeks to be most-probably right, then he should follow the principle of probability. If he wants to understand things and develop new ideas, then he should favour the principle of autonomy. Our soldier in battle ought prudentially to run away, and ought morally to stand and fight – but how can it be clear which one he ought to do, all things considered? Unless there is good overall prudential reason for him to prefer the moral action, or good moral reason to prefer the prudent one, the soldier seems to be left with a brute choice to make, not a rational decision.11 The same can be said to hold for people like ourselves in cases of peer disagreement: there is no other choice but just to choose what we believe.
The fourth way around the problem is to claim that both principles can safely be followed at the same time, because they never actually produce contradictory beliefs. In fact, thinking autonomously will always maximize the probable truth of our beliefs. It is hard to see how this thesis can make sense as a general rule, for it seems to imply that each of any pair of disagreeing peers is more probably right than his opponent.12 But each of us could separately work around this implication by denying that we have any peers at all who disagree with us. If we claim that we can follow both principles together and end up with consistent beliefs, then we must accept that the mere fact that others disagree with us excludes them categorically as epistemic peers.
None of these options strikes me as satisfactory. The first approach fails in privileging the probabilist betting-on-things-right-now conception of rational belief over the constructive sort of rationality required both for understanding and for major progress in philosophy and science. This makes good blackjack players rational and great thinkers like Galileo not, which is a hard consequence to swallow intuitively. If philosophers and scientists aren't being rational in thinking for themselves, we need another word of epistemic praise that's just as good. The second approach has the opposite problem: it may be rational for me to stick to my guns in philosophical disputes, but it is surely still irrational for me to do so in peer disagreements over things like arithmetic, where comparative track-record constitutes most of our evidence. The third approach allows us to follow both principles, which is good, but forces us to choose what to believe whenever they conflict, according to our interests. What if our interests lie primarily in having rational beliefs? The third approach permits no answer. It also joins the second approach in licensing belief in things that we do not believe are even probably true. And the fourth approach, to say that nobody who disagrees with us counts as an epistemic peer, cannot succeed because few serious philosophers are quite so arrogant; and anyway, it's not a general solution. Many are working on the problem of peer disagreement, and this approach could only satisfy one person at a time.13