Comments welcome: email@example.com
Much of the literature on disagreement focuses on the issue of whether I am still entitled to believe that P, when I find out an epistemic peer believes not-P.1 One disagreement about disagreement is how much (if any) revision of doxastic attitudes is called for by awareness of peer disagreement. On the “conciliationist” view, recognition of epistemic peer disagreement requires a substantial revision of doxastic attitudes. On the “steadfastness” view, it is (epistemically) permissible to maintain one’s doxastic attitudes in the face of disagreement with epistemic peers.2 In what follows, we will think of conciliationism and steadfastness in their most extreme forms. The most extreme form of conciliationism is sometimes referred to as the “equal weight view”, which directs epistemic peers to give equal weight to one’s own view and the opposing view. A well-known example may serve as illustration:
TWO FOR DINNER: Suppose you and I go for dinner on a regular basis. We always split the check equally, not worrying about whose dinner costs more, who drank more wine, etc. We also always add a tip of 23% and round up to the nearest dollar when dividing the check. (We reason that 20% is not enough and 25% is ostentatious.) We both pride ourselves on being able to do simple arithmetic in our heads. Over the last five years we have gone out for dinner approximately 100 times, and twice in that time we disagreed about the amount we owed. One time I made an error in my calculations, the other time you made an error in your calculations. Both times we settled the dispute by taking out a calculator. On this occasion, I do the arithmetic in my head and come up with $43 each; you do the arithmetic in your head and come up with $45 each. Neither of us has had more wine or coffee; neither is more tired or otherwise distracted. How confident should I be that the tab really is $43 and how confident should you be that the tab really is $45 in light of this disagreement?3
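For concreteness, the check-splitting rule in the example can be written out. The pre-tip bill is not stated in the example, so B below is a hypothetical placeholder:

```latex
% Each diner's share for a hypothetical pre-tip bill B,
% with a 23% tip and each share rounded up to the nearest dollar:
\[
  \text{share} \;=\; \left\lceil \frac{1.23\,B}{2} \right\rceil
\]
% On this rule, my answer of $43 is consistent with a pre-tip bill
% B in (\$68.29, \$69.92], while your answer of $45 corresponds to
% B in (\$71.54, \$73.17].
```

So the $2 disagreement in shares amounts to a discrepancy of a few dollars in the underlying bill, exactly the sort of small slip a mental calculation might produce.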
The extreme form of conciliation requires assigning equal credence to the claim that the tab is $43 each and the claim that the tab is $45 each. The most extreme version of steadfastness permits my credence to remain unchanged, even in light of your excellent track record.
Sometimes disagreement is modeled in terms of belief/disbelief and withholding of belief,4 but for our purposes, it will help to borrow the Bayesian convention where numbers between 0 and 1 are used to indicate credence in some proposition.5 1.0 indicates maximal confidence (full belief), 0.0 indicates maximal nonconfidence (full disbelief), and 0.5 is the point at which the proposition is neither believed nor disbelieved.
As noted, there is a continuum of possibilities between extreme conciliationism and extreme steadfastness. Let us think of ‘midway’ as the position where I attribute to my own view 50% more credence than conciliationism permits. To keep the arithmetic simple, let us suppose in connection with the previous example that initially I am supremely confident that the tab is $43 (credence = 1.0). I then find out you think the correct number is $45. Since conciliationism indicates that I should reduce my credence to 0.5, and midway permits 50% more than what conciliationism permits, midway permits me to hold the claim that the tab is $43 with 0.75 credence, and the claim that the tab is $45 with 0.25 credence.
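The midway arithmetic can be set out explicitly (the symbol c for credence is introduced here purely for illustration):

```latex
% Conciliationism sets my credence in "tab = $43" at 0.5;
% midway permits 50% more than that:
\[
  c_{\text{concil}}(\text{tab}=\$43) = 0.5, \qquad
  c_{\text{midway}}(\text{tab}=\$43) = 1.5 \times 0.5 = 0.75
\]
% The remaining credence goes to your answer:
\[
  c_{\text{midway}}(\text{tab}=\$45) = 1 - 0.75 = 0.25
\]
```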
An alleged consequence of conciliationism is skepticism: suspension of belief about P or not-P across a wide range of disagreements. However, given some plausible assumptions, including fallibilism and disagreement about contrary, rather than contradictory, philosophical positions, it will be argued that conciliationism actually mandates a stronger conclusion: you should disbelieve many of your philosophical views; that is, we should believe that many of our philosophical beliefs are probably false. Furthermore, and perhaps even more surprisingly, the same stronger conclusion follows from midway as well. Finally, steadfastness faces a dilemma in light of these same considerations.
Three Concepts of Epistemic Peers
There are at least three different understandings of the notion of ‘epistemic peer’ in the literature:
Virtue Peers (VP): X and Y are (approximately) the same in terms of epistemic virtues with respect to P.
Correctness Peers (CP): X and Y have (approximately) the same probability of making an epistemic mistake about P.
Accuracy Peers (AP): X and Y are (approximately) equally likely to determine the truth about P.6
A representative statement of VP is as follows:
A and B are epistemic peers relative to the question whether p when A and B are evidential and cognitive equals with respect to this question—that is, A and B are equally familiar with the evidence and arguments that bear on the question whether p, and they are equally competent, intelligent, and fair-minded in their assessment of the evidence and arguments that are relevant to this question.7
A version of CP is offered by Trent Dougherty: “…neither A nor B have any reason to think that the probability of A making a mistake about the matter in question differs from the probability of B making a mistake about the matter.”8 Worsnip offers a statement of AP: “What matters when it comes to disagreement is how likely my peer is to be right, that is, how reliable she is.”9
It is clear that these three concepts are distinct. Consider, for a start, the relationship between VP and CP. As Trent Dougherty suggests, VP seems more concerned with “inputs” and CP more with “outputs”.10 To see why, suppose we define VP as Lackey does above. In that case, it might be that X and Y are VP but not CP. Why? For one thing, there is the issue of whether we have correctly identified all the relevant virtues. Consider that Kelly has “thoughtfulness” as part of his understanding of epistemic peers, whereas Lackey does not include this on her list.11 If we define VP with Lackey, but think that thoughtfulness is relevant to the issue of making a mistake, then X and Y might be VP but not CP.
It seems that CP is neither necessary nor sufficient for AP. A modified version of TWO FOR DINNER provides a counterexample to the claim that CP is necessary for AP. Imagine things much as before except that you claim that God whispered $45 to you. In the past, you have said that when calculating a tab there is nothing going on in your head until a number just pops up, which you attribute to God whispering the answer. I have pointed out on numerous occasions that in general you have a terrible track record with respect to the God-whispering hypothesis, e.g., when I flip a coin it turns out that God whispers the correct answer to you only half the time. The only time the God-whispering hypothesis appears successful is when you are doing mental calculations. I suggest that some people have the ability to do mental math subconsciously and this explains your excellent track record calculating our dinner tabs. It also explains why you fail in your predictions in so many other cases. You reject this explanation as heretical. Accordingly, I reject the idea that we are CP about the tab, but accept that we are AP on this issue. The example also shows a case where VP is not necessary for AP, since it is plausible that there is some sort of epistemic vice associated with your failure to accept defeat for your God-whispering belief.
To understand why CP is not sufficient for AP, we need to look briefly at the issue of uniqueness versus permissiveness. Uniqueness is the thesis that there is only one maximally rational doxastic response to a given set of evidence, while permissiveness allows instances where there is more than one maximally rational doxastic attitude to a given set of evidence.12 On the face of it, it seems that permissiveness is friendly to steadfastness. And indeed, this is the correct conclusion if we are thinking about epistemic peers as CP. To see why permissive cases license steadfastness, at least in some instances, consider the following example.13 Suppose we are hiking out of electronic communication with the outside world. The American presidential election took place the day before, and we are speculating about who won. Let us assume it is a permissive case: an instance where more than one maximally rational doxastic attitude is permitted. We both reason correctly using the same set of evidence. You think it is slightly more likely that the Republican candidate won (p = 0.54), while I think it is slightly more likely that the Democratic candidate won (not-p = 0.54). Since by assumption we have both reasoned correctly based on shared evidence, we are CP. Since we both have reasoned correctly, we are fully justified. So, the fact that we disagree provides no evidence for doxastic revision.
Notice, however, this means that in permissive cases, the fact that I am fully justified in believing P provides no reason to suppose that P is likely to be true. In other words, in permissive cases, I may say to myself: “I am fully justified in believing P, yet I still wonder whether P is more likely than not true.” The reason is that both of our credences cannot consistently satisfy the axioms of probability: I can’t say I am correct that pr(P) = 0.54 and you are correct that pr(not-P) = 0.54. Permissiveness says we are both justified, so CP; permissiveness does not say that we are both accurate, so permissiveness does not say that we are AP.14 Good thing, since both of us can’t be right.
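The probabilistic inconsistency here can be stated precisely. The axioms of probability require that the probabilities of a proposition and its negation sum to 1, but treating both of our credences as accurate would give:

```latex
% Required by the probability axioms:
\[
  \Pr(P) + \Pr(\neg P) = 1
\]
% Yet taking both credences from the election case as accurate yields:
\[
  0.54 + 0.54 = 1.08 > 1
\]
```

So at most one of the two 0.54 credences can match the truth, even though both are, by hypothesis, fully justified.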
So, those who use permissiveness to defend steadfastness are correct that disagreement is not evidence for belief revision. This is because the issue of disagreement is orthogonal to the question of accuracy, given permissiveness. This is confirmed when we realize that agreement in permissive cases is also not evidence that some belief is likely true. Suppose the whole world joins hands and sings the praises of P in choric unison. All the agreement in the world does not answer the question of whether P is likely to be true if P involves a permissive case, since the same evidence permits the whole world to sing the praises of not-P as well. But this is just to say that the price of permissiveness is abandoning the idea that being justified in believing P gives one reason to think that P is likely true.15
So, there is a certain irony in thinking that one can avoid the widespread skepticism that conciliationism seems to suggest by using permissiveness to defend steadfastness. For the connection between permissiveness and skepticism about the truth of P or not-P is even more direct—one need not even worry about agreement or disagreement to generate skeptical doubt. Suppose I am the only one who has ever considered a certain permissive case. I come to believe P on the basis of some set of evidence e. The fact that there is no one disagreeing doesn’t affect the worry that my evidence doesn’t make it likely that P is true, since e could be used to show that not-P is justified.16 Permissiveness is a good means to cut out the middleman of disagreement and go directly to skepticism.
In what follows, AP is the primary sense of ‘epistemic peer’ we will be interested in. Occasionally, we will also reference the idea of CP. The reasons for focusing on these two will become obvious as we proceed.