

DISAGREEMENT AND BELIEF

Ted Everett

(full draft, August 2012)

CONTENTS
Preface

Introduction



1. Disagreement

1.1. The problem of disagreement

1.2. The usual explanations

1.3. Three dimensions of belief



2. Perception

2.1. Sensation, memory, and reasoning

2.2. Articulations of perceptive beliefs

2.3. The concept of knowledge

2.4. Skeptical problems and solutions

3. Testimony

3.1. Testimony as perception

3.2. Testimony and induction

3.3. Other minds

3.4. The external world

3.5. Moral perception



4. Authority

4.1. Rational deference

4.2. Epistemic communities

4.3. Epistemic doctrines

4.4. Epistemic gravitation

5. Philosophy

5.1. Conflicting experts

5.2. Speculation

5.3. Opinions and arguments

5.4. Socratic and Cartesian methods in philosophy

6. Science

6.1. Rational repression

6.2. Epistemic altruism

6.3. Dissent and authority in modern science

6.4. Consensus, controversy, and induction

7. Action

7.1. Judgments

7.2. Convictions

7.3. Higher-order convictions

7.4. Autonomous integrity

7.5. Fanaticism and hypocrisy



8. Politics

8.1. Oppression and liberation

8.2. Democracy

8.3. Freedom

8.4. Equality

8.5. The reasonable society



9. Good Sense

9.1. Pure types of believer

9.2. The reasonable person

9.3. The seeker after wisdom

Afterword

Bibliography

Index

Preface


I have always found it hard to handle disagreements, and even harder to avoid them. I went to college during the great uproar over the war in Vietnam, a time when politics was never far from students’ minds. Like most of my friends in college and afterwards, I got into many heated arguments about the war, as well as race, sex, inequality, and all the other main themes of my generation’s politics. I found these arguments exciting and often went out of my way to provoke them, though without a clear idea of what I really wanted from them. Was it to win others over to my point of view? Was it to gain a better understanding of the issues? Or was it just to sharpen and show off my wits, so that people would think that I was smart? In any case, I found that I often felt really shaken and confused after such arguments. Shaken, because of the hard feelings they could bring about, to which I seemed to be unusually sensitive despite my own aggressiveness. And confused, because I simply couldn’t understand how bright, seemingly decent people could turn out to be so blind to what struck me as matters of plain fact and simple logic.

Some people shrug off disagreements like this with a simple “diagnosis” of their opponents, for example that they are a bunch of morons, Nazis, out of their freaking minds, or in some other way defective as human beings. But this option was generally closed to me because the people I argued with the most were typically my closest friends. They tended to take more or less the same positions on the issues of the day, and I thought that most of these positions were obviously wrong and ill-considered. But I couldn’t just diagnose my friends as stupid, crazy, wicked, or anything of the sort, because I knew them too well as rational and thoughtful individuals outside of political arguments, and as tolerant and loyal friends despite what struck them as my blindness to plain facts and simple logic. So, judging my friends as intellectual inferiors was out of the question (though I can’t deny having felt tempted on occasion). But keeping my mouth shut, or tip-toeing around important questions to avoid giving offense, would leave me feeling like a coward. And I found these controversies over war and peace, etc., too urgently interesting simply to ignore. So, throughout my young adulthood, I kept pounding away at family, friends, friends’ families, unsympathetic teachers, hostile acquaintances, bemused professors, bewildered dates, and anybody else who’d argue with me about serious things, always in hopes of reaching some kind of agreement about things that matter, but rarely getting beyond a frazzled and, for me, increasingly depressing sort of stalemate.

Meanwhile, the corresponding public debate over these issues grew more and more bitter as the progressive factions of my generation wrestled for power both within and against existing institutions. Academic society, in particular, was becoming ever more alienated from mainstream America (outside the areas of government that it was able to influence) and more openly contemptuous of traditional religious and moral beliefs, while Christian and patriotic conservatives enthusiastically returned the sentiment. Even among scientists, objective discussion of empirical facts (concerning nuclear power, for example) was giving way more commonly to accusations of corruption and irrationality, and thoughtful consideration of dissenting views to gape-mouthed incredulity. On both left and right, even tolerating the expression of opposing arguments was increasingly seen as pointless, or even indecent. It was as if the two sides, by this time paying no serious attention to each other’s arguments and isolated socially as well as intellectually, actually lived in different worlds.

When I was close to forty, an idea occurred to me that changed my way of thinking about disagreement and, over a long time, changed my behavior too. I think that keeping this idea in mind, and working out its consequences conscientiously, has made me a happier person, and I hope also a more understanding friend, than I was when I was younger. Here is that basic idea. The reason that other people disagree with us is not ordinarily that there is something wrong with them that causes them to be unreasonable, but rather that their evidence is different from ours, leading them logically to different conclusions. This point is obvious when people are simply uninformed about a public fact: provide the missing information, and the disagreement usually goes away. But in our most contentious arguments, the difference in evidence that counts is not a matter merely of missing or misleading public information. It is instead a matter of our having different pools of private information in the form of trusted testimony, theirs coming from sources (colleagues and friends, newspapers, magazines) that they trust, and ours coming from different sources that we trust. And the problem isn’t that they are trusting sources that they shouldn’t trust while we rely on sources that we should. The crux of the problem is that they have just as good reason to trust their sources as we have to trust our own. Hence, even our bitterest religious and political opponents often have little rational choice but to believe the things they do. This doesn’t mean that none of us is actually wrong about the facts, of course, or that we never make mistakes in reasoning, or that no one is psychologically or morally corrupt. But it does mean that we ought to be far more careful in judging each other’s statements and arguments – and intelligence, and soundness of character – than we usually are.

This book is the result of twenty years’ work developing that basic idea into a general theory of belief and disagreement that encompasses morality, religion, politics, philosophy, and science, as well as ordinary conflicts of opinion. There is, of course, immensely more to say on disagreement and belief than I try to say here, but I think that what I’ve written is sufficient to the goal I have in mind, which is to make it clear how reasonable people disagree as reasonable people, not as people most of whom are psychologically or morally screwed up. On that main point, this essay stands in opposition to a recent trend – almost a new genre – of popular “diagnostic” literature, some relatively lighthearted, some serious and sympathetic, but most of it more or less openly contemptuous of some group of opponents. It is that growing contempt, especially, that concerns me when I think about the harsh religious and political divisions that we suffer with today, and that I hope this book will help to counteract. And when I look back at my own long history of stressful arguments, I wish that I could send this book back to the earnest, argumentative, and anxious person that I was in college. I hope whoever reads it now will find it useful.
[Acknowledgements]

INTRODUCTION
This book begins with a problem about disagreement and belief. When we think about our disagreements with people we respect, whose basic intelligence, education, and good sense make them about as likely as we are to be right about issues like the one in question, it seems arrogant for us to insist that we are always in the right and they are always in the wrong. So, as reasonable people, we are sometimes tempted to hold off believing controversial things until such disagreements can be worked out to everybody’s satisfaction. But to do so would effectively erase many of our deepest, most cherished beliefs. And it seems wrong, even cowardly, to stop believing in something just because others disagree. We know that it is better to think for ourselves than to depend on other people’s ideas. And we know that it is better to stand up for our beliefs when faced with opposition than to automatically relent or compromise. But if we are not actually more likely to be right than our opponents, exactly why should we be sticking to our guns?

I try to solve this problem in terms of a basic distinction among three aspects or dimensions of belief that I call perception, opinion, and conviction. I argue that only in the dimension of perception must we temper our beliefs according to their probability of truth, that in the dimension of opinion we can hold any belief that might be useful in discussions with others, and that our beliefs in the dimension of conviction represent moral and practical commitments that are largely immune from correction by new evidence. The main part of the book presents a gradual, layered account of how beliefs are ordinarily constructed and maintained by rational people with different histories of evidence. Along the way, I try to explain how people come to radically opposed beliefs in all three dimensions of belief, and how our very rationality sometimes renders these disagreements intractable in matters of religion, politics, and even science. At the end, I talk about what reasonable people can do to cope with disagreement, both as active social beings and as individuals in search of deeper understanding.


A few terms and distinctions need to be introduced up front.

First, I want to emphasize that this is a book about epistemology. Epistemology is sometimes called simply the study of knowledge, but I want to stress that its working meaning covers the study of rational belief in general, even when this falls short of the certainty that we require for knowledge as such (the word epistemic just means having to do with rational belief or knowledge). My central focus is on what rationally justifies people’s beliefs, given all of their evidence and the order in which they have acquired it, not on what precisely qualifies as knowledge, though I do address the latter question at some length in Chapter 2. One of my main themes is that justified belief is easier to acquire than people usually think, while knowledge is much harder, and that confusing the two leads to a lot of trouble.

Epistemology is different from psychology. I am primarily concerned here with the rational foundations of our beliefs – explaining why we ought to hold them – not with their psychological causes. But I will not ignore psychology entirely. It matters to my argument that people usually believe the things they do because they are rational, and that competing psychological sources of belief (self-interest, fear, complacency, and the like) are generally less important than we think. So, I will be totally right about the nature of belief only if I am also right about the power of reason as a psychological force. Ideally, there should be a rough correlation between epistemological and psychological accounts of belief in any case, but only a rough one. This is because we do not learn things in a perfectly rational way, especially as children, but rather through a dynamic mixture of experience and those instincts that have evolved to make our species the successful animals we are. This success has much to do with our predispositions to believe things that are true, of course, and for reasons that actually justify these beliefs. But there are other causal factors in the acquisition of belief as well, such as our built-in modular capacities for learning languages and recognizing faces, that are not directly relevant to the question whether we rationally ought to hold on to the beliefs that we have already acquired, or to how we ought to go about acquiring new ones, so I will not have much to say about them.

I also need to distinguish reasonableness from mere rationality, and practical rationality from epistemic rationality. Rationality in general is making proper inferences: if those are the premises, then this is the correct conclusion. The content of rationality can be defined in part mathematically, in terms of deductive logic. Some aspects of rationality, say in scientific reasoning, are harder to define, but still share an objective nature that governs the relationship of evidence to theories. Epistemic rationality is rationality with respect to true beliefs: if this is someone’s evidence, then that is what the person ought to believe, assuming he wants to believe what’s true (or probably true). Sometimes we also talk about rational behavior, for example when we say that it’s rational for a bank robber to knock any security guards unconscious, so that he and his gang can get away safely. This practical rationality plainly depends on epistemic rationality: the bank robber should believe on all sorts of evidence that knocking out the guards will help him get away, so this is what a rational robber will do if getting away is one of his goals. Since this book is mostly about what people ought to believe, I’ll use the words “rationality”, “rational”, and so on mostly in the more basic, epistemic sense. Reasonableness, by contrast, is the human virtue of dealing with other people’s beliefs and actions in a properly understanding way. Rationality is obviously a central ingredient in reasonableness, but it takes a lot more than pure reason to be reasonable. You might take a perfectly rational position on something, i.e. one backed up by plenty of good reasons, but if you don’t present those reasons in a way that other rational people can understand, or if you refuse to respect their rational objections or alternative positions, then you are not being reasonable. Reasonableness is even consistent with some amount of irrationality, since we can make occasional errors in reasoning and still be reasonable people, as long as we are willing to correct such errors once they have been pointed out to us in a way that we can understand. But a reasonable person can’t be totally irrational.

I will also sometimes distinguish between Socratic and Cartesian approaches to disagreement and belief. I will explain these two approaches when I come to them, but in advance I want to say that this is nothing fancy. Socratic just means following the ancient Greek philosopher Socrates, whose ideal method was to begin as pairs or groups of people with existing disagreements and work down through discussion toward mutually agreeable beliefs. This is why Plato presents his accounts of Socrates’ work in the form of dialogues. Cartesian just means following the early modern French philosopher René Descartes, whose ideal method was to begin as individuals with certain beliefs that cannot be doubted, and work up from those to beliefs that no one else who follows the same method properly could disagree with. This is why Descartes wrote his most important works in the more common form of monologues. This book favors the Socratic approach in principle, but has been mostly written as an ordinary monologue, which is a lot easier in practice. I will, however, break into dialogue occasionally to illustrate my points about how people disagree.

Finally, in distinguishing among my three dimensions of belief, I will need to “precisify” the meanings of the words “perception”, “opinion”, and “conviction” to some extent, since their meanings are not altogether clear and distinct in ordinary English. Everywhere else, though, I have done whatever I could do to avoid unusual or technical definitions of familiar terms. I do not want what I say to depend, or seem to depend, on any kind of subtle usage. Instead, I will rely only on commonsense understandings of our ordinary epistemic concepts – give or take a little nudging – so that my arguments can be developed in an organic and intuitive way.


The structure of this book follows the structure of the theory that I want to present – no big surprise there – and I think that the steps are laid out pretty well in order. But these layers of argument are probably complex enough to justify a quick synopsis in advance, so that you know more or less what’s coming as you move along (if you prefer suspense, don’t read it).

Chapter 1 presents what I call the problem of disagreement, plus my general approach to a solution. The problem stems from finding ourselves disagreeing with our epistemic peers, people who, given everything we know about them, are overall more or less as likely as we are to have true beliefs about the subject at hand. If they are as likely as we are to be right, then the probability of our belief being true is at best 50%, in which case it seems that we should withhold judgment on the issue until better evidence becomes available. But this is not how we usually deal with disagreements among peers – in fact, we often take pride in believing things that most other equally smart and educated people consider false, or even preposterous. How can this be? I discuss two broad sorts of approach to this problem. One is to accept a relativistic or skeptical view of disagreement, according to which we are all equally right, or equally wrong, in maintaining controversial beliefs. The more common approach is to deny that those who disagree with us are actually epistemic peers, despite appearances, using a variety of “diagnostic” explanations to account for their errors. I think that neither approach offers a real solution to the problem. My own approach is more complex, but ought to seem familiar once it has been explained. I claim that there are three overlapping, but sometimes conflicting, principles that govern our beliefs, which I call rationality, autonomy, and integrity. Each principle is proper to one of the three dimensions of belief: perception, opinion, and conviction, respectively. These three aspects of belief perform essential functions with respect to thought, speech, and action in our lives. Disagreements among peers are as confusing as they often are, I argue, because it is so often unclear in which dimensions of belief we are contending, hence which principles ought to govern each dispute. In our most interesting controversies, differences of perception tend to stem from our rationally trusting different sources of information, while differences of opinion originate in our socially useful, but not always perceptively rational, practice of thinking and speaking autonomously, and differences of conviction stem from those epistemic commitments that make acting with integrity possible, but also shield us from evidence that we might be wrong.
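The arithmetic behind that 50% figure can be made explicit in rough Bayesian terms (this is only an illustrative sketch; the equal-reliability, even-prior, and independence assumptions are supplied for the example, and nothing later depends on the formalism). Let D be the event that an epistemic peer and I disagree about a proposition p, with me believing p and the peer believing not-p; suppose we each have the same reliability r on questions of this kind, that we have judged independently given the truth, and that the prior on p is even. Then

\[
P(p \mid D) \;=\; \frac{r(1-r)\cdot\tfrac{1}{2}}{\,r(1-r)\cdot\tfrac{1}{2} \;+\; (1-r)\,r\cdot\tfrac{1}{2}\,} \;=\; \frac{1}{2}.
\]

And if the peer is at least as reliable as I am, the same computation yields a probability of 1/2 or less, which is the “at best 50%” just mentioned.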

In the succeeding chapters, I explain the common structure of beliefs in order of their rational development, from the individual epistemology of perceptions (Chapters 2-4), to the social epistemology of opinions (Chapters 5-6), to the prudential, moral, and political epistemology of convictions (Chapters 7-9). This order of presentation is a necessary one, but also necessarily an artificial one. In real thinking life, every epistemic process that I want to talk about is going on at once. But I can’t say everything I want to say all at the same time, so my explanations will need to be somewhat idealized and incomplete until I have all three dimensions of belief in play.

Chapters 2 and 3 discuss foundational issues in individual and social epistemology: the definitions of belief and knowledge, traditional skeptical problems, and the evidential role of testimony. Chapter 2 concerns direct perceptual beliefs, the ones that we construct out of our own sensations, memory, and reasoning. I give a brief account of the holistic, largely pictorial nature of our raw perceptions, and argue that these must be articulated into sentences from their original state before they can be explicitly believed. Such articulate beliefs take different logical forms, including categorical (yes-no) or conditional (if-then) sentences, statements of degree or probability, or any combination of these forms and others. A thorough understanding of belief has to be sensitive to these modes of articulation. I discuss the traditional skeptical argument that threatens our reliance on perception as a guide to true belief: how do we know that we’re not just dreaming, or floating as disembodied brains in vats of nutrients, or something just as bad? Aren’t we just assuming something that we cannot know? I argue that appropriate conditional articulations of perceptive beliefs (e.g. “If I am not dreaming, etc., then I am eating a bagel”) resolve this problem in a practical way, and that similar conditionalizations are able to resolve related philosophical problems about memory and inductive reasoning. For ordinary conversations, though, our ordinary categorical articulations (e.g. “I am eating a bagel”) are clear enough if used with proper understanding of their background assumptions.

In Chapter 3 I talk about the second layer of perceptual beliefs, those that arise from testimony. We use evidence from testimony to form correct beliefs about the world beyond our own direct experience. But testimony also complicates our understanding of the world in several ways. We receive it mainly in the form of articulate sentences, which have to be incorporated somehow into our mainly inarticulate perceptive models of the world. Such sentences are also liable to be vague or ambiguous, or to refer to unfamiliar concepts, so that we often fail to understand completely what we have been told. I show how beliefs based on testimony are rationally constructed out of more basic perceptual beliefs, by means of what I call second-order induction. But testimony stands apart not just because it radically expands the range of ordinary knowledge that is available to us, but also because it grants us access to a whole category of facts that is blocked off from direct perceptual examination, namely the inner mental states of other people. It is because other people tell us what they think and feel, and because we have reason to believe in their reliability in general, that we can rationally infer that they are not just mindless robots or hallucinations, but real thinking beings like ourselves. And because these trustworthy external sources confirm the great majority of our immediate perceptions, and tell us that we are also reliable to them as sources of information, we may infer that we can usually trust our own senses and memory. In this roundabout way, we can enhance our justification for believing that what we perceive directly is a genuine external world. I also argue in this chapter that our basic moral concepts necessarily derive from testimony. First-person experience simply does not contain the information necessary to construct distinctly moral beliefs, since these beliefs require non-self-centered understandings of moral terms like "right" and "wrong", and these cannot be rationally acquired except by taking moral statements to be true. Thus, we initially come to believe that hurting the cat is wrong because we have good reason to believe that statements like "hurting the cat is wrong" are true, having heard them from our parents and other generally reliable sources.

In Chapter 4, I argue that to be fully rational perceivers, we must defer to the beliefs of experts, eyewitnesses, and other authorities whenever we have sufficient evidence that they are more reliable than we are in the matters at hand. To the extent that someone’s religious faith is based on testimony from evidently reliable authorities (including parents, clergy, and respected texts), this is a rational, not an irrational, form of belief, even when it is maintained in the face of much direct contrary evidence. It is this subjective rationality of faith, passed on unanimously from each generation to the next, that explains the great historical stability of religious and other traditional beliefs. On this understanding, it makes perfect sense that not just isolated “backward” tribes, but also great peoples like the ancient Egyptians, Hindus, Mayans, and Chinese have believed persistently in things that modern, scientific Westerners consider gross absurdities. Religious rationality can thrive in cosmopolitan societies, too, but only within relatively closed communities of thought that include principles of solidarity like “you ought to obey your elders” and “you should not trust outsiders” among their doctrines. Subjectively rational faith in authority can sometimes produce “epistemic black holes” – beliefs so well protected by unreasonable attitudes that it is impossible for the believer to perceive contrary evidence as having any weight. In this situation, the same people can be both perfectly rational in the subjective sense, and so irrational objectively that they cannot be reasoned with about their core beliefs.

Different problems arise when our authorities themselves produce conflicting testimony, leaving us with no clear epistemic path to follow. In Chapter 5, I argue that the most rational response to such conflicts is not to abandon our authorities and base all our beliefs exclusively on our own perceptions and inferences, as Descartes recommends, but rather to withhold judgment until such experts as exist arrive at a consensus. This raises another question, though. If, as I argue, traditional beliefs are rational where authorities agree, and no belief is rational where authorities differ, how can any real philosophy – brand new ideas, derived from independent thought – ever arise? Part of the answer is that rational belief is not the only goal of intellectual life. A lot of good can come from speculative thinking, too. It is rewarding to explore the world as it appears to us as individuals, and to express our own ideas to others, even if we have no hope of knowing whether they will turn out to be true. On occasion, we might even come up with new theories that conflicting experts all come to accept, though there is no Cartesian formula that can guarantee this outcome.

In Chapter 6, I argue that modern science and other progressive forms of inquiry require a systematic kind of irrationality in order to flourish, which explains why they are rare in human history. Novel thinkers must be willing to persist in their own opinions, even in the face of rationally overwhelming evidence against them in the form of disagreement from authorities and peers. The new thinkers must prefer believing propositions that are actually less likely to be true than what they have been told by reliable sources, simply because these ideas seem more plausible to them intrinsically. Such unreasonable confidence in their own opinions can have good long-term consequences for their society if it takes place in a context of ongoing critical discussion, where ideas are permitted to compete for general acceptance. This dynamic, dialectical approach to solving problems is hard to extend beyond the practical realm of daily life into progressive philosophy and science, given the rational dominance of tradition in most serious areas of belief, in most places, most of the time. But once such a progressive system of critical thinking gains a foothold, it can be equally hard to dislodge. We live in a world so subtle and complex that long-term experimental competition among theories is needed in order to probe its nature very deeply, however irrational it may be for individuals to place their confidence in any particular new idea. So, in order to make independent research an effectively rational form of behavior on a wide scale, modern societies have rather ironically adopted intellectual autonomy as something like authoritative doctrine, while leaving unresolved its tense relationship to justified belief for individuals. With some exceptions, they have also promoted thoughtful dissenters from the lowest to among the highest social ranks, providing comfortable careers to people who, in other ages, might have been burned at the stake.

Chapter 7 explores the relationship between belief and action. Individual thinkers cannot come close to knowing everything that they would like to know, especially in controversial matters. Nevertheless, we are often called upon to act based on the limited information that we do possess. Our need to judge facts, not just experience the flow of thoughts and sensations, begins with our most basic perceptions, like seeing a rabbit on the lawn instead of mere white and green patches in our visual fields, and extends to all beliefs that enter into practical reasoning. Maintaining totally justified beliefs is not a reasonable goal in practical or moral thinking, since some of the things we value depend on firm conviction more than on perfect perception. Judgments provide a fixed basis for action, as when I judge categorically that there is a tiger coming towards me, so that I stop considering the evidence and start running away. Convictions are committed judgments that compose the conscious basis of our practical and moral lives. In forming a conviction we purposely stop deliberating and adopt some principle as integral to our character, in order to control our actions with a firmer hand and to let others know that we will not be moved by further argument. It is the very stubbornness of our convictions that counts as the measure of virtues like loyalty and courage. But this same stubbornness – what Friedrich Nietzsche calls the “will to stupidity” – can render contrary evidence practically invisible once a decision is made, sometimes placing disagreements over matters of conviction beyond the reach of rational discussion.

Chapter 8 extends my analysis of disagreement and belief into what is sometimes called political epistemology. Progressives complain about the maddening tenacity of unjust traditional beliefs and practices, to which conservatives appear indifferent. Conservatives complain about progressives’ careless reliance on untested theories and rejection of historical wisdom. I argue that a cautious experimentalism is the key to maintaining a reasonable balance between perceptive rationality and social progress. This is hard to achieve, because it requires the acceptance by progressives and conservatives alike of more intellectual and moral diversity than could be justified from either of their separate points of view. But a proper intellectual humility ought to dispose us to temper our political convictions with respect for the different convictions of other people whether we sympathize with them or not, and therefore to let ourselves be jostled into a working equilibrium with those who disagree.

In Chapter 9, I examine what it takes to be a reasonable person in the face of practical and moral demands for action and their implications for belief. I contrast the epistemic roles of rational perceiver, intellectual producer, and principled agent as requiring different ways of balancing perceptions, opinions, and convictions. Most of us function as complex combinations of those pure types of believer, balancing and rebalancing criteria of belief according to the changing conditions of our lives. While I can offer no general program for regulating anyone’s beliefs, I do suggest certain parameters outside of which someone could fairly be called unreasonable. These parameters include having a good understanding of the nature of our own and other people's beliefs and evidence – if not in theory, then at least implicitly – so that we can readily tell the difference between matters of fact, opinion, and principle. This ability to keep the structure of belief in mind is, I think, a central requirement for dealing with disagreements in a reasonable way. The book concludes with more stringent advice to those whose intellectual ambitions go beyond mere reasonableness into the philosophical pursuit of wisdom.


A final note. Perceptive readers may detect a certain lack of scholarly rigor throughout this book, something that bothers my conscience as a professional philosopher. Since the book is intended for students and general readers as well as my colleagues, I have tried to avoid technical discussions while still making the arguments as clear as I can. Inevitably, conflicts between the two goals have arisen, and in making compromises I have always favored the general reader over the professional. This is a "big-picture" sort of presentation in any case, and I have not been able to consider nearly as many objections, or to connect as much of what I say to the current professional literature, as I would like. Where I present my own positive view of things without considering alternative theories, I try at least to note that I am saying something controversial.

Chapters 2 and 3 address some philosophically foundational issues and provide some stricter definitions and supporting arguments that I feel are needed for a complete presentation of the theory. These chapters (as well as the occasional technical footnote) may safely be skimmed through by non-philosophers without detracting from the central thesis of the book. If you are interested in the philosophical details, though, I should note that much of the core material has been covered in greater depth in articles over the past ten or twelve years. The main arguments in Chapter 2 are discussed in "Antiskeptical Conditionals", Philosophy and Phenomenological Research 73(2), 2006; the main arguments in Chapter 3 appeared in "Other Voices, Other Minds", Australasian Journal of Philosophy 78(2), 2000; and the main arguments in Chapters 4 through 6 are covered in "The Rationality of Science and the Rationality of Faith", Journal of Philosophy 98(1), 2001.


