Cognitive Biases and Moral Luck
David Enoch and Ehud Guttel
Some of the recent philosophical literature on moral luck attempts to make headway in the moral-luck debate by employing the resources of empirical psychology, in effect arguing that some of the intuitive judgments relevant to the moral-luck debate are best explained – and so presumably explained away – as the output of well-documented cognitive biases. We argue that such attempts are empirically problematic, and furthermore that even if they were not, it is still not at all clear what philosophical significance they would have.
Many of us believe in some version of the control condition on moral responsibility and blameworthiness. We believe, roughly speaking, that we are only responsible for what is under our control. In particular, we believe that if two people are alike in what is under their control, they are equally morally responsible, blameworthy, or praiseworthy. Many of us believe also that of two equally negligent drivers – both of whom are morally blameworthy to a certain degree – the one who actually kills a pedestrian is more morally blameworthy than the one who (luckily) does not.
The problem, of course, is that (given that the difference between the two drivers is not a matter that is under their control) these two common beliefs are mutually inconsistent. If blameworthiness depends only on what is under an agent’s control1, then the two drivers are equally to blame. If they are not, then the control condition must be false. The philosophical literature on moral luck attempts to reject the first common belief (thus claiming that moral luck is possible, and indeed actual), or the second (thus claiming that there is – and indeed there can be – no moral luck), or to qualify them, or to show that they are not after all in conflict, or something of this sort.
In some recent articles, however, a different path is explored. Both Edward Royzman and Rahul Kumar2 and Darren Domsky3 attempt to make headway on the problem of moral luck by making use of the findings of some recent empirical psychology. In particular, they attempt to show that the intuitions that seem to indicate the reality of moral luck – the moral-luck-intuitions, we will call them – are best explained – and so, presumably, explained away – as the output of well-documented cognitive biases. Royzman and Kumar argue that such intuitions are not in fact in conflict with the control condition. Rather, they follow from the control condition once we plug into it certain (false) background factual beliefs about what was and what was not under the relevant agent’s control. Being vulnerable to the hindsight bias, we tend to attribute current knowledge to agents in the past, and judge them accordingly. Of course, if a potentially negligent driver knew that his negligence would lead to the death of an innocent pedestrian, he ought to have been much more careful, or perhaps to have stayed at home. Since he was not, the death was under his control after all, and the difference in blameworthiness between the two drivers is then accounted for consistently with the control condition. The hindsight bias, then, explains our relevant factual beliefs, which – when plugged into the control condition – yield the luck-sensitive judgments. But, of course, these factual beliefs are false, and so are the normative judgments based on them.
Domsky likewise makes use of cognitive biases, but in a different way. He argues that our adherence to the moral-luck-intuitions can be explained in terms of the selfish and the optimistic biases. Given the selfish bias, we tend “to select and uphold moral theories and beliefs according to how they stand to benefit [us]” (455). Given the optimistic bias, just as well-documented, we tend “to make unrealistically low estimates of [our] relative likelihood of experiencing unlucky outcomes” (457). So a thinker who is both selfishly and optimistically biased is very likely to believe in moral luck: Given the optimistic bias, she is likely to believe that such a belief will benefit her compared to others, because others are more likely to suffer bad luck (and incur discredit for it) than she is; and given the selfish bias she is likely to have this (supposedly) self-benefiting belief. Thus, when we believe that there is moral luck, we do so “for reasons we are not even aware of and would never consciously accept or tolerate” (446). Our reasons for believing that there is moral luck “were never any good” (447). And as Domsky’s title suggests, realizing that this is so is the way to finally solve the problem of moral luck, establishing that there can be no such thing.
In this paper, we want to raise worries of two different kinds regarding such attempts to settle the problem of moral luck by reference to empirical studies regarding cognitive biases. The first family of worries is internal to the project: we give reasons (in sections 1.1-1.3) to doubt that such attempts – in particular, the ones by Royzman and Kumar and by Domsky – succeed on their own terms, as empirically respectable explanations of our relevant intuitions, and we raise (in section 1.4) an original methodological worry about the force of explanations in terms of cognitive biases that generate predictions by extrapolating different biases and conjoining them in a single explanation. The second family of worries is more external: we argue that even if such explanations of our moral-luck intuitions can be made to work, it remains unclear what the philosophical significance of such explanations is.
Do the Explanations Work?
In this section we raise doubts about the suggested psychological explanations of the moral-luck intuitions. Those in sections 1.1 and 1.2 are clearly empirical, and they apply both to Domsky's and to Royzman and Kumar's suggested explanations. The worry in section 1.3, also empirical, applies only to Domsky's explanation. And the worry in section 1.4 is best seen as a more general methodological worry, applying both to Domsky's and to Royzman and Kumar's explanations, and indeed more generally to explanations utilizing cognitive biases.
Which Explanation, Then?
The first suspicion relevant here comes from the fact that the two papers – whose shared methodological commitments will remain unquestioned until section 2 below – offer very different psychological explanations of the moral-luck intuitions. Not only does it seem unlikely that both explanations are fundamental, that both do fundamental explanatory work here, but – much worse – it is not even clear that the two explanations are consistent.
According to Royzman and Kumar, it will be remembered, we wholeheartedly and unqualifiedly accept the control condition on moral responsibility. The relevant cognitive bias only kicks in, they say, as the explanatory story underlying some of our factual beliefs to which we later apply the control condition. On their view, then, we are not being inconsistent in our adherence to the control condition and to the moral-luck-intuitions, given the mistaken factual beliefs for which the relevant cognitive bias is responsible. But if Domsky’s explanation is right, creatures (like ourselves) who are vulnerable to the selfish and optimistic biases are not expected to endorse the control condition in the first place4. That is because Domsky agrees – indeed, insists – that the control condition and the moral-luck intuitions are inconsistent, and that his cognitive-bias-explanation shows why we (unreflectively) accept the moral-luck-intuitions (which entail, on his view, the denial of the control condition). If both Royzman and Kumar’s story and Domsky’s story are equally supported by the empirical evidence, and if they are jointly inconsistent, we should presumably not be very confident in either. Of course, it is logically possible that different explanations apply to different people here: Perhaps John believes in moral luck because of Domsky-type biases, and Jane because of Royzman-Kumar ones5. There is no contradiction in such a speculation. But both competing explanations aspire to full (human) generality. Restricting their scope takes away from their plausibility. And restricting their scope without offering an explanation for such a restriction makes them, it seems, too implausible to believe.
Now, this is not directly a problem either for Royzman and Kumar or for Domsky, as each may want to argue – on empirical grounds, presumably – against the explanation offered by the other. And it may be possible to qualify one or both of these explanations so as to allow them not to be in tension with each other, and perhaps also to complete each other in some way. It’s just that as things stand, there is here some cause for concern, indeed some reason to be suspicious about the purported findings and their implications.
Royzman and Kumar (332) explicitly restrict the scope of their explanation to the intuitions supposedly supporting the existence of resultant moral luck, or moral luck in consequences, of the kind exemplified by the two-negligent-drivers example. Domsky (448) likewise restricts the scope of his explanation to negligent behavior. These restrictions of the scope of the explanandum, we now want to argue, cast doubt on the adequacy of the suggested explanans.
First, the restriction to just moral luck in consequences seems entirely ad hoc6. One of the other kinds of moral luck Nagel7 identifies, for instance, is circumstantial moral luck, luck in the morally relevant circumstances we find ourselves in, or in the moral tests we have to undergo. In cases of resultant moral luck, luck intervenes (as it were) causally after the agent's action, influencing8 the consequences of the action. In cases of circumstantial moral luck, however, luck intervenes causally before the agent's action, determining whether an action is even performed. Consider, for instance, Judith Thomson’s9 example of the two corrupt judges, both equally willing to take a bribe, except only one of whom is actually offered a bribe. Adherence to the control condition on moral responsibility entails that we should judge these two judges – the one who took the bribe, and the one who would have, given an opportunity – alike. But, as Nagel powerfully puts it, “here again, morality is at the mercy of fate”, as we cannot resist the inclination to “judge people for what they actually do or fail to do, not just for what they would have done if circumstances had been different.”10
The problem of moral luck seems to be the very same problem whether it is luck in consequences or in circumstances, and is typically so treated in the literature. For all we’ve been told, the empirical psychological explanations of our moral-luck-intuitions only apply to those relevant to moral luck in consequences. But what we are theoretically after is a unified explanation of the relevant intuitions, or a unified solution to the problem of moral luck. At the very least, such a unified solution is to be preferred on methodological grounds to less unified solutions, absent some story of why it is that – appearances to the contrary notwithstanding – resultant and circumstantial moral luck are sufficiently different theoretically to justify very different treatment. And this means that the suggested solutions we have before us do not score as highly on the list of theoretical virtues as their authors seem to suggest.
Of course, a restriction of the scope of the explanandum would not be ad hoc if an adequate rationale for the restriction was given11. But consider the rationales suggested – very quickly – by the proponents of the suggested solutions. Royzman and Kumar say – in a footnote (332) – that other kinds of moral luck “have no bearing on judgments of accountability” and are thus irrelevant to the control condition, which is “meant to regulate only interpersonal judgments of responsibility”. But the two-potentially-corrupt-judges example refutes this claim, as clearly we will tend to hold the judge who actually took the bribe interpersonally accountable in a way that the other judge is not.
Domsky’s (448) suggested rationale for the restriction of the explanandum to cases of negligence is that only there do our moral-luck intuitions come into play. Again, examples of circumstantial moral luck refute this claim. And note also how Domsky’s restriction of the scope of the explanandum can work as a double-edged sword. For Domsky (454) puts this point to use when rejecting Harman’s (sketchy) suggestion, according to which moral-luck intuitions are the result of another bias, the fundamental attribution error, a tendency we all seem to share to overlook the background circumstances of the case while attributing the actual outcome to the (seemingly) bad character of the agent. Domsky claims that this explanation is unconvincing because it cannot distinguish between negligent and non-negligent behavior. If moral-luck intuitions stem from the attribution error, argues Domsky, one would expect these intuitions to arise also in non-negligent contexts. But, Domsky argues, they do not.
Surprisingly, though, a very similar criticism can be applied to Domsky’s own suggested explanation. Given the empirical data on over-optimism, if the optimistic bias (together with the selfish bias) is what explains our moral luck intuitions, it is utterly mysterious why they do not arise with regard to non-negligent actions. Studies demonstrating the optimistic bias show that individuals believe their risk of facing bad outcomes is substantially smaller than the risk their counterparts face. Crucially, this conviction holds both with respect to bad outcomes that result from negligent behavior and with respect to bad outcomes that do not, bad outcomes that are purely “bad luck.” For example, whether asked about their chance of negligently causing a car accident, or about their chance of driving safely and faultlessly causing an accident, individuals consistently believe that the risks they face of both kinds are smaller than those the average person faces.12 Likewise, individuals perceive their chances of suffering illness – whether illness that typically results from inadvertent practices13 (sexually transmitted diseases, skin cancer) or illness that is typically uncontrollable14 (breast cancer, hearing deficiencies) – as substantially lower than those of their counterparts.15 As it appears, we tend to believe that bad outcomes, whatever their source, are unlikely to occur to us. But if so, why is it that, as Domsky argues, the moral-luck intuitions do not apply outside the context of negligent behavior? Both with regard to negligent and non-negligent behavior, we seem to expect that others are likely to see their risks materialize, while we will miraculously evade unfortunate results. Given that over-optimism does not distinguish between negligent and non-negligent behavior, how can Domsky’s explanation?
The problem for Domsky is, then, that his suggested explanation doesn't fit the scope of what he himself takes to be the relevant explanandum (moral-luck intuitions in cases of negligence only).16
Domsky may consider revising his official view of the relevant explanandum, conceding that moral-luck intuitions do apply – though perhaps not as forcefully17 – to some cases of non-negligent behavior as well. Such revision would be neither out of line with the relevant literature, nor entirely implausible: Arguably, we do intuitively think that there is an important moral difference between a driver who killed and one who didn't, even when neither was at all negligent. If Domsky revises his view in this way, the objection above would no longer apply. But then, of course, neither would Domsky’s objection to the explanation in terms of the fundamental attribution error. Domsky cannot have it both ways.
How Do We (Think that We) Benefit from Moral Luck?
Domsky argues that – here as elsewhere – our moral convictions are designed so as to favor us. Driven by subconscious selfish motives, we adhere to moral rules that privilege us at the expense of others. As an example of this general phenomenon, Domsky uses studies showing that the rich generally favor lower taxes, the poor support a higher minimum wage, and so on (457). But while it is rather clear how the rich and the poor benefit from – or at least believe that they benefit from – lower taxes and a higher minimum wage (respectively), it is not at all as clear that our moral-luck intuitions benefit us, nor is it clear that they seem to benefit us (given other things we believe, perhaps irrationally).
Suppose that I am optimistically biased. Then I will believe that others are more likely than I am to inflict harm. But if so, I should think that the reality of moral luck and a common belief in moral luck would provide an incentive for others to act prudently. I should expect that those surrounding me, looking to avoid moral condemnation, will behave in un-risky (or at least less risky) ways, thus decreasing my chances of being harmed by others. But this explanation is incompatible with the true nature of the optimistic bias. As noted, the research concerning the optimistic bias indicates that individuals are not only optimistic with respect to how likely they are to inflict harm but also with respect to how likely they are to suffer injury. If individuals indeed believe they are not likely to be harmed by others, it is unclear how they can see moral luck as serving their interests. In fact, given that the risk of being injured is borne (so we all seem to think) by others, moral luck seems to serve the interests of others, and so our moral-luck intuitions seem to have an altruistic flavor: if anything, they serve to augment the protection of others, that is, of those likely to suffer injury18. At the very least, this consideration offsets the effect of the selfish bias Domsky tries to utilize in his explanation of the moral-luck intuitions.19
Biases, Extrapolation, and Consistency
Both suggested explanations of our moral-luck intuitions attempt to make use of cognitive biases that are well-documented in other contexts and to extrapolate them into the moral-luck context. But it is not completely clear that such extrapolation is acceptable.
Though this point applies equally to Royzman and Kumar’s explanation in terms of the hindsight bias, it can be introduced more clearly using Domsky’s explanation. This explanation, to repeat, goes something like this: A thinker who is selfishly biased will tend “to select and uphold moral theories and beliefs according to how they stand to benefit” (455) one. A thinker who is optimistically biased will tend “to make unrealistically low estimates of one’s relative likelihood of experiencing unlucky outcomes” (457). So, as explained above, a thinker who is both selfishly and optimistically biased is likely to believe that belief in moral luck will benefit her compared to others, and so is likely to have this self-benefiting belief.
Domsky’s reasoning, then, relies on a conjunction of cognitive biases: Domsky claims that the combined effect of the selfish and optimistic bias is (in our case, at least) the tendency to believe what logically follows from the deliverances of each bias separately. And though this may very well be true, it may also be false. For it is an empirical, not a logical question how biases interact. And it is not at all clear that they interact in a logically respectable way.
Indeed, it is hard to see why they should be expected to interact in a logically respectable way, so that people will tend to believe what follows from conjoining the deliverances of several distinct biases. In many contexts it makes sense, perhaps, to assume that a thinker believes what immediately follows from other things she believes, that thinkers do not believe in (fairly transparent) contradictions, that they reason in systematic ways, and so on. But surely these are not the kinds of assumptions we can safely make while studying cognitive biases, mechanisms that inhibit rational, consistent, thinking. The hidden assumption that Domsky relies on – that (in our case at least) these biases interact in a logically respectable way – is at best an empirical hypothesis, not an a priori truth. And it needs to be supported by empirical evidence.
For a closely related problem in Domsky’s reasoning, consider his explanation of agent-regret. Agent regret is that special feeling – somewhere in-between remorse and regret – that an agent often does, and arguably should, feel when his action brings about a bad outcome. Agent regret seems to be roughly proportionate to how bad the outcome is and not to how responsible the agent is for bringing it about, but can nevertheless only be felt by the agent and not by spectators (who can equally regret the loss to the victim). And, of course, agent regret plays a central role in the literature on moral luck. Domsky explains (463) that given our optimistic bias, we believe that we have a special talent to control luck. If we have this talent, and nevertheless our action results in harmful effects as a matter of luck, this means that we have failed to exercise our talent in the appropriate way. And if so, we should indeed feel guilty.
Now, this explanation is both creative and intuitively convincing, but it is nevertheless not empirically grounded. For it is an empirical question whether the optimistic bias extrapolates in the way Domsky here assumes. And one cannot simply assume that we are all completely consistent in our succumbing to the optimistic bias, given that this bias itself (as well as others) sets constraints on the scope of our rationality and consistency. This assumption – that people will tend to be consistent both in succumbing to the relevant biases, and in believing what follows from conjoining their deliverances – is an assumption that calls for empirical support, not a priori extrapolation. Indeed, so long as we are engaged in a priori speculation about the extrapolation of psychological biases, it may be thought that Domsky’s explanation of agent-regret flies in the face of the selfish bias he so forcefully emphasizes elsewhere. For it is, after all, not in my interest to feel agent-regret.
Similar worries apply to the use Royzman and Kumar make of the hindsight bias. Well-documented though this bias is, it remains an empirical hypothesis that it nicely extrapolates to the context relevant to the explanation of our moral-luck-intuitions. (Unlike Domsky, though, Royzman and Kumar (342) explicitly agree that theirs is an empirical hypothesis, in need of further empirical support.)
Let us not overstate the point, which we want to offer here in a somewhat tentative tone. The objection here does not show that the kind of explanation Royzman and Kumar and Domsky are after cannot be supplied. It only shows that they haven’t yet supplied it, because the suggested explanations rely on conjoining and extrapolating biases in an empirically suspect way. Perhaps further empirical research can fill this gap. But there is no way of bypassing the need for such research by just assuming that we can safely conjoin biases, or that people are consistent in their inconsistencies and irrationalities.
On the Limited Force of Debunking Explanations
Assume these problems away. Suppose, in other words, that Royzman and Kumar’s explanation, or perhaps Domsky’s, or perhaps some combination of the two is perfectly adequate, so that we accept the moral luck intuitions because of the effect of some cognitive bias or other. What would be the philosophical implications of such findings?
It may be tempting to characterize Royzman and Kumar's and Domsky's reasoning here as an instance of the genetic fallacy, and rest there. They do, after all, argue from premises about the historical or causal sources of some beliefs of ours to their falsity. But this temptation should be (to a large extent, at least) resisted. For surely, sometimes offering debunking explanations of competing intuitions is of significant philosophical value20. What is needed, rather, is a more nuanced understanding of the philosophical significance of debunking explanations21.
In thinking about this general question, we find it useful to start with an example that Nagel brings in another context22. Suppose you find out that you only came to believe that 2 + 2 = 4 because you were in love with your second-grade arithmetic teacher, eager to believe anything she said. And suppose further that now you only believe that 2 + 2 = 4 because ever since first coming to believe that this is so, you never questioned this belief. Having found out that this is the causal story underlying your belief that 2 + 2 = 4, what should you believe now?
The first thing to note is that the presence of a debunking explanation of a belief – an explanation that presents the belief in a bad epistemic light, or that counts against treating the belief as justified – is perfectly consistent with the belief being true (this point, we take it, is just what makes the genetic fallacy a fallacy). Even if you only believe that 2 + 2 = 4 because you were in love with your teacher, it is still true that 2 + 2 = 4. Perhaps your belief to that effect is not justified. But as the example shows quite conclusively, it may nevertheless be true.
So even if a successful debunking explanation of our moral-luck-intuitions has been presented, they have not yet been shown to be false. For all that has been so far shown, it is quite possible that we hold such beliefs because, say, we are selfishly and optimistically biased, but that these beliefs are still true, so that there is moral luck23.
So perhaps the debunking-explanation is better understood in some other way. True, a debunking explanation of a belief does not entail its falsehood, but it does – it may be claimed – undermine its evidential force24. Upon having learned that you only believe that 2 + 2 = 4 because you were in love with your teacher, the strength of your intuition that 2 + 2 = 4 no longer has any evidential force, it no longer serves to justify your belief that 2 + 2 = 4. You now have to question that belief, and try to find other evidence for (or against) it. Analogously, if some debunking explanation of our moral-luck-intuitions succeeds, then they lose their evidential force, and they can no longer justify the belief that there is moral luck. We must now question that belief, and try to find other evidence for (or against) that belief.
This may be so. Certainly, if a debunking explanation of our moral-luck-intuitions works, we should at least be suspicious of these intuitions. But let us make the following three points in reply to this line of thought.
First, as Nagel’s example shows (and as Nagel himself insists), it is not even clear that a debunking explanation of an intuitive belief always undermines its evidential force. Having learned about the origin of your belief that 2 + 2 = 4, you decide to think hard about the matter. But however you try to question it, it just seems unquestionable to you. It just seems to you that 2 + 2 = 4, indeed that 2 + 2 could not have failed to be, precisely, 4. Aren’t you then entitled to take that as your reason for believing that 2 + 2 = 4? Won’t you then be justified in so believing? If this does suffice for justification25, it shows that at least sometimes a debunking explanation of an intuitive belief does not undermine its evidential force. And it may be argued that the same applies to the case of moral luck: Perhaps, say, Royzman and Kumar are right about why it is that we have those moral-luck-intuitions. But if now, thinking about the matter seriously and open-mindedly, we still cannot but believe that, say, the negligent driver who kills is more blameworthy than the negligent driver who does not, or that the former should feel agent-regret of a kind or intensity that the latter should not, perhaps this suffices for a prima facie justified belief in moral luck. Perhaps, in other words, debunking explanations of the kind suggested by Royzman and Kumar or by Domsky are strictly speaking philosophically irrelevant.
Second, it is important to notice that both sides to the moral luck debate can play the debunking-explanation game (a point neither Royzman and Kumar nor Domsky even mention). It is, after all, quite possible that our intuitive belief in the control condition and in the denial of moral luck originates from some cognitive bias or other, or perhaps is in another way subject to a debunking explanation26. Indeed, it is quite possible that both our intuitive belief in the control condition and our intuitive belief in moral luck can be given debunking explanations. What would be needed in order to win the debunking-explanation game is not just a debunking explanation of the moral-luck-intuitions, but also the claim that it is a better, more debunking, explanation than whatever explanations are available to those trying to support the moral-luck-thesis by explaining away our intuitive adherence to the control condition. And we have so far been given no reason to believe that this is so.
Third, all this shows that the debunking-explanation game is not – indeed, cannot be – the only game in town. In order to find out whether moral luck is real, it just is not enough to engage in the explanatory project Royzman and Kumar and Domsky are engaged in27. It is also required to do the philosophical work28, to try to pursue the philosophical implications of affirming or denying moral luck, to distinguish genuine moral luck from similar phenomena that may be confused with it by those affirming or denying moral luck, and so on. Now, Domsky is highly critical of the philosophical texts he surveys here, and some of his criticism is certainly justified. But that some philosophical attempts to deny moral luck are not wholehearted and perhaps also suffer from other flaws is no reason to abandon that project. It may instead be a reason to try to engage in that project in a better way29.
Pace Domsky, then, we do not think that the problem of moral luck has been finally solved. It is not clear that the empirical explanations of our moral-luck-intuitions offered by Royzman and Kumar and by Domsky are explanatorily adequate, and even if they are, they cannot do all the philosophical work that needs to be done here.
But none of this means that attempts at offering debunking explanations of our moral-luck-intuitions do not – much less cannot – advance the moral luck debate. For, as already noted, in this debate powerful intuitions are to be found on both sides. And it seems clear that at the end of the day we are going to have to discard one family of intuitions, paying considerable intuitive price whichever way we go. It is then that debunking explanations of our moral-luck-intuitions – if they can be made to work – have significant philosophical value. Suppose you found out that one of your firmly held beliefs is inconsistent with something else you firmly believe. In such a case, realizing that a debunking explanation of one of these intuitive beliefs can be supplied may be important: It may make the intuitive price of discarding this intuitive belief lower, perhaps to the point of acceptability. In our context, debunking explanations of our moral-luck-intuitions can play such a role, making the intuitive price of consistently and uncompromisingly adhering to the control condition acceptably low. In this way, then, and together with rather than instead of philosophical argumentation, the study of relevant cognitive biases may yet advance the moral luck debate.