Unmeasurable risks should still be mitigated – 9/11 proves
(Richard A. Posner, Judge on the US Court of Appeals for the 7th Circuit, Catastrophe: Risk and Response, 2004, pg. 171-2)
We know that people sometimes overreact, from a statistical standpoint, to a slight risk because it is associated with a particularly vivid, attention-seizing event. The 9/11 attacks have been offered as an illustration of this phenomenon.66 But to describe a reaction to a risk as an overreaction is to assume that the risk is slighter than people thought, and this presupposes an ability to quantify the risk, however crudely. We do not have that ability with respect to terrorist attacks. About all that can be said with any confidence about 9/11 is that if the United States and other nations had done nothing in the wake of the attacks to reduce the probability of a recurrence, the risk of further attacks would probably have been great, although we do not know enough about terrorist plans and mentalities to be certain, let alone to know how great. After we took defensive measures, the risk of further large-scale attacks on the U.S. mainland fell. But no one knows by how much, and anyway it would be a mistake to dismiss a risk merely because it cannot be quantified and therefore may be small, for it may be great instead. Unfortunately the ability to quantify a risk has no necessary connection to its magnitude. We now know that the risk of a successful terrorist attack on the United States in the summer of 2001 was great, yet the risk could not have been estimated without an amount and quality of data that probably could not have been assembled. To assume that risks can be ignored if they cannot be measured is a head-in-the-sand response. This point is illuminated by the old distinction between "risk" and "uncertainty," where the former refers to a probability that can be estimated, whether on the basis of observed frequency or of theory, and the latter to a probability that cannot be estimated. Uncertainty in this sense does not, as one might expect, paralyze decision making.
We could not function without making decisions in the face of uncertainty. We do that all the time by assigning, usually implicitly, an intuitive probability (what statisticians call a "subjective" probability) to the uncertain event. But it is one thing to act, and another to establish the need to act by conducting fruitful cost-benefit analyses, or using other rational decision-making methods, when the costs or benefits (or both) are uncertain because they are probabilistic and the probabilities are not quantifiable, even approximately. The difficulty is acute in some insurance markets. Insurers determine insurance premiums on the basis of either experience rating, which is to say an estimate of risk based on the frequency of previous losses by the insured or the class of insureds, or exposure risk, which involves estimating risk on the basis of theory or, more commonly, a combination of theory and limited experience (there may be some history of losses, but too thin a one to be statistically significant). If a risk cannot be determined by either method, there is uncertainty in the risk-versus-uncertainty sense; and only a gambler, treating uncertainty as a situation of extreme and unknowable variance in possible outcomes, will write insurance when a risk cannot be estimated. Or the government, as with the Terrorism Risk Insurance Act of 2002,67 which requires insurance companies to offer coverage of business property and casualty losses due to terrorism but with the federal government picking up most of the tab.68 The act excludes losses due to nuclear, chemical, or biological attacks, however, so it has limited relevance to the concerns of this book. Insurance companies are permitted to decline to cover such losses, and typically they do. As a result, the probability of such losses cannot be reliably estimated from insurance premium rates.