Two months after the terrorist attacks of 11 September 2001, Dick Cheney was told about a meeting that Osama bin Laden and Ayman al-Zawahiri had had a month before the attacks around a campfire in Kandahar with Sultan Bashiruddin Mahmood, a past chairman of Pakistan’s atomic energy commission. Cheney’s intelligence advisers began speculating about the probability that Mahmood was offering to assist al-Qaida in obtaining information that might enable them to construct or acquire a nuclear device. Mahmood was a founding member of UTN (Umma Tameer-e-Nau, or ‘Islamic Reconstruction’). A Libyan informant told the CIA that UTN had approached Libya at one stage to ask whether Libya wanted their help in building nuclear weapons. Had UTN made a similar offer to al-Qaida? Or was there some other reason for the meeting?
We’re told in Ron Suskind’s The One Per Cent Doctrine (2006) that Cheney cut through all the talk about indications and probabilities: ‘If there’s a one per cent chance that Pakistani scientists are helping al-Qaida build or develop a nuclear weapon, we have to treat it as a certainty in terms of our response . . . It’s not about our analysis, or finding a preponderance of evidence. It’s about our response.’
The One Per Cent Doctrine: it’s a striking methodology and a liberating one, and many people think it’s the only way to respond to the threat of low-probability, high-impact events. With it, the endless evidence-gathering and analysis that characterises traditional intelligence policy gives way to clarity. Nothing any longer needs to be conditional. We no longer say, ‘If X has happened, then we need to do Y,’ with all our effort being devoted to finding out whether X has in fact happened or (in an uncertain world) what its probability is. Instead we say, ‘If there is the smallest significant chance that X has happened, then we have no choice but to do Y.’ If X may lead to a catastrophe that must be avoided at all costs (like a nuclear attack on an American city), then we need to swing into action immediately and do Y. No further questions.
Think about the real-world consequences of focusing in this way on the worst-case scenario. After 9/11, the United States acquired (or believed it acquired) a significant amount of information about terrorist intentions by torturing captured al-Qaida leaders such as Khalid Sheikh Mohammed and Abu Zubaydah. (The treatment of Zubaydah in 2002 was so bad that the CIA director of operations at the time ordered that the videotapes of his interrogation be destroyed.) Was the use of torture justified? Many opponents of torture make a special exception for what they call ‘ticking bomb cases’: if a detainee knows the whereabouts of a nuclear device, set to go off in London or New York, and the only way he can be induced to reveal the location of the device, so that we can disarm it and save hundreds of thousands of lives, is by waterboarding him, surely, they say, torture in this case is permitted (perhaps even morally required for the sake of the people who will be saved), abominable as it may be in all other circumstances.
Ticking bomb cases are mostly imaginary – hypothetical cases in a law-school classroom or fantasies on TV – and the scenarios they imagine are open to criticism on all sorts of grounds, not least the fanciful assumption that we could know in advance (without the omniscience of television or classroom simulation) that a nuclear device has been planted, that this man knows where it is and that we can get to the device in time if only he can be induced to talk. How likely is all of that? With Cheney’s doctrine, it doesn’t matter. If our knowing that this device exists and that this man knows its location would justify torture, then for the purposes of response, not analysis, the smallest significant probability that this device exists and this man is aware of its location is to be treated as tantamount to certainty. The One Per Cent Doctrine justifies torture. (That’s one of the reasons some of us refuse to play along with the ticking bomb hypotheticals.)
Here’s another example of the way the One Per Cent Doctrine affects real-world decision-making. After we invaded Iraq, it turned out that Saddam Hussein did not have any weapons of mass destruction in his possession and did not seem to have been working on a WMD programme for a considerable time. Some American officials and politicians had lied when they said, before the war, that they were convinced he had weapons of mass destruction. They wanted to make a case for war, and they thought this argument (far-fetched as it was) would work with the public; they believed the public would find it harder to understand or accept the geopolitical premise on which the war was really based: to teach the world a lesson, to show what happens when a regime in an area of economic or strategic interest defies the United States. Others, however, sincerely believed it was possible that Iraq had weapons of mass destruction, though they were by no means certain. Of those who believed it was possible, very few would have estimated the probability as low as one per cent. They might have said that in at least one in three cases like this (think about Iraq, Iran and North Korea), or one in five, it will turn out that the rogue regime in question really does have or is developing weapons of mass destruction. Clearly, there was a not insignificant probability. And that, for the Cheney doctrine, is enough. Even if the probability of Iraq’s developing nuclear weapons or the capability to weaponise chemical or biological agents that could be unleashed in a crowded Western city is as low as one per cent, the event in question promises catastrophe, and we have no choice – so the doctrine runs – but to act to avert that worst-case scenario.
It’s not hard to see how Cheney’s methodology can run out of control and become the basis of an undifferentiated paranoia. In The One Per Cent Doctrine, Suskind introduces us to steganography. After al-Qaida became aware that US computers were scrutinising the billions of international emails, phone calls and wire transfers in the world for words and phrases like Osama, blow up, or a thousand virgins in paradise, they started embedding information about their plots in numerical matrices hidden in other messages (in the pixels of an electronically transmitted photograph, for example). They would use these means to pass on the geographical co-ordinates of the Statue of Liberty, say, or Grand Central Station, and steganographers were able to extract them. Or so the US anti-terrorist authorities suspected. Improbable, perhaps, but on the One Per Cent Doctrine, we don’t waste time with probabilities. If it’s possible, we act on it. And so Grand Central Station was flooded with soldiers at various times in 2003 and the public threat-level indicator flared up regularly from yellow to orange. It turns out that there was about as much to the suspected steganography as there was to the numerical patterns that Russell Crowe’s character, the mathematician John Nash, saw in magazine articles and Dow Jones reports in A Beautiful Mind. Nash in his illness started seeing the patterns and the threats they conveyed everywhere. He had no reality check, no filter and no way of ordering priorities. Like the American homeland security apparatus, he just had to race off in every direction at once.
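Suskind doesn’t spell out the mechanics, but the simplest version of the technique the analysts feared – least-significant-bit steganography – is easy to sketch. Here is a minimal illustration in Python; the toy ‘image’, the message and the co-ordinates are all invented for the example.

```python
def embed(pixels, message):
    """Hide each bit of `message` in the least significant bit of a pixel.

    `pixels` is a list of 0-255 integers (one colour channel of an image,
    say); overwriting the low bit of each leaves the picture looking
    unchanged to the eye.
    """
    bits = [(byte >> i) & 1 for byte in message.encode() for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError('image too small for message')
    return [(p & ~1) | b for p, b in zip(pixels, bits)] + pixels[len(bits):]

def extract(pixels, length):
    """Read `length` bytes back out of the low bits, most significant first."""
    bits = [p & 1 for p in pixels[:length * 8]]
    return bytes(
        sum(bit << (7 - i) for i, bit in enumerate(bits[j:j + 8]))
        for j in range(0, len(bits), 8)
    ).decode()

# A toy 'image' and a message of the kind the analysts feared:
image = list(range(256))                      # stand-in for real pixel data
coords = '40.7527,-73.9772'                   # Grand Central, roughly
stego = embed(image, coords)
assert extract(stego, len(coords)) == coords  # the low bits carry the message
```

The point of hiding the message in the low-order bits is that changing them alters each pixel’s brightness by at most one part in 256, invisibly – which is also why the technique is so hard to rule out, and so congenial to one per cent reasoning.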
Clearly someone needs to pause and think through the assumptions and strategies that adopting the One Per Cent Doctrine involves. Cass Sunstein has done this in Worst-Case Scenarios. Sunstein has written a lot on constitutional theory and judicial decision-making, and in recent years has turned his attention to descriptive and normative theories of human rationality. I think this is his 30th book. Sunstein writes engagingly, though in a way that scolds us a little for our irrational foibles; and he can illuminate very complex areas of rational choice theory – controversies about future discounting, for example (most of us prefer the certainty of $10,000 now to the certainty of a larger sum ten years hence, even adjusted for inflation), and commensurability (the assessment of such diverse consequences as monetary loss, moral loss and the loss of a zoological species in some common currency of analysis) – so that intelligent thought about decision-making in conditions of uncertainty is brought within reach of the sort of non-specialist reader who is likely to have a practical or political interest in these matters.
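The discounting controversy turns on a single formula: a sum F due in t years is worth F/(1+r)^t today at an annual discount rate r, so whether the ‘larger sum’ should beat $10,000 now depends entirely on the rate chosen. A quick sketch in Python (the five per cent rate and the $16,000 figure are my assumptions, purely for illustration):

```python
def present_value(future_sum, years, rate):
    """Value today of `future_sum` received `years` from now,
    discounted at an annual `rate` (0.05 means five per cent)."""
    return future_sum / (1 + rate) ** years

# At a 5% discount rate, even $16,000 ten years hence is worth less than
# $10,000 today, so preferring the $10,000 now need not be irrational.
print(round(present_value(16_000, 10, 0.05)))  # 9823
```

Which rate it is legitimate to apply, especially across generations, is exactly the sort of controversy Sunstein canvasses.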
Sunstein devotes a certain amount of time – not a lot – to Cheney’s One Per Cent Doctrine, as one instance of a species of decisional strategy that he calls Precautionary Principles. Precautionary Principles are principles of decision (‘response, not analysis’) that require us to focus on worst-case scenarios rather than deal neutrally with the probabilities attaching to all possible outcomes – catastrophic, bad, rather good and really lucrative – in some sort of generalised Cost-Benefit Analysis. And the threat of terrorists with weapons of mass destruction is only one problem to which the choice of Precautionary Principle v. Cost-Benefit Analysis might apply.
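The contrast can be put as a pair of decision rules. In the schematic Python below, the precautionary rule acts whenever the probability of catastrophe crosses a threshold, whatever acting costs; the cost-benefit rule weighs probability-discounted harm against that cost. All the numbers are invented for illustration; nothing in Suskind or Sunstein fixes them.

```python
def precautionary(p_catastrophe, threshold=0.01):
    """Cheney's rule: any probability at or above the threshold
    is treated as certainty, and we respond regardless of cost."""
    return p_catastrophe >= threshold

def cost_benefit(p_catastrophe, harm, cost_of_acting):
    """A neutral expected-value rule: act only when the expected
    harm averted exceeds the cost of acting."""
    return p_catastrophe * harm > cost_of_acting

# A 1% chance of a catastrophe costing 1,000 units, where acting costs 100:
p, harm, cost = 0.01, 1_000, 100
print(precautionary(p))             # True: respond, no further questions
print(cost_benefit(p, harm, cost))  # False: expected harm (10) < cost (100)
```

The whole argument between Cheney and his critics lives in the gap between those two return values.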
The problem on which Sunstein focuses most of his attention is climate change. We don’t fully understand the mechanisms that are converting carbon emissions into rising global temperatures, rising sea levels and changing weather patterns. In the last few years scientists have been unpleasantly surprised by the feedback mechanisms that seem to be leading to a greater or quicker loss of polar ice than the most plausible models had previously indicated. Apparently temperatures and sea levels are rising; but no one quite knows what to expect. It is possible, though most scientists think it unlikely, that temperatures will rise by ten degrees in the next hundred years; it is possible, though most scientists think it unlikely, that sea levels will rise by twenty feet, enough to inundate Bangladesh or much of the settled eastern seaboard of the United States (from Miami to Washington, DC, Boston, Philadelphia and much of New York). These outcomes would be plainly intolerable, but it would be a brave climate scientist who would say that their probability is so low that it can be dismissed out of hand. The question is how we are to approach them: with a Precautionary Principle or with some more open and neutral form of Cost-Benefit Analysis?
Sunstein is quite sceptical about Precautionary Principles, at least in a simple-minded form. The trouble with the One Per Cent Doctrine, for example, is that it does not say enough about the costs that may be involved in our response – that is, in our acting to avert the (perhaps) unlikely catastrophe. Many of the cases Sunstein considers are environmental, like global warming, or are in areas, such as pharmaceutical policy or the genetic modification of food sources, where government regulation seems to be called for. A new medicine has been invented, but terrible medium- and long-term consequences may attend its use, as happened with thalidomide. Or varieties of rice have been modified genetically to produce crops more resistant to pests; but there are frightening scenarios involving the impoverishment of the natural genetic stock and unexpected ecological implications. In these cases, we rely on regulatory regimes to investigate the consequences of the introduction of the new product. But the regulatory process takes time, and time may produce its own catastrophes. Many people believe, Sunstein says, that prohibitions on genetic modification or its over-regulation ‘might well result in many deaths’, presumably from hunger in developing countries which the stronger crops might have helped alleviate. Or the disease to which a new drug was addressed will continue to take its grisly toll while the laborious process of testing and regulatory approval lumbers on. That too is, in its way, a catastrophe. ‘Some evidence suggests,’ Sunstein writes, ‘that any expensive regulation will have adverse effects on life and health.’ The money spent on testing could have been spent on other aspects of health policy; jobs are lost (or not created) in the industries that are subject to expensive regulation; and so on.
Sunstein says he’s not endorsing these speculations, though he notes that they are supported by many studies, and he repeats the general thesis many times. The point is to alert us to ‘substitute risks’: ‘hazards that materialise, or are increased, as a result of regulation’. If governments take responsibility for avoiding catastrophic outcomes, they must also take responsibility for the catastrophes that attend their efforts at avoiding these outcomes, including other catastrophes that are not addressed because of the expense of addressing this one. We see this obviously enough in the case of Iraq: as a result of acting to avert the chance that Saddam Hussein had weapons of mass destruction which he might make available to terrorists, we brought about the deaths of tens or hundreds of thousands of Iraqis, sectarian violence and civil war which will probably tear the country apart, and an apparently unending and extremely expensive military occupation, leaving the United States unable to use armed force in support of the One Per Cent Doctrine anywhere else.
All this is easy enough to say when we are criticising George Bush or Dick Cheney. But Sunstein directs his logic against liberals as well. In an article published recently in the Stanford Law Review, he and Adrian Vermeule scold opponents of the death penalty for focusing only on the worst-case scenarios attending the infliction of capital punishment – the possible execution of the innocent – and ignoring the likely consequences of abolition, which (according to Sunstein and Vermeule) may be an increase in the murder rate. Some studies show, he says, that each execution in the United States deters 18 murders. A few months ago, the legislature of New Jersey abolished the death penalty; the implication of Sunstein’s analysis is that the government of New Jersey must now accept responsibility for the deaths of murder victims that keeping the death penalty might have averted. Maybe the legislators thought that they would rather be responsible for those innocent deaths than for the fewer and less probable innocent deaths (of people wrongly suspected of capital murder) that the death penalty in its worst-case scenario might bring about. Probably they didn’t think about it at all. A lot of Sunstein’s recent work has had this quality: scolding us for our self-righteousness and pointing out the human dimensions of various issues that we have failed to take rationally into account.
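Run the expected-lives arithmetic and the force of the Sunstein–Vermeule position (and of the reply to it) becomes plain. In the sketch below, only the deterrence ratio of eighteen to one comes from the studies they rely on; the number of executions and the wrongful-execution rate are invented for illustration.

```python
# Expected innocent deaths under each policy, on Sunstein and Vermeule's
# logic. Only the deterrence figure (18 murders averted per execution)
# comes from the studies they cite; the other numbers are assumptions.
executions_per_year = 10
deterred_per_execution = 18
p_wrongful_execution = 0.02   # assumed share of executions that kill an innocent

expected_deaths_if_abolished = executions_per_year * deterred_per_execution  # 180
expected_deaths_if_retained = executions_per_year * p_wrongful_execution     # 0.2

print(expected_deaths_if_abolished, expected_deaths_if_retained)
```

On numbers like these, the worst case of abolition dwarfs the worst case of retention – which is exactly the asymmetry abolitionists are accused of ignoring, and which stands or falls with the disputed deterrence studies.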
In Worst-Case Scenarios, the scolding tone becomes more unpleasant when Sunstein confronts the critics of the US refusal to ratify the Kyoto Protocol, aimed at reducing carbon emissions. Many of the critics, he says, come from countries where the likely effects of climate change will be very grave and where the costs of subscribing to the Kyoto carbon caps are quite low. The reverse is true in the United States: the costs (in terms of jobs and probably lives) of lowering the very significant level of carbon emissions are unacceptably high, and the bad effects of climate change will not be felt in the US so much as in other parts of the world. So Sunstein devotes a long second chapter to a defence of the American position. He acknowledges that it’s a self-interested calculation: only costs, benefits and catastrophes for Americans are considered. Sunstein understands that this sort of calculation may be morally inappropriate:
If one nation is imposing significant harms on citizens of another, it should not continue to do so even if, or because, a purely domestic analysis suggests that emissions reductions are not justified from the point of view of the nation that is imposing those harms . . . The emission of greenhouse gases could even be viewed as a kind of tort, producing damage for which emitters, and those who gained from their actions, ought to pay. For example, energy and gasoline prices in the United States have been far lower than they would have been if those prices had included an amount attributable to the increased risks from climate change – risks that threaten to impose devastating harm on people in other countries.
One would have thought that this dimension of worst-case analysis is all-important, and that Sunstein is just the person to explore systematically the difference that attention to the moral aspects of the distribution of costs and harms would make to the modes of analysis that he considers. He certainly never tries to hide the distributive dimension (as many economists or decision-theorists do). But his consideration of the moral aspects of the distribution across persons and nations of costs, benefits and catastrophes is perfunctory or, at best, sporadic. He concedes that Americans may have a special obligation to mitigate the harm they have caused. He points out several times, however, that poor people suffer too as a result of over-regulation. And he is reluctant to abandon a method of measuring losses by how much people would pay to avoid them, even though it is hopelessly flawed by the fact that poor people would pay less simply because they have less. (We measure the value of a life by asking how much people would pay to avoid its loss, under various scenarios. Now, as a matter of fact, a poor person will not pay $100,000 to avoid a 10 per cent chance of death from cancer, because the poor person has no access to $100,000; so a poor person’s life must be worth less than a million dollars; and so it is not clear how the government can justify imposing taxes for a scheme that spends many millions of dollars to avoid this sort of hazard. That’s the sort of argument this book is inclined to defend.) Sunstein asks:
Why should people be forced to pay an amount for regulation that exceeds their willingness to pay? People are making their own judgments about how much to spend to avoid various risks – and those judgments should be respected . . . To be sure, we might believe that a measure of redistribution is appropriate . . . But as a practical matter, regulation need not, and often does not, amount to a subsidy to those who benefit from it . . . When the government eliminates carcinogenic substances from the water supply, water companies do not bear the cost; it is passed on to consumers in the form of higher water bills.
Sunstein knows that matters are not as straightforward as this, and that the distributive issues that occasionally trouble him indicate deeper and more structural difficulties with the kinds of analysis he favours. Mostly he just observes that these questions are all very complicated and that he prefers to ‘return to simpler matters’, i.e. rational choice calculations uncontaminated by distributive complexities.
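The willingness-to-pay method he is reluctant to abandon rests on one line of arithmetic: if a person will pay at most W to remove a probability p of dying, the implied value of a statistical life is W/p. A sketch of how the distortion arises (the $100,000 and the ten per cent come from the cancer example above; the poor person’s $5,000 is my assumption):

```python
def value_of_statistical_life(willingness_to_pay, risk_reduction):
    """Implied value of a life: the most a person will pay, scaled up
    by the probability of death that the payment removes."""
    return willingness_to_pay / risk_reduction

# The cancer example: $100,000 to remove a 10% chance of death.
print(value_of_statistical_life(100_000, 0.10))  # 1000000.0

# A poor person who can raise at most $5,000 (an assumed figure)
# comes out with a life 'worth' a twentieth as much.
print(value_of_statistical_life(5_000, 0.10))    # 50000.0
```

The method prices lives by ability to pay, which is the structural difficulty that the distributive complaints keep pointing to.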
Sunstein illuminates a whole array of difficult and technical issues: the logic of irreversibility, the basis of low-level probabilistic calculations, the ‘social amplification’ of large single-event losses, different ways of taking into account effects on future generations and ways of thinking about the monetisation of disparate costs and benefits. So there is a considerable opportunity cost to the rest of us in his failure to devote more sustained attention to issues of rich and poor, advantaged and disadvantaged. Justice is out of fashion among rational choice theorists, and it is a pity that Worst-Case Scenarios does not fly in the face of fashion in a more determined way. It would have been a better book had it spent more time on the issues of distributive and corrective justice that attend the prevention of catastrophic harm.