This book is presented as a pessimist’s primer, full of circumstantial evidence for the vanity of human wishes. It offers a portfolio of sharp blows to the back of the head, as good intentions boomerang. Dieting makes you fatter. Green washing-powder only replaces the algal blooms in the Adriatic Sea with mats of floating slime. But Edward Tenner’s book is dedicated, half-successfully, to subtler propositions about the contrariness of stuff. His argument, you could say, turns on the implicit difference between Sod’s Law (everything goes wrong) and Murphy’s Law (if something can go wrong, it will). While the first is vacuous – or a matter of the psychology which ensures we remember the times things go wrong and forget the times they don’t – the second is an engineer’s motto about the scope we allow to disaster. After a record-breaking run on a rocket sled at Edwards Air Force Base in the Forties, Captain Edward Murphy Jr discovered that all the recording instruments had been mounted backwards. The axiom he put in joke form had less to do with the human error involved than with the way the mechanical system had tolerated the error. The design of the gauges permitted them to be fitted wrong; the complexity of the system prevented the mistake from becoming apparent until results failed to appear. Why Things Bite Back, likewise, directs its attention towards the behaviour of systems, and our relationship with the particularly susceptible kind described as ‘tightly-coupled’, where the components act jostlingly on each other. It is about managing the unmanageable; it seeks intelligent responses to a world of malfunctioning cash dispensers.
Since about 1800, Tenner argues, we have been constructing devices that are elaborate enough to give us the same sort of problems that nature does. For example, ecosystems and software-driven telephone systems behave sufficiently alike to exhibit similar thwarting potential. In both cases ‘solutions’ – glorified guesses – are likely to set revenge effects in train. That’s because these ‘solutions’ depend on treating a problem as isolated, when in most conceivable biological situations and many technological ones, the ‘problem’ being addressed is a piece of a system that has a line drawn around it for clarity of analysis. Any intervention into a tightly-coupled system whose interactions are too numerous for you to track invites unintended consequences. If you poison a particular bug without knowing where the poison will go afterwards, or what effect the bug’s absence will have, something you don’t expect is almost certain to happen. The balance of probabilities is that way round – as it would be if you stuck a spanner into a running engine. The balance of probabilities reflects the balance of your knowledge and your ignorance. And very often it may not even be possible to alter that ratio of known to unknown, however much the absolute amount of your knowledge grows: some systems are simply not mechanistic in a way that allows their changing states to be predicted. The famous example is weather.
Fortunately, as Tenner points out, it is not often the case that unintended consequences work directly against our interventions. We can intervene successfully and powerfully, as long as we measure success within the same box that has delimited the problem. Our guesses are potent. We can eliminate smallpox; walk on the moon; divert rivers to water Los Angeles. The Promethean force of our technologies is not an illusion. When presented with acute difficulties, in particular, the urgency and the unambiguous definition of the need spur us to an equally narrowly focused reply. For this reason, disasters and wars accelerate innovation; for this reason, airliners tend to be maintained well even in poor countries with rackety road networks. You can neglect to fill a pothole for years and there will still be a road (sort of). But you can’t sort of keep a 747 safely in the air. Though the task of maintenance has to be repeated over and over, it is an acute task each time, discrete as the failure it wards off. If solved problems like these bite back, it’s not because the solution itself comes unstuck. Instead the unplanned effect is the transformation of an acute problem into a diffuse difficulty of the kind that (like a Third World road system) does not respond to a single technological push. A chronic problem replaces the original one. For example, in the late Seventies the experience of high-speed medical evacuation in Vietnam, combined with a federal programme of action, radically reduced the number of people who died of serious head injuries before reaching American hospitals. Consequently a very much larger number of people with severe brain damage now need day-to-day care and support. Traffic congestion is a chronic problem; and so is repetitive strain injury for screen workers; and so is bribe-taking. None can be detached easily from the whole behaviour of the systems they affect, and by the boldness of our attacks on acute targets we can actually multiply the chronic conditions that we have to contend with.
The shift from acute to chronic is one of the main patterns Tenner extricates from the snakes’ nest of different baulkinesses. The general class of ‘revenge effects’ sub-divides into rearranging, repeating, recomplicating, regenerating and recongesting effects. The illustrations of these make a madly assorted mosaic. (Did you know that Japanese baseball fans savour the high ping of a ball striking an all-aluminium bat?) What Tenner cannot do, though, is identify a general mechanism for revenge effects, one that spans biological and technological instances and so assures us absolutely that they are the same phenomenon.
This difficulty emerges clearly in the three chapters Tenner devotes to the perversities of pest control. He considers the example of resistant pests, ‘superpests’, which appear a measurable amount of time after a new pesticide is introduced in agriculture. Here the mechanism is perfectly plain. A human action – spraying pesticide, say – effectively sets terms for natural selection to produce an organism which can gain the immense advantage of thwarting that human action. A beetle that alone among its competitors can survive the pesticide on the crop can treat the neatly sown rows as a meal set exclusively for it. The same mechanism applies when antibiotics select for micro-organisms resistant to them. But when we turn to the world of non-living things, there’s nothing in our own technology that has an evolutionary interest in thwarting us – no reproductive prize for malfunctioning microcode. A human intention is no longer encountering a separate actor with a separate programme to pursue. Nor – the other possibility – does Tenner believe that we encounter, in technology, a stable object which defies our attempts to alter it by virtue of some sort of inertia. He doesn’t endorse the various theories of a self-compensating effect in human interactions with technology, such as the Conservation of Catastrophe, which suggests that things go wrong about as often no matter what changes, or ‘risk homeostasis’, which argues that we habitually maintain risk at a steady level by, for example, driving safer cars faster. Technology is demonstrably more open-ended, less restricted to a fixed repertoire of causes and effects. For that reason, we must not subscribe to a Newtonian model of revenge effects according to which we’d press against a rubber wall of problems in one place, and it would press back equally elsewhere.
Categorisations are a variety of explanation, so Tenner’s enjoyable distribution of revenge into different classes does do some explanatory work: the ‘Why’ of his title is not entirely unearned. But in the technological division of his zoo, where nothing about a phone system stuck in a software loop means to thwart us in the sense that pests mean to be eating wheat (though they don’t care that it was wanted for bread), we seem to be left with systems indifferently exhibiting some behaviour which we happen to find troublesome. Here, the decisive fact is our judgment that we have been acted against. Tenner is, for example, tellingly fuzzy about the important difference between revenge effects and side effects. A side effect, he says, happens when an action works and produces a further unrelated consequence. A drug cures leukaemia, but causes hair loss. A revenge effect, on the other hand, comes about when the successful action sets off a chain of causation, usually more than one cycle long, that ultimately works against the original success. The distinction between these depends on how you frame the process you’re looking at, and in particular on how you name the significant intention. Which kind of effect is it – to take one of Tenner’s examples – when the application of technology to sporting equipment and sports medicine produces more boring sport? It depends on whether you conceive the intention of innovation to have been to make sport more interesting. And even when you can construct a satisfactory model – of, say, TV coverage of daring skiing making the skiing look easy, an undoubted revenge effect – it isn’t possible to pursue the pattern into the detail of the chain of events. Break down the effect into stages, and it vanishes. The ski-run appears tame because the camera angle has flattened out the slope, but the choice of camera angle was not itself dictated by some aspect of the wish to film thrilling downhill action. This makes Tenner’s thesis fundamentally unlike, for instance, Chaos Theory, in which disorderly and unassimilable events are shown to have a fine structure that repeats at all scales of magnification, because they are generated from comparatively simple algorithms. Tenner can offer some categories or principles which in that sense uncover a structure in revenge effects, but he discovers no systems algorithm as their generator.
He is not, in any case, a mathematical thinker, but someone writing from the crossroads where cybernetics, sociology and even management studies meet. This book is probably at its most charged in its bearing on economic wishful thinking. The idea that the world is certain to outflank our attempts to control it feeds both the doctrine and the emotional climate of free-market activism. Free-market ideologues like the idea of a world too swirly to be planned, independently evolving its behaviour at a million points of separate activity. They rely on a gut sense that the market operates like a nature that cannot be resisted; they relish the thought that a subsidiser of a nationalised industry is a King Canute, telling the sea to go back. Or they think they do; in fact they’re engaging in magical thinking, according to which nothing can be made to happen by a government but almost anything can be achieved by private human ingenuity.
There is a functional distinction to be drawn between an effort – like pushing water uphill – to compel things and persons into an order irrespective of their natures, and the enterprise fundamental to us as social beings, of arranging actions and objects in the world for benefit. But there is no reason outside magical thinking why the line between those two activities should follow exactly the line between public and private. Tenner writes from an American intellectual tradition that lends the market a certain axiomatic authority; but his acknowledgment that chronic problems don’t offer a local incentive for a solution, his recommendation of finesse and low intensity where bite-backs are possible, and his observation that the future will not oblige by providing anyone’s favoured brand of difficulties, all disrupt any tidy exemption of free enterprise from the vengefulness of things. This doesn’t add up to a picture of that comfortable world ruled by the Invisible Hand, in which all things work together for those who love gold.