He who would keep a secret must keep it a secret that he hath a secret to keep.
Sir Humphrey Appleby
What is the opposite of a secret? It can’t be something that everybody knows, since there’s nothing that’s known to everyone and all secrets are known to somebody. A secret is a bit of knowledge that certain people know and certain others are intended not to know. Information may want to be free – as Stewart Brand put it in the 1980s – but it takes a lot of effort to select the things to keep close, to guard and administer them, and, eventually, to thin out the stock of secrets and let some of them loose. Stores of secrets are supposed to be watertight, but historically they have leaked like sieves; nothing stays secret for ever; sometimes secrets get lost and nobody in charge knows where they are; and sometimes the keepers of secrets forget what the secrets are and why they were secret in the first place. Official secrets often have expiry dates because the reasons for keeping them eventually no longer pertain and continuing to keep them would cast a shade of illegitimacy over those secrets still locked away. And, as Sir Humphrey said, even knowing that there is a secret gives you some information about what it might be: you can infer something about the secret from the bits you are permitted to know. Secrets are never absolute; they’re never totally secure; and they’re never for ever.
We all have our secrets, and we resent it when people pry into them. Commercial secrets are kept to protect corporate profitability, and capitalist polities have long recognised their legitimacy and regulated the way they are handled. State secrets are different: their official rationale is the welfare of the nation. ‘Secrecy is the first essential in affairs of state,’ Cardinal Richelieu said. You can’t expect to win a war if the enemy knows your resources, tactics and strategies, and you cannot successfully conduct international trade negotiations if the other country knows your real sticking point. Statecraft and secret-keeping go together and they go back a long way: coded communications, invisible writing, simulations, dissimulations, feints, Trojan horses.
It’s often said that state secrecy is at odds with the idea of democracy. What citizens can’t know, they can’t control. ‘Elite rule is an inevitable by-product of secrecy,’ the American political scientist Robert Dahl noted in 1953. ‘Those who effectively influence policy can scarcely exceed the number of those who possess the information to act.’ Yet democratically elected governments have always marked out things their own citizens are not allowed to know. Even if you think that ‘the people’ have a ‘right to know’, there’s always a risk that some of them will tell those who don’t have that right. What was new and consequential towards the middle of the 20th century was the vast expansion in the size and reach of state secrecy – the emergence of a group of government bureaucracies whose job was the administration of secrets. The sphere of secrecy was then expanded beyond discrete pieces of information to include whole classes of knowledge, producing mountains of material that state institutions now routinely classify and sequester.
We are accustomed to thinking of state secrets as being kept secure in a closet off the main rooms of an otherwise open house of knowledge. The reality is different. It has been estimated that every year the US government classifies many more pages of documents than are added to the Library of Congress. In 2015 there were 53,425 new classification actions, as well as 52.8 million ‘derivative classifications’ for new documents containing previously classified material. Those numbers have declined in recent years, but all this official secrecy doesn’t come cheap: the cost of the classification system to the US taxpayer was almost $17 billion in 2016. And the responsibility for classifying, archiving and, in time, declassifying secret documents belongs to a great, creaking, clunking bureaucratic juggernaut whose everyday workings are themselves secret and whose ultimate justification has been largely forgotten.
This reality was born with the idea of an atomic bomb. Alex Wellerstein’s Restricted Data describes the origins of nuclear secrecy and tracks its history through the Cold War and beyond, showing how the framework of secrecy built around nuclear weapons developed into the apparatus of the modern national security state. It’s a stunning achievement: a historical exercise that documents not just all the things we cannot know but all the things we only thought we couldn’t know, and which Wellerstein’s dogged research has dug out. Secrecy regimes sow the seeds of their own dissolution, since they mandate the preservation of documents that might otherwise be lost or shredded. Once declassified, former secrets find their way into public archives, and the Freedom of Information Act may secure access to others.
The vast Manhattan Project, which designed and built the Bomb, was a very great secret. The Axis enemies weren’t supposed to know, but when Hiroshima was obliterated the biggest secret was out: that such a thing was possible; that the US had done it; and that, if others knew what the Americans knew and deployed the resources the Americans commanded, they too could make a Bomb. The first question for an atomic secrecy regime was who should know and who should not know. Openness is often identified as a defining virtue of science, but the physicists who, in the late 1930s, discovered the possibility of an atom-splitting chain reaction voluntarily sought to keep some things secret. There were many scientists – both Allied and German – who believed it impossible to weaponise the physics of fission or, at least, who thought it would be too much of a drain on resources to pursue in wartime. The American government wasn’t easily persuaded that the job could be pulled off or, if it could be, that this enormously expensive weapon wouldn’t just be a more powerful tactical addition to its explosive arsenals but a strategic, war-ending, world-altering technology.
When the Manhattan Project was launched in 1942, the military was fully on board and totally in charge. The army knew all about secrecy in weapons development and how to ensure it: people were vetted; fences were thrown up around installations; communications were censored; and, above all, compartmentalisation was made an organisational imperative. No one should know any more than they needed to know to do their job; specialisation spelled security. Among the most important people whose knowledge of Bomb design and fissile fuel-making was restricted were many of the elite scientists working on the Manhattan Project itself, while thousands of lower-level workers knew nothing at all about the project’s intended product. The problem, however, was that the key workers were civilian scientists accustomed to relatively open communication, not enlisted men used to following orders.
Robert Oppenheimer, the scientific director of the Manhattan Project, won this battle with its overall director, General Leslie Groves. Oppenheimer told Groves that if he wanted technical progress, he had to let scientists talk freely and across disciplinary specialisations. Oppenheimer set up periodic colloquia at Los Alamos, where the leaders of the research divisions shared problems, achievements and suggestions. Nuclear knowledge might flow among the top scientists and engineers, among the military brass and a select group of elected officials, but the intention was strictly to limit access for essentially everyone else. Wellerstein guesses that before Hiroshima maybe 1 per cent of the Manhattan Project’s employees knew that an atomic weapon was its goal.
There were obvious strategic reasons for keeping the knowledge from the Germans and Japanese, but the Soviets were also excluded, already inked in as enemies in waiting. Groves said that the US interest was ‘to keep as much knowledge as possible from all other nations’, so that America would emerge from the war as the sole nuclear power. Wellerstein thinks that these ‘unspecified “other nations” surely included the United Kingdom’, which had wound down its own Bomb project and seconded its nuclear scientists to the American programme. Practically from the outset, the Manhattan Project was kept secret from the American people. There was, of course, the fear that loose talk might cost lives, but those in charge of the project also worried that having too many American officials in the know would endanger funding for this hugely costly enterprise – the elected politicians controlling the public purse might decide that the Bomb was actually a boondoggle. President Roosevelt agreed that knowledge of the project’s existence should be kept very close. As success neared, Groves counted the congressmen who had been officially informed about the atomic bomb: there were seven of them. Harry Truman became president on Roosevelt’s death in April 1945. When Truman met Stalin at Potsdam two weeks before Hiroshima and told him that the US had a terrible new weapon, the Soviet leader seemed oddly unimpressed, probably because he already knew – by way of Klaus Fuchs, a Soviet spy inside the British mission – and had kept it secret from the Allies that he knew. But then Roosevelt had seen no reason to tell his own vice-president; Truman was let in on the secret only after he assumed office.
Just days after Hiroshima, the American government made the remarkable decision to publish a history of the Manhattan Project, Atomic Energy for Military Purposes, a document which became known as the Smyth Report, after its author, the Princeton physicist Henry DeWolf Smyth. This dry-as-dust text became a bestseller: more than 103,000 copies sold by the end of 1946; no copyright restrictions; translated quickly into Russian and avidly read by Soviet scientists. Even American physicists working on the project learned from it for the first time much of what had been done outside their own speciality. The Smyth Report told the story of how the work was administered and organised; revealed much about the basic techniques of making fissile materials and the fundamental physics of chain reactions; and gave some bare-bones facts about how the weapon was assembled. The release of so much information, so suddenly, and against the background of long-maintained total secrecy, was astounding – unsurprisingly, many key people opposed its publication. But the official judgment was that everything in the published report was already common knowledge among scientists and engineers in the field, or was bound to come out soon anyway, or was inessential to any other nation wanting to build a Bomb. The Smyth Report defined what could be safely known – thus far and no further. The general problem remained: how to recognise and manage information that it might be unsafe for others to know?
The problem was urgent and its resolution depended on a range of suppositions about the scientific and political future. There was speculation about the way science and technology would unfold. Could any other nation work out for itself how to build a Bomb? If they could, how long would it take? Was it in the nature of scientific progress that the knowledge would inevitably be discovered by any group of competent scientists? (Almost no one seriously suggested that it was possible to put the genie back in the bottle, unwinding the scientific and technological progress that led from the discovery of fission – which was public knowledge – to the building of the Bomb.) If a secrecy regime was put in place, how secure could it ever be? Was the knowledge bound to leak out? Could spies get access? Other considerations included the likely nature of international relations and domestic politics in the new atomic age. Could global policies be devised for the internationalisation of nuclear technology? Would the newly established United Nations be up to the job? And if internationalisation was possible, could any such plan survive the American political process? The great fear was of a Soviet Bomb, but there were arguments over the policy to be adopted towards Britain and France. And some were of the opinion that if the nuclear secret was kept from the Soviets, this would only encourage them to set out on their own. There were ethical issues too – what was the moral course of action with respect to the secrets of such terrifying weapons? – but these concerns, often eloquently expressed, were for the most part noises off.
Inspired by the Danish physicist Niels Bohr, many scientists favoured internationalisation. Secrecy was a wartime necessity, but it wasn’t thought proper or practical in peacetime. Scientists thought it inevitable that other countries would soon have Bomb-making knowledge and resources – maybe within as short a time as five years. (Even that proved an overestimate: Joe-1, the first Soviet atomic test, took place in August 1949, just four years after Hiroshima.) The recognition, in the aftermath of Hiroshima and Nagasaki, of the terrible power of nuclear weapons was a never-to-be-repeated opportunity to reshape the world order. It didn’t happen. The Americans’ Baruch Plan for an International Atomic Development Authority was loaded with conditions obnoxious to the Soviet Union, which countered with a proposal for a ban on all atomic weapons. The US baulked at that; stalemate persisted; and within a few years, internationalisation was the road not taken. Proponents of global control predicted that American attempts to keep atomic secrets would lead to an arms race, and they were right about that too. After Joe-1, there was no longer any question of keeping ‘the atomic secret’ from the Soviets, and the emphasis shifted to secrecy as a way of staying ahead.
The best outcome, to the Americans’ thinking, was a US nuclear monopoly, and that justified keeping ‘the secret’ of making the Bomb. The next best would be to win the international nuclear arms race, and that justified keeping the secrets behind better Bombs. Success in achieving either, as the physicist and historian David Kaiser has argued, depended on resolving a problem about secrecy that was at once philosophical and political. If scientific and technological knowledge was crucial, did it come in discrete units, some of which needed special protection while others were less critical? How to identify which was which? The secrets that had to be kept close would then be what Gilbert Ryle – coincidentally, writing at exactly this time – called ‘knowing-that’: facts, theories, propositions, even tables, graphs, drawings. Or was the key element procedural or practical knowledge – in Ryle’s vocabulary, ‘knowing-how’: the unverbalised tacit knowledge needed to make and operate things? You cannot, Ryle maintained, derive knowing-how from knowing-that. If knowhow was a crucial consideration, what forms did it take? If it wasn’t the sort of thing that could be written down, then it was embodied – something contained within knowledgeable people or scientific-industrial-managerial systems. Or was the key not knowledge at all but material? In American deliberations after the war, a significant body of opinion held that secrecy was far less important than the mining and refining of fissionable material – the new UN, it was suggested, should control the world’s supplies of uranium and thorium ore. The Acheson-Lilienthal Report of 1946 endorsed the control of material over the control of ideas: secrecy about atomic information would become unnecessary and ‘knowledge will become general.’ As Wellerstein says, ‘Facts and plans both transmit easily and are concealed easily; thousands of tons of uranium ore, and the installations necessary to process and use them, do not.’ But this proposal was doomed too – the victim of political contingency, muddle, and the pervasive mistrust of the early Cold War.
In the US, the political resolution of these questions was effected by the Atomic Energy Act (AEA) of 1946. It was the bluntest of instruments but it shaped secrecy policy for decades to come. The AEA enshrined the notion that it was knowledge, not material, that needed to be guarded, and the scope of that knowledge was construed in the broadest terms. ‘Restricted data’ was defined as ‘all data concerning the manufacture or utilisation of atomic weapons, the production of fissionable material, or the use of fissionable material in the production of power’. Anyone communicating such data ‘with intent to injure the United States or with intent to secure an advantage to any foreign nation’ would face life imprisonment or the death penalty. And the data specifically included any relevant ‘document, writing, sketch, photograph, plan, model, instrument, appliance, note or information involving or incorporating restricted data’. This was something quite new in American information control: all knowledge deemed relevant to atomic weapons – no matter how, where or by whom it was produced. The AEA considered atomic knowledge to be, as was later said, ‘born secret’.
The legal category of ‘restricted data’ and the notion that certain sorts of knowledge were classified at birth proved far more problematic than the framers of the legislation had imagined. Some government officials had to adjudicate on which data concerned nuclear weapons and which did not – and, since so many technologies were involved, that wasn’t straightforward. They had to take a view on what was already generally known, what was new, and what would inevitably become common knowledge. They had to deal with the ‘foreign nations’ – friendly former allies such as the UK – that already possessed stocks of restricted data. They had, in principle, to police atomic knowledge generated by people who were not in government employment or might not even be American citizens. Government secrets before the Manhattan Project were temporary, but the AEA instituted what has been called ‘a permanent gag order affecting all public discussion of an entire subject matter’. The category of restricted data changed the whole idea of official secrecy and is the origin of much of the modern administration of state secrets.
Many American politicians believed that other countries could build a Bomb only if proprietary secret knowledge – data, theories and inscriptions – got loose, and that the circulation of this knowledge could, with vigilance, be prevented. The politicians, however, had been told otherwise, notably by scientists who emphasised how much was already in the open scientific literature; how quickly Soviet scientists would work out the rest by themselves; and how damaging sweeping secrecy would be to America’s own weapons development. But the politicians were in no mood to listen to subtleties.
Problems with both the idea of restricted data and the practical management of secrecy soon emerged. Academic physicists who had no connection with Bomb-building and were drawing on publicly available scientific knowledge began to publish books and popular magazine articles on the subject, with titles like ‘The Secrets of the Atomic Bomb’ and ‘How to Make an Atom Bomb’. The newly established US Atomic Energy Commission (AEC) was charged with classification, declassification and the day-to-day management of restricted data. What were they to do about these popular writings? The AEC could censor them – but that would be a confirmation that they contained genuine secrets, an admission that might itself be illegal. They could approve publication – but that would seem to confirm their contents. They could suggest, cajole or hint at desired deletions and changes, or they could say ‘no comment’ – which, Wellerstein writes, ‘would become standard AEC policy for private speech’.
Then there was espionage. During the war, General Groves had worried more about careless leaks than about Soviet spies. He was well aware of communist links among some of the scientists working on the project, but he didn’t equate left-wing allegiances with disloyalty and realised that if draconian political tests were imposed, the project would suffer. Some politicians were initially persuaded that the control of uranium was key to retaining an American atomic monopoly and believed that secret-stealing was of slight importance. In September 1945, President Truman told his secretary of state that there was no ‘precious secret’ to be stolen. Winston Churchill agreed. Speaking to the House of Commons in November, he said that what the Americans ought to keep secret were ‘the practical production methods, which they have developed at enormous expense and on a gigantic scale. This would not be an affair of scientists or diplomats sending over formulas.’
The arrest of Klaus Fuchs in London in 1950 was the worst possible news. David Lilienthal, the chairman of the AEC, was stunned: ‘This man was not on the edge of things, he was in the middle.’ Fuchs seems to have known little about plutonium production, but he did provide the Soviets with a wealth of design information. The trial of Julius and Ethel Rosenberg followed shortly afterwards, and the security apparatus made sure that the proper lesson was learned: ideas and their representations were being stolen, so secret knowledge must be the key, not knowhow or industrial systems or material. Secret-keeping must be made even more secure. The exposure of Soviet espionage fuelled the McCarthy-era enthusiasm for commie-hunting, but there was a price to be paid: preventative measures such as political vetting and compartmentalisation jeopardised the build-up of the American arsenal just as the US was moving into the thermonuclear age and national weapons labs needed as many talented scientists and engineers as they could get. The uncovering of spies strengthened the hand of those who insisted that secret knowledge was what mattered, while also illustrating some of the limitations and contradictions of the secrecy regime. The US Venona project had long been decrypting Soviet intelligence communications, and it provided information that could potentially have been central to the Rosenbergs’ prosecution. But the evidence couldn’t be presented in court without compromising the secrecy of Venona. (In fact, Soviet spies had told Moscow about the Americans’ decryption work, while in the US Venona was deemed so secret that Truman, yet again, was kept in the dark.) The AEA made the offence espionage with intent to ‘injure the United States’, but J. Edgar Hoover of the FBI objected: what about idealistic spies who didn’t accept that damage was done to America by assisting a wartime ally or by furthering the cause of world communism?
On top of all this, there were the cock-ups and accidents that belong within the normal range of human foibles and absent-mindedness. Secrecy depends on the reliability of secret-keepers, but – as they say – things happen. The crash programme in the US to develop the immensely more destructive H-bomb was the deepest of deep secrets, the worry being that premature disclosure would further fuel the arms race and encourage the Soviets to accelerate their own work on thermonuclear weapons. In November 1949 the cat was let out of the bag by what Wellerstein calls a ‘staggering leak’ from a senator speaking live on TV to advocate heightened secrecy measures. American scientists, he blurted out, were working on a weapon ‘a thousand times’ more powerful than the Hiroshima bomb: ‘That’s the secret, that’s the big secret that the scientists in America are so anxious to divulge.’
Loose lips were recognised as one sort of problem; loose documents were another. The Teller-Ulam design for the H-bomb was talked about as the ultimate secret: one of the very few bits of design knowledge which, if it fell into enemy hands, might substantially reduce the Americans’ advantage. It was also something that ‘could be given away on the back of a napkin’, Wellerstein writes, though if any such napkin came into the possession of foreign agents, they would have to decide whether it was genuine or a plant. In 1953, the hawkish congressional staffer William Borden, convinced that Oppenheimer was a Soviet agent who had obstructed the development of thermonuclear weapons, prepared a 91-page ‘top secret’ history of work on the hydrogen bomb to set out his case. He entrusted extracts to the Princeton physicist John Wheeler to check the accuracy of his account of the Teller-Ulam design. Intending to read the document on the overnight train from Princeton to Washington, Wheeler managed to lose it. The sleeper car was taken apart in a desperate search, but the document was never located. When Eisenhower found out, he was furious. Borden was sacked, and Vice-President Nixon suggested that Borden himself be investigated – a circular firing squad of spy accusations.
Cock-ups continued to be a problem, but during the 1950s and 1960s secrecy was more seriously compromised by the tensions between the respective agendas of atomic weapons and atomic energy. From the outset, celebration of the destructive power of atomic fission was buffered by the promise of ‘peaceful’ civilian uses. Radioisotopes were produced for both medical and industrial purposes, but the holy grail was an atomic reactor that would generate abundant and clean electricity, ‘too cheap to meter’. By 1953, Eisenhower had embraced the notion of ‘Atoms for Peace’. Civilian atomic power was to rebrand the ‘fearful atom’ as the ‘peaceful atom’: nuclear energy would be opened up to American capitalism and ‘peaceful’ atomic technology distributed as a way of firming up Western anti-Soviet alliances. When atomic knowledge was enlisted in these sorts of enterprise, the effect was normalisation – profit-making as usual, international relations as usual.
The 1954 revision of the AEA made modest changes in the management of restricted data, liberalising arrangements for the exchange of American nuclear information with other countries, notably the UK, although such international co-operation remained thickly hedged about with conditions. The trouble was that it was impossible unambiguously to distinguish the technologies integral to atomic weapons from those involved in civilian atomic power. High levels of secrecy were mandated for military technology, but secrecy in ‘peaceful’ uses was problematic – there has been continual friction over secrecy between the US government and both domestic industry and friendly nations. Could companies, for example, generate restricted data even if they had no access to the government’s restricted data? What about foreign companies? And what about foreign states? Official US government responses to questions like these were as consequential as they were complicated, ad hoc and ultimately incoherent.
The gas centrifuge – a device for separating the fissile isotope U-235 from the vastly more abundant U-238 – wasn’t of any significance during the Manhattan Project. Gaseous diffusion and electromagnetic separation were the methods of choice. But centrifuge technology was further developed throughout the 1950s and 1960s – by the US and the Soviets, of course, but also by the West Germans, the Dutch and the British, with the Brazilians and Japanese also showing interest. Gas centrifuges were ‘dual-use’: they could produce fuel for civilian power plants but they could easily be operated to produce fuel for nuclear weapons. For Bomb-making, they had the advantage of being relatively cheap and simple to conceal. Whatever significance design ‘secrets’ might have, it was the ability to produce fissile material that drove proliferation. Britain exploded its first Bomb in 1952, France in 1960 and China in 1964, but there were worries about countries whose anti-Soviet allegiances were less secure: Brazil, Argentina, Indonesia, Egypt. On the one hand, the US was eager to assist friendly countries and American companies to develop nuclear power industries; on the other, that assistance was meant to remain subject to US security controls. Gas centrifuges were central to Cold War international relations, and the US struggled to devise and maintain a stable foreign policy on their use.
By the early 1960s, the US and the UK had developed a working relationship on gas centrifuge technology. The advantage to the British was access to American restricted data; the advantage to the US was control of the British and, via them, possibly also of the Euratom coalition which, from the late 1950s, had been working on improved centrifuges. Under AEA regulations, foreign countries using American data could not effectively commercialise their technology without American approval. Tony Benn – then minister of technology in Harold Wilson’s ‘white heat’ administration – both marvelled and chafed at these arrangements. In November 1968, after a meeting with UK Atomic Energy Authority officials, he wrote in his diary:
What came out of the meeting, which I had suspected but had never been properly told, was that … we are absolutely tied hand and foot to [the Americans], and we can’t pass any of our nuclear technology over to anybody else without their permission. Naturally, they don’t want to see us taking advantage of our nuclear knowledge, which would make money for ourselves; the harsh reality is that de Gaulle is right, that although the special relationship doesn’t give us political advantages any more, it certainly ties us … This is another aspect of our complete dependence on the Americans, and one day we’ll have to sort it out. I don’t know how, but we shall have to.
American restricted data practices had by this time been transplanted into the heart of European diplomacy. Collaboration on gas centrifuges between the UK, the Netherlands and West Germany could, provided the US didn’t effectively ruin it, be represented as a way of forestalling an independent West German nuclear capability – a prospect that terrified both the Dutch and the Soviets – but it also had a bearing on British relations with France. Anglo-Dutch-German centrifuge work was likely to produce cheaper fuel than proprietary French technology, thus offending de Gaulle, who had already vetoed British membership in the Common Market because of the US-UK special relationship. And, from the British point of view, the tripartite centrifuge project would also have the advantage of weakening a French-West German alliance. In this way, much of the history of British engagement with the EEC – and then the EU – was shaped by nuclear technology and the American doctrine of restricted data.
American secrecy policies were originally framed as a way of maintaining the country’s nuclear monopoly. However, both the justification and the management of secrecy had to change once the Soviets had the Bomb, and then as more countries joined the club and civilian nuclear power plants sprouted like mushrooms – 17 in four countries in 1960; 90 in 15 countries in 1970; 253 in 22 countries in 1980. Long before 9/11, the US security apparatus had also begun to think about nuclear terrorism carried out by non-state actors. Terrorists didn’t have to steal secrets; they just needed to get access to people with knowhow, or to lay hands on a stray Bomb, or even to find some radioactive material they could spray on cities or dump in reservoirs. Again, it was stuff and its manipulation, not data, that was crucial to security. Whether by way of espionage, accident, the normal processes of scientific discovery, or through the relationship between military and civilian uses, all sorts of nuclear knowledge were increasingly on the loose in the world, and the early sceptics who reckoned that this knowledge could never be effectively restricted were being proved right.
Both proliferation and civilian uses contributed to the normalisation of nuclear knowledge, but the American determination to win the arms race could also be mobilised as an argument against secrecy. Edward Teller, a Strangelovian figure who never met a nuclear weapons project he didn’t like, was an opponent of secrecy. Herbert York, the first director of the Lawrence Livermore National Laboratory, didn’t think Teller’s advocacy for openness was principled. He told Wellerstein that Teller wanted to declassify nuclear knowledge so that you could ‘get every department of applied science in America working on nuclear weapons’.
Circumstances change and justifications change, but nuclear secrecy regimes lumber on. All bureaucracies tend to perpetuate themselves, and the bureaucracy tasked with managing nuclear secrecy was a paragon of perpetuity. It classifies and declassifies, and, when confronted with Freedom of Information Act petitions, it redacts and procrastinates. Just as nuclear weapons proliferated, so too the practices of nuclear secrecy provided a pattern for the control of many other things governments don’t want you to know.
Politicians campaign on promises of openness; when elected, they find all sorts of reasons to keep things close. ‘The appeal to “national security” offers a handy reason to avoid scrutiny of neglect, mistakes and abuses,’ the Swedish-American philosopher Sissela Bok wrote in Secrets: On the Ethics of Concealment and Revelation (1983). ‘As the number of secrets grows, bureaucracies and executives seek the stamp of secrecy to protect themselves, not just the nation.’ Wellerstein isn’t a huge fan of the ‘anti-secrecy’ activists who, from the 1970s, adopted what he calls a ‘deliberately antagonistic, oppositional stance’ towards the idea that the state can legitimately keep things secret. He thinks of ‘anti-secrecy’ as a crusade with a tendency to mutate into ‘secret seeking’ for its own sake (he name-checks Chelsea Manning and Edward Snowden), whose power took a quantum leap with the emergence of new digital technologies that made it possible to compress masses of secret data onto a hard drive or memory stick.
What did all this secrecy achieve? American restricted data regimes didn’t prevent the Soviet Union and China from getting the Bomb: espionage may have helped things along, but both were more than capable of generating the science and technology without it. Did the restriction of data serve to slow proliferation? Here, Wellerstein endorses the sensibilities of scientists in the postwar period who urged the control of uranium, and suggests that ‘sensitive information is less important than many other factors (such as export control over difficult-to-fabricate technological components, diplomatic interventions and treaties, and other matters).’ And, as he says elsewhere, ‘I’m not convinced that all this secrecy has got us a whole lot of security … You could get rid of all the secrecy tomorrow and the world would not measurably become more dangerous.’ The regime of nuclear secrecy is ‘at best a form of “security theatre”’ – a show without much substance. Wellerstein isn’t a moralist: he tells the story of restricted data as a jumble of accidents, ideologies, political expediencies, bureaucratic self-interest and unintended consequences. He’s not against the idea of state secrets. He just thinks that the history of American attempts to keep nuclear secrets is what you get when politicians think badly about what scientific knowledge is, about the relationships between knowledge and technology, about the conditions in which capabilities can travel, about whether there are such things as nuclear secrets, about whether such secrets can be kept, and about whether secrecy really is a guarantee of safety.