The Code Book: The Science of Secrecy from Ancient Egypt to Quantum Cryptography 
by Simon Singh.
Fourth Estate, 402 pp., £16.99, September 1999, 1 85702 879 1
In Code: A Mathematical Journey 
by Sarah Flannery.
Profile, 292 pp., £14.99, April 2000, 1 86197 222 9
Privacy on the Line: The Politics of Wiretapping and Encryption 
by Whitfield Diffie and Susan Landau.
MIT, 346 pp., £10.50, April 1999, 0 262 54100 9

The English mathematician G.H. Hardy, who worked in the purest of all mathematical fields, the theory of numbers, used to boast in his patrician way that nothing he did in mathematics would ever be useful. He must be turning in his grave at developments in the ‘science of secrecy’ over the last quarter of a century. Like so many other practices, it has been transformed into a species of applied mathematics by the digital computer, with Hardy’s beloved prime numbers playing a leading role. How this came about is the subject of Simon Singh’s The Code Book, a very readable and skilfully told history of cryptography. Singh’s method is to attach the abstract ideas involved to someone who thought of them, failed to think of them, championed them, or suffered their consequences – this last allowing him to include Mary Queen of Scots, whose unfortunate contribution to the art of secrecy was to correspond with her conspirators using an insecure cipher.

One of the earliest ciphers, familiar to anybody who played with codes as a child, is the Caesar shift, in which each letter of the alphabet is replaced by another a fixed number of places from it. The Caesar shift is an example of a monoalphabetic cipher, all of which, as Arab mathematicians demonstrated in the tenth century, are easily broken because letter frequencies are consistent across texts in a given language – the commonest letter in any such encryption of an English text will correspond to E. The way round this is to use a polyalphabetic cipher, which changes the encrypting alphabet, in a pre-assigned way, at each successive letter. Thus, the word ADA might be encrypted by changing the first A to E (Caesar shift of four places), the D to E (shift of one place), the second A to Z (reversal), and so on; the enciphering of ADA as EEZ hides the repetition of A and defeats attempts at letter frequency analysis. The technique, prefigured in Alberti’s meditations on codes and formulated by several individuals in the Renaissance, is named after its 16th-century rediscoverer, Blaise de Vigenère. For several centuries, the Vigenère cipher gloried under the title of ‘chiffre indéchiffrable’, only to fall in the middle of the 19th century to the efforts of a retired Prussian officer, Friedrich Kasiski, and, independently, to the English inventor Charles Babbage.
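
To see the principle in working form, here is a minimal sketch in Python, not drawn from Singh’s book: a plain Caesar shift beside a Vigenère-style scheme in which a keyword (the choice of ‘KEY’ is arbitrary) fixes a different shift for each successive letter. It follows the spirit rather than the letter of the ADA example above, which mixed shifts with a reversal.

ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def caesar(text, shift):
    # Monoalphabetic: every letter moves the same fixed number of places.
    return "".join(ALPHABET[(ALPHABET.index(c) + shift) % 26] for c in text)

def vigenere(text, key):
    # Polyalphabetic: the shift changes at each successive letter,
    # cycling through the shifts dictated by the keyword.
    shifts = [ALPHABET.index(k) for k in key]
    return "".join(
        ALPHABET[(ALPHABET.index(c) + shifts[i % len(shifts)]) % 26]
        for i, c in enumerate(text)
    )

print(caesar("ADA", 4))        # EHE: the repeated A shows up as a repeated E
print(vigenere("ADA", "KEY"))  # KHY: the repetition of A is hidden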

The cipher’s weak point, it turned out, was not its encoding strategy, but the length of the key (usually a single word or phrase), which has to be repeated many times until it covers the message if it’s to designate an alphabet for each plaintext letter. Any repeat or duplication gives decrypters a toehold, and both Kasiski and Babbage exploited this to reduce the polyalphabetic coding to an interweaving of monoalphabetic ones, each of which could then be cracked by analysing the letter frequency. So why not use message-length keys consisting of passages from a pre-assigned book? Unfortunately, these introduce another kind of repetition – words such as ‘and’ occur many times – which again facilitates decipherment. Alternatively, one could use a long, random sequence of numbers as a fresh key – a ‘session’ key – for each new message. The resulting cipher, known as a ‘one-time pad’, is indeed unbreakable, but the problems of generating and securely distributing enough keys have restricted it to situations combining low use with the highest security, such as traffic on the White House-Kremlin hotline.
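
A one-time pad can be sketched in the same spirit: a random key as long as the message, used once and never reused, each key number fixing the shift for a single letter. The function names and the use of Python’s secrets module below are my own illustration of the idea, not anything prescribed in the books under review; the real difficulty, as noted above, lies in getting the key to the recipient securely.

import secrets

ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def one_time_pad_encrypt(message):
    # A fresh random shift for every letter: no statistical pattern survives.
    key = [secrets.randbelow(26) for _ in message]
    ciphertext = "".join(
        ALPHABET[(ALPHABET.index(c) + k) % 26] for c, k in zip(message, key)
    )
    return ciphertext, key  # the key itself must still reach the recipient securely

def one_time_pad_decrypt(ciphertext, key):
    return "".join(
        ALPHABET[(ALPHABET.index(c) - k) % 26] for c, k in zip(ciphertext, key)
    )

ciphertext, key = one_time_pad_encrypt("ATTACKATDAWN")
assert one_time_pad_decrypt(ciphertext, key) == "ATTACKATDAWN"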

Keys, then, rather than encryption algorithms, seem to lie at the heart of the science of secrecy, a principle, Singh reports, that was first explicitly stated as an axiom by the Dutch linguist and cryptographer August Kerckhoffs von Nieuwenhof in 1883. The fate of coding since then bears this out: each of the two major advances in its history this century – the cracking of the German Enigma code and the invention of public-key encryption – has turned on keys in some way or other.

Suppose, instead of trying to jumble the messages in each session by means of a long, randomised key, one could send a short key allowing a message scrambled by a machine to be unscrambled. The process could be made to work by equipping both sender and receiver with identical machines and codebooks of keys, each good for one day, say. The sender would transmit a short (hence secure) key consisting of the particular setting of the machine the sender had used; the receiver could use this, along with the key for the day, to unscramble the message. Might not the result be as safe as a one-time-pad version of Vigenère’s cipher? Some such reasoning seems to have lain behind the construction of machines like the Enigma. Invented in 1919, and adopted by the German Armed Forces only when they learned (from a book written by Winston Churchill in 1923) that their existing codes had been known to the British for some time, the Enigma machine dominated Second World War cryptanalysis.

The story of how the Enigma was eventually cracked by the combined forces of first the Polish and then British intelligence at Bletchley Park, with Alan Turing in the genius role, has been told many times. To make sense of how it was done, you need to analyse the Polish contribution and know the details of the machine’s innards, of detachable rotors and plugboard settings, as well as its operating procedure. Singh requires all of forty pages, together with diagrams and comfortingly concrete anecdotes, to make the principles involved clear, so I won’t try to summarise them here. Suffice it to say that the Poles got hold of an Enigma machine in the early days, and that the Germans blundered initially by allowing the session key to be duplicated at the beginning of transmissions. This gave the Poles an opening and enabled them to invent a kind of parallel computational device called a bombe, which could check tens of thousands of possible combinations and gave Turing and Co an invaluable leg-up.

The Enigma was a response, using early 20th-century technology, to the crucial role played by radio communications in World War One, and to the vulnerability of the messages so conveyed. The prodigious number of simple, repetitive computations required to crack the codes (not to mention the related but less well-known and even more difficult Lorenz cipher) led to the construction of the first programmable computer and signified, certainly in relation to the technology of encryption – and, indeed, to the very concept of a machine – the eclipse of the electro-mechanical era.

The 1960s saw the application of computational methods to produce ever more uncrackable ciphers, and this success, together with a shift in demand, from keeping military secrets to securing commercial ones, brought home the truth of Kerckhoffs’s axiom of the primacy of keys over codes. Banks, for example, needed to distribute keys, a separate one for each customer, but were trapped in the catch-22 situation of needing a secure channel by which to do this; couriers carrying them in briefcases chained to their wrists proved impractical for anything but a small number of transactions. How to distribute keys became a crucial issue. The solution – public-key cryptography, formulated by mathematicians in the mid-1970s – transformed the field of cryptography and delivered, in a neat, easily implementable package, the security of transactions on which the mass use of credit cards and the global operations of financial capitalism, from multinational trade and finance to internet shopping, now rest.

Public-key cryptography appears to be a paradox. If keys aren’t secret, what’s the point of them? How can a cipher work to conceal a message if everybody has access to the information designed to encrypt it? The answer, pioneered by Whitfield Diffie and Martin Hellman in the early 1970s, is to reject a principle of cryptography apparently so obvious as to go unchallenged until then: namely, that deciphering is the reverse of enciphering and that encryption and decryption keys are identical. Rejecting this principle makes it possible to think in terms of having two keys: a public one used to encrypt a message, and a private one, known only to the recipient, that she can use for decryption. For such an idea to work, encryption, being a public act, must be a one-way process, since being able to reverse it would allow a third party to retrieve the message, but it must also be such that what the recipient alone knows can circumvent this irreversibility.
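
Diffie and Hellman’s own published construction was a key-agreement protocol rather than a full two-key cipher, but it shows the one-way idea at work: raising a number to a secret power modulo a prime is easy, while recovering the power from the result (the discrete-logarithm problem) is not. The toy numbers below are mine; real systems use moduli hundreds of digits long.

# Toy Diffie-Hellman key agreement with illustrative parameters only.
p = 23   # public prime modulus
g = 5    # public base

alice_secret = 6    # known only to Alice
bob_secret = 15     # known only to Bob

alice_public = pow(g, alice_secret, p)   # sent over the open channel
bob_public = pow(g, bob_secret, p)       # sent over the open channel

# Each side combines its own secret with the other's public value and
# arrives at the same shared number, which an eavesdropper who saw only
# p, g and the two public values cannot feasibly reconstruct.
alice_key = pow(bob_public, alice_secret, p)
bob_key = pow(alice_public, bob_secret, p)
assert alice_key == bob_key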

A computational process that could achieve this was invented by three mathematicians at MIT, Ronald Rivest, Adi Shamir and Leonard Adleman. Their solution, known after their initials as RSA cryptography and used now in millions of transactions daily, is built from a pair of one-way mathematical functions. The first of these is multiplication: it takes only a second or so for a computer to multiply two very large prime numbers together and arrive at their product N, but it can take billions of times longer to reverse the process and find the two factors of N. The second function is (a modular form of) exponentiation, i.e., the raising of one number to the power of another, a process which once again covers its own tracks just as multiplication does.
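
A rough feel for the asymmetry can be had at the keyboard. In the sketch below (my own illustration, with primes far smaller than any used in practice), multiplying the primes and computing a modular power are immediate, while the naive trial-division factoriser already has visible work to do, and becomes hopeless once the primes run to hundreds of digits.

# Forward direction: multiplying two primes is effectively instantaneous.
p, q = 104729, 1299709
N = p * q

# Forward direction: modular exponentiation is equally quick.
power = pow(7, 65537, N)

# Reverse direction: recovering p and q from N by brute trial division.
def naive_factor(n):
    if n % 2 == 0:
        return 2, n // 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 2
    return None

print(naive_factor(N))   # (104729, 1299709), found only by exhaustive search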

The RSA system is a public algorithm, essentially a formula based on the functions into which numbers can be slotted. If Alice wants to receive a message from Bob, she chooses two very large prime numbers, p and q, and posts their product N as her public key, along with a small number e that is needed to make the formula work; her private key is her knowledge of p and q. Bob inputs the number M he wishes to encrypt, along with N and e, and sends the result to Alice. The system is so constructed that only Alice, who knows p and q, can use the number sent to her to reconstitute M. The entire procedure has to be reversed – Bob must now choose his own p and q and post their product – when Alice sends him a message. Encrypting only numbers is not in fact a restriction on communication, since any message composed of letters can be converted via ASCII characters into a unique number.
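
A toy version of the whole exchange, with primes absurdly small by real standards, runs as follows; the numbers are mine, chosen purely for illustration, and the private exponent d makes explicit how Alice’s knowledge of p and q is what lets her undo the encryption.

# Toy RSA, following the steps described above.
p, q = 61, 53              # Alice's private primes
N = p * q                  # 3233: the public modulus Alice posts
e = 17                     # the small public number that makes the formula work

# Knowing p and q lets Alice compute her private exponent d,
# the inverse of e modulo (p - 1)(q - 1).
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)        # modular inverse (Python 3.8+)

M = 65                     # Bob's message, already encoded as a number
ciphertext = pow(M, e, N)  # Bob needs only the public N and e
recovered = pow(ciphertext, d, N)   # Alice alone can reverse the process
assert recovered == M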

There is no guarantee that such a system is foolproof: it’s the size of the number N and the time it takes computers to factorise it that make messages secure. Advances in number theory and an increase in computing power could weaken the system. Indeed, a group of researchers in Amsterdam, working over a six-week period last summer, cracked a medium-hard version of the RSA system by harnessing a supercomputer to hundreds of PCs. Also, neither the irreversible functions nor the RSA formula incorporating them is unique: there are other one-way mathematical functions and no doubt other algorithms. Indeed, as Singh makes clear, it appears that public-key cryptography was actually invented some years earlier by the English mathematicians Malcolm Williamson and Clifford Cocks at GCHQ Cheltenham, but was kept secret at the time for reasons of security. Independently of this, a mathematically gifted Irish schoolgirl, 16-year-old Sarah Flannery, recently invented a public-key algorithm based on multiplying 2x2 matrices that was thought to be as secure as RSA but many times faster to execute. It now turns out, as she had warned, that her algorithm, fast and ingenious though it is, is not secure. In In Code, written with her father, who is also her mathematics teacher, she tells in a fetchingly artless way of her rise into the mathematical headlines, while also illuminating in concrete terms the mathematics behind public-key cryptography.

Public-key encryption has transformed cryptography. Not only did it facilitate electronic commerce and make the issue of privacy a very public one in the United States, but it has democratised the organs of secrecy. It is now possible for everyone strongly to encrypt their e-mail and other communications by buying inexpensive public-key software; one such program, Pretty Good Privacy, introduced and distributed free by Philip Zimmermann, a libertarian software engineer, has for years been a thorn in the flesh of the security lobby. To be able to encrypt one’s messages without expensive hardware, arcane knowledge or centralised administration has, unsurprisingly, generated extreme paranoia among the agencies of law enforcement, national security and defence about their loss of control over secrecy, and has made a stand-up confrontation between the US Government and advocates of the freedom to encrypt inevitable.

Diffie and Landau offer a wise, meticulously researched and, given the bureaucratic language of policy making, surprisingly readable guide to the power struggles now going on within the constitutional, legal and technical tangles of this confrontation. On the face of it, there ought to be no contest: the police and security agencies, long used to wiretapping, are unlikely to accede to uncrackable telephone traffic and any state can prevent its citizens from hiding messages from its prying eyes by pleading national security and refusing to give details. But in this case, the struggle is complicated by the fact that the freedom-of-speech lobby and sundry advocates of the right to privacy are fighting on the same side as the banks, credit card agencies and big guns of electronic commerce, for whom strong encryption is a necessity.

Privacy on the Line warns us against assuming that any simple carry-over of security issues is possible from the pre-digital age to the situation today. It points, for example, to the fundamental similarity in law between the US Government’s power to intercept communications and its power to search physical premises, while remarking on the serious difference between the right to listen (tapping) and the right to understand (decrypting) what has been overheard. The book also details the rejection (so far) of the so-called key escrow scheme, by which the Government would gain access to all encryption keys (which would be held in escrow by trusted third parties) should it be able to prove the need. The scheme, together with the National Security Agency’s determined effort to prevent US citizens (let alone others who might be permitted to buy its encryption technology) from exchanging messages the Agency itself cannot decipher, is currently being re-jigged ahead of the next round of the battle.

As digitality continues its transformation of society, there’s more happening in the realm of privacy than a fight over encryption. In its first incarnation c.1970, era of the mainframe and giant databases, digital computing laid the groundwork for turning what had previously been personal and confidential – medical records, insurance claims, phone bills, credit card use, mortgage payments and so on – into public commodities. Private details are now routinely bought and sold on the data market. Mobile phones, moreover, are altering the terms of any private/public opposition more intimately even than the outing of personal data, and there are those notorious sites on the Internet on which video cameras are left permanently filming in all the rooms of someone’s house, attracting many thousands of eagerly prying (and paying) visitors. The word ‘private’ is coming to signify an odd, even quaint, preference belonging to the pre-digital world.
