The Millennium Problems: The Seven Greatest Unsolved Mathematical Puzzles of Our Time 
by Keith Devlin.
Granta, 237 pp., £20, January 2004, 1 86207 686 3

In May 2000 the Clay Mathematics Institute announced that it was offering seven prizes, worth $1 million each, for the solutions to seven mathematical problems, which had been identified by a group of internationally acclaimed mathematicians as the seven most difficult and most important unsolved mathematical problems of the day. They became known as the Millennium Problems.

The announcement had a historical as well as a mathematical significance. Exactly one hundred years earlier, the mathematician David Hilbert had delivered a lecture in which he identified what he took to be the most important unsolved mathematical problems of his day (there were 23 of them in his case). Hilbert’s problems were a spur to some of the most productive mathematical research of the 20th century. By 2000 all but one of them had been either solved or shown to have some feature that precluded a definite solution. The remaining problem survives as one of the Millennium Problems.

In his excellent new book, Keith Devlin introduces the problems to a lay audience whose sole qualifications need be ‘a good high school knowledge of mathematics’ and ‘sufficient interest in the topic’. He makes clear from the outset that the problems are far too deep and far too complex for there to be any prospect of rendering them completely accessible to his intended audience. Thus he disarmingly says in the penultimate chapter: ‘Although I have been a professional mathematician for over thirty years . . . it took me considerable effort, spread over several weeks, aided by discussions with experts . . . before I understood [this] problem sufficiently to write this chapter. I would not even attempt to try to solve it.’ And in the final chapter, on what he describes as ‘easily the least accessible’ of the problems, he writes: ‘It is a highly technical question, buried deep in a forest of highly abstract advanced mathematics known to few professional mathematicians.’ What Devlin seeks to do is ‘to provide the background to each problem, to describe how it arose, explain what makes it particularly difficult, and give . . . some sense of why mathematicians regard it as important’. He succeeds admirably.

The first problem, the one surviving from Hilbert’s list, is whether the Riemann Hypothesis is true. Formulated in 1859 by Bernhard Riemann, the hypothesis concerns the distribution of primes among the positive integers. (A prime is any positive integer, other than 1, that is divisible only by itself and 1. The first ten primes are 2, 3, 5, 7, 11, 13, 17, 19, 23 and 29.) There are infinitely many primes, but they get rarer and rarer as the sequence of positive integers extends further and further. Thus, of the first eight positive integers, half are primes, but of the first hundred only a quarter are, and of the first million only about one in thirteen are. This raises the question of whether there is anything significant to be said about the precise way in which the proportion gradually decreases. Both the early pattern of primes and what we know about later patterns are discouraging. For instance, the gaps between the first ten primes are 1, 2, 2, 4, 2, 4, 2, 4 and 6, a series which does not exhibit any obvious regularity. Furthermore, in Devlin’s words, ‘no matter how far out along the [positive integers] you go, you can find clusters of several primes close together as well as stretches as long as you like in which there are no primes at all.’ The distribution of primes is the part of mathematics with the greatest feel of contingency about it: primes seem to crop up at random, like rocks scattered on a barren landscape. Yet mathematicians have achieved some understanding of the way in which the proportion of primes decreases. (This understanding draws on a branch of mathematics that appears totally unrelated to the theory of positive integers but is concerned, instead, with the continuous variation of one quantity with respect to another.) There remain significant lacunae, however, and a proof that the Riemann Hypothesis is true would help to fill them. It might also have implications for both physics and communications technology.
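By way of illustration (the sketch below is mine, not Devlin’s), the dwindling proportion is easy to check for oneself. This short Python program uses the sieve of Eratosthenes to count the primes up to a given bound and reports what fraction of the integers they make up.

```python
# A minimal illustration (not from the book): count the primes among the
# first n positive integers, using the sieve of Eratosthenes, and report
# the proportion they make up.

def count_primes(n):
    """Return the number of primes that are at most n."""
    if n < 2:
        return 0
    is_prime = [True] * (n + 1)
    is_prime[0] = is_prime[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if is_prime[p]:
            for multiple in range(p * p, n + 1, p):
                is_prime[multiple] = False
    return sum(is_prime)

# Of the first 8 integers, half are prime; of the first 100, a quarter;
# of the first 1,000,000, roughly one in thirteen (78,498 of them).
for n in (8, 100, 1_000_000):
    print(n, count_primes(n), count_primes(n) / n)
```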

A second problem, which may be the most accessible, is called the P v. NP Problem. Computer scientists distinguish two types of task that can be undertaken by a computer. Tasks of type P can be undertaken ‘efficiently’. Tasks of type E, by contrast, require a certain amount of ineliminable slog: it is impossible for a computer to carry out even a very simple task of type E without taking many more steps than there are atoms in the known universe. But there is a third type of task, type NP, which includes most of the big tasks that industry and commerce would like computers to be able to do. A task of type NP can be undertaken efficiently by a computer as long as, at certain critical stages at which the computer requires the answer to a question, it is given the answer rather than having to work it out for itself. (Of course, this is primarily of theoretical interest. In practice, there would always be the question of where the answer came from.) If a computer were blessed with this facility, would the range of tasks that it could undertake efficiently be significantly increased? One would think so. But perhaps not. Perhaps all tasks of type NP are in fact of type P. That is, perhaps the efficiency that would accrue from these ready-calculated answers can accrue anyway, from suitably clever programming. The P v. NP Problem is to determine whether or not this is so. A proof that it is so would have repercussions for industry, commerce and internet security.
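A standard illustration of the asymmetry (my example, not one of Devlin’s) is the ‘subset sum’ task: given a list of numbers and a target, decide whether some of the numbers add up to the target. Checking a proposed answer takes a handful of additions; finding one unaided seems, as far as anyone knows, to require searching through a collection of subsets that doubles with every extra number.

```python
# A small illustration of the P v. NP asymmetry, using the standard
# 'subset sum' problem: given some numbers and a target, is there a
# subset of the numbers adding up to the target?

from itertools import combinations

def verify(numbers, target, candidate_subset):
    """Checking a proposed answer is quick: one addition per element."""
    return sum(candidate_subset) == target and \
        all(x in numbers for x in candidate_subset)

def search(numbers, target):
    """Finding an answer unaided: the only obvious method tries every
    subset, and the number of subsets doubles with each extra number."""
    for size in range(len(numbers) + 1):
        for subset in combinations(numbers, size):
            if sum(subset) == target:
                return subset
    return None

numbers = [267, 493, 869, 961, 1000, 1153, 1246, 1598, 1766, 1922]
print(search(numbers, 5842))                                 # slow in general
print(verify(numbers, 5842, (869, 961, 1000, 1246, 1766)))   # fast: True
```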

A third problem concerns the Poincaré Conjecture, made by Henri Poincaré at the beginning of the 20th century. Suppose you have a rubber band, an apple and a ring doughnut. If you stretch the rubber band around the apple, then you will be able to shrink it down to a point by rolling it along the surface of the apple – from the equator to the pole, as it were – without tearing it and without its leaving the surface. But if the rubber band were stretched around the surface of the doughnut, then there would be no way of shrinking the rubber band to a point along the surface of the doughnut without breaking either the rubber band or the doughnut. Poincaré conjectured that something analogous happens in four-dimensional space (where the idea of a ‘space’ with more than three dimensions is given a suitable mathematical definition). Mathematicians have subsequently proved that something analogous does happen in a space with more than four dimensions, but whether the original conjecture is true remains unknown. If the matter were settled, the solution could have implications for the design and manufacture of various electronic devices.

There is less that can usefully be said at this level about the four remaining problems. One is concerned with a theory that describes the interaction of elementary particles; another, with equations that describe the motion of fluids; another, with a conjecture related to the Riemann Hypothesis that also has implications concerning primes; and the last, with another conjecture relating to spaces that have more than three dimensions.

Now, suppose a glory hunter told you that he was interested in tackling all seven problems at once. Your first thought might be to regard this as utter lunacy. True, some of the problems are known to be connected. And it is a familiar feature of mathematical life how frequently problems that are thought not to be connected subsequently turn out to be. (The Riemann Hypothesis bears witness to this.) So it would be foolhardy to insist that no advances in mathematics could possibly be relevant to more than one of these seven problems. The fact remains that they lie in disparate areas of the discipline, and afford no evident prospect of being used to cast light on one another.

Nevertheless, the glory hunter’s ambition of making simultaneous progress on all seven fronts might not be quite as crazy as it sounds. His hope might be that, by turning his attention to proof theory, the branch of mathematics concerned with the range of techniques and methods used in solving problems of this kind, he could find something in the mathematicians’ toolkit which indicated how to advance from the framing of each of these seven problems to its solution. Indeed, he might have an even more ambitious hope than this, to find the holy grail of all mathematics: a single algorithm for solving any given mathematical problem; in other words, a single procedure which could be followed in a rigorous step-by-step manner, which required neither insight nor ingenuity, and which was guaranteed, in a finite number of steps, to issue in a solution to the problem in question. Is it obvious that this would be a forlorn hope?

No. Even so, it would be forlorn. And that it would be so is itself a very deep fact about mathematics. There are fundamental reasons of principle why, when it comes to settling unanswered mathematical questions, we shall never be able to dispense with divination and inspiration in favour of a mechanical application of rules. (But even if the holy grail had existed, and even if we had found it, it might have been of purely theoretical interest. Its implementation in addressing any given problem might have been a task of type E. Of what practical significance would an algorithm have been if it had enabled us to determine whether or not the Riemann Hypothesis is true, but only if we had been able to spend a trillion times as long on the problem as it will take for the earth to be swallowed up by the sun?)

In order to see why there can be no single algorithm for solving all mathematical problems, we need to invoke two definitions and take two things for granted. (Not because they are articles of faith: each of them can be proved. It is just that proving them is not possible within these confines. As it happens, each of them is fairly plausible even without proof.)

The first definition is this. A set of positive integers is arithmetical if it can be characterised using such standard arithmetical terminology as ‘plus’ and ‘times’. Thus the set of primes is arithmetical. So is the set of squares. So is the set of positive integers that are less than 1. (This last set, of course, has no members: it is the ‘empty’ set.)
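To see what such a characterisation looks like (the notation below is mine; Devlin keeps things informal), here is one way of capturing ‘x is prime’ using nothing but ‘times’, equality and quantification over the positive integers, together with corresponding characterisations of the other two sets.

```latex
% One way of characterising the three sets just mentioned, quantifying
% over the positive integers and using only 'plus', 'times' and equality.
\begin{align*}
  \text{$x$ is prime:}       &\quad x \neq 1 \;\wedge\; \forall y\,\forall z\,\bigl(y \times z = x \rightarrow y = 1 \vee z = 1\bigr)\\
  \text{$x$ is a square:}    &\quad \exists y\,(y \times y = x)\\
  \text{$x$ is less than 1:} &\quad \exists y\,(x + y = 1) \quad \text{(no positive integer satisfies this)}
\end{align*}
```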

The second definition is this. A set of positive integers is decidable if there is an algorithm for determining whether any given positive integer belongs to it. All three sets mentioned above can again serve as examples. That might lead us to suppose that a set of positive integers is arithmetical if and only if it is decidable. But the supposition would be a rash one. There is as yet no warrant for ruling out an arithmetical set that is not decidable, or for ruling out a decidable set that is not arithmetical.
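Again an illustration may help (the procedures below are mine, not Devlin’s). A set counts as decidable if some mechanical procedure settles membership in a finite number of steps; for the set of squares and the empty set just mentioned, such procedures are easy to write down.

```python
# A sketch of what 'decidable' asks for: a procedure that, for any given
# positive integer, answers the membership question in finitely many steps.

def is_square(n):
    """Does n belong to the set of squares? Try each candidate root in turn."""
    root = 1
    while root * root < n:
        root += 1
    return root * root == n

def is_less_than_one(n):
    """Does n belong to the set of positive integers less than 1?
    No positive integer does, so the procedure always answers no."""
    return False

print(is_square(243), is_square(811), is_square(49))  # False False True
print(is_less_than_one(7))                            # False
```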

That brings us to the first of the two things which we need to take for granted – call it Lemma 1. Lemma 1 is that the second of these can in fact be ruled out: in other words, every decidable set is arithmetical.

Lemma 2 is that there is an algorithmic way of listing all the arithmetical sets. Roughly, the set that has the least complex characterisation is listed first, the set that has the second least complex characterisation is listed second, and so on. There is no unique way of measuring complexity, however. One obvious criterion is the number of symbols involved in the characterisation, but this still requires some way of discriminating between characterisations involving the same number of symbols. At various points, arbitrary decisions have to be taken. All that matters, as far as Lemma 2 is concerned, is that there is an algorithm which enables us, when given any positive integer, to determine which set appears at that position on the list, and, when given any arithmetical set – or more precisely when given any characterisation of an arithmetical set – to determine at which position on the list that set, so characterised, appears. Since sets can have more than one characterisation – for instance, the set of multiples of 9 is also the set of positive integers whose digits, in decimal notation, sum to a multiple of 9 – they can likewise appear at more than one position on the list.
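The flavour of such a listing can be conveyed with a toy example (mine, and much cruder than anything Lemma 2 actually requires): generate every string over a small alphabet of arithmetical symbols, shortest strings first and alphabetically within each length. The genuine list would then retain only the strings that are well-formed characterisations of sets, but the point is the same: an algorithm settles what comes at each position.

```python
# A toy version of the listing idea behind Lemma 2 (an illustration only,
# not the real construction): enumerate strings over a small stand-in
# alphabet in order of length, then alphabetically within each length.

from itertools import count, product

SYMBOLS = sorted('xy1+*=()')  # a stand-in alphabet; the choices are arbitrary

def candidate_strings():
    """Yield strings in order of length, then alphabetically."""
    for length in count(1):
        for letters in product(SYMBOLS, repeat=length):
            yield ''.join(letters)

strings = candidate_strings()
print([next(strings) for _ in range(12)])  # the first few positions on the list
```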

Lemma 2 can be used to specify a set of positive integers that is not arithmetical. Suppose that we have an algorithmic list of all the arithmetical sets. And let us say that a positive integer is incongruous when it does not belong to the set that corresponds to it. (By way of illustration, suppose that the set listed 243rd is the set of squares. Then 243, which is not a square, is incongruous. And suppose that the set listed 811th is the set of primes. Then 811, which is a prime, is not incongruous.) Now let S be the set of positive integers that are incongruous. Then S is not arithmetical. For if it were, it would appear at some position i on the list. But then for i to be incongruous would be for it not to belong to S. This means that i would belong to S if and only if it did not belong to S. And this would obviously be a contradiction.
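The nub of the argument can be put in two lines (the notation is mine, not Devlin’s). Write A1, A2, A3 and so on for the sets on the algorithmic list; then:

```latex
% The diagonal definition of S, where A_1, A_2, A_3, \dots are the sets
% on the algorithmic list:
\[
  S = \{\, n : n \notin A_n \,\}.
\]
% If S were arithmetical it would be A_i for some position i, and the
% definition of S, applied to the integer i itself, would then give
\[
  i \in S \;\Longleftrightarrow\; i \notin A_i = S,
\]
% which is the contradiction described above.
```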

Now suppose that there were a single algorithm for solving all mathematical problems. Then, among other things, this algorithm could be used to determine whether any given positive integer belongs to any given arithmetical set. For example, it could be used to determine whether 809 belongs to the set of primes; and it could be used to determine whether 2 belongs to the set of positive integers that exactly divide another number that is not the sum of two primes. (The answer in the former case would be yes; what the answer would be in the latter case is not currently known.) This in turn means that the algorithm could be used to determine whether any given positive integer belongs to S. For we could first determine which set corresponds to the integer, then use the algorithm to determine whether the integer belongs to that set, then give the opposite verdict on whether the integer belongs to S. By definition, then, S would be decidable. Hence, by Lemma 1, S would also be arithmetical. But we have just argued that S is not arithmetical. The only possible conclusion is that there is not, after all, a single algorithm for solving all mathematical problems.
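In programming terms the reduction looks like this (a sketch only: both helper procedures are hypothetical, the second being the imagined universal problem-solver whose existence the argument rules out).

```python
# A sketch of the reduction in the last paragraph. Both helpers are
# hypothetical: 'characterisation_at' is the procedure promised by Lemma 2,
# and 'solve_membership' stands for the imagined universal algorithm. The
# argument in the text shows that no such universal algorithm can exist.

def characterisation_at(i):
    """Hypothetical: return the characterisation of the set appearing at
    position i on the Lemma 2 list."""
    raise NotImplementedError

def solve_membership(characterisation, n):
    """Hypothetical: the universal problem-solver, asked whether n belongs
    to the set so characterised."""
    raise NotImplementedError

def belongs_to_S(i):
    """If the two helpers existed, S would be decidable: find the set at
    position i, ask whether i belongs to it, and give the opposite verdict."""
    corresponding_set = characterisation_at(i)
    return not solve_membership(corresponding_set, i)
```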

That blocks one particular route to scooping the entire seven million dollars. There is no alternative, it seems, but to go back to each of the seven different drawing boards. If you are a professional mathematician already at one of the drawing boards, then good luck to you. If you are an amateur mathematician keen to make a million dollars, then you are advised to devote your time and energy to something else. But you will derive much pleasure from Devlin’s entertaining guide to what the pros are up to.

Letters

Vol. 26 No. 17 · 2 September 2004

Most readers who enjoyed A.W. Moore's brisk demonstration that an arithmetisation of meta-mathematics produces provably unprovable propositions (LRB, 22 July) will have heard of Kurt Gödel, and some will recognise this as the work for which Gödel is famous. Without any mention of Gödel's theorem, however, Moore's article seems incomplete.

Paul Taylor
University College London
