‘Perhaps the mind stands to the brain in much the same way that the program stands to the computer.’ That is the vision behind this admirable book for newcomers. Introductions to cognitive science are seldom neutral. They’re not like beginners’ textbooks of Norwegian grammar or topology, nor do they much resemble popular science-writing about quarks or gene-splicing. Instead they are evangelical. Alongside Dr Johnson-Laird’s friendly and often charming account of ingenious computational ideas, there’s the message – which is his own conception of cognitive science, of psychology, of the mind.
The title itself conceals the message. The Computer and the Mind suggests that we are in for comparisons and contrasts between the two. That’s not promising. You can compare two things only if they belong to the same bunch of categories: that is, if most of what’s true of the one is at least true or false of the other. I can compare my laptop PC to my brain, for the laptop weighs 11 pounds; it is heavier than my brain. Both would be crushed by a bus. Even the metaphors aren’t half-bad – both my brain and my laptop need recharging from time to time. It makes some sense to start comparing their functions and abilities. But comparing and contrasting minds and computers is a non-starter, for there is hardly anything true of computers that is true or false of minds. Minds and computers are in fundamentally different categories. That is the beginning of a lesson well taught by Gilbert Ryle’s The Concept of Mind forty years ago.
Indeed minds, unlike computers, are not things at all. We use the noun ‘mind’ often enough, but not to pick out a class of entities. It does not even work much like its matching words in French or German, languages whose Cartesian heritage we share. Although esprit is the official translation of ‘mind’, a great many of our common utterances using ‘mind’ don’t go over into expressions using esprit. The metaphors in turn are completely out of kilter. There’s a line of yuppie clothing called Esprit, but don’t try marketing Mind après-ski wear. One reason why the metaphors shear apart in the different languages is that the nouns don’t refer to a definite class of thing. Some of our philosophical tradition notwithstanding, neither our word nor its cognates in other languages denotes any thing. That is one of the reasons why the much-maligned Descartes said that mind and body are different substances, i.e. of different categories, of different kinds, and not different-kinds-of-thing.
We do have many abilities and perform many acts which for lack of a better word we call ‘mental’. And now perhaps we do have a better label than ‘mental’: ‘cognitive science’ and whatever that is about. The phrase was devised in the 1970s to mean the study of some of our mental doings named by gerunds such as knowing, thinking, perceiving, remembering, learning; or mental skills like the ability to recognise your acquaintances, improvise music, follow a chain of argument, understand an English sentence or do sums. Western thought has usually seen some sort of commonality among these knowing-how and knowing-that aspects of people. It has put them at some distance from feeling upset or even the capacity to share the sorrows of another. There has been a division of opinion, over the past couple of thousand years, as to whether seeing a rhododendron bush in bloom is more like knowing (you know that it’s a flowering shrub, don’t you?) or more like feeling sore (it just strikes you as glorious, doesn’t it? – the seeing of it, which is experiencing, not knowledge).
Cognitive science puts perceiving and knowing together. That prompts us to ask: is it right or wrong to do so? Are they two species of cognition? As if there were a fact of the matter! It is not that there is no fact of the matter, but that there are too many. It is not evident which facts matter. Wittgenstein’s Philosophical Psychology is, among other things, a brilliant taxonomy of the mental. It is deliberately descriptive and anti-explanatory and is put entirely in terms of our talk, our concepts, our sensibilities. It is quite indifferent to anything that happens in the brain. Many people, especially those who can neither stand nor understand Wittgenstein, would in contrast find it deeply interesting if, for example, a uniformly identifiable bit of everyone’s brain were essential to recognising rhododendrons and improvising jazz, but irrelevant to doing sums or speaking English. But would that define two different types of mental, as opposed to physiological, activity? We have no idea how to answer, for the question is a bad one. Likewise one thinks that the grouping of some but not all mental doings at the core of cognitive science is a hypothesis, but it isn’t. What would one do to verify or refute it?
The topics in terms of which cognitive science defines itself are not grouped on empirical principles at all. They are grouped by one idea: the idea of achievement. Cognitive scientists study success, the kind of success that you can achieve by, as we put it colloquially, using your head. We solve a problem, recognise a plant, notice a movement, whistle a tune, revise a speech, understand an order, recall a telephone number, convey some information, or make it through the kitchen in the dark after we’ve been told how the furniture has been rearranged. Achievements like these are the subject of Johnson-Laird’s book, and serve as his examples. Cognitive science is the science of ‘mental’ achievement. I use Ryle’s word: before cognitive science was in place he had noted the classification of achievement verbs for describing one aspect of what we call mental life. And his distinctions indicate what is troubling about putting seeing a shrub in the same box as finding your way through the kitchen in the dark: the latter is an achievement and the former isn’t. Cognitive science opts in general for the achievement side of perception, such as recognising the species or noticing the change in colour after the frost.
Historians of culture, with their bad habit of making ironic comparisons, will some day note with glee that when America’s watchword was business, and when doing things or being busy was good in itself (idle hands are the devil’s workshop), Behaviourism swept American psychology. But nowadays, when mere activity is not enough, when you must be a ‘high achiever’ to be fulfilled, cognitive science is the only game in town. Not that an ideology brought cognitive science into being or made it flourish, in any self-aware way. It is computer science – somehow a science in no need of authenticating – which nourished cognitive science and furnished it with authority.
This does not go without saying, for it is deeply connected with the focus on achievement. A theory of achievement has to be a theory of how you achieve, together with a sub-theory of what you achieve and when you have achieved. To achieve is to bring to a successful conclusion. Typically there is a process, a series of actions leading to the desired conclusion. Thus the beginnings of cognitive science took the general form of attempts to investigate a series of actions or events that lead to a designated goal. It need not of course be a single series; it can be a tree composed of many series culminating in the goal. But a series of what? Don’t we just recall the phone number or notice the blooming bush?
The right kinds of series come from computer science, which has two parents, if we are to name names. One is John von Neumann, whose team built one of the first electronic digital computers. The other is Alan Turing, who pioneered the logical theory of computation. It would be quite wrong to characterise the one man as applied, the other as pure. Turing was fascinated by all sorts of mechanical and electrical computational devices, while von Neumann was a mathematical logician of the first rank. Still, it is handy to distinguish the two lines of development. Turing gave us what may turn out to be, in a sense, the last word on what is computable in a finite number of steps using any procedure whatsoever. At the same time he showed that only four types of step, or even fewer, are needed. Hence any computation can be achieved by sufficiently many trivial steps chosen from a small number of possibilities. The resulting structure is extraordinarily rich. Right from the start Turing had a result soon seen to be equivalent to Gödel’s theorem. All this was pure logic which could be given material form as a computer in any number of ways. Some are amusing. A limitless supply of poker chips, a couple of holes as deep as you want, and something to move a chip from one hole to the other – that is all you need for a Turing machine that will compute anything computable. It looks as if any series that achieves a result in a definite way can be represented in Turing’s abstract schematism, which might then, ideally, be given material form by physical devices of many sorts, like brains, for example.
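Turing’s scheme can be made concrete in a few lines. The machine and transition table below are my own illustrative choices, not anything from the book: each pass of the loop is one trivial step – read a square, write a square, move one square, change state – yet tables of this kind suffice, in principle, for anything computable.

```python
# A minimal sketch of a Turing machine (illustrative only): a handful of
# trivial step-types, repeated, carry out a whole computation.

def run_turing_machine(tape, rules, state="start", head=0):
    """Run until the machine enters the 'halt' state; return the tape."""
    tape = dict(enumerate(tape))          # squares indexed by position
    while state != "halt":
        symbol = tape.get(head, "_")      # '_' marks a blank square
        write, move, state = rules[(state, symbol)]
        tape[head] = write                # one trivial step: write
        head += {"L": -1, "R": 1}[move]   # one trivial step: move
    return "".join(tape[i] for i in sorted(tape)).strip("_")

# A table that flips every bit, then halts at the first blank square.
flip = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "L", "halt"),
}

print(run_turing_machine("1011", flip))   # -> 0100
```

The poker chips and holes of the text play the part of the tape squares here; what matters is only the bookkeeping, not the material.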
Von Neumann’s digital computer is more familiar than Turing’s logic. It started out by doing a great many Turing-like steps quite quickly. As we know, it never looked back. Serious computers will work less and less like von Neumann’s original as time goes on, for it started, like Turing, with the idea of a single series of events, of switchings or whatever, which would terminate in an answer. It is vastly better to have a lot of processes going on simultaneously, with some ‘communication’ between them, and various elements in a hierarchy for putting their output together. But (as Johnson-Laird says quite often) these inventions in the direction of economy can be represented in Turing’s ideal framework of a computation going on in a single series of steps.
Cognitive science has grown between the logical and the material sides of computer science. Some workers go in for making devices which simulate human achievements. Some of the inventions are playful, little robots that imitate children building towers of blocks. Others are of immediate use, such as machines that can scan text and read it aloud to blind people. Other workers are more concerned with logic, setting out the formal steps that a processor must undertake in order to simulate some human kind of achievement. Just as Turing and von Neumann did not have mutually exclusive interests, nor do cognitive scientists. Johnson-Laird makes hardware, but his heart is in the logic, in the form of the series or hierarchy of steps that lead to achievement. In short, he cares about programs.
‘Perhaps the mind stands to the brain in much the same way that the program stands to the computer.’ On the dust-jacket of The Computer and the Mind there is a drawing of a brain against a backdrop of arrays of digital equations (0 = 1+1 etc) which could suggest bits of a computer program. If we decode, it seems that the hidden title of the book should be ‘The Mind and the Computer Program, the Brain and the Computer’. The brain, the computer, the poker chips in the sand – these are all material embodiments that can be used by programs to work in their own biochemical, electrical or mechanical ways. Programs are abstract objects. We represent them by one or another symbol system. We use them to get a computer to behave in certain ways. I said that comparing the mind to a computer was a non-starter. But there is nothing absurd in starting to compare the mind to a set of programs. There is nothing ghostly about a program, Johnson-Laird assures us, alluding to Ryle’s joke about the mind being a ghost in the machine of post-Cartesian philosophy (Ryle himself is never mentioned in this book).
Well, we seem to be getting on. Here is a conjecture, perhaps a discovery. We are perhaps finding out that the mind is something like a set of programs, and the next step is to unveil the programs. Here it sounds as if there is a fact of the matter. Western thinkers have for ever been pondering the question: what is the mind? They also wondered: what is matter? We’ve found out long answers to that one, and are still finding out. Have the cognitive scientists finally found out what mind is? I don’t think so. I don’t believe that there is a fact of the matter, for I don’t think that the mind is anything at all, unless we make it up.
I mean that literally. There seems not the slightest need for cognitive scientists to talk about the mind, but they do. Some do so out of philosophical conviction: Chomsky, for example, holds firm and vivid versions of traditional doctrines of the mind and calls himself Cartesian in spirit. Johnson-Laird is more typical of cognitive scientists, disavowing old doctrine, but still curiously driven not only to include the mind in his title, but also increasingly to bring it to the forefront of his writing. By the middle of the work we are given a paradoxical observation, and are told that ‘the design of the mind may account for the paradox.’ Towards the end we have fully-fledged ‘parallel processing and the architecture of the mind’. Throughout the book he builds up more and more shreds of knowledge which are centred on what increasingly is called the ‘mind’. The mind is indeed the topic of this book – but not the mind as some entity given in thought, and to which we referred before cognitive science came into being. Instead, the science is constructing its own object, and the mind is that around which all the specific knowledge accretes. And what is this object of knowledge? An abstract, formal set of programs which when used achieves various ends and which may modify itself. It is the program of an achievement machine.
Now there are Luddites among us who insist that there is more to the mind than cognitive science can ever examine. That is not my point, nor would I ever say it. It commits the same error as those who would say that cognitive science has discovered that the mind is like a set of programs. Before cognitive science there wasn’t a mind that was or wasn’t a set of programs. The example of cognitive science lets us watch the very construction of a body of knowledge, by which I mean not merely constructing propositions but also constructing what the propositions are true of. It is the simultaneous construction of a body of knowledge and of its object domain. Our entire language gets moulded accordingly. It happens at the highbrow level. The talk of cognitive scientists is refreshingly innocent. They’re finding things out and they’ve absorbed as an unnoticed assumption what they’re finding things out about. Here the Luddites, thinking that they challenge cognitive scientists, instead merely confirm them in their beliefs, for the Luddites say the mind (the very mind which the cognitive scientist is constructing) can’t be captured by computational processes, and the cognitive scientist sees that that’s wrong – not noticing that it’s wrong because he, the cognitive scientist, is the one who is constructing the mind.
This is highbrow talk, however. We should note how in humdrum affairs the language of computation and the language of thought increasingly become melded. Thus the July Consumer Reports (the stodgy American monthly on which Which? was modelled) discusses ‘smart timers’, i.e. automatic light switches. On one of these you press the ‘learn’ button and use the house electricity switches as you regularly would, switching on and off as you need them. On subsequent days the timer ‘will duplicate the pattern it has learned, switching the lamp on and off dutifully at the times you did earlier’. A different smart timer senses the light outdoors, and goes on at dusk; to get the lights to go on at any other time you have to fool it by covering it with a blanket for ten minutes. These are, for most readers of the magazine, no longer live metaphors. That’s what the machine does, literally: it learns, for it is smart. Our conceptualisations associated with the achievement verbs of the mind are being radically shifted at every level, from the theoretical insights of the cognitive scientists to the purveyors of household gadgets. The mind is being made up at the one end by the proliferation of a network of knowledge and at the other end by technological trivia.
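What the ‘learn’ button amounts to can be spelled out in a few lines. The sketch below is my own construction, not anything from Consumer Reports: on the first day the timer merely records when the switch was pressed, and on later days it replays that schedule. Nothing in the box answers to ‘learning’ beyond record-and-replay.

```python
# An illustrative sketch of a 'smart timer': 'learning' is recording a
# schedule of switchings; 'duplicating the pattern' is replaying it.

class SmartTimer:
    def __init__(self):
        self.schedule = []        # (minute_of_day, on_or_off) pairs
        self.learning = False

    def press_learn(self):
        self.learning = True
        self.schedule = []

    def switch(self, minute, on):
        if self.learning:
            self.schedule.append((minute, on))

    def state_at(self, minute):
        """On a later day: was the lamp last switched on or off by now?"""
        state = False
        for m, on in sorted(self.schedule):
            if m <= minute:
                state = on
        return state

timer = SmartTimer()
timer.press_learn()
timer.switch(7 * 60, True)      # lamp switched on at 7 a.m.
timer.switch(23 * 60, False)    # and off at 11 p.m.
print(timer.state_at(12 * 60))  # -> True: 'dutifully' on at noon
```

Whether one calls the `schedule` list a memory, and its replay learning, is exactly the shift in language the paragraph above describes.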
Johnson-Laird covers swiftly and well the terrain on which cognitive science is encamped: perception, learning, memory, the production of bodily movements, deductive and inductive inference, the creation of new structures, the several aspects of communication. There is also a short section on consciousness. The project is to describe in simple terms the computer programs which achieve the results of ‘mental activity’. To discover how X works (X = vision, memory or whatever) ‘we must bear in mind a lesson from computation: we need at least three different levels of explanation. We need a theory of what is computed – what the input to the process is, what has to be recovered from it, and what constraints may guide the process. We need a theory of how the system carries out the computation – that is, a computable theory of the procedure it uses. We need a theory of the underlying neurophysiology (the “hardware” of the nerve cells in which the procedure is embodied)’. In this book we are on the Turing side of things, so that the third element plays a minor role.
For the first element, it is above all the ‘constraints’ which are operative. A computation by an ideal Turing device can take as long as it wants. It early became apparent that in order to bring to an end a computation modelling a mental process, there would have to be a lot of constraints corresponding to knowledge, especially knowledge of what in reality is possible in the immediate environment of the simulated agent. A blank slate of a program is hopeless.
That has long been a commonplace of cognitive science. I found more interesting the ways in which one modifies the original (and still, in the lay mind, the standard) image of a computer remorselessly performing a sequence of tasks in a linear order. Instead, one thinks of a lot of minimally-related steps going on simultaneously. In order to put the different strands together one is led to a hierarchical structure, with something resembling a conscious pilot at the top. It seems hard to escape military or corporate metaphors of chains of command, and hence to call for communication between elements low in the hierarchy so as to avoid ‘bureaucracy’ in the system.
Throughout there is a good deal of attention to real-time computation, although there is an odd tension. Thus Johnson-Laird considers the view that ‘there are only distributed representations in the mind, and that structural rules ... play no causal role in mental life’ (page 191). More on these ‘representations in the mind’ presently. If the rules play no causal role in mental life, and rules or programs are the only way to model the mental life, then there might be no model of the brain. Or, as von Neumann once expressed this fear, the brain might be the simplest model of itself. The fear is unfounded, replies Johnson-Laird. The result of any effective rules can always be computed by a Turing machine, and ‘so in principle they can always give an accurate account’ of the computations in question. Isn’t that to forget that the models should achieve their results in real time?
One of the simplest illustrations of the importance of time in modelling achievements derives from the hard facts of computer memory. The problem for real computers is not to store information but to retrieve it. In Johnson-Laird’s model, mental achievements are broken down into stages, some of which need lots of memory (and hence time) and some of which have low computational power, needing little memory. He gives the pretty example of jazz improvisation. There are chord sequences which for most performers are the legacy of tradition. They carry the piece, within which the improvisation takes place. A ‘plausible conjecture’ is that the latter uses almost no computational power, while ‘a considerable amount of computational power’ has gone into devising the chord sequences.
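The conjecture can be sketched as a toy program. The chord changes and the note-picking rule below are my own choices, not Johnson-Laird’s: the chord sequence is handed down ready-made, costly to devise but free to use, while the improvisation over it needs almost no memory – at each bar the player consults only the current chord.

```python
import random

# An illustrative sketch: tradition supplies the chord sequence; the
# improvising step itself uses almost no computational power.

TWELVE_BAR_BLUES = ["C7"] * 4 + ["F7"] * 2 + ["C7"] * 2 + ["G7", "F7", "C7", "C7"]
CHORD_TONES = {
    "C7": ["C", "E", "G", "Bb"],
    "F7": ["F", "A", "C", "Eb"],
    "G7": ["G", "B", "D", "F"],
}

def improvise(changes, rng):
    # Low computational power: no look-ahead, no stored history -
    # only the current chord is consulted at each bar.
    return [rng.choice(CHORD_TONES[chord]) for chord in changes]

line = improvise(TWELVE_BAR_BLUES, random.Random(0))
print(len(line))   # one note per bar: 12
```

All the memory in the sketch sits in the fixed tables; the improvising loop itself carries nothing from bar to bar, which is the point of the conjecture.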
Cognitive science is up to the minute, yet it has a curious 17th-century taste. It relies heavily on ‘internal representations’ reminiscent of the ‘ideas’ that Locke and his ilk considered essential objects of human thought. I said that in order to work at all, the programs of cognitive science must incorporate knowledge about what the world can be expected to be like. Excellent: but what is said about such representations of the world is perplexing, and not only for the achievements of people.
Take a honey bee that finds food. It does a dance in front of others, who then locate the food. It communicates with its fellows, we say. How does the finder bee work the trick? ‘One bee discovers a source of food, and forms an internal symbolic representation of its direction and distance.’ Internal to what? It looks like a play on words. In a model of the bee’s achievement, the cognitive scientist builds into his program what the programmer calls a representation of the location of the food. That is, there is a string of symbols which 1. the programmer interprets as geography, and which 2. has the effect, when put into hardware, of inducing a simulated bee to produce certain movements. The representation is a representation to the programmer. But when we read that the bee forms an internal symbolic representation, we think of something inside the bee, a something that something else inside the bee is interpreting as geography. It is part of the rhetoric of cognitive science that such talk of internal symbolic representations (internal to what? symbolic to whom?) flows by unquestioned. Thus a short time ago I quoted from page 191, ‘there are only distributed representations in the mind ...’ Wait a minute! begs the reader, on which page exactly did we start getting representations in the mind which were to be part of the ‘mental life’? One chases up the index. The real answer to this question does not, I think, lie in the index, but in my opening quotation.
‘Perhaps the mind stands to the brain in much the same way that the program stands to the computer.’ These representations are not in the mind, at least not in some mind identified independently of cognitive science. They are representations which symbolise certain states of affairs or structures for the programmer. As representations, they are internal only to the program. But then the program is the mind! The ‘mental life’ is not modelled by the program. It becomes the program. The rhetoric of cognitive science then proceeds by hypostatising these ‘internal symbolic representations’ as something inside people – inside their minds, no less. A conceptual entity is transformed into a self-subsistent substance. I repeat: in making such remarks I am not urging the Luddite doctrine that all this is a mistake, on the grounds that it is false of the mind that there are these internal symbolic representations. I am drawing attention to the way in which knowledge and its objects are created in this discipline, whose project is to make up the mind. What’s unexpected is that it is a familiar mind that is being made up – familiar, at any rate, to devotees of the 17th century.