What Computers Can’t Do
Eli Zaretsky
The question of what computers can’t do was posed in 1972 by the philosopher Hubert Dreyfus. Dreyfus’s answer – think creatively – was soon considered an error, but the problem remained. In the New York Times last week, Yuval Harari suggested there is nothing that computers can’t do, as they are learning to mimic humans perfectly. David Brooks was more sanguine in the same paper a few months ago:
AI will probably give us fantastic tools that will help us outsource a lot of our current mental work. At the same time, AI will force us humans to double down on those talents and skills that only humans possess.
But Brooks’s claim is mere assertion, since he never tells us what these ‘talents and skills’ are, nor why they cannot be replicated by machines, as Harari claims.
It is difficult to distinguish human from machine intelligence because we use the same underlying philosophical and psychological understandings of the mind to discuss both. We think of human beings as essentially rational, problem-solving, goal-oriented animals – an idea that long antedates neoliberalism. At the same time, we think of the computer as a problem-solving calculator, though one with access to far more data than an individual person. The main alternative to this paradigm – psychoanalysis – has long been discredited. Nonetheless, I want to propose a psychoanalytic answer to the problem Dreyfus posed. What computers can’t do is free associate.
Free association occurs when the normal functioning of the mind, which is outward-directed, rational, calculative and problem-solving, is suspended, and a different kind of thinking spontaneously arises. This thinking is free association. We all free associate normally, for example in daydreaming, while doing crossword puzzles or when trying to remember where we left something. In such cases we relax into a state of free-floating attention, rather than concentrate on solving a problem. This practice was the original foundation of psychoanalysis.
Free association reflected Freud’s original (and never abandoned) distinction between primary (i.e. unconscious) and secondary (preconscious but not yet conscious) processes. The primary process is the realm of free (undirected) associations; thoughts do not know where they are going; they proceed by association, which results either in condensations or displacements of affect; they contradict each other, more or less along Derridean lines. The secondary process, by contrast, is governed by a grammar or logic; thoughts have direction, meanings can be specified; thinking is essentially calculation. Freud’s discovery was that by suspending the secondary process, and thereby facilitating free association, it was possible to infer the memories, traumatic residues, wish-fulfilling fantasies, parental imagoes and so forth that give shape to our ‘free’ associations.
The early discoveries of psychoanalysis all rested on the primary/secondary distinction. In a dream, a preconscious thought that would disturb sleep (a fear or wish) drifts back into the non-logical, imagistic, associational world of the primary process, where it connects with early fantasies, wish-fulfilments and unconscious imagoes until it makes its way forward again in the form of a wish-fulfilling dream. Poetry too requires the primary process. In ‘On First Looking into Chapman’s Homer’, Keats begins with the secondary process experience of reading Homer, but then descends into archaic memories of a time in childhood when reading seemed like travelling, and the child could imagine himself ‘silent, upon a peak in Darien’. The archaism of the Homeric world converges with the archaism of infancy.
Computers can of course write poems; they can associate, and construct images gleaned from all the world’s literature, but they will never act like a little boy who began reading books when he was very young and imagined himself growing up to become an explorer. Computers have had no infancy, therefore no primary process, and no free associations because they have nothing to free associate to. Computers can solve problems, calculate, increase scientific knowledge or endow us with the powers to act on the world, but they cannot turn inward, become passive and receptive, and discover an inner world, since computers have no inner world to discover.
Since computers perform the same instrumental, problem-solving, goal-oriented functions that we do, the fear that they might someday ‘compete’ with us is not unreasonable. The only edge humans have over computers is access to the unconscious through suspension of rational thought: i.e., free association. But what a paradox! Free association is precisely the capacity that we use less and less. Society today has reduced free association to playing word games or finding lost keys. To understand how this occurred, we need access to the unconscious.
The idea that the mind is a sort of computer goes back to the seventeenth century, to figures such as Hobbes, Descartes and Pascal, but it really gained ground in the mid-twentieth century, with the growth of cybernetics. The basic idea behind cybernetics was to bracket off the questions of subjectivity and interiority that pervaded the Freudian age and to focus instead on prediction and control gained through the gathering of objective, behaviourist information or data. While the cybernetics movement did not survive, a data- or network-based view of the world gained ground. The turning point occurred in the 1980s, following the advances in microprocessor technology that made home computers possible, leading to today’s ubiquitous screens and interfaces, feedback loops and circuits, information cycles and supply chains.
The triumph of the computer was accompanied by the destruction of psychoanalysis. According to the cognitive psychologists, information theorists and computer pioneers of the 1940s and 1950s, psychoanalysis was a pseudoscience. Managed care, which arose in the 1970s, insisted that psychotherapy proceed according to the ‘medical model’, meaning that rationality was taken as the norm and mental ‘problems’ were defined as diseases that could be classified through their symptoms, while ‘cures’ were defined as the reduction of symptoms. While a psychoanalytic profession continued to exist, it adapted to the new regime by turning itself into a problem-solving, service profession, not one oriented to the exploration of the unconscious.
You might think that the turning away from free association toward the medical model occurred through secondary process thinking: i.e., the progress of science or the integration of psychotherapy into the world of computers. This would be wrong, however. The triumph of our contemporary ‘post’-Freudian way of thinking about the mind required the collective mobilisation of vast fantasies, powered by emotions and advanced by the social movements of the 1970s. These fantasies characterised psychoanalysis as a movement that hid child sexual abuse (leading to a wave of false accusations against childcare workers) and as a countermovement to feminism, propagating the idea that women were inferior. While there were grains of truth in these accusations, the much larger truths of the unconscious and the universality of homosexuality and bisexuality were suppressed. The behaviourism of the social movements of the 1970s was reflected in the insistence that the mind was the product of society. As Juliet Mitchell wrote of the movements of her time, for them ‘it all actually happens … there is no other sort of reality than social reality.’
While the discovery of free association dates to the 1890s, Freud later formulated a second way of thinking about the mind, which built on and incorporated that discovery – the ideas of the ego and the id. The ego, as Freud conceived it, was the locus of the secondary process, but its borders with primary process thinking were permeable. The id was the source of the impulses, compulsions and narcissistic fantasies that pervaded the primary process. Freud envisioned an ego that could free associate and thereby maintain a sense of its unconscious environment. In the 1970s, the Freudian ego gave way to egoism in the form of ambition, competitiveness and other corporate values. As was frequently said at the time, in earlier social movements radicals were working for others but now radicals were working for themselves. The neoliberal redefinition of the subject in egoistic terms was a surface phenomenon, however. The embrace of egoism rested on the narcissism that emanated from the id. What Foucault called ‘productive power’ – self-generated and self-managed – required a libidinal basis. Market values, infused by egoism, rested on mass psychological processes.
We seem now to be coming to the end of a centuries-old process. The term ‘artificial intelligence’ was coined in the mid-twentieth century, but the reality of organising society in the form of a series of algorithms goes back to the seventeenth century, with its focus on ‘matter in motion’. What else is the market but a conglomeration of calculating liberal agents? E.P. Thompson demonstrated the importance of the introduction of clock-time to an increasingly regimented – especially self-regimented – society. Alan Trachtenberg did the same for railways. Moishe Postone, building on the work of Georg Lukács, showed how all secondary process thinking in modern society is formulated on the template of the commodity. What Freud adds to this is awareness of the market’s phantastic sub-structure.
Given this history, reflecting on the destruction of psychoanalysis and the triumph of computers in the 1970s, the problem of artificial intelligence is ill-posed. The danger is not that we will someday have to go to war with computers powered by artificial intelligence. The danger is that we will become a species of artificial intelligence ourselves.
Comments
Arguably any colonization, though, is an uneven and incomplete process, and perhaps the metaphor is more apt than it seems. You can deny its influence (we must return to the unadulterated source!) or affirm it (we must conform to what appears most modern!). But what does it mean to be aware of it? Clearly the Silicon Valley types are affected by the "market's phantastic sub-structure" without seemingly being aware of it. How else do we explain why so many of them invest in researching responses to an imminent robot revolution (the coming Singularity!) that they seek both to feverishly advance and to prevent? They, no doubt, would laugh at the idea that their rational calculation was inflected by primary processes, rather than being solely the product of secondary ones. But while the captains of industry champion a neoliberal view of actual workers as entrepreneurial individual contractors, they are plagued by the fear of a classical proletarian revolt by the intelligent machines.
The two dangers signaled at the end of this blog – I agree, one more false, one more real – are then perhaps not so separate. Those who are investing most in "us" becoming a species of artificial intelligence are also those most worried about a war with the computers that will overrun us. If the end of this process of the mind-as-computer is not the end of the unconscious but just its more effective concealment, what does it mean, then, that the two processes coincide in this way to make it seem that the primary is a thing of the past? Is the aim, then, mainly an awareness of its continued influence, or different articulations of the two processes?
Human language is nothing like these, and how we acquire and use it is largely a mystery. Is it a relevant question to ask whether natural language, which we use along with visualization to free associate, being so different from computer languages, is the thing that keeps machines from being creative?
Steve Merlan
Santa Fe
The idea that there's only the rational and irrational seems mistaken to me. I think reason is also about reflection on what our goals are and should be (second-order preferences, if you like). Also, I think we reason with others (Tomasello). So, yes, thought is turned outwards, but not in the way you seem to be suggesting. And there are other forms of 'inwardness'.
To say that rationality is calculation is already to concede too much (it seems the modern West can only escape that narrowness through the 'irrational' or the subconscious, since there's no such thing as the supra-conscious, or the heart that sees ("qalb"), or love's knowledge). And are there modes of knowing/thinking related to the body and emotions?
Also, Bellow:
"I count on this, not a perfect Cartesian understanding, but an imperfect one, that is Jewish."
Computers seem to be imperfect because they pretend to be perfect (which requires a reductionism of reality). A robot would probably have to be trained in 'imperfection' to learn how to walk! In any case, aren't they fed their 'goals'? (I think so.)
Even if you're an atheist (sad to hear that, btw) I think one can reasonably (here the British 'reasonableness' over abstract Reason is important as a sensibility) recognize other forms of reason than rational calculation. Legal and public reasoning. Reasoning over our goals (Sen would say, "evaluation is not calculation"). And the fides et ratio of the Abrahamic faiths. I don't think one can reasonably ignore the latter (even if one doesn't believe).
The idea that one comes to an understanding with time and experience, reflection or grace, that one thinks with, about and for other people ("M saw D lovingly, not just accurately" (Iris Murdoch)...where to see rightly is to move to a clearer understanding) shows, to my mind at least, that there is a higher alternative to the ones you suggest.
As always, best wishes and salams.
K.
The very ability to re-present things (H. Jonas would say), the dialectic between absence and presence, points to the idea that human beings possess a degree of freedom which allows us both to create beauty and to acknowledge truth. And that capacity, one *might* say (though, admittedly, not necessarily), points to the divine.
A "godless Jew"? Well, Eli, most of us are godless to some extent or the other! Barth (?): "Don't take your unbelief too seriously."
"For, deeper still, in some primal part of us, there is always a vital role for the not-too-perfect in our pleasures. Imperfection is essential to art. In music, the vibrato we love involves not quite landing directly on the note; the rubato singers cultivate involves not quite keeping to the beat. What really moves us in art may be what really moves us in “The 7th Voyage of Sinbad”: the vital sign of a human hand, in all its broken and just-unsteady grace, manipulating its keys, or puppets, and our minds."
– A. Gopnik
I wonder why you assume this? Or flip it: the question is whether computers are "subjects". Or the question is where the boundaries of "computer" intelligence lie. AI, if bounded within a single "computer", might be conceived as a subject with a history of learning (that is, an "infancy"). But that is not how we think of AI. I think it is clear that the "computer" does not bound the AI subject – and in fact we rarely talk about computers anymore, presumably for that reason. And yet the AI subject is not unbounded either – it is not a totalising internet-enveloping hive-mind. What is it? Whatever the answer, I can see no a priori reason why a learning machine could not arrive at subjectivity (memory, history, free association). Isn't this what we fear?
I wonder, though, whether AI is capable of critical theory?
It is all so fascinating. Just as any altruistic human sometimes can, could a computer contemplate a lack of desire?
Thank you for the writing and all the replies, which have further opened at least my mind!
I have read of some scientists (mostly associated with the Santa Fe Institute) who spend considerable time on the 'emergence' of higher-level phenomena from simpler, but loosely connected, 'agents' via 'complex systems'. This has been applied to physics, societies and, I think, even psychology (Han L. J. van der Maas).
Perhaps they will re-discover your territory!