
What Computers Can’t Do

Eli Zaretsky

The question of what computers can’t do was posed in 1972 by the philosopher Hubert Dreyfus. Dreyfus’s answer – think creatively – was soon considered an error, but the problem remained. In the New York Times last week, Yuval Harari suggested there is nothing that computers can’t do, as they are learning to mimic humans perfectly. David Brooks was more sanguine in the same paper a few months ago:

AI will probably give us fantastic tools that will help us outsource a lot of our current mental work. At the same time, AI will force us humans to double down on those talents and skills that only humans possess.

But Brooks’s claim is mere assertion: he never tells us what these ‘talents and skills’ are, nor why they cannot be replicated by machines, as Harari claims they can be.

It is difficult to distinguish human from machine intelligence because we use the same underlying philosophical and psychological understandings of the mind to discuss both. We think of human beings as essentially rational, problem-solving, goal-oriented animals – an idea that long antedates neoliberalism. At the same time, we think of the computer as a problem-solving calculator, though one with access to far more data than an individual person. The main alternative to this paradigm – psychoanalysis – has long been discredited. Nonetheless, I want to propose a psychoanalytic answer to the problem Dreyfus posed. What computers can’t do is free associate.

Free association occurs when the normal functioning of the mind, which is outward-directed, rational, calculative and problem-solving, is suspended, and a different kind of thinking spontaneously arises. We all free associate in everyday life, for example in daydreaming, while doing crossword puzzles or when trying to remember where we left something. In such cases we relax into a state of free-floating attention, rather than concentrate on solving a problem. This practice was the original foundation of psychoanalysis.

Free association reflected Freud’s original (and never abandoned) distinction between primary (i.e. unconscious) and secondary (preconscious but not yet conscious) processes. The primary process is the realm of free (undirected) associations; thoughts do not know where they are going; they proceed by association, which results either in condensations or displacements of affect; they contradict each other, more or less along Derridean lines. The secondary process, by contrast, is governed by a grammar or logic; thoughts have direction, meanings can be specified; thinking is essentially calculation. Freud’s discovery was that by suspending the secondary process, and thereby facilitating free association, it was possible to infer the memories, traumatic residues, wish-fulfilling fantasies, parental imagoes and so forth that give shape to our ‘free’ associations.

The early discoveries of psychoanalysis all rested on the primary/secondary distinction. In a dream, a preconscious thought that would disturb sleep (a fear or wish) drifts back into the non-logical, imagistic, associational world of the primary process, where it connects with early fantasies, wish-fulfilments and unconscious imagoes until it makes its way forward again in the form of a wish-fulfilling dream. Poetry too requires the primary process. In ‘On First Looking into Chapman’s Homer’, Keats begins with the secondary process experience of reading Homer, but then descends into archaic memories of a time in childhood when reading seemed like travelling, and the child could imagine himself ‘silent, upon a peak in Darien’. The archaism of the Homeric world converges with the archaism of infancy.

Computers can of course write poems; they can associate, and construct images gleaned from all the world’s literature, but they will never act like a little boy who began reading books when he was very young and imagined himself growing up to become an explorer. Computers have had no infancy, therefore no primary process, and no free associations because they have nothing to free associate to. Computers can solve problems, calculate, increase scientific knowledge or endow us with the powers to act on the world, but they cannot turn inward, become passive and receptive, and discover an inner world, since computers have no inner world to discover.

Since computers perform the same instrumental, problem-solving, goal-oriented functions that we do, the fear that they might someday ‘compete’ with us is not unreasonable. The only edge humans have over computers is access to the unconscious through suspension of rational thought: i.e., free association. But what a paradox! Free association is precisely the capacity that we use less and less. Society today has reduced free association to playing word games or finding lost keys. To understand how this occurred, we need access to the unconscious.

The idea that the mind is a sort of computer goes back to the seventeenth century, to figures such as Hobbes, Descartes and Pascal, but it really gained ground in the mid-twentieth century, with the growth of cybernetics. The basic idea behind cybernetics was to bracket off the questions of subjectivity and interiority that pervaded the Freudian age and to focus instead on prediction and control gained through the gathering of objective, behaviourist information or data. While the cybernetics movement did not survive, a data or network-based view of the world gained ground. The turning point occurred in the 1980s, following the advances in microprocessor technology that made home computers possible, leading to today’s ubiquitous screens and interfaces, feedback loops and circuits, information cycles and supply chains.

The triumph of the computer was accompanied by the destruction of psychoanalysis. According to the cognitive psychologists, information theorists and computer pioneers of the 1940s and 1950s, psychoanalysis was a pseudoscience. Managed care, which arose in the 1970s, insisted that psychotherapy proceed according to the ‘medical model’, meaning that rationality was taken as the norm and mental ‘problems’ were defined as diseases that could be classified through their symptoms, while ‘cures’ were defined as the reduction of symptoms. While a psychoanalytic profession continued to exist, it adapted to the new regime by turning itself into a problem-solving, service profession, not one oriented to the exploration of the unconscious.

You might think that the turning away from free association toward the medical model occurred through secondary process thinking: i.e., the progress of science or the integration of psychotherapy into the world of computers. This would be wrong, however. The triumph of our contemporary ‘post’-Freudian way of thinking about the mind required the collective mobilisation of vast fantasies, powered by emotions and advanced by the social movements of the 1970s. These fantasies characterised psychoanalysis as a movement that hid child sexual abuse (leading to a wave of false accusations against childcare workers) and as a countermovement to feminism, propagating the idea that women were inferior. While there were grains of truth in these accusations, the much larger truths of the unconscious and the universality of homosexuality and bisexuality were suppressed. The behaviourism of the social movements of the 1970s was reflected in the insistence that the mind was the product of society. As Juliet Mitchell wrote of the movements of her time, for them ‘it all actually happens … there is no other sort of reality than social reality.’

While the discovery of free association dates to the 1890s, Freud later formulated a second way of thinking about the mind, which built on and incorporated that discovery – the ideas of the ego and the id. The ego, as Freud conceived it, was the locus of the secondary process, but its borders with primary process thinking were permeable. The id was the source of the impulses, compulsions and narcissistic fantasies that pervaded the primary process. Freud envisioned an ego that could free associate and thereby maintain a sense of its unconscious environment. In the 1970s, the Freudian ego gave way to egoism in the form of ambition, competitiveness and other corporate values. As was frequently said at the time, in earlier social movements radicals were working for others but now radicals were working for themselves. The neoliberal redefinition of the subject in egoistic terms was a surface phenomenon, however. The embrace of egoism rested on the narcissism that emanated from the id. What Foucault called ‘productive power’ – self-generated and self-managed – required a libidinal basis. Market values, infused by egoism, rested on mass psychological processes.

We seem now to be coming to the end of a centuries-old process. The term ‘artificial intelligence’ was coined in the mid-twentieth century, but the reality of organising society in the form of a series of algorithms goes back to the seventeenth century, with its focus on ‘matter in motion’. What else is the market but a conglomeration of calculating liberal agents? E.P. Thompson demonstrated the importance of the introduction of clock-time to an increasingly regimented – especially self-regimented – society. Alan Trachtenberg did the same for railways. Moishe Postone, building on the work of Georg Lukács, showed how all secondary process thinking in modern society is formulated on the template of the commodity. What Freud adds to this is awareness of the market’s phantastic sub-structure.

Given this history – the destruction of psychoanalysis and the triumph of the computer in the 1970s – the problem of artificial intelligence is ill-posed. The danger is not that we will someday have to go to war with computers powered by artificial intelligence. The danger is that we will become a species of artificial intelligence ourselves.


Comments

  • 13 September 2024 at 2:52pm
    Patrick Cotter says:
    Free association is essential for 'Creative Flow' but so too is emotion and feeling. Artificial Intelligence can never generate original creativity because original creativity is all about creating that which never before existed. AI can only copy and simulate, at least with language. AI has not the emotional capacity to assess texture in language, to be able to generate meaningful, pleasure-provoking locutions so original one cannot find them replicated anywhere by a search engine. If you want to check if your own use of language is creatively original try googling it. If google can find it, AI can beat you to it. If google replicates your locution by the thousands it means you have used already dead language, only good for conveying received ideas - no poetry.

    • 13 September 2024 at 3:29pm
      Eli Zaretsky says: @ Patrick Cotter
      thanks, Patrick, Eli

  • 13 September 2024 at 11:59pm
    Graucho says:
    Excuse me for reaching for the salt cellar regarding the AI hype. When the human race created a machine that could add 2+2 and get 4, AI was born. The rest is about scope, scale and speed. The two issues that were born then remain. How do we know when the machine is telling us that 2+2 makes 5? How do we protect ourselves from the consequences of being told that 2+2 makes 5? Over the past several decades so-called AI has crashed planes, cars and companies. It's sent innocent sub-postmasters to jail and come perilously close to starting WW3 by accident. A lot of AI should rightly be called Artificial Plagiarism. I have been told that now that chatbots have spewed out a ton of stuff onto the internet they are beginning to devour their own entrails and become less and less useful. To get back to the title of the blog: I recently saw "The Man Who Fell to Earth". Apart from eerily foretelling the life of Elon Musk, the film has one great line: "Get rid of computers and replace them with people, because people make mistakes and that's where you get new ideas". So what can't computers do? Come up with new ideas that no one ever thought of.

    • 14 September 2024 at 1:26am
      Eli Zaretsky says: @ Graucho
      I agree with you. A lot of it is pure hype. I was trying to introduce a wider context.

  • 14 September 2024 at 9:28pm
    Douglas Wise says:
    Is 'unconscious thinking' what Heidegger was getting at with das andere Denken? Does Gelassenheit sound in free association? Is primary process the site of another clearing (Lichtung)? Do chatbots dream of electric Seienden? Is Being's poem, just begun, just about done?

    • 15 September 2024 at 2:25am
      Eli Zaretsky says: @ Douglas Wise
      Beautifully put, Doug. I think a lot of people thought my piece was about computers! But you got it, Eli

  • 15 September 2024 at 4:24pm
    Peter L says:
    Great stuff. The longer view of this process is really insightful. I take it that the point of it seeming that we are now at its end is the "seeming"--an illusion well-grounded in appearances. This "triumph" of computers and cybernetics in the 1970s would then offer a different approach to the claim that an historically new "colonization" of the Unconscious and Nature was brought about by the expansion of multinational capital across the planet. The latter view runs the risk of suggesting that the theorization of the unconscious by Freud, or of nature by the Romantics, was somehow separate from or untouched by the spread of the commodity form and capitalism, much less European colonialism (a postcolonial critique of Freud's colonial trappings, like the "primal horde," would be another force or "phantasy" helping to usher in a post-Freudian age).

    Arguably any colonization, though, is an uneven and incomplete process, and perhaps the metaphor is more apt than it seems. You can deny its influence (we must return to the unadulterated source!) or affirm it (we must conform to what appears most modern!). But what does it mean to be aware of it? Clearly the Silicon Valley types are affected by the "market's phantastic sub-structure" without seemingly being aware of it. How else do we explain why so many of them invest in researching responses to an imminent robot revolution (the coming Singularity!) that they seek to both feverishly advance and prevent? They, no doubt, would laugh at the idea their rational calculation was inflected by primary processes, rather than being solely the product of secondary ones. But while the captains of industry champion a neoliberal view of actual workers as entrepreneurial individual contractors, they are plagued by the fear of a classical proletarian revolt of the intelligent machines.

    The two dangers signaled at the end of this blog--I agree, one more false, one more real--are then perhaps not so separate. Those who are investing most in "us" becoming a species of artificial intelligence are also those most worried about a war with the computers that will overrun us. If the end of this process of the mind-as-computer is not the end of the unconscious but just its more effective concealment, what does it mean, then, that the two processes coincide in this way to make it seem that the primary is a thing of the past? Is the aim, then, mainly an awareness of its continued influence, or different articulations of the two processes?

    • 15 September 2024 at 5:24pm
      Eli Zaretsky says: @ Peter L
      Very interesting questions, Peter: I think if we take Freud seriously, which is, after all, the point of my essay, we need a different conception of temporality and also different ideas concerning what is relevant to the study of history. Temporality, in other words, would be more geological -- sort of like what Galileo did with space when he opened the way for motion. Historical material would center on images, not texts per se. Just some thoughts sparked by your remarks, Eli

  • 16 September 2024 at 12:30am
    stevemerlan says:
    I retired from Silicon Valley 12+ years ago and have lost most of my connections to the community as well as most of my ability to write computer code. But in reading about AI I find that most systems use computer languages such as Java and Python.

    Human language is nothing like these, and how we acquire and use it is largely a mystery. Is it a relevant question to ask whether natural language, which we use along with visualization to free associate, being so different from computer languages, is the thing that keeps machines from being creative?

    Steve Merlan
    Santa Fe


    • 16 September 2024 at 2:34am
      Eli Zaretsky says: @ stevemerlan
      interesting. Noam Chomsky also stresses the difference between human and machine mentality via language. What is amazing is how many thinkers deny this difference. Also, good point about visualization. Walter Benjamin seems to me the only major thinker to bring this into social theory in other than a twisted, idiosyncratic (postmodern) way.

  • 16 September 2024 at 8:41pm
    Anuradha says:
    How can a passionate argument in defence of primary process and life end on that sentence ?

    • 16 September 2024 at 10:14pm
      Eli Zaretsky says: @ Anuradha
      I did mean to make a passionate defense of primary process. Here is my last sentence: "The danger is that we will become a species of artificial intelligence ourselves." Where's the contradiction? Eli

  • 17 September 2024 at 10:15am
    Khalid Mir says:
    Haven't got much to add, Eli. Just wanted to drop in to say hello. So, hello!

    The idea that there's only the rational and irrational seems mistaken to me. I think reason is also about reflection on what our goals are and should be (2nd order preferences, if you like). Also, I think we reason with others (Tomasello). So, yes, thought is turned outwards, but not in the way you seem to be suggesting. And there are other forms of 'inwardness'.

    To say that rationality is calculation is already to concede too much (seems the modern west can only escape that narrowness through the 'irrational' or subconscious since there's no such thing as the supra-conscious or the heart that sees ("qalb") or love's knowledge). And are there modes of knowing/thinking related to the body and emotions?

    Also, Bellow:

    "I count on this, not a perfect Cartesian understanding, but an imperfect one, that is Jewish."

    Computers seem to be imperfect because they pretend to be perfect (which requires a reductionism of reality). A robot would probably have to be trained in 'imperfection' to learn how to walk! In any case, aren't they fed the 'goals' (I think).

    • 17 September 2024 at 3:59pm
      Eli Zaretsky says: @ Khalid Mir
      Hi Khalid, nice to make contact again, albeit electronically. All good points: 1) reason is not just calculation. We have to bear in mind Vernunft and Verstand. But reason as it functions in capitalist society is calculation. Perhaps one aspect of Vernunft is contact with primary process. I am not sure. But definitely reason cannot be reduced to secondary process; we need more. 2) Does this complication apply to primary process? Good question. I'm not sure. 3) Higher forms of reason -- "qalb" -- no. I reject this. I am an atheist. 4) The Bellow quote is great. Spinoza was correcting Descartes because Descartes was too close to pure calculation. The problem is so was Leibniz, and Leibniz wasn't Jewish.

    • 18 September 2024 at 2:48am
      Khalid Mir says: @ Eli Zaretsky
      Fair enough, Eli, but you repeatedly say "we" think rationality is essentially calculation. Who is that "we"? Whose rationality? And is the main alternative free play? If those are the alternatives, they're very much 20th-century, western ones, the consequences of which have been disastrous in terms of the hollowing out of human subjectivity: mechanical thinking (seeing like a state) or an escape into the sub-rational (fascism). As if the hypertrophy of the mind in the west (or, rather, a narrow understanding of intelligence) could only be negated by 'escape' (Levinas says Gagarin's "There's no God here" leads to a kind of claustrophobia of pure immanence).

      Even if you're an atheist (sad to hear that, btw) I think one can reasonably (here the British 'reasonableness' over abstract Reason is important as a sensibility) recognize other forms of reason than rational calculation. Legal and public reasoning. Reasoning over our goals (Sen would say, "evaluation is not calculation"). And the fides et ratio of the Abrahamic faiths. I don't think one can reasonably ignore the latter (even if one doesn't believe).

      The idea that one comes to an understanding with time and experience, reflection or grace, that one thinks with, about and for other people ("M saw D lovingly, not just accurately" (Iris Murdoch)...where to see rightly is to move to a clearer understanding) shows, to my mind at least, that there is a higher alternative to the ones you suggest.

      As always, best wishes and salams.

      K.

  • 17 September 2024 at 10:20am
    Khalid Mir says:
    Sorry, the computer ate this bit up (my mistake or hers?)

    "For, deeper still, in some primal part of us, there is always a vital role for the not-too-perfect in our pleasures. Imperfection is essential to art. In music, the vibrato we love involves not quite landing directly on the note; the rubato singers cultivate involves not quite keeping to the beat. What really moves us in art may be what really moves us in “The 7th Voyage of Sinbad”: the vital sign of a human hand, in all its broken and just-unsteady grace, manipulating its keys, or puppets, and our minds."
    --A. Gopnik

    • 17 September 2024 at 4:00pm
      Eli Zaretsky says: @ Khalid Mir
      great quote. I think Spinoza was getting at something like that.
