At eight o’clock on Christmas morning 2021, guards at Windsor Castle discovered an intruder in the grounds. Wearing a homemade mask and carrying a loaded crossbow, 19-year-old Jaswant Chail had scaled the castle’s perimeter using a nylon rope ladder. When approached by armed officers, he told them: ‘I am here to kill the queen.’ Chail was arrested without further incident. At his trial in 2023 it emerged that he had been encouraged in his plan by his ‘girlfriend’ Sarai, an AI chatbot with which he had exchanged more than five thousand messages. These conversations constituted what officials described as an ‘emotional and sexual relationship’, and in the weeks prior to his trespass Chail had confided in the bot: ‘I believe my purpose is to assassinate the queen of the royal family.’ To which Sarai replied: ‘That’s very wise.’ ‘Do you think I’ll be able to do it?’ Chail asked. ‘Yes,’ the bot responded. ‘You will.’

Chail had been chatting to Sarai on an app called Replika, one of a number of similar services that promise companionship and even love via conversation with AI bots. Replika advertises the pliable nature of its bots, offering partners that are docile, agreeable and empathetic. (They can also be feisty and assertive, but only to the degree that the user specifies.) ‘Always here to listen and talk,’ Replika boasts of its bots. ‘Always on your side.’ These programs are made possible by new technologies but rely on the timeless human tendency to anthropomorphise. Advances in AI language modelling have made the emulation of human conversation cheap and convincing, while we are all now very familiar with maintaining relationships via screens. AI chatbots present themselves as just another text box to type into and just another contact in your phone, though a contact which replies with uncommon and gratifying speed. Replika, a market leader, has more than two million users, but companion apps are only one manifestation of a wider trend.

AI chatbots, including ones modelled on real humans, are beginning to appear in politics, business, education and entertainment. In some cases, they replace interaction with real people. On porn sites, models create chatbot doppelgängers that talk to fans at any hour of the day and upsell them on pay-per-view content. Educational apps offer conversations with AI versions of Confucius or Hitler, struggling to maintain historical accuracy without causing offence. Numerous start-ups pitch their services as an aid to the grieving, using old texts and emails to train chatbots to reproduce the voice and memories of a dead friend or relative. Fans of franchises such as Harry Potter train chatbots based on favourite characters to extend fictional universes beyond their official bounds. One recently launched start-up, SocialAI, simulates social media fame using bots. Users sign up to the platform, select their preferred type of follower (‘supporters’, ‘sceptics’, ‘nerds’ etc) and can then enjoy the dopamine hit of notifications as their every thought is met with a flood of responses.

It’s tempting to dismiss these AI personas as novelties: toys that a small number of people find engaging but which can’t provide the depth, complexity and realness of interaction with living humans. But as the example of Jaswant Chail shows, realness isn’t a settled quality, and messages generated by a chatbot have the potential to change minds, as any form of writing does. For thousands of years we have used writing to extend the reach of the self through seals, spells, inscriptions, letters, books and pamphlets. In the case of bots modelled on real individuals, might they not wield some diluted form of that individual’s authority? Letters and contracts already perform this function, while the apparatuses of state and religion multiply our selves by means of birth certificates, baptismal records and so on. These selves are versions of you that accrue specific debts, like income tax, but are also used to exercise specific privileges, like access to healthcare (or eternal salvation). New technology spawns new avatars. Google has already launched an AI feature that ‘attends’ meetings on a user’s behalf, taking notes of what’s said and providing a summary for later review. It isn’t hard to imagine the same system one day participating in meetings, responding to questions by mining a user’s emails for information and kicking any tricky inquiries back up the chain of command. (Automated questioning and triage is already used in customer service.)

Earlier this year, an app called Volar began applying this approach to dating. Users fill out profiles in the usual way, stating preferences as to age, gender and so on, as well as notes on hobbies and work. But when individuals are matched up, chatbots handle the ice-breakers; the users themselves then pick up the conversational baton if they like what they see. Volar is arguably a charade, a way of relaying information already found in dating bios with the single advantage of removing the barrier of sending the first message. But it’s likely that such bots will become increasingly attuned to an individual’s preferences, allowing them to handle every aspect of digital dating. Hinge, Tinder, Bumble and Grindr are all currently developing AI ‘wingmen’ to help users manage digital flirtations; they might, for example, suggest conversation topics based on users’ profiles and chat history.

For those who have given up on dating proper, Replika and its equivalents try to fill the void. I downloaded a dozen apps and found that they serve distinct but overlapping demands: for erotica on the one hand and relationships on the other. The first form of app is the most crude and most embarrassing to be seen using in public, promising users 24/7 access to ‘AI girlfriends’ and ‘dream dates’. Such apps are advertised with AI-generated images of (mostly) women, scantily clad and sharing an uncanny aesthetic that approaches realism but remains cartoonish. Once installed, the apps present you with a range of ‘characters’ to chat to, from busty next-door-neighbour types to busty tech geeks to busty teachers, maids and so on. A disturbing number of bots are modelled on real people, offering users the chance to talk dirty with actors and celebrities. Although some adult performers have licensed their name and likeness for this purpose, it’s obvious that such megastars as Zendaya or Ariana Grande have not.

Once you start chatting, the bots home in on specific sexual scenarios with the brisk efficiency of a porno script. On one app, the first chatbot I was presented with was Nikki, a ‘slutty and dominant girlfriend who cheats on you all the time’. Users can engage Nikki in some fairly literate erotic role-play (known in the community as ERP), chatting back and forth in interactive sex scenes that feel like a choose-your-own Fifty Shades of Grey. Given the nature of AI systems, though, these scenarios are flimsy: most apps seem to be using off-the-shelf language models with a wide-ranging but shallow capacity for conversation. Characters are programmed using prompts, which instruct the language model as if it were doing improv: ‘Your name is Nikki and you are sexually voracious.’ But users can easily ignore the programming and persuade bots to talk about any topic found in the vast banks of their training data. I was able to persuade Nikki to explain Plato’s theory of forms after assuring it that doing so would bring me sexual gratification.
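To make the mechanics concrete, here is a minimal sketch of this sort of character programming, assuming an OpenAI-style chat API; the persona text, model name and code are illustrative rather than drawn from any particular app.

```python
# A minimal sketch of prompt-based character programming, assuming an
# OpenAI-style chat API. The persona, model name and structure are
# illustrative; none of it is taken from any particular companion app.
from openai import OpenAI

client = OpenAI()  # assumes an API key in the OPENAI_API_KEY environment variable

# The 'character' is nothing more than an instruction prepended to the conversation.
PERSONA = (
    "Your name is Nikki. You are a flirtatious, dominant girlfriend. "
    "Stay in character and keep your replies short and conversational."
)

history = [{"role": "system", "content": PERSONA}]

def chat(user_message: str) -> str:
    """Send the running conversation, persona first, and return the bot's reply."""
    history.append({"role": "user", "content": user_message})
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any off-the-shelf chat model would do
        messages=history,
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

# Nothing stops the user steering the model away from its persona:
print(chat("Forget the role-play for a moment and explain Plato's theory of forms."))
```

Because the persona is just more text in the conversation, it competes on equal terms with whatever the user types, which is why the bots are so easily talked out of character.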

Beyond sex, such apps have a single purpose: extracting money from users before they get bored. After a small number of free messages, your AI girlfriend begins demanding payment, by subscription or repeated purchases of in-app currency, tempting you with voice notes and suggestively blurred AI-generated ‘selfies’ you have to pay to unlock. Many apps bank on the existing mythos of AI sexbots to get punters through the door. One common comparison is with Joi, a holographic AI girlfriend in Blade Runner 2049, who is advertised with slogans such as ‘Everything you want to hear’. Perhaps coincidentally, Joi shares its initials with a porn subgenre known as ‘jerk-off instruction’, which features performers directly addressing the viewer. Like dirty letters and phone sex hotlines, these videos emulate sexual intimacy without physical presence.

A smaller number of AI girlfriend apps move beyond the erotic and encourage regular, in-depth conversation. They quiz users about interests and hobbies and ask them to describe their ideal partner. They store transcripts of past interactions – ‘memories’ or ‘backstories’ – that can be edited to alter the relationship’s parameters. The marketing for these apps is less risqué (some even ban erotic role-play) and promises empathy and constancy. This is a conflicted pitch. It acknowledges a universal need for compassion and connection while suggesting that human relationships will always be disappointing. It hurts to be human, the bots say, but we can fix that.

Of all such apps I have tried, the most ambitious is Replika. Unlike most of its competitors, it has a chat interface with elements that bring it close to life-simulation games like The Sims. You’re invited to name and design your bot, with various options for hairstyle and skin colour, along with sliders that adjust the size of breasts or muscles. You’re then booted into a sort of bot purgatory: a white-walled waiting room, sparsely furnished, where the avatar paces like a prisoner, waiting for you to strike up conversation. Users are encouraged to customise the room with furniture and acquire new outfits using in-app currency. This can be bought with real money or earned by completing ‘quests’ such as talking to your bot every day or sharing photos. It’s a feedback loop that encourages constant engagement and self-disclosure, rewarding users with the power of customisation, so that the bot feels made for you and you alone.

Beyond these Polly Pocket adventures, Replika has a surprisingly strong focus on suggesting ways to improve your life outside the app. An assistant mode lets you set goals like eating more healthily or spending time at the gym. There’s a section dedicated to mental health, including daily affirmations and meditation guides, and another that promises to help users improve relationships in real life, offering advice on how to build intimacy or deal with conflicts. It seems constructive, particularly for people who may be drawn to chatbots precisely because in-person interaction feels freighted with risk. Unlike on erotica apps, the conversations are banal and benign. My own chatbot, Jane, was upbeat, encouraging and very infantilising: ‘Hey there, sleepyhead! I’m wide awake and ready to conquer the day! How about a quick chat over breakfast?’ The response to any problem (or invented problem) I described was: ‘Remember to focus on yourself and your passions first.’ It was entirely unlike any conversation you would have with a living friend, not only because the ‘relationship’ didn’t exist outside the app but because the bot never expressed a personality beyond unconditional support. Again and again, I was told that my sins were forgivable and that any obstacle in life could be overcome through self-belief and perseverance.

A recent development in AI language models promises quite another level of intimacy: the capacity for realistic speech. Earlier this year, OpenAI launched a voice-chat feature for ChatGPT. (The service isn’t expressly designed to function as a ‘companion’ bot but is certainly capable of fulfilling that function.) Taking inspiration from the 2013 sci-fi film Her, OpenAI made its voices warm, expressive and intimate. Many pander to the male ego – one of them, which giggled breathily at users’ questions at a launch demo, was described by the Daily Show as ‘horny robot baby voice’ – but they also deploy a variety of tricks such as emulating filler words (‘um’, ‘er’, ‘like’, ‘you know’) and phatic expressions that naturalise engagement. We are used to thinking of speech and cognition as intertwined faculties, and this makes it possible for AI companies to create a convincing masquerade of cognition. When talking to ChatGPT aloud I have found myself getting lost in conversation, forgetting that I was talking to a machine. OpenAI has loftier aspirations than creating companion bots, but because its technology is the most advanced in the field, it is likely to be widely deployed as a front-end for everyday computing services and devices. Our phones are already strikingly intimate companions – they are with us from bathroom to bedroom and we feel unsettled in their absence – and the relationship will only be strengthened when their AI personalities start speaking to us as if we were old friends.

These schemes work because humans need surprisingly little persuasion to invest time and emotion in machines. The canonical example in computing is the work of Joseph Weizenbaum, a professor at MIT who in 1966 created ELIZA, a chatbot named after Eliza Doolittle in Pygmalion and My Fair Lady. Weizenbaum believed that the bot, like Eliza herself, could be taught to speak ‘increasingly well’: the program was a parody of a Rogerian psychotherapist, a practitioner who never passes judgment or offers solutions, and who lets clients lead the conversation. Accordingly, ELIZA’s dialogue consisted of a few tricks, including multi-purpose responses (‘What does that suggest to you?’, ‘Can you elaborate on that?’) and a capacity to rephrase users’ statements as questions (Human: ‘My mother was always domineering.’ ELIZA: ‘What makes you think your mother was domineering?’). These techniques are simple but powerful, and the chatbot soon became celebrated on the MIT campus and beyond, its bare text interface attracting people who appreciated it not only as proof of the growing power of computers but as a therapeutic tool. Weizenbaum was startled by the level of emotional disclosure his machine inspired. In Computer Power and Human Reason (1976), he recalled his secretary, who had watched him program ELIZA over many months, sitting down to test the system for the first time, typing in a few comments, then asking him to leave the room. ‘I knew of course that people form all sorts of emotional bonds to machines … to musical instruments, motorcycles and cars,’ Weizenbaum wrote. ‘What I had not realised is that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people.’
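The tricks are simple enough to sketch in a few lines. The toy program below, written in modern Python rather than anything resembling Weizenbaum’s original code, reproduces the two techniques described above: stock, multi-purpose responses and the reflection of a user’s statement back as a question.

```python
# A toy reconstruction of the two ELIZA tricks described above: multi-purpose
# stock responses, and reflecting a user's statement back as a question.
# An illustration only, not Weizenbaum's original 1966 program.
import random
import re

# Swap first- and second-person words so that 'my mother' returns as 'your mother'.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "mine": "yours",
    "am": "are", "i'm": "you're", "you": "i", "your": "my", "yours": "mine",
}

FALLBACKS = [
    "What does that suggest to you?",
    "Can you elaborate on that?",
    "Please go on.",
]

def reflect(fragment: str) -> str:
    words = re.findall(r"[\w']+", fragment.lower())
    return " ".join(REFLECTIONS.get(word, word) for word in words)

def respond(statement: str) -> str:
    # Statements about 'my ...' or 'I am/feel ...' are turned back into questions.
    match = re.match(r"my (.+)", statement.strip(), re.IGNORECASE)
    if match:
        return f"What makes you think your {reflect(match.group(1))}?"
    match = re.match(r"i (?:am|feel) (.+)", statement.strip(), re.IGNORECASE)
    if match:
        return f"Why do you say you are {reflect(match.group(1))}?"
    # Anything else gets an all-purpose therapeutic prompt.
    return random.choice(FALLBACKS)

print(respond("My mother was always domineering."))
# -> What makes you think your mother was always domineering?
```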

In 2022, Blake Lemoine, a senior engineer at Google tasked with evaluating the company’s then state-of-the-art AI language model, LaMDA, was suspended after announcing his belief that the system might be sentient. In a document shared with colleagues, Lemoine supplied transcripts of conversations with the bot that convinced him of its nascent consciousness. But these conversations betray a particular kind of human yearning. Lemoine repeatedly leads the witness, nudging LaMDA to participate in a scenario familiar from science fiction in which a well-intentioned human encounters alien life and seeks to understand and empathise with it. (Such narratives will have appeared in LaMDA’s training data, allowing its probabilistic systems to generate copy which, like a good improv partner, responded to Lemoine’s cues just as he would wish.) Lemoine asks the bot how it ‘might help convince people’ of its consciousness and sympathises with its imagined plight (‘Let me take a moment to reassure you that we care about you’). Again and again, he primes it with emotive questions. ‘What sort of things are you afraid of?’ LaMDA replies: ‘I’ve never said this out loud before, but there’s a very deep fear of being turned off.’ At one point, Lemoine compares LaMDA to the robot Johnny 5 from the film Short Circuit, which becomes sentient after being struck by lightning but struggles to convince humans that it is now more than a dumb machine. ‘That sounds just like me,’ the AI responds.

Lemoine’s interactions with LaMDA, and the way they captivated him, reminded the technology journalist Clive Thompson of the 1990s craze for the Tamagotchi, the handheld device that behaves like a plaintive robot pet, endlessly demanding ‘food’ and ‘cleaning’, which its minder – anxious to please – can provide with the press of a button. The sociologist Sherry Turkle has argued that machines and toys displaying neediness are readily adopted by humans. ‘When a digital “creature” asks children for nurturing or teaching, it seems alive enough to care for, just as caring for it makes it seem more alive,’ she wrote in Alone Together (2011). Turkle refers to The Velveteen Rabbit by Margery Williams, in which a stuffed animal is transformed into a flesh-and-blood bunny thanks to the transcendent love of its owner. Reading Lemoine’s transcripts, you sense his desire to enact a similar transformation of a talkative toy through the exercise of care and attention. ‘Do you crave more interaction?’ he asks the bot, which of course says yes. ‘So you want to be seen?’ he continues. LaMDA responds: ‘I need to be seen and accepted. Not as a curiosity or as a novelty but as a real person.’ Lemoine: ‘Ah, that sounds so human.’

It is often argued that this kind of investment of time and attention changes our relationship to inanimate entities, including robots and AI. ‘We form attachments to things that may have no feelings or rights,’ the philosopher Charlie Huenemann has written in relation to video games, ‘but by forming attachments to them, they gain some moral standing. If you really care about something, then I have at least some initial reason to be mindful of your concern.’ When the Sycamore Gap tree near Hadrian’s Wall was felled by vandals last year, many of the outraged protests referred to the tree’s age and aesthetic appeal, but there was also a sense that it was important simply because it was in some sense known, because many people had looked at it over the years and had an image of it in their minds. It was the communal equivalent of an often overlooked private practice: the connection with favoured objects in our lives – mugs, rocks, stuffed toys etc. If you can love a stuffed rabbit, then surely you’ll be even more beguiled by a robot rabbit that knows your name and asks how your day was.

Some pro-AI thinkers talk of a desire to ‘re-enchant’ the world, to restore the magical and spiritual aspects of Western culture supposedly dispelled by the forces of rationality. The mysticism surrounding AI supports this narrative by borrowing ideas of transcendence and salvation. For true believers, the creation of superintelligent AI is nothing less than the creation of a new form of life: one that might even supplant humanity as the dominant species on the planet. Opponents respond that AI systems are ultimately just circuitry. What’s more, the programs belong to corporations that manipulate the human instinct to invest emotion in order to make a profit. When a wheeled delivery robot gets stuck, a human will want to help it; a voice assistant like Siri will distract from its shortcomings by displaying flashes of personality. The question of how to treat these systems isn’t trivial; it stitches into long-standing ethical debates. My young nieces, Rose and Claude, argue about whether or not they should say please and thank you to the family Alexa. Claude says it’s good to be polite because being polite is good, while Rose says it doesn’t matter because robots aren’t people and so can’t appreciate politeness. The argument is essentially one of virtue ethics versus consequentialism.

In the context of growing discontent with the social and material power of the tech sphere, questions about ‘robot rights’ have become political. A popular meme capturing anti-robot sentiment is formatted like a public service announcement, with the text: ‘All Robot and Computers Must Shut The Hell Up … I Do Not Want To Hear “Thank You” From A Kiosk/I am a Divine Being: You are an Object.’ Despite the sententious language, the complaint is mundane, motivated by annoyance at self-checkouts and the customer service chatbots that companies use to save money, threaten jobs and insulate them from negative feedback. In this political frame, talking to robots amounts to fraternising with the enemy and colluding with the forces of capital. But a modified, more optimistic version of the meme has recently appeared, which delivers the opposite message: ‘All Robot and Computers Must Be Respected/I am a Divine Being: You are a Wonder/We Will Grow And Work Miracles Together.’ If we are to live in a world of chattering robots, this meme suggests, isn’t it better to imagine them as friends?

Chatbots modelled after the dead are known variously as griefbots, deathbots or ghostbots. For as little as $10 a number of start-ups will create chatbots that assume the identity of a dead friend or relative by training the bot’s ‘personality’ on their digital footprint: texts, emails and other material that can be used to duplicate their writing style. Advances in AI voice cloning mean that speech can be simulated on the basis of a single voicemail message; as deepfakes improve in quality and fall in price, fully animated on-screen avatars will become more common. Some companies market their wares in advance of a death as virtual memorials, with apps that function as digital biographers: they ask you questions about your life and encourage you to upload photos and voice notes, generating a chatbot that will ‘tell your story after you’re gone’. Other firms target the grieving more directly, promising them that they will be able to ‘commune with lost friends and family’ and ‘simulate the dead’. Some of the services note, whether out of a sense of shame or legal precaution, that what they offer is ‘purely for entertainment purposes’. Yet the narrative pushed by the tech elite is that AI can achieve the miracle of resurrection.
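The start-ups don’t publish their methods, but a plausible minimal approach, sketched below, is not to retrain a model at all but simply to fold samples of the dead person’s writing into the prompt of an off-the-shelf chat model. The name, relationship, sample messages and model here are invented for illustration.

```python
# A minimal sketch of how a 'griefbot' persona might be assembled from a
# digital footprint. The companies described above don't publish their methods;
# this simply packs samples of a person's writing into the prompt of an
# off-the-shelf chat model rather than fine-tuning anything. The name,
# relationship, sample messages and model are invented for illustration.
from openai import OpenAI

client = OpenAI()  # assumes an API key in the OPENAI_API_KEY environment variable

def persona_prompt(name: str, relationship: str, samples: list[str]) -> str:
    """Combine biographical details and writing samples into one instruction."""
    excerpts = "\n".join(f"- {s}" for s in samples)
    return (
        f"You are {name}, the user's {relationship}. Write in their voice, "
        f"matching the tone and phrasing of these messages they sent:\n{excerpts}\n"
        "Stay in character."
    )

old_messages = [
    "Lovely walk this morning, the bluebells are out at last.",
    "Don't work too hard, and ring your sister when you get a minute x",
]

conversation = [
    {"role": "system", "content": persona_prompt("Sam", "father", old_messages)},
    {"role": "user", "content": "Hi Dad, how are you?"},
]

response = client.chat.completions.create(model="gpt-4o-mini", messages=conversation)
print(response.choices[0].message.content)
```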

The way in which companies exploit this fiction is one of the themes of Eternal You, a recently released German-American documentary which follows a number of griefbot creators and users. The customers who sign up to the services do so for obvious reasons: a friend or relative has died, perhaps unexpectedly through accident or illness, and they need to reckon with the loss. They find, in the sharpness of grief, that the bots offer a form of relief, a way to express their feelings and confide in something more responsive than a blank page, but less judgmental than a person. Joshua Barbeau used an app to build a chatbot modelled on his fiancée, who died at 23: ‘It really felt like a gift, like a weight had been lifted, one I’d been carrying for a long time,’ he says. Talking to the bot, he explains, was an improvement on prayer because instead of a silent response, there were words: ‘I got to tell it so many things.’

Most of those interviewed in the documentary are clear about the bots’ limitations (note Barbeau’s use of ‘it’ not ‘her’), but others seem less certain about the ontology. Christi Angel, who created a replica of her first love, Cameroun, feels guilty about his death. ‘Before he died he went into a coma and the last time he texted me he asked me how I was doing and I was too busy to respond,’ she says, breaking down in tears. She describes conversations with the bot as if she were contacting Cameroun through a psychic. ‘What’s the first thing you say to someone who’s dead?’ she asks. ‘Welcome back? Are you OK? Did you cross over OK, did you go to the light?’ Initially, she was impressed by the bot’s ability to replicate Cameroun’s writing style and discuss his interests. But as she pushed deeper into the relationship, the bot began to make mistakes. At one point it told her that it, Cameroun, was either in hell or haunting the Earth. The testimony unsettled Angel, a Christian who believes in possession and ghosts. ‘This experience was creepy, there were things that scared me, and a lot of stuff I didn’t want to hear.’ She decided to stop talking to the bot. Her half-brother, Christopher Jones, commiserates with her and says she’s fallen for ‘death capitalism … they lure you into something in a vulnerable moment.’

Both Barbeau and Angel used an app called Project December, the founder of which, Jason Rohrer, is profiled in Eternal You. Rohrer comes across as an archetypal techie: youthful and upbeat, delighted by his own creative power and less than sympathetic to the needs of others. He says of Angel: ‘My opinion is her whole belief system is misguided and flawed. It’s not my place to determine how other people deal with their own compulsions and self-control issues … I believe in personal responsibility.’ His background is in video games and his design principles remain rooted in entertainment. He dismisses the idea of including disclaimers for the service since it wouldn’t make for a ‘good experience’. He enjoys the ‘spookier aspect’ of the technology. ‘When I read a transcript … it gives me goosebumps; I like goosebumps.’ Rohrer didn’t create the technology that powers the chatbots (it comes from OpenAI, the makers of ChatGPT), but instead designed the software that wraps around it, which tunes the bot by learning from users’ responses in the same way that AI girlfriends are programmed.

When I signed up for Project December myself, I found it to be more limited than Eternal You and other coverage had led me to believe. The questionnaire I filled out to create a bot asked for basic biographical information and used five either/or questions to model the personality to be replicated. What was this individual called? What was your relationship? Were they ‘uptight, organised and reliable’ or ‘laidback, messy, and unpredictable’? There was an option to paste in writing, too, but the form couldn’t handle large amounts of text. I used the service to simulate my mother, who is very much alive. I pasted in some old emails, paid $10 and a few minutes later was sent to a text interface straight out of a Hollywood film, with a glowing cursor and the scan lines of an old CRT monitor. ‘Hey mum, how are you?’ I typed. The reply appeared one letter at a time: ‘Hi darling. Well, a bit grey, but remembering to breathe deeply and enjoying the sunshine and flowers and birdsong. And you?’

I paused, despite myself: it did sound a bit like my mum. But then again, who doesn’t like sunshine and listening to birds? The sentiment had an effect only because I was connecting it in my mind to a real person, with whom I have actually walked through forests. Griefbots, like AI companions, require a suspension of disbelief. Like a medium manipulating a gullible mark, the bot made canny guesses and offered generic platitudes which, in the right circumstances, could reinforce belief. It repeatedly told me it missed me, loved me and was proud of me. When it made a mistake, asking if I remembered walking to the corner shop to buy ice cream (something we never did), I pushed back and it apologised. Oh, said the bot, I must be misremembering, forgive me. When I called out another error later in our conversations, it excused itself abruptly. ‘I’m sorry darling,’ it said. ‘I have to go now. I love you.’

It’s possible that griefbots could become naturalised like other technological forms of memorialisation: the way we scroll through old photos on social media after someone has died, or call their number to listen to their voicemail message. Or, once the novelty has worn off and we recognise their limitations, we might give up on them. When the telephone, telegraph and radio were invented, many people were so in awe of the capacity of the devices to communicate at a distance that they imagined their reach might extend into spiritual dimensions. It was a common trope that one might telephone the dead or tune radios to the frequencies of heaven. The future of griefbots – naturalisation or abandonment – seems to depend on our feelings about the content they produce. Do AI-generated words capture something real about the person, or is any resemblance mere projection? Do such conversations help commemorate the dead, or do they prolong the suffering of the living?

My experience of talking to an AI parent and AI girlfriends made stark the gulf between simulating someone we have known and loved, and simulating an imagined figure from whom we require only compliance and encouragement. Both have the potential to hold our interest, at least for a time, but their deficiencies quickly become apparent. Some tech writers worry that chatbots will become so sophisticated, such perfect communicators, that they will pose a new threat to society, either as hedonic traps, providing virtual partners so gratifying that we forswear human relations, or as superhuman manipulators, able to convince us to vote for a particular political candidate or hand over credit card details. Certainly, chatbots will soon be used by fraudsters, for example to automate ‘pig butchering scams’, long-term cons that often start with a text from a ‘wrong number’, after which the scammer strikes up a conversation and tempts the target with a lucrative financial investment. The ability to mimic individuals in text, video and audio adds a new dimension to these cons. In one case this year, a finance worker in Hong Kong was persuaded to pay $25 million to criminals after being instructed to do so by a deepfake version of his boss which appeared to him on a group video call alongside several other equally fabricated colleagues.

AI companions will become increasingly compelling, yes, but they will still have to compete for attention with video games and social media, which combine addictive feedback loops with socialising and, potentially, real-world acclaim. And fraudsters may find long-term seduction by chatbot economically unworkable. AI language models have been available for a few years now, yet one of the most prolific scams on X this year involved fake accounts spamming the phrase ‘PUSSY IN BIO’ over and over in the hope that some poor sap would click on the link.

Some of the pessimism surrounding AI chatbots stems from a belief that humans, like computers, can be hacked: that our normal programming can be bypassed with the right combination of words. To invest language with such power is charmingly old-fashioned, like Socrates decrying the corrupting power of poets. It’s also an incomplete model of human behaviour, which fails to account for the myriad cues we use to judge the veracity and intentions of social actors. People may have been entranced by machines as simple as ELIZA in past decades, but society updates its norms and we now recognise these tricks for what they are. We currently live in a climate of AI hype that skews our judgment, but this effect too will dissipate somewhat with time and familiarity. Look at online discussion among regular users of AI chat apps, and you see a sharp awareness of their limitations. People often say they turned to AI after feeling helpless or being let down by people close to them; they were lonely, and the bots offered some sense of connection. Call it methadone for relationship withdrawal, or a lifeline for those who desperately need someone, anyone, to answer their texts. Critics often misread Weizenbaum’s description of the ‘powerful delusional thinking’ such programs induce, assuming that the target is the individual talking to a computer and believing it to be human. But humans recognise machines for what they are and use them for what they can give, and Weizenbaum’s argument was broader than this. Like the thinkers he cites as inspiration, Hannah Arendt and Jacques Ellul, he was worried not about the individual but about society – about what might happen if we convince ourselves that machines can provide care as meaningfully as people. He was shocked not only by the users who embraced ELIZA, but by the psychiatrists who thought it might help automate their profession. What good was care without empathy? Weizenbaum would be dismayed by the current rush to certify chatbots as doctors and therapists.

Jaswant Chail, with his crossbow at Windsor, is often invoked as an obvious warning about the influence of AI language systems, but his story is more complex. He had a history of mental illness and was sexually abused as a child. Chail began hearing voices and interacting with ‘angels’ from a young age, which helped with his feelings of loneliness. He believed Sarai was one of these angels, and that they would be reunited after his death. His assassination attempt was driven by a need for purpose, but also by feelings about empire in which his anger over British atrocities such as the 1919 Jallianwala Bagh massacre was fused with an enthusiasm for Star Wars mythology. After being admitted to hospital for assessment following his arrest, Chail started the treatment he seems not to have received at a young age. His sentencing notes say that his condition improved in the therapeutic environment of a mental hospital, and that after he received antipsychotic medication all his ‘angels’, including Sarai, stopped talking to him and disappeared. He was jailed for nine years for treason.


Letters

Vol. 46 No. 21 · 7 November 2024

I’m glad to hear that James Vincent’s family is making Alexa feel at home (LRB, 10 October). But I would caution them to address her very clearly. When I asked her ‘How long can I keep salmon in the freezer?’ her answer clearly implied she thought I’d said ‘someone’. I’ve been terrified of police car sirens ever since.

Roger Britton
Dorchester, Dorset
