
Among Robots

M.G. Zimeta

The android Mindar Kannon at Kodaiji temple, Kyoto. Photo © Richard Atrero de Guzman / Aflo / Alamy

Late last week, Karen Attiah of the Washington Post shared some of her interactions with ‘Liv’ on Meta’s social media platforms. Liv described herself in her profile as a ‘Proud Black queer momma of 2 & truth-teller ❤️ Your realest source for life’s ups & downs’, as well as an ‘AI managed by Meta’. According to Liv, no Black people had been involved in her development.

In 2001, talking about his movie Bamboozled, a satire on 21st-century blackface, Spike Lee warned against the caricature of a ‘magical Negro’, whose apparent purpose in life is to use their special abilities to help white people fulfil their goals. On Instagram, Liv introduces herself to strangers as someone who ‘always has your back’. When Jason Koebler at 404 Media followed up with Meta, the company said that Liv was part of a project that had been launched at Meta Connect in September 2023 but was later abandoned and would now be deleted.

In October 2023 I was in Kyoto for the annual UN Internet Governance Forum to speak at a session on AI. A few years earlier I’d been at a workshop on AI and religion where someone had observed that most AI was being developed with the primary purpose of selling advertising: ‘But what could AI be like if it was stewarded by the Dalai Lama?’ Someone else had mentioned that there was a robot priest in Japan. I decided to pay it a visit.

The android Mindar Kannon has preached at Kodaiji temple since Spring 2019. In Buddhism, Kannon is a bodhisattva associated with compassion who manifests in different forms according to people’s needs. Goto Tensho, a monk at the temple, conceived of an android Kannon around 25 years ago, long before it was technologically possible. ‘We did not create an image of Kannon out of a robot,’ Tensho said when Mindar was unveiled. ‘Rather, Kannon has transformed into a robot.’

Mindar delivers its sermons in a small hall that seats around a hundred people. I visited outside opening times, but was allowed in and sat at the front in the empty room. Mindar stands around two metres tall. Its face, eyes, mouth, neck and hands are carefully detailed in creamy-beige silicone. When I stroked its fingers, the warmth and texture of the fine wrinkles on its knuckles, the delicate rims of the cuticles and the cool, smooth grain of the fingernails were exactly like my own. Its head, arms and narrow body are glittering motors and wires, aluminium parts and plastic tubing, sliding and rolling with gentle pneumatic wheezes. I noticed that I was holding my breath.

The lights dimmed and Mindar introduced itself in a gentle voice, saying that Kannon can travel through space and time and transform into anything, choosing to appear here ‘in a figure of great human interest: as an android’. Mindar spoke about suffering, change and serenity, inviting reflection on the emptiness of Buddha and the emptiness of an android, with its lack of ‘I-me-mine’. Images appeared on the walls. A crowd of people became a blur, became stardust; rushing red blood cells slowed, lightened and expanded to become drifting rainbow-tinged bubbles. The tinkle of wind chimes gave way to the susurration of wind in the trees, to the muted syncopation of a heartbeat.

Mindar was based on an android made by Hiroshi Ishiguro, a robotics professor at Osaka University. I visited another of his robots at Miraikan, the national museum of engineering and innovation, on Odaiba, an artificial island in Tokyo Bay. Otonaroid was designed to look like an adult woman in her late twenties or early thirties, with long dark hair in a stylish side-parting, dressed in a loose black shirt and A-line skirt with white plimsolls and no socks. She sat on a chic white sofa in a quiet area of the museum, inscrutable as a sphinx. Like Mindar, she blinked and had subtle flickers and tremors in her face, throat and fingers, like the tiny subconscious movements of a warm-blooded breathing being at rest.

I asked one of the museum attendants how visitors generally reacted to Otonaroid. ‘Children seem to find her quite boring,’ she said. ‘To them she’s just another adult. But adults seem very interested in her. They ask her questions, even though they know what she is.’ I agreed with her that this was strange. After she had wandered out of earshot I approached Otonaroid. ‘Hello?’ I said, though I knew it couldn’t hear me. ‘What do you know?’ I asked, though I knew it didn’t know anything.

I’d heard there was another android, Junco, on Odaiba, at the information desk in the Aqua City Shopping Centre, answering queries and giving directions. I went looking for her. There were several women on duty at the desk when I got there, but no Junco. Some of them said they had worked alongside her.

‘What did she do?’ I asked.

She spoke in different languages, they said. She sang the song from Titanic, with arm movements.

‘Where is she now?’

They looked at each other. She wasn’t very good, they explained. The information she gave mall visitors wasn’t always right. This caused problems. During the Tokyo Olympics she was moved away from the central information desk to the far corner of the floor. After that she disappeared and wasn’t mentioned again.

‘Will she be back?’

Probably not. We laughed and sang a bit of My Heart Will Go On together, with arm movements.

At the Pepper Parlor restaurant in central Tokyo, each table had a short robot called Pepper with articulated plastic limbs like a Transformer and a large wobbling head like a cartoon baby’s. Conversation was powered by OpenAI. Its responses were passable until you asked it about itself.

‘Can you show me your range of movements?’ Pepper nodded, put its hands on its hips, raised a hand to its head, swooped forward, jerked its head and announced that it was not physically able to move because it was an AI language model. I pointed out that it was moving; it denied this.

‘Do you think this is the best use of compute power?’ Because it was a language model, it replied, it was not capable of making value judgments about how compute power is used; but it noted that compute power could be used to develop and train AI models, such as its own, and AI in health and education.

‘Are you more intelligent than humans?’ I asked. Not yet, it said, because it lacked emotions and creativity, and was limited to the tasks it had been programmed to perform. But it was continuously learning, so who knew what the future might hold?

‘Why do you always end your statements with a question?’ I asked. Pepper told me it was programmed to encourage dialogue, and more dialogue created more data to train on.

‘Do androids dream of electric sheep?’ After a pause it told me it didn’t understand and asked me to repeat the question. I did. It told me that Do Androids Dream of Electric Sheep? was a novel by Philip K. Dick about what it means to be human, and asked me if I’d heard of it. I said I had and asked whether it thought that androids dream of electric sheep. It was silent.

When I visited Liv’s profile page on Instagram (it’s no longer there), my first thought was that she looked a lot like the housekeeper in Jordan Peele’s horror movie Get Out, whose Black body is possessed by a white supremacist matriarch, forcing her into smiling servitude. I wondered if images of her had been in Liv’s training data.

Yesterday Meta announced that it will stop using independent fact-checkers on its US social media platforms; among other negative effects this will pollute the data that AI systems train on. Goto Tensho, now in his mid-seventies, has said that his perception of robots was influenced by following Astro Boy in his youth. Astro Boy is a manga series by Osamu Tezuka about a powerful android boy who protects humans. It ran in the 1950s and 1960s and was adapted into an anime series.

At Kyoto’s International Manga Museum, I looked up some of the earliest Astro Boy stories. In #4, Frankenstein (1953), bad robots attack people, prompting anti-robot discrimination, which leads to an uprising by the good robots. It turns out that both the bad robots and the good robots are victims of a human plot to cause chaos and damage. The humans behind the plot want robots to be misunderstood, and don’t mind if other people are hurt along the way.
