Could you fall in love with a robot?

Blog • Margaret Boden

Many of us say we have relationships with our dogs, and some—still—with our childhood teddy-bear. Some might claim to have a relationship with their car. And others, although they might not admit it publicly, feel they have some sort of relationship with their sex-doll.

Are all these people deluded? Can we really have relationships with non-human beings?

If so, what about robots? Computerised chat-bots already exist, such as the “personal assistants” Siri (on Apple’s iPhone) and Cortana (from Microsoft), which can remember many of their owner’s likes and dislikes and respond helpfully to their questions, recognising their individual voice and accent.

The silky computer-seal, Paro, can’t use language (yet), but it can make affecting eye-contact with the person cuddling it, and appears to enjoy being stroked.

In the near future, so we’re told by the manufacturers of “computer carers,” the residents of old people’s homes will find solace and companionship, and endless opportunities for satisfying conversation, with robots (or screen-based AI systems) using natural language. These will be able to discuss their fondest memories, as well as their most trivial everyday irritations.

As for the sex-doll equivalents, I leave it to you to imagine the increasingly lifelike (and huskily speaking) robots that are being researched/marketed around the world. (Siri and Cortana are already being engaged in sexually explicit interchanges—sometimes, almost 300 times a day—by lonely male users.) The sexual gizmos of the future are described by their supporters as offering not only “sex” but also “love.”

Sex-with-robots is certainly possible, and perhaps no more distasteful than other types of impersonal sex. But love-with-robots? Personal love (which is not at all the same thing as lust, or sexual titillation either) is a complex relationship between two people who each have their own motives, goals, and preferences but who each respect the other’s interests and also adopt them to some extent—even, sometimes, putting them first. That involves a significant degree of cognitive-emotional (computational) complexity on both sides. The sex-dolls anticipated so eagerly by the porn-market have not even the beginnings of such complexity.

Nor do the personal assistants or chatbots destined for use in old people’s homes. Unlike dogs (with whom we can have genuine, although not fully personal, relationships), they have no interests whatever. If a natural-language-using gizmo were enabled to say, from time to time, “I want this”, or “I’d be upset by that”, the human user wouldn’t take it seriously.

Or anyway, they shouldn’t take it seriously. But perhaps, if they were already in the early stages of dementia, they would. And perhaps they would alter their own behaviour accordingly. They might even gain some satisfaction from doing so, feeling that they had “done the right thing” by their gizmo-friend. But if so, they would be deeply misled—not to say betrayed by those who put them in that position. And if they looked to it for genuine attention and concern with regard to their own interests and problems, they would be horribly disappointed.

In other words, the “conversations” that human beings could hold with such computer artefacts would not be genuine conversations. There wouldn’t be any meeting of minds here—not even in the sense of hostile disagreements. There would be engagement, yes. But engagement at a very shallow level: taking up time, effort, and concentration, and maybe attended by hope or dismay. But the hope and dismay would be all on the side of the person. The chat-bot can know nothing of this.

A dog, or a teddy-bear, knows nothing of that either—although the dog may sense that its owner is upset, and may even offer some cuddling comfort. But the cuddling is a comfort precisely because we assume (perhaps I should rather say we know) that a dog, like us, is a sentient being, and—again, like us—capable of forming relationships in which the other individual’s state of mind matters. That it doesn’t matter to the same extent, or to the same degree of detail, is true—but, in a sense, irrelevant. There is some shared basis of empathy there.

To attribute a similar basis for empathy to a chatbot is nonsense. And to offer a conversational robot as a stand-in for a real human being is an unforgivable deception, and an assault on the recipient’s human dignity.

By all means, let’s encourage research on designing robots to make helpful trips to the fridge. And let’s develop text-based systems that can entertain the user, and maybe even remind them of personal memories that they find comforting. But don’t let’s kid ourselves—or anyone else—that these offer real relationships.

Professor Margaret Boden FBA is Research Professor of Cognitive Science at the University of Sussex.

This article originally appeared on Prospect.
