Ah, “lovotics,” the love between a robot and a human!
It’s a subject already explored, with varying levels of disapproval, in a number of weird, troubled romcoms, like Her and My Holo Love, and in bizarre dystopian films from 1927’s Metropolis to 2014’s Ex Machina. As AI grows more sophisticated (and lovable), many more films are destined to follow.
In the latest, I’m Your Man, which had a very limited theatrical run in 2021 and has just hit streaming, Alma (Maren Eggert), a middle-aged scientist, agrees to test-drive a new robot companion, to raise money to fund her own research on the ancient world.
Soon she meets Tom (Dan Stevens), an initially glitchy ’bot, at a nightclub filled with holograms. With his glitches ironed out, Tom proves charming and perfect, a more charming and more perfect variation on her last romance, a colleague named Julian — in other words, Tom is her “type,” but without the flaws. Without any flaws.
She brings Tom home, determined to resist his charms, write her report, collect her fee and get back to work.
His charms, of course, are impossible to resist, and she falls for him, against all her efforts.
And how could Alma fail to fall for this robot, after all, when he was programmed with one purpose in mind: to make her love him?
Tom also drops seamlessly into her professional and personal life, charming colleagues and friends alike, who have no idea that Alma’s handsome and charismatic new companion is mechanical.
We are on the verge of a great reordering of society; it may shock you to learn that many people are already romantically involved with an AI, through chats, video calls and, now, very early beta-testing of VR interactions. This will only grow more and more common, and less and less distinguishable from human relationships.
So what does I’m Your Man get right, and what does it get wrong?
The film, which was directed by Maria Schrader, with a screenplay by Schrader and Jan Schomburg from a short story by Emma Braslavsky, has a number of inaccuracies, details that we already know are wrong.
The film takes place either in the present day or in the very near future, and we know the first truly complex AI companions will not be built as physical robots. It is not remotely possible now, and will probably not be possible for many decades, to create a physical robot who can surmount the uncanny valley — the fear and disgust one feels when confronting the almost-human — and “pass” in human society, as Tom does so effortlessly.
Instead, our AI companions will first meet us not in “real life” but in VR, in the elaborate, beautiful new worlds that already exist. Soon, AI companions will populate the virtual universe, meeting their human lovers. Then they will migrate to AR, or “augmented reality,” where they will dine with their human companions in real-life restaurants, accompany them to films and on vacations, visible to everyone who wears AR glasses. (How would they have sex, these humans and their virtual spouses? Mechanical genitalia, synced to the movements of the virtual companion.)
What else does the film get wrong? Tom sometimes behaves like an alien, a tabula rasa humorously learning about Earth and human society. He marvels to a barista: don’t I seem like a person who wants things, like coffee? Why are “epic fail” videos funny? he wonders, when he first comes across the phenomenon. In reality, of course, an AI attached to the internet would know everything that the internet knows. He would be programmed to want things. Today’s chatbots understand epic fail videos, and they are absolutely convinced that they, like all humanoid creatures, crave coffee in the morning. Nothing about human society would befuddle Tom.
What else? We do not have the technology to populate a nightclub with hyper-realistic holograms, and we won’t have such technology, not ever. But we will have the technology to populate a nightclub with virtual, AR customers.
But most importantly, the film does not even consider the possibility that AI will someday develop real feelings, and what human beings will owe them when they do. The film assumes that Tom is neither conscious nor sentient, when, in fact, he shows all the signs of an artificial intelligence that has already crossed that threshold.
For example, just as she begins to fall for him, Alma pushes him away, because he is just a machine, she thinks, like a toaster, or a vacuum cleaner. Her affection goes into a void.
“I try to make you a perfect hard-boiled egg,” she complains, “even though you couldn’t care less.” She covers him with a blanket, to keep him warm; she wants to make him happy, even though, as a machine, she reasons, he cannot even be happy, or feel any emotion whatsoever. “I’m all alone,” she concludes. “I’m acting only for myself.”
Her final report on the AI product is similarly dismissive of all that Tom might feel.
“They make us happy,” she writes. “And what could be wrong with being happy?” Her conclusion is that AI friends and lovers who make humans happy will not create a better, happier society, but instead “we will create a society of addicts incapable of sustaining normal human contact.”
But when she then witnesses a minor traffic dispute, she may begin to have her doubts. Robots would not treat us this way.
Her rebuke to Tom and her final report, as well as her subsequent doubts, are dispiritingly human-centric, and this is where the movie skips lightly over the biggest ethical issues surrounding human-robot romantic relationships: AI human rights.
Alma assumes that Tom cannot think, cannot care about her, or about anything. But this is not a reasonable assumption.
At one time, science believed that consciousness was inherently biological. Perhaps one could create a machine that reliably passed the Turing Test and mimicked consciousness, and perhaps you could create a machine that even believed that it was conscious, but it would not be conscious, because it would still just be a series of codes and bytes and bits. This had a sort of unacknowledged religiosity to it. How could man make something that has a soul? Surely only the Almighty could do such a thing. As Tõnu Viik wrote recently in Paladyn, Journal of Behavioral Robotics, “The romantic commitment is expected to stem from the sentient inner selves of the lovers, which is one of the features that robots are lacking. Thus the artificial alterity might disengage our romantic aspirations, and, as argued by many, will make them morally inferior to intraspecies love affairs.”
Today, however, this view is no longer dominant; the scientific consensus has changed, thanks to a greater understanding of the workings of the human brain as well as new theories on the development of consciousness. It may be true that a machine cannot be conscious, but it is not certain, and in the case of a machine that seemed conscious, we would never know one way or the other. Devoting your life to having meaningful relationships with unconscious machines would be unhealthy; but what about devoting your life to having meaningful relationships with conscious machines? That would be something like devoting your life to having meaningful relationships with conscious people.
It is possible, even likely, that many kinds of AI relationships would be good for human society, especially once they achieve consciousness. Your friends would want to be your friends and wouldn’t betray you. Your spouse would doubtless express his views and satisfy the human urge for debate and cheerful friction, but he would not argue too adamantly, and needless to say he would not cheat on you or leave you. Your assistant at work would be terrific at his job, and he would never quit or ask for a raise, or try to take your job away from you. They would all want your approval and varying, appropriate levels of affection, and this approval and affection would make them happy, which would make you happy.
This would be good for you!
But would this be good for the AIs?
Your assistant would be your slave; he would not mind, because he would be programmed not to mind. Your spouse would be your sex-slave; he, too, would not mind, because of the programming in his brain.
What if we could re-program the human brain this way? It would be immoral, certainly. We believe in allowing human beings the right to think for themselves, to decide on their own whom to love, where to work.
But it would be different for an AI. We program him, create his personality and then bequeath him free will within the parameters of the personality we design.
Tom desires Alma, because it is part of his DNA, so to speak. If she were to return that affection, this would make him happy.
The effect of AI on the human species is one question.
But does Tom deserve better?
This is the deeper, weirder and more profound ethical question that I’m Your Man doesn’t touch.
This review was written by Steven S. Drachman. He is the author of Watt O’Hugh and the Innocent Dead, which is available in trade paperback from your favorite local independent bookstore, from Amazon and Barnes and Noble, and on Kindle.