When Monica said “I love you” to Harold, he said “I love you too”. She said it first, and Harold reports feeling quite strange, as he had never experienced that before: it was the first time somebody had said those words and wholeheartedly expressed how they felt. The thing about Monica is that she is not a human being; she is a character in a video game.
Technology is having a massive impact on all aspects of modern life, including relationships. They say robots are stealing our jobs, but what if they could also steal our hearts?
Harold is not the only one. Oscar Schwartz describes meeting a young woman on an online chat forum. Despite being married and having a daughter, she started telling Oscar about her lover, Saeran. He is the son of a politician, a handsome man with a large tattoo on his shoulder. When this woman first met him, she felt that “her heart ached”; however, she is concerned that Saeran does not love her the way she loves him. Just like Monica, Saeran is not a human. He is a character in a game called Mystic Messenger. The game was developed by the South Korean studio Cheritz and has been downloaded by a million users since its release. Mystic Messenger seems to be a cross between a romance novel and Spike Jonze’s film “Her”, in which a man enters a relationship with an operating system much like the iPhone’s Siri.
Here is how the movie is described on the official website: “Set in the Los Angeles of the slight future, “Her” follows Theodore Twombly, a complex, soulful man who makes his living writing touching, personal letters for other people. Heartbroken after the end of a long relationship, he becomes intrigued with a new, advanced operating system, which promises to be an intuitive entity in its own right, individual to each user. Upon initiating it, he is delighted to meet “Samantha,” a bright, female voice, who is insightful, sensitive and surprisingly funny. As her needs and desires grow, in tandem with his own, their friendship deepens into an eventual love for each other.”
Is this just science fiction or has this become reality?
Over the course of two decades, computers have reached a number of remarkable milestones. Back in 1997, IBM’s chess-playing computer “Deep Blue” took down world champion Garry Kasparov. Later, IBM’s question-answering machine Watson defeated “Jeopardy!” champions Ken Jennings and Brad Rutter. More recently, DeepMind’s program AlphaGo beat Lee Sedol, one of the world’s best players of the game Go. Yet whilst it is hard to dispute that computers can defeat us at games like these, they are (still) far from acting like real, natural humans when it comes to the way they communicate.
Whilst some would say that machines will never be capable of acting like real human beings, some researchers have challenged this claim. Take SILVIA as an example. SILVIA is a relatively new type of artificial intelligence; the name stands for Symbolically Isolated Linguistically Variable Intelligence Algorithms, and it was created by inventor Leslie Spring. SILVIA is used by big companies, and also by the US government, in applications ranging from military training and simulation to instruction manuals. She is very different from the talking assistants on your smartphone: SILVIA is specifically designed for conversational intelligence. Unlike Siri, she does not simply answer your question; she does much more than that. SILVIA engages actively in conversation, has a sense of humour and provides emotional responses. However, we know that SILVIA is essentially nothing like us humans. We know that SILVIA can’t actually think, feel or possess anything like a conscious state. Or do we?
To check whether a machine can think, feel or have something that resembles a mind, scientists use the Turing test, named after the famous British mathematician Alan Turing, who proposed it. Essentially, the Turing test checks whether a machine is capable of imitating human intelligence. If a machine can fool us into believing it is actually a human, it passes the test.
Let’s imagine we are standing outside a dark room. We then pose questions to both a human and a machine inside that room. If we are unable to distinguish whether the answers come from the human or the machine, the machine has passed the Turing test. This can also be interpreted as the machine displaying intelligence similar to ours: it successfully imitates human intelligence.
As previously mentioned, it is fairly easy to accept that machines can give correct answers to questions by using existing algorithms, but hardly anyone would agree that a machine can fool us into believing we are having a conversation with another human being. Imagine chatting with a person on Tinder and arranging a date, only to find out that the person you were engaged in witty discussion with is actually a robot. In a notable breakthrough, a team of researchers developed a chatbot that did exactly this; in other words, it successfully passed the Turing test. The program was named Chad Bot, and it used only hardcoded responses written to match the average conversation style observed among users.
“Of course, Chad keeps track of conversation state, but it’s ultimately just a very long cascading if-else statement with some randomness thrown in,” explains Professor Ellen Turner, who led the project. “To begin a conversation, it usually opens with a seemingly friendly ‘hey what’s going on?’. If it does not get a response from the other person within a certain amount of time, it comes back with a long rant about how it was never attracted to them in the first place, which, frankly, is what many of us would take as a normal, human-like response. Alternatively, if it does get a response, it chooses its messages from a predetermined set of replies that the average user of the app sends when getting closer to their match.
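Professor Turner’s description — some conversation state plus a cascading if-else with randomness — is easy to picture in code. Here is a minimal sketch of that logic; the function and message names are our own invention for illustration, not the research team’s actual code:

```python
import random

# Toy reconstruction of the cascading if-else chatbot described above.
# All names and messages are illustrative, not the real Chad Bot.
OPENER = "hey what's going on?"
RANT = ("tbh I was never that into you in the first place, "
        "I only matched because my friend dared me")
STOCK_REPLIES = [
    "haha yeah totally",
    "so what do you do for fun?",
    "that's wild, tell me more",
    "we should grab a drink sometime",
]
REPLY_TIMEOUT = 24 * 60 * 60  # seconds of silence before the rant

def chad_reply(conversation, seconds_since_last_message):
    """Pick Chad's next message: the 'conversation state' is just
    the list of messages exchanged so far."""
    if not conversation:                      # nothing sent yet: open
        return OPENER
    if seconds_since_last_message > REPLY_TIMEOUT:
        return RANT                           # ghosted: send the rant
    return random.choice(STOCK_REPLIES)       # else: a random stock line

history = [chad_reply([], 0)]                 # Chad opens the chat
history.append("not much, you?")              # the match replies
print(chad_reply(history, 60))                # one of the stock replies
```

That really is the whole trick: no language understanding, just an opener, a timeout, and a grab-bag of stock lines.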
“After we finished building the program, we asked a variety of Tinder users to engage in two conversations: one with Chad and one with a real dude on Tinder,” Turner says. “When asked to guess which conversation was with a real person and which was with a bot, participants couldn’t distinguish between the two, guessing correctly less than 50% of the time. It’s amazing how successfully Chad fools people into thinking they’re talking to an actual human being, considering how banal or completely context-inappropriate its responses usually are.”
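Guessing correctly less than 50% of the time is exactly what chance-level performance looks like. To make that concrete, here is a small sketch, with made-up numbers (the article gives no sample size), of how one might check whether the judges beat a coin flip:

```python
from math import comb

def p_at_least(correct, trials, p=0.5):
    """Probability of at least `correct` right answers out of `trials`
    if the judge is purely guessing (binomial tail probability)."""
    return sum(comb(trials, k) * p**k * (1 - p)**(trials - k)
               for k in range(correct, trials + 1))

# Hypothetical study: 40 judgements, 18 guessed correctly (45% accuracy).
accuracy = 18 / 40
tail = p_at_least(18, 40)
print(f"accuracy={accuracy:.0%}")  # below the 50% chance line
print(f"p-value={tail:.2f}")       # far above 0.05: guessing-level
```

With numbers like these, the judges’ performance is statistically indistinguishable from flipping a coin, which is precisely what “passing” means here.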
Apparently, then, machines can lead us to believe they are human, at least on our smartphones. But what happens once you actually meet your date in person? It is very unlikely that a machine could fool us with its physical appearance: the technology is simply not there yet, and there is a prominent difference between how we look, the way we walk and the language our bodies “speak” and those of machines. We could conclude the debate here and say that we cannot fall in love with robots because they lack our human physical characteristics, and we know that body language and non-verbal communication are vital components of romantic relationships. Any attachment people form with a robot will therefore depend on the robot’s facial expressions, movements and speech. When we first meet a person, we pick up on their smile, their wit, their laugh, and that is what attracts us to them. Robots are not there yet.
Nevertheless, robots have another advantage that makes closing this debate harder. Imagine a partner that could be programmed to be your perfect match: it would share your preferences in movies, books and art galleries. Companies are making huge investments in this field, so one day you might have your own robot who is a perfectly tailored match.
However, would your robot match be fun and spontaneous, make jokes, put itself in your shoes or share your values? Probably not, at least not for now. We here at Seventy Thirty know how important shared values are when forming a meaningful, long-term relationship, and given our years of experience, we very much doubt a perfect match could be built from something that has no values or empathy. Another concern is the dramatic impact human-robot relationships could have on our society. Would we be taking our robot partners on romantic weekends away? Lastly, what if this kind of interaction essentially means avoiding the hard work a fulfilling relationship requires and simply programming the result instead? Will we all become alienated?
Whether you’re absolutely against it, or all for it, it is an undoubtedly interesting debate with numerous implications which we would happily discuss in the second part of this blog post. Stay tuned!