Empathy is, by definition, a defining trait of human beings. Indeed, one of the great challenges of the artificial intelligence era is to build an AI capable of understanding and, perhaps, approximating human emotions. But what would happen if, instead of experiencing emotions, machines were able to manipulate us through them? Is emotional capacity, a genuinely human trait, an evolutionary advantage or, rather, a vulnerability?
A team of scientists from the University of Duisburg-Essen, in Germany, has shown that humans can be susceptible to emotional manipulation by a robot. In their research, published in the journal PLOS ONE, the group describes a series of experiments with volunteers interacting with robots.
As early as 2007, another research team published a study on computers that beg not to be shut down. Volunteers in that study were tasked with unplugging a robot, but hesitated when it implored them not to do so.
The new study involved 89 volunteers, each of whom was asked to unplug the robot despite its pleas not to do so. For some, the plea was delivered not only through voice but also through gestures and body movements, intensifying the appeal. For a control group of volunteers, by contrast, the robot begged through words alone.
Chilling: the robot that begs for mercy
43 of the volunteers faced the difficult decision of turning off the robot despite its verbal pleas and intense pleading gestures. 13 of them heeded the AI's demands and left it on, while the rest took much longer than the control group to switch the robot off.
The results indicate that humans have a strong tendency to attribute a human character to robots, to the point of yielding to their demands, which shows that humans can indeed be emotionally manipulated by a robot. On the other hand, the type and duration of prior socialization with the robot did not appear to have any impact on the volunteers' decisions.
Furthermore, each volunteer was interviewed after interacting with the robot, and those who had refused to turn it off were asked why. The researchers report that many refused simply because the robot asked them to. Others said they felt sorry for the robot or worried that they were doing something wrong.
Blackmail and tin-can extortion
This study feeds the perceived threat of AI intrusion, from realistic scenarios such as the elimination of many jobs to more fanciful ones, such as a possible rebellion of the machines. But the fact that a robot can interfere with a human's emotions does not imply that fantasies of species domination in a Matrix-like dystopia can come true.
Perhaps we should instead worry about how AIs could use these abilities to extort and blackmail people over the Internet, presumably for criminal purposes.
It is up to humans to address these challenges and the potential risks they entail in order to safely introduce robots into people's daily lives.
Aike C. Horstmann et al., 'Do a robot's social skills and its objection discourage interactants from switching the robot off?', PLOS ONE (2018). DOI: 10.1371/journal.pone.0201581