In 2019, Julian, a five-year-old from Berlin, had a social interaction at the Deutsches Technikmuseum that resembled a play-date with a robot named Tim. Tim is a mobile service robot manufactured by MetraLabs.
Tim has a laser range finder, a 3D camera and a collision sensor. Users click on pictures to indicate where they want to go, and Tim takes them there along the shortest route.
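MetraLabs does not document Tim's planner here, but navigation of this kind is commonly framed as shortest-path search over a map of waypoints. The sketch below is purely illustrative: the exhibit names, the distances, and the `shortest_route` helper are all assumptions, not anything taken from Tim's actual software.

```python
import heapq

# Hypothetical waypoint graph of museum exhibits: node -> {neighbor: distance in metres}.
# Names and distances are invented for illustration.
WAYPOINTS = {
    "entrance":  {"telephony": 12.0, "computing": 18.0},
    "telephony": {"entrance": 12.0, "computing": 7.5, "aviation": 20.0},
    "computing": {"entrance": 18.0, "telephony": 7.5, "aviation": 14.0},
    "aviation":  {"telephony": 20.0, "computing": 14.0},
}

def shortest_route(graph, start, goal):
    """Dijkstra's algorithm: return (total distance, list of waypoints) from start to goal."""
    queue = [(0.0, start, [start])]  # (distance so far, current node, path taken)
    visited = set()
    while queue:
        dist, node, path = heapq.heappop(queue)
        if node == goal:
            return dist, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, edge in graph[node].items():
            if neighbor not in visited:
                heapq.heappush(queue, (dist + edge, neighbor, path + [neighbor]))
    return float("inf"), []

# A visitor taps the "aviation" icon while standing at the entrance:
distance, route = shortest_route(WAYPOINTS, "entrance", "aviation")
print(f"{distance:.1f} m via {' -> '.join(route)}")
```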
When Julian saw Tim, he was instantly excited and tried to talk to the robot. Tim was speaking English when Julian approached him. Julian answered verbally but soon figured out that Tim reacted to icons on a graphical user interface rather than to a human voice.
Other children became curious and came over. Because the graphical user interface relied on text as well as icons, the older children, who were quicker at reading, pushed Julian out of the way and explored on their own.
The older children began to anthropomorphize the robot, attributing human characteristics to it. Tim could speak, so the children tried to speak back, as Julian had. Being older and thus more socially aware, they asked about his gender and wondered about his feelings and dispositions. When they observed that he was not contextually aware and could not respond to their voices, they moved on.
Left alone with the robot again, Julian clicked on a picture. The robot stood still and repeated a phrase, asking Julian to move out of the way over and over again. Frustrated, Julian talked louder at the robot, demanding that it take him to the new location. The interaction began to resemble an argument. Eventually a human employee came over and explained that the robot is not as smart as Julian and cannot go around him, so Julian has to move out of the way before the robot can show him around. The employee seemed to know intuitively how to adjust his communication style to get the message across to a five-year-old.
Finally, the robot reached the desired destination and explained to Julian a little bit about the evolution of telecommunications. However, he could not hear or respond to Julian's questions, which frustrated Julian. Julian stopped asking questions and concentrated on clicking through the icons and following the robot around. Julian learned to adapt to Tim, thanks in large part to the human employee; Tim, however, did not reciprocate the adaptation.
What can this episode teach us about robotics, artificial intelligence, and how UX will evolve into the architecture of social-emotional experiences?
- When spoken to, the human response is to answer. If a robot is able to talk but not able to process natural language, the human may become frustrated. As robotics progresses, designers will have to understand how to manage social expectations subtly.
- Humans anthropomorphize. A robot that moves and speaks in a human-like way becomes the subject of emotional curiosity and even, from the older children, questions about gender. In The Good Place, Janet is a robotic personal assistant who begs for her life as part of a fail-safe measure to avoid being rebooted, and the human characters struggle with "killing her."
Cobots, robots that work with humans, are a reality that today's children will see as a staple of their existence, just as adults my age saw telephones, cars, TVs and ATMs. Tim might have been more of a playmate and an attraction than a functional service robot at this point in time, but he showed some clear limitations of currently commercially available technology. He could not tell children to calm down, enforce rules, explain things in a contextual and individualized way, or even tell us where the bathroom was. All of the above was still done by humans. User experience is the key to long-term acceptance of an AI product. Thus UX designers will have to leave the screen and start architecting social-emotional journeys.
…in the end, Julian was left angry about Tim repeatedly telling him to move out of the way, but he still wanted to go back and see Tim again. The humans who interacted with Julian seemed to have faded from his memory.