On cars, old, new and future; science & technology; vintage airplanes, computer flight simulation of them; Sherlockiana; our English language; travel; and other stuff
WE’RE FAMILIAR (sometimes irritatingly so) with automated voices and their menu-based options. “Hello, your call is important to us”—but apparently not important enough to employ a real person. What’s more, whether we like the concept or not, autonomous vehicles are on the way.
These cars will still need to interact with their (albeit part-time) drivers, and researchers are already assessing how these interactions can best occur.
“The Mind in the Machine: Anthropomorphism Increases Trust in an Autonomous Vehicle,” in the Journal of Experimental Social Psychology, Volume 52, May 2014, offers interesting findings in this regard. An abstract of the paper, and access to purchasing it, can be had at http://goo.gl/RpW6uT. The article came to my attention through an Editors’ Choice item in the April 11, 2014 issue of Science, published by the American Association for the Advancement of Science.
The three researchers, Adam Waytz at Northwestern University, Joy Heafner at the University of Connecticut and Nicholas Epley at the University of Chicago, studied participants’ experiences on a driving simulator capable of mimicking a trio of vehicles. One was a normal car; the second, a comparable vehicle with autonomous control of steering and speed; and the third, one adding to this autonomous control the humanlike interactions of Iris, an automated voice.
Behavioral, physiological and self-report measures were analyzed in response to typical driving conditions, as well as in simulated collisions programmed to be unavoidable by participants.
The test subjects reported that Iris increased their sense of liking the car and its advanced technology. Based on self-report ratings and fluctuations in heart rate, the participants trusted the car’s autonomous control more when Iris augmented the experience.
Anthropomorphism also affected attributions of responsibility and punishment. With the simulated accidents, participants were more likely to forgive Iris-enhanced autonomous control than they were to absolve the non-vocal autonomy.
“Technology,” the researchers said, “appears better able to perform its intended design when it seems to have a humanlike mind. These results suggest meaningful consequences of humanizing technology, and also offer insights in the inverse process of objectifying humans.”
I have yet to hear from Iris, but suspect she needs the work. ds
© Dennis Simanaitis, SimanaitisSays.com, 2014