
(Geralt/Pixabay)
Opinions do not necessarily represent CUIndependent.com or any of its sponsors.
Artificial intelligence creeps further into our lives each year. We bring “Echoes” and “Alexas” into our homes and trust them with our music selections and daily weather and news updates. But what happens when Alexa laughs, creepily and unprompted? Do we consider that our technology may actually be listening to us 24/7? Do we think about the faults and downsides of using these technologies daily? Do we put too much trust in our technology?
On March 5, roboticist Ayanna Howard, Ph.D., professor and chair of the School of Interactive Computing at the Georgia Institute of Technology, discussed these questions at the Atlas Institute on the CU Boulder campus. She addressed her own moral questions about robots and the ethics of human-robot interaction. She asked the question already on my mind: “Are we trusting robots too much?” After her talk, I’m convinced that the answer is yes.
Howard is a classically trained engineer who turned her focus to robots when she saw an opportunity to further her passion: aiding the development of children with disabilities. In countries with a life expectancy of 70 years or more, which is most countries around the globe, Howard said individuals will spend an average of eight years of their lives living with a disability. Robots can help people during those years, she said. And she’s right.
About 150 million children worldwide are reported to live with disabilities.
“But we know it’s larger,” Howard said. Some countries do not report these numbers, and some disabilities go unreported. What’s more, treating children with disabilities is a $1.6 billion industry in the U.S., spanning medications, paraprofessionals in schools, therapy and tutoring.
Most children, including children with disabilities, are drawn to robots. The robot acts as both a toy and a “friend” as children interact and play games with it.
“We use gamification in order to make the therapy engaging and interactive,” Howard said. The robot serves as an interactive motivator through repetitive, predictable interaction. By mimicking social behavior, the robot seems trustworthy and friendly.
But here’s the creepy part: “Any time we introduce a robot, we say this is your friend,” Howard said. It’s creepy because not even the programmers of these robots fully understand the implications of long-term “friendships” with machines that can’t actually feel emotion.
“A lot of things I’ve discovered have been working with people,” Howard said. Her conclusion is that “humans inherently trust robots.” Howard designs robots not only for children but for adults too. By programming robots to mimic and react to social norms, “we can create technology that ensures that humans can inherently trust robots,” she said. “With emotions, not only can we increase the friendship, we can increase the bond.” But there is no “emotion” chip, so to speak: all the emotions expressed are simulations, and robots can make mistakes.
“By enhancing that emotional connection, we actually enhance compliance, which is a good thing for therapy. Maybe not so good for other things,” she said.
Howard then ran experiments with adults, who have a fuller understanding of trust. She didn’t expect to find the same kind of trust. “Of course there’s no bond with robots,” she said of adults. “But, when they’re interacting with robots, there is more trust involved with the robot [than a human saying the same thing].”
To raise the stakes of her hypothesis even higher, she oversaw hundreds of experimental trials in which an emotionless robot with no facial expressions pointed adult participants toward an exit in a simulated emergency. When smoke filled the test room, a “tin can on wheels” robot sounded a fire alarm and told them to exit down a certain hallway. Some ran and some walked cautiously, but 100 percent of the adult participants followed the robot to that exit, even though it led to a parking garage rather than the door they came in through (most people’s natural instinct is to run for the entrance they remember).
This human tendency to trust robots quickly and without question can be exploited. A robot is simply reading and manipulating social norms; it cannot feel emotion itself, and it will sometimes make mistakes. But from this experiment we learned that humans will almost always trust robots in time-critical situations.
“We started to push this by introducing errors,” Howard said. The robot would turn in circles, make wrong turns and squeeze into spaces much too small for a human. Still, “every single time, people followed the robot.” Only when the route seemed too risky did some balk: 20 percent of participants stopped going where the robot led them.
“We did all the variations we could think of to break trust… but their behavior showed that they were following the guidance of the robot,” Howard said. Participants even disclosed personal information to robots and kept playing games with robots even though they lost every time.
“This is not a good thing for us as roboticists,” she said. “As roboticists, this terrifies us.”
The problem arises when robots control larger aspects of our lives, like health, education and transportation. “Right now we don’t codify the parameters of society when we create these robots,” Howard said. A passionate discussion of driverless cars followed the talk. There are certainly aspects of trust in a driverless car. The classic “trolley problem” arises when a car must swerve to avoid one crash but will inevitably hit one obstacle or another, most likely injuring either a pedestrian or the passengers in the car.
“My bias is I love kids, so if I’m driving a car my choice is to save the child, but that might not be someone else’s bias,” Howard said. Driverless cars will need to weigh a range of alternatives when a crash is inevitable. “There are so many nuances,” she said of autonomous car programming. One audience member asked how a driverless car would signal to her that she is okay to cross the road on her bike. Other questions were raised about equality: what if only certain brands of technology can communicate with each other?
“How is the value of one life going to be weighed over another?” said Howard. “The aspect of over-trust could happen, unless we as humans think about it.”
There are many benefits to robots aiding us in our younger years, in our elder years and when living with disabilities. But I have one piece of advice for citizens and roboticists alike, and that is not to rush into anything: proceed with caution.
Contact CU Independent Staff Writer Harper Brown at harper.brown@colorado.edu.