A scene from the research study led by Jim Bliss exploring how civilian populations would trust robotic peacekeepers.
Jim Bliss
By Jon Cawley
Civilian law enforcement and military peacekeepers carry out potentially dangerous tasks that could be made safer with the use of artificial intelligence-based robots.
This is especially so for military personnel who are deployed to inherently dangerous environments, including war zones. For that reason, the U.S. Air Force awarded Old Dominion University's Jim Bliss, professor and chair of the Department of Psychology, a nearly $800,000 grant to study how civilian populations would react to automated authority figures, such as robots, in a variety of situations, including those in which the robot is armed.
Bliss said he does not believe anyone has previously tried to gain this type of behavioral insight.
"Just the effort itself: three trips to China, then three trips to Japan, personal data collection in each country. It's a big effort, and it's a great effort because I've had a lot of help from a lot of really valuable collaboration," he said. "The Air Force has been terrific; they've helped us in so many ways... they have a big interest in trust and technology. As with many things, the technology is evolving without people doing a lot of research on how people are going to react to it and, so, that's really what we are trying to do here."
The study team's long-term goal is to replicate the research in an immersive environment and, ultimately, in the real world to help military and civilian agencies employ robots (and automation in general) safely and effectively.
Robotic peacekeepers, such as Knightscope and AnBot, have already been employed in the U.S. and China, respectively. While the U.S.-based device is unarmed, the robot being used in China is equipped with a Taser.
During the three-year study, the ODU team focused on seven cultural groups, with 20 members in each group, including Americans, Chinese and Japanese subjects living in the U.S., China and Japan. Researchers looked at what the robot's patrol orders were and how aggressively they were delivered, and whether the robot carried a baton or another weapon, such as an M-16 rifle. Other research variables included the robot's physical appearance and the type of non-lethal weapon used: a Taser or pepper spray. The study also included questionnaires on weapons beliefs and a trust scale, among others.
More specific aspects of the study explored the words and phrases used by robots to make demands, the role of the robots (active confrontation or passive guarding), the type of weapons wielded by the robots and their physical appearance.
Bliss wants to know: "Are people going to comply differently? Are they going to take longer to comply? What's the behavior going to be? And is that dependent on what the robot looks like and what the robot does to convince you to comply?"
While it is still early in the data analysis stage, the team is seeing cultural differences among respondents in their perceptions of weapons and in how they comply, he said.
In line with researchers' expectations, most variables studied influenced compliance rate, time taken to make a compliance decision and trust of the robotic peacekeeper. Notably, cultural groups differed in their reactions to the robot, indicating greater or lesser willingness to comply and trust it, he said.
"China and America are a bit closer than you may think in terms of our attitudes towards weapons, whereas in Japan it is quite different - they're typically not as common and people don't use them as much," Bliss said. "The Japanese results we've seen so far indicate they seem more likely to comply with the robot because the Japanese are pretty advanced in terms of their use of robots."