As It Happens

Can robots be programmed to be ethical? UK researcher puts Isaac Asimov's Laws to the test

We've got drones. We've got self-driving cars. And we have robotic military hardware. But can any of these be programmed to save a human being in a life-or-death situation?

The late sci-fi author Isaac Asimov thought they could -- and that all robots should live by a set of ethical rules. He called them the Three Laws of Robotics, all designed to prioritize human life above all else.

Professor Alan Winfield of the Bristol Robotics Laboratory decided to put Asimov's first rule to the test: "A robot may not injure a human being or, through inaction, allow a human being to come to harm."

He programmed a robot with this seemingly simple rule and found that, when faced with an ethical dilemma, it often failed to save other robots standing in as proxies for humans.

"When we presented this simple Asimovian ethical robot with one test, to prevent another robot from falling into a hole, the Asimov was able to rescue the other robot 100 percent of the time," Winfield tells Carol.

In this scenario, the "A-robot" prevents the human proxy, "H-robot," from falling into the theoretical hole 100% of the time. (Photo courtesy of Alan Winfield)

"It only became more difficult when we presented our simple ethical robot with a dilemma. We presented it with not one, but two other robots both heading into the hole. Of course, that made it very much more difficult for our ethical robot."

The result? The robot became paralyzed with indecision.

"In slightly less than half the time, the ethical robot failed to rescue either of the robots," he says. "When we analyzed the experiment, the reason that the ethical robot failed to rescue either of the two proxy human robots is that it was changing its mind. It was dithering, if you'd like. It went toward one, then changed it's mind and went toward the other. That dithering meant it ran out of time to rescue either one of them."

Winfield's simple experiment reveals the main difference between human ethical choices and those that can currently be programmed into robots.

"Humans tend to continue on a course of action once they've chosen it," he says.
