Will an AI-on-AI treaty ensure robots don't abuse other robots?
In 1942, Isaac Asimov set down his Three Laws of Robotics in the short story "Runaround":
A robot may not injure a human being or, through inaction, allow a human being to come to harm. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law. And finally, a robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
It was a science fiction story, and people have quibbled with the particulars over the years.
But the concern behind those ideas is an enduring one: the fear that we might create intelligent beings with the ability and willingness to do us harm.
Dr. Hutan Ashrafian isn't so worried about your Roomba getting into a spat with your self-driving car. He's thinking about a future in which artificial intelligence and feeling are comparable to human consciousness.
"If we afford them a human level of consciousness, but yet they are causing each other harm, then it would reflect very badly on human society."
In other words, if we create beings that approach human sentience, we need to think about how they treat one another, because if they're anything like us, things could get ugly.
Ashrafian says that for starters, we could add a fourth law to Asimov's original three: "All robots endowed with comparable human reason and conscience should act towards one another in a spirit of brotherhood and sisterhood."