Scientists must act now to make artificial intelligence benign: Don Pittis
Elon Musk and Stephen Hawking say beware, but scientists want to make AI 'good'
Reining in the growing power of artificial intelligence could be a matter of human survival. That sounds like over-the-top science fiction, but a growing number of ordinary computer scientists agree that AI is now unstoppable.
This week, a study from the market intelligence group Tractica said artificial intelligence is already swarming into the world of business and that spending on it will exceed $40 billion over the coming decade. That may be an underestimate.
Some of the world's cleverest people, including Tesla and SpaceX boss Elon Musk and physicist Stephen Hawking, have warned us that artificial intelligence could wipe humanity as we know it off the face of the Earth. The question is: "What are we going to do about it?"
- AI lessons from science fiction literature
- AI could destroy humans Stephen Hawking fears: Should you worry?
- Hawking, Musk, Wozniak warn of artificial intelligence arms race
Artificial intelligence may sound like science fiction. But it is the science fiction of the 1950s. According to award-winning Canadian AI pioneer Jonathan Schaeffer, dean of science at the University of Alberta, most of us now use artificial intelligence every day.
Invisible intelligence
"Artificial intelligence is ubiquitous," says Schaeffer, whose Chinook computer program has been the world's reigning checkers champion since 1995. "It's very odd, because by and large people are using artificial intelligence daily and it's invisible to them."
He gives the example of credit card transactions where the artificial intelligence system learns your habits and approves every normal transaction, but blocks the purchase of a car in China. Schaeffer, like even the most skeptical computer experts I contacted, says the incredible commercial potential of artificial intelligence is one of the main reasons it will be almost impossible to restrain.
He says the ultimate goal is what some in the AI community call "superintelligence."
"Everybody, certainly in the community I work in, has this vision of creating intelligent entities, beings that we can communicate with, who can help us do the kinds of things that would improve our quality of life."
That transition, from a useful tool to a thinking, autonomous superintelligence, is what has some researchers worried, including Cory Butz, president of the Canadian Artificial Intelligence Association.
Scare tactic
"I really sort of dismissed the whole scare tactic aspect of the story up until a few years ago," says Butz, associate dean of research at the University of Regina. "Now I can see it."
He says breakthroughs in something called "deep learning" by the University of Toronto's Geoff Hinton and University of Montreal's Yoshua Bengio are what convinced him. Hinton and Bengio divide their time between their respective universities and Google, which is well-known to be developing more commercial uses for artificial intelligence.
"These algorithms are very smart and they are only going to get better as people refine them," says Butz. And he says that means superintelligence is coming. "It's not like it's in the immediate future, say like in the next 10 years, but it definitely is coming down the road."
So how will that superintelligence interact with humans? Most of the computer scientists I spoke to mentioned examples from science fiction, with Arnold Schwarzenegger's Terminator movies representing the most horrific example. But as U of A's Schaeffer says, any technology from biotech to nuclear physics can be used for "dark" purposes.
Military research
So far, our governments have not unleashed a global biological warfare plague or a nuclear Armageddon upon the world.
And while we have treaties governing those two hazards, cynics, including some of the scientists I spoke to, say treaties will not stop governments from researching military artificial intelligence, even if they claim it is "just in case the other guy gets it first."
Just as worrying, according to experts at the California-based Machine Intelligence Research Institute, is the superintelligent AI that goes out of control. And whether the AI in question is military, commercial or created as pure science, that is what a group of researchers in the U.S. think we must urgently address.
Their paper, Aligning Superintelligence with Human Interests: A Technical Research Agenda, is one of a series of papers examining that very issue. As the title suggests, the paper's authors don't have all the answers. But they want to get the ball rolling. And while superintelligence may still be far off, they say we have to start now.
The MIRI researchers say a superintelligent AI may not hurt us intentionally. But without our moral values and shared history, its motives could be incomprehensible. For example, once given a problem to solve, it would have an incentive to "acquire resources being used by humanity."
Also, once launched, a self-guided artificial intelligence could head in unpredictable directions, once again leading to human harm. That is why one of the early recommendations is the simplest: a reliable off-switch.
Potential dangers
Nathalie Japkowicz is director of the Laboratory for Research on Machine Learning for Defence and Security at the University of Ottawa. Of the artificial intelligence experts I contacted, she was the most skeptical about the idea of some sort of independent and potentially malicious machine intelligence arising within the next 50 years.
However, she believes too little is being done within the computer science community to research the potential dangers of artificial intelligence. And she thinks computer scientists may not be the best ones to be doing it, being too focused on technical issues.
"The discussion should perhaps, instead, originate from philosophers of science or other social scientists who could then consult with AI experts and involve them actively in the discussion," wrote Japkowicz in an email.
Of all the science fiction portrayals of artificial intelligence, perhaps the most benign is in the Culture series of books by the Scottish author Iain M. Banks, who died in 2013. The Culture universe, in our distant future, is dominated by superintelligent spaceships called "Minds," benevolent and wise, that at birth often give themselves humorous names.
Obviously Musk is a fan, as he has named two of his SpaceX craft after the Culture superintelligences "Just Read The Instructions" and "Of Course I Still Love You," which appear in Banks's book The Player of Games.
But according to the researchers at MIRI, the creation of Banksian benign intelligences, whether decades or millennia into the future, may depend on steps we take now.
"By beginning our work early, we inevitably face the risk that it may turn out to be irrelevant; yet failing to make preparations at all poses substantially larger risks."
See sidebar article on AI lessons from science fiction
Follow Don on Twitter @don_pittis
More analysis by Don Pittis