How can AI be developed safely? There's a global summit tackling this right now
Innovation minister representing Canada during 2-day event in England
Digital officials, tech company bosses and researchers are converging Wednesday at a former codebreaking spy base near London to discuss and better understand the extreme risks posed by cutting-edge artificial intelligence.
The two-day summit focusing on so-called frontier AI notched up an early achievement with officials from 28 nations and the European Union signing an agreement on safe and responsible development of the technology.
Frontier AI is shorthand for the latest and most powerful general-purpose systems that take the technology right up to its limits, but could come with as-yet-unknown dangers. They're underpinned by foundation models, which power chatbots like OpenAI's ChatGPT and Google's Bard and are trained on vast pools of information scraped from the internet.
The AI Safety Summit is a labour of love for British Prime Minister Rishi Sunak, a tech-loving former banker who wants the U.K. to be a hub for computing innovation and has framed the summit as the start of a global conversation about the safe development of AI.
But U.S. Vice President Kamala Harris may divert attention Wednesday with a separate speech in London setting out the Biden administration's more hands-on approach.
She's due to attend the summit on Thursday alongside government officials from more than two dozen countries including Canada, France, Germany, India, Japan, Saudi Arabia — and China, invited over the protests of some members of Sunak's governing Conservative Party.
Canada's Minister of Innovation, Science and Industry, Francois-Philippe Champagne, said AI would not be constrained by national borders, so interoperability between the regulations being put in place in different countries was important.
"The risk is that we do too little, rather than too much, given the evolution and speed with which things are going," he told Reuters.
Tesla CEO Elon Musk is also scheduled to discuss AI with Sunak in a livestreamed conversation on Thursday night. The tech billionaire was among those who signed a statement earlier this year raising the alarm about the perils that AI poses to humanity.
European Commission President Ursula von der Leyen, United Nations Secretary-General Antonio Guterres and executives from U.S. artificial intelligence companies such as Anthropic, Google's DeepMind and OpenAI are also attending, as are influential computer scientists like Yoshua Bengio, one of the "godfathers" of AI.
In all, more than 100 delegates were expected at the meeting held at Bletchley Park, a former top-secret base for World War II codebreakers that's seen as a birthplace of modern computing.
28 countries agree on need to manage risk
As the meeting began, U.K. Technology Secretary Michelle Donelan announced that the 28 countries and the European Union had signed the Bletchley Declaration on AI Safety. It outlines the "urgent need to understand and collectively manage potential risks through a new joint global effort."
South Korea has agreed to host a mini virtual AI summit in six months, followed by an in-person one in France in a year's time, the U.K. government said.
Sunak has said the technology brings new opportunities but warned about frontier AI's threat to humanity, because it could be used to create biological weapons or be exploited by terrorists to sow fear and destruction.
Only governments, not companies, can keep people safe from AI's dangers, Sunak said last week. However, in the same speech, he also urged against rushing to regulate AI technology, saying it needs to be fully understood first.