
Will the AI chatbot wave come for the federal government?

The kind of chatbots people encounter when dealing with private businesses could prove useful in a range of public services, the federal government's chief data officer says.

Canada's public service is working on its own artificial intelligence strategy

Artificial intelligence-based chatbots may be one application for the technology in the federal public service, according to the bureaucracy's chief data officer. (Joel Saget/AFP/Getty Images)

Delayed air passengers, disgruntled phone customers and even hungry people craving a slice of pizza increasingly find their pleas to private companies being answered by artificial intelligence.

Soon Canadians who need to reach out to the federal government could also find themselves talking to an employee who's been helped by non-human assistants.

Ottawa is working on a strategy to use more AI in the federal public service, and while it's too soon to say exactly what that could look like, chatbots are one likely possibility.

Stephen Burt, the government's chief data officer, said private-sector call centres are using generative AI chatbots to navigate internal data and help employees find better answers faster when customers call in.

"I can imagine a number of similar applications in the Canadian government context for services we offer to clients, from EI and Old Age Security through to immigration processes," he said in an interview.

Civil servants could also use AI to sort through massive piles of government data, he said. At the Treasury Board of Canada Secretariat alone, employees are responsible for government finances, hiring and the technology used by the public service.

"There's a lot of documents with a lot of words on a lot of pages of paper. It's difficult even for folks inside government to understand in any given situation what is most applicable," Burt said.

The federal government will be crafting the AI strategy over the coming months, with the goal of launching it next March. The plan is to encourage departments to experiment openly, so that "they can see what's working and what's not."

"We can't do everything at once, and it's not clear to me yet what are going to be the [best-use] cases," Burt said.

WATCH | Ongoing concerns over safety of AI development:

Artificial intelligence could pose extinction-level threat to humans, expert warns

A new report is warning the U.S. government that if artificial intelligence laboratories lose control of superhuman AI systems, it could pose an extinction-level threat to the human species. Gladstone AI CEO Jeremie Harris, who co-authored the report, joined Power & Politics to discuss the perils of rapidly advancing AI systems.

When it comes to what won't be allowed, he said it's too soon to talk about red lines, although there are "absolutely going to be areas where we need to be more careful."

Generative AI applications produce text and images based on patterns learned from the vast amounts of data they are trained on.

Legislative updates needed: expert

The federal public service has already started tinkering with AI. Joanna Redden, an associate professor at Western University in London, Ont., compiled a database documenting hundreds of government uses of AI in Canada.

It contains a wide range of uses, from predicting the outcome of tax cases and sorting through temporary visa applications to tracking invasive plants and detecting whales from overhead images.

In the European Union, AI legislation bans certain uses, she said, including untargeted scraping of images for facial recognition, the use of emotion recognition systems in workplaces and schools, social scoring and some types of predictive policing.

At an introductory event for the strategy in May, Treasury Board President Anita Anand said generative AI "isn't generally going to be used" when it comes to confidential matters, such as information available only to cabinet ministers behind closed doors.

According to University of Ottawa law professor Teresa Scassa, the privacy legislation covering government activities needs to be brought up to date.

The federal Privacy Act "really hasn't been adapted to an information society, let alone the AI context," she said.

WATCH | The threat of misinformation and disinformation from AI:

Deepfakes and other disinformation are top of AI pioneer Yoshua Bengio's list of fears

The Montreal professor and computer scientist known internationally as a "Godfather" of artificial intelligence shares his biggest AI-related concerns for 2024.

There could also be issues around the use of generative AI in government, including the risk that it could ingest personal or confidential information.

"Somebody might just decide to start answering emails using gen AI, and how do you deal with that? And what kind of information is going into the system and who's checking it?"

Scassa also questioned whether there would be any recourse if a government chatbot gives someone wrong information.

As Canada's largest employer, the federal government should be looking into incorporating artificial intelligence, said Fenwick McKelvey, an assistant professor of information and communication technology policy at Concordia University in Montreal.

McKelvey suggested the government could use chatbots to "help users understand and navigate their complex offerings," as well as to make sure government documents are accessible and more legible.

One example would be filling out complicated tax forms.

Redden had to piece together her database of government AI uses through news reports, documents tabled in Parliament and access-to-information requests.

She has argued that the government should keep better track of its own uses of AI and be more transparent about them, though Ottawa appears unlikely to change its approach under the new AI strategy.