Toronto police admit using secretive facial recognition technology Clearview AI
Chief orders officers to stop using the technology, external review is requested
Toronto police have admitted some of their officers have used Clearview AI — a powerful and controversial facial recognition tool that works by scraping billions of images from the internet — one month after denying using it.
Spokesperson Meaghan Gray said in an email that some members of the force began using the technology in October 2019. She did not say what for or how many times it had been used.
Chief Mark Saunders directed those officers to stop using the technology when he became aware of its use on Feb. 5, she said. Gray did not say who originally approved the use of the app.
Clearview AI can turn up search results, including a person's name and other information such as their phone number, address or occupation, based on nothing more than a photo. The program is not available for public use.
Gray said officers were "informally testing this new and evolving technology." She did not say how the chief found out.
Concerns began mounting about the software earlier this year after a New York Times investigation revealed it had extracted more than three billion photos from public websites like Facebook and Instagram and used them to create a database used by more than 600 law enforcement agencies in the U.S., Canada and elsewhere.
In January, Toronto police told CBC News they used facial recognition, but denied using Clearview AI. It's unclear if police purchased the technology — if so, it was never disclosed publicly — or were allowed to test it.
At the time, Ontario Provincial Police also said they used facial recognition technology, but wouldn't specify which tools they used. The RCMP would not say what tools it uses.
Vancouver's police department said it had never used the software and had no intention of doing so.
Toronto police have now asked Ontario's Information and Privacy Commissioner and the Crown Attorney's office to review whether Clearview AI is an appropriate investigative tool, Gray said.
"Until a fulsome review of the product is completed, it will not be used by the Toronto Police Service."
Peel Regional Police said in a statement they were given a demo version of the software "for testing purposes only, however the Chief has directed that testing cease until a full assessment is undertaken."
The force says it is working with the province's privacy commissioner's office to make sure any future use of facial recognition technology is in keeping with privacy legislation.
In a statement, Ontario's privacy commissioner Brian Beamish said he was not aware Toronto police were using Clearview until Feb. 5 and is "relieved that its use has been halted."
"The indiscriminate scraping of the internet to collect images of people's faces for law enforcement purposes has significant privacy implications for all Ontarians. We have made it clear in the past that my office should be consulted before this type of technology is used," the statement said.
Beamish went on to say his office will be consulting with the force to examine its use of facial recognition technology.
"We question whether there are any circumstances where it would be acceptable to use Clearview AI," Beamish said. "If other law enforcement agencies are continuing to use Clearview AI technology we need them to contact us."
The Toronto Police Services Board said it was not aware of the technology being used by the force.
"A report on this issue has never been the subject of consideration by the board," Sandy Murray said, speaking for the board.
A spokesperson for Mayor John Tory said the mayor was notified on Thursday, adding Tory "supports" halting and reviewing its use.
Poor accuracy
Former Ontario privacy commissioner Ann Cavoukian told CBC News she was "dismayed" to learn that Toronto police were using the technology.
"Clearview AI has scraped 3.9 billion facial images off of public social media … No consent, no notice, nothing," Cavoukian said.
One of the biggest dangers with facial recognition tools, she says, is low accuracy — they can wrongly identify an innocent citizen as a suspect or person of interest.
"Can you imagine trying to clear your name when that happens? It's a nightmare."
There's also no consent involved in the collection process, she says, rejecting the argument that social media users knowingly leave their information open to scraping.
"When people use Facebook for example, they're sharing their pictures with the people they've chosen to — the friends, colleagues, the family they've indicated … They're not saying, 'Here world, it's all there for you to see.'"
Cavoukian applauded the police chief for stopping its use.
"He took the right action," she said, adding facial recognition has been banned in several jurisdictions in the United States including San Diego, San Francisco, Oakland, Calif., and Texas.
It's unclear if Toronto police have made any arrests based on information generated by the app.
The Toronto revelation raises longer-term questions such as how any data that was gathered will be stored and whether it will ever be used as evidence in an Ontario court.
CBC News has contacted Clearview AI for comment, but has so far not received a response.
With files from Jayme Poisson, John Rieti and The Current