An Ontario judge tossed a court filing seemingly written with AI. Experts say it's a growing problem
Artificial intelligence can produce false information that can make its way into court submissions

Legal experts say an Ontario judge's criticism of a lawyer who seemingly leaned on artificial intelligence to prepare court materials is putting the spotlight on the dangers of AI tools that can produce false or fictitious information.
That, in turn, can have real-life consequences, they say.
Fake cases, known as AI hallucinations, can make their way into legal submissions if a lawyer doesn't take additional steps to make sure the cases actually exist, says Amy Salyzyn, an associate professor at the University of Ottawa's faculty of law.
Lawyers routinely suggest what past decisions — or case law — a court should apply in their clients' cases. Judges then determine what cases to consider.
The problem arises when lawyers use generative AI tools that can produce made-up information, Salyzyn says. A judge making a decision could therefore be presented with incorrect or false information.
"You don't want a court making a decision about someone's rights, someone's liberty, someone's money, based on something totally made-up," Salyzyn told CBC Radio's Metro Morning on Friday.
"There's a big worry that if one of these cases did potentially sneak through. You could have a miscarriage of justice."

Her comments come after Justice Joseph F. Kenkel, a judge with the Ontario Court of Justice, ordered criminal defence lawyer Arvin Ross on May 26 to refile his defence submissions for an aggravated assault case, finding "serious problems" in them.
Kenkel said one case cited appeared to be fictitious, while several case citations referred to unrelated civil cases. Other citations named cases that were not the authority for the point being made.
"The errors are numerous and substantial," Kenkel said.
Kenkel ordered Ross to prepare a "new set of defence submissions" ensuring that the paragraphs and pages are numbered; that case citations include a "pinpoint cite" to the paragraph that supports the point being made; and that case citations are checked for accuracy and include links to CanLII, a non-profit organization that provides online access to legal decisions, or other sites.
"Generative AI or commercial legal software that uses GenAI must not be used for legal research for these submissions," Kenkel said.
CBC Toronto contacted Ross but he declined the request for an interview, saying in a statement that he's "focused on complying with the court's directions."

French lawyer tracking cases with AI hallucinations
The case, known as R. v. Chand, is the second Canadian case to have been included on an international list, compiled by French lawyer Damien Charlotin, of legal decisions in "cases where generative AI produced hallucinated content." In many cases, the lawyers on the list used fake citations. The list identifies 137 cases so far.
In the list's first Canadian case, Zhang v. Chen, B.C. Justice D. M. Masuhara reprimanded lawyer Chong Ke on Feb. 23, 2024, for inserting into a notice of application two fake cases that were later found to have been created by ChatGPT. The judge, who described the errors as "alarming," ordered Ke to pay court costs but not special costs.
"As this case has unfortunately made clear, generative AI is still no substitute for the professional expertise that the justice system requires of lawyers," Masuhara wrote in a ruling on costs. "Competence in the selection and use of any technology tools, including those powered by AI, is critical. The integrity of the justice system requires no less."
Salyzyn said the phenomenon of lawyers filing court materials that cite non-existent cases is a global one. It arises because AI tools such as ChatGPT are not information-retrieval devices but tools that match patterns in language, so the result can be inaccurate information that looks "quite real" but is in fact fabricated.
AI tools "can put things together that look like legal cases. Sometimes they might reference real legal cases too, if it appears a lot in the data that it has consumed. But fundamentally, the tool is kind of predicting the next words to go together, and sometimes it predicts and mixes together citations that look quite real but don't accord with anything in reality," she told Metro Morning.
Verification is key, law prof says
Salyzyn said lawyers are responsible to clients and the courts for the work they produce, but if they are going to rely on technology, they need to make sure that made-up information is not being passed along. Verification is key, she said.
"If lawyers are using technology to assist their practices, they need to still verify what that technology is producing," she said.

Nadir Sachak, a criminal defence lawyer with Sachak Law in Toronto, said AI is a resource lawyers can use, but they remain ultimately responsible for what they submit to court.
"You better make sure that, if you're relying upon technology like AI, that it's done properly," Sachak said.
He said that in R. v. Chand, the judge had no issue with the quality of the defence presented, but it appears the lawyer involved had not reviewed the arguments submitted to the court.
The use of AI also raises questions about how lawyers bill clients, Sachak said.
"Obviously, if one is acting ethically, one cannot simply bill a client for hours of work that the lawyer did not do, if the AI generated the material, let's say, in five minutes," he said. "One still has to make sure that whatever is presented is professional, done properly, diligently, and accurately."
In an email on Monday, the Law Society of Ontario said it cannot share information on any investigations it has undertaken, but said it has produced a white paper that provides an overview of generative AI, as well as guidance and considerations for lawyers on how its professional conduct rules apply to the delivery of legal services empowered by generative AI.
With files from Mercedes Gaztambide and Metro Morning