Key Takeaways
- AI is transforming the legal profession, helping with tasks like data review and drafting submissions, yet it raises concerns about accuracy and professional responsibility.
- Misuse of AI in legal advocacy carries real risk, as courts have demonstrated a willingness to sanction parties that rely on hallucinated authorities.
- Canadian courts are adopting guidelines for AI use, requiring transparency and emphasizing the responsibility of legal counsel to proofread and verify AI-generated content.
The rapid expansion of artificial intelligence (AI) has been transforming virtually every sector of the economy. In the legal profession, law firms are increasingly relying on AI to perform tasks such as reviewing data, drafting agreements and conducting basic legal research. The emergence of AI is also transforming legal advocacy, as legal counsel and self-represented litigants are increasingly using AI to draft and support legal submissions in courts across the country.
The use of AI in legal advocacy has numerous advantages, particularly in reducing the cost for litigants and advancing access to justice. But such use also raises significant issues of professional responsibility and accuracy in the development of the law. While AI is a powerful tool, it is an imperfect one. Its output can contain errors and hallucinations. A number of recent high-profile examples in Canada and the U.S. illustrate the dangers of overreliance on AI in litigation.
Courts and regulators in Canada are responding to these dangers with guidelines for transparency regarding the use of AI, as well as consequences for those who misuse it. Litigants, together with their legal counsel, need to supervise AI use in legal advocacy to ensure that it is used responsibly and does not undermine legal submissions to the detriment of the client’s case or skew the development of the law.
The use of AI also raises broader legal issues relating to the sources of underlying data that drive the outputs of AI, including privacy and copyright issues, some of which are explored in our Osler Legal Outlook article. Moreover, the technology is developing rapidly; the evolving definition of AI and the rise of agentic AI are discussed in a separate Osler Legal Outlook article. As AI tools become more sophisticated, we can expect to see increased use of AI in legal advocacy, as well as more stringent expectations regarding responsible use that clients and their lawyers will need to understand and monitor.
Misuse of AI by legal advocates
There have been a number of recent and widely reported cases relating to the misuse of AI in the U.S. A notable example involved Boies Schiller Flexner LLP (BSF). A lead partner at BSF and former federal prosecutor filed a detailed brief in an important appeal. Opposing counsel pointed out material errors in BSF’s filing that appeared to be AI hallucinations.
BSF took immediate steps to withdraw the brief. In a declaration made as part of his motion to file a corrected response brief [PDF], the partner acknowledged that firm policies identified the risks associated with use of AI and that firm lawyers were expected “to scrupulously proofread and cite check” briefs. However, he personally failed to do so.
In the past year, Canadian courts have also encountered similar examples of overreliance on and misuse of AI in legal submissions.
In Reddy v. Saroya, for example, the Alberta Court of Appeal considered whether the appellant’s counsel should be required to pay a cost award after he submitted a factum containing a number of citations for which no case could be found. Similarly, in R. v. Chand, the judge instructed counsel for the defendant to prepare a new set of defence submissions without the use of generative AI after counsel provided submissions containing citations hallucinated by AI.
In Zhang v. Chen, the applicant’s counsel cited two non-existent authorities in a proceeding before the B.C. Supreme Court. She ultimately admitted the citations had come from ChatGPT and that she had not verified them. The Court declined to impose “special costs” against the lawyer, which are typically reserved for “reprehensible conduct or an abuse of process.” However, the Court ruled that the lawyer was personally liable for those costs awarded to the opposing party that were attributable to the additional time required to address the hallucinations.
In Hussein v. Canada (Immigration, Refugees and Citizenship), the Federal Court imposed costs against a litigant for the misuse of AI and for misleading the court about the use of AI. Again, counsel submitted several cases that either did not exist or were inaccurately cited for specific propositions of law. The lawyer admitted to relying on AI and failing to verify the sources independently. However, this admission was made only after the lawyer had produced two incomplete books of authorities in response to a Court direction.
Finally, in Ko v. Li, the Ontario Superior Court of Justice ordered a lawyer to attend a contempt of court hearing after citing non-existent cases in written submissions. Ultimately, the lawyer’s admission, apology and corrective steps — including attending professional development programs specific to the risks of AI in legal practice — were found to have adequately purged any possible contempt.
Response by the Canadian courts to AI risks
As noted in our Osler Update, Artificial advocacy: how Canadian courts and legislators are responding to generative AI, Canadian courts and regulators have responded in a number of ways, including by requiring transparency regarding the use of AI and underscoring the professional responsibility of legal counsel.
For example, Ontario recently introduced amendments to the Rules of Civil Procedure that require litigants to certify that authorities cited in factums are authentic. Expert witnesses are required to certify the authenticity of every authority, document or record referred to in an expert report. Additionally, the Ontario Superior Court of Justice recently updated its provincial practice direction to emphasize that the court will not tolerate inadvertence regarding the misuse of AI, particularly when it comes to hallucinated authorities. The Court notes that its power to sanction AI misuse includes public reprimand, ordering costs, adjourning or dismissing the case, initiating contempt proceedings and, where applicable, referring the matter to the Law Society of Ontario. The Ontario Civil Rules Committee is also considering rule amendments that define “artificial intelligence” for the purpose of the Rules and that provide a process for challenging the authenticity of allegedly fabricated or altered computer-generated evidence.
Various other Canadian courts (including those in Manitoba [PDF], the Yukon [PDF] and Nova Scotia [PDF], as well as the Federal Court [PDF]) require written disclosure if AI has been used in the preparation of court filings, although there is no requirement to certify the authenticity of cited authorities.
The Federal Court has also taken action. In its Notice to the Parties and the Profession on May 7, 2024 [PDF], the Court set out the expectation that submissions containing “content created or generated” by AI must contain “a Declaration in the first paragraph stating that AI was used in preparing the document, either in its entirety or only for specifically identified paragraphs.”
The growing judicial scrutiny of AI use in litigation underscores the importance of adopting a consistent and technically sound definition of what constitutes “artificial intelligence” for these purposes. Recent consultations on the proposed amendments to Ontario’s Rules of Civil Procedure have illustrated that even this basic concept remains unsettled. The definitions under consideration diverge from those used by the OECD, from the European Union’s globally influential Artificial Intelligence Act and even from emerging regulation such as Ontario’s Working for Workers Act. Such inconsistency creates uncertainty as to when AI-related disclosure obligations are triggered.
In practice, much of what is referred to as “AI” in litigation contexts involves machine learning-based systems that make probabilistic inferences (in contrast to deterministic inferences, such as an Excel calculation). These systems can produce erroneous and biased output, giving rise to the risks the courts are seeking to manage. We expect courts and rule-makers will seek to align definitions and judicial guidance to ensure that procedural rules and professional obligations are focused on a concept of AI that reflects the nature of these risks.
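By way of illustration only, the short Python sketch below contrasts the two modes of inference described above. The function names, candidate strings and sampling weights are invented for this example and do not represent any actual legal AI system; the point is simply that a deterministic calculation always returns the same answer, while probabilistically sampled output can read as fluent and confident yet be wrong unless it is independently verified.

```python
import random

# Deterministic inference: the same inputs always produce the same output,
# much like a spreadsheet formula.
def total_damages(amounts: list) -> float:
    return sum(amounts)

# Probabilistic inference (toy sketch): the output is sampled from a
# distribution, so a confident-sounding answer is not guaranteed to be true.
# The candidate strings and weights below are invented for illustration only.
CANDIDATES = [
    ("a verified, existing authority", 0.6),
    ("a plausible-looking but non-existent citation", 0.4),
]

def suggest_citation(rng: random.Random) -> str:
    citations, weights = zip(*CANDIDATES)
    return rng.choices(citations, weights=weights, k=1)[0]

if __name__ == "__main__":
    print(total_damages([1000.0, 250.50]))    # always prints 1250.5
    print(suggest_citation(random.Random()))  # may return the fabricated option
```

The sketch is not a description of how any particular legal AI tool works; it simply shows why output generated by sampling, unlike a deterministic calculation, needs to be checked against primary sources before it is relied on.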
Implications for legal departments and legal counsel
Given the immense advantages of using AI in legal advocacy and its role in advancing access to justice, we can expect increasing use of this technology by litigants and the courts. We can also expect heightened regulation and guidance from the courts and rule-makers, as well as potential consequences for failure to comply.
While the misuse of AI by legal advocates has drawn attention from the public and the courts, many of those risks can, and should, be mitigated well before litigation. Risk management is a core pillar of AI governance, and lawyers and legal departments are taking a more active role in shaping the frameworks that enable responsible use.
As noted in our Osler Update, AI governance: navigating the path ahead, it is critical to establish clear internal policies on AI use and AI-generated content, conduct robust vendor due diligence to safeguard data security and system reliability, and maintain transparent client communications about when and how AI is applied. Clients are, and should be, taking a similar approach, providing guidance on when and how AI may be used in addressing their matters.
By developing and following these safeguards, legal professionals can capture AI’s efficiency gains while meeting their professional and ethical obligations. Clients, in turn, will reap the benefits of a proactive approach.


