Is AI judging me? Canadian courts and AI

Recent hype around generative AI has led to a flurry of activity in the legal profession. Litigators are considering how AI can be used to increase the value and efficiency of their work, while mitigating the risks it poses. Law societies are issuing directions on using Gen AI in legal practice, while reminding lawyers of their professional obligations[1].

Now, all eyes are turning to the courts: what role(s) will AI play in the courtroom?

Use of AI in the court

Perspectives from the bench

The use of Gen AI in court proceedings made headlines earlier this year when a lawyer referred to cases “hallucinated” by ChatGPT in a notice of application. Holding the lawyer personally liable for additional expenses incurred as a result of her mistake, the British Columbia Supreme Court observed that “generative AI is still no substitute for the professional expertise that the justice system requires of lawyers” and that “competence in the selection and use of any technology tools, including those powered by AI, is critical” to the integrity of the justice system[2].

In a similar case in the United States, Mata v. Avianca, a New York federal district court said that while reliable AI tools can provide useful assistance, lawyers are required to do their due diligence to ensure that their submissions are correct. Fake submissions waste time, deprive clients of legitimate citations to support their case, and damage the reputations of those involved in the case—and of the judicial system as a whole[3].

Guidance from the courts

Some Canadian courts have communicated their expectations on the use of Gen AI in court materials. For example:

  • The Federal Court now requires parties to give notice when using Gen AI. Although AI use will not in and of itself attract an adverse inference by the Court, the Court asks parties to exercise caution and keep a human in the loop to verify the accuracy of the technology’s output.
  • The Manitoba Court of King’s Bench requires parties to indicate how AI was used in preparing court materials.
  • The Québec Court of Appeal asks litigants to exercise caution, base their submissions on reliable sources, and maintain human oversight so that those submissions are accurate.
  • The Chief Justices of Alberta’s Court of Justice, Court of King’s Bench, and Court of Appeal jointly urged litigants to exercise caution, rely on authoritative sources, and ensure that a human is in the loop.
  • The British Columbia Supreme Court stated in its 2023 annual report that it expects litigants, lawyers, and others who participate in court proceedings to inform themselves of current issues and advice regarding AI, and to ensure that their materials are authentic and accurate.

Similar guidance has been issued by the Superior Court of Québec, the Supreme Court of Newfoundland and Labrador, the Provincial Court of Nova Scotia, and the Supreme Court of Yukon.

Use of AI by the court

AI decision-making in Canada

Canadian courts have also started to consider and respond to speculation about their own use of AI. For example:

  • In December 2023, the Federal Court announced that it will not use AI in decision-making without first engaging in public consultation. The Court also committed that its internal, administrative use of AI would abide by the principles of accountability, respect for fundamental rights, non-discrimination, accuracy, transparency, cybersecurity, and ensuring that a human is kept in the loop.
  • In Ontario, the Civil Rules Committee’s Artificial Intelligence Subcommittee has been established to analyze and advise the Committee on proposals respecting the use of AI in the litigation process.
  • Chief Justice Wagner of the Supreme Court of Canada has also announced that the Canadian Judicial Council is engaging with experts in AI and working to develop guidance for the courts.

A snapshot of other jurisdictions

United States

Some U.S. judges have expressed enthusiasm for using AI to assist with decision-making. In a concurrence in James Snell v. United Specialty Insurance Company, for example, Judge Newsom of the 11th Circuit disclosed that he had used Gen AI to confirm his preliminary interpretation of the word “landscaping” in an insurance policy. He concluded that it is no longer ridiculous to think that AI-powered large language models like ChatGPT “might have something useful to say about the common, everyday meaning of the words and phrases used in legal texts”[4].

United Kingdom, New Zealand, and Australia

In December 2023, the United Kingdom Courts and Tribunals Judiciary released guidance for judicial office holders on the responsible use of AI in courts and tribunals. The guidance states that “[a]ny use of AI by or on behalf of the judiciary must be consistent with the judiciary’s overarching obligation to protect the integrity of the administration of justice”[5]. While the guidance explains that AI can help with summarizing, writing presentations, and performing administrative tasks, it does not recommend AI for legal research and analysis.

Similar guidelines were released by the courts of New Zealand. Rather than discouraging use of AI for legal research and analysis, however, the New Zealand guidelines simply state that extra care is required.

The Australasian Institute of Judicial Administration has also released a guide on AI decision-making for judges, tribunal members, and court administrators. The guide provides an overview of AI tools, suggests where decision-makers can make use of AI, and discusses the impact of these tools on the core judicial values of open justice, accountability and equality before the law, procedural fairness, access to justice, and efficiency.

Conclusion

While it is unlikely that AI will be running Canadian courtrooms any time soon, we will likely see it playing an increasing role in the background of litigation processes, on both sides of the bench. Litigants and lawyers should therefore be aware of risks, follow courts’ guidance, and abide by the basic duties and principles that guide and govern legal disputes—no matter what technological transformations are to come.


  1. See, for example, the Law Society of British Columbia, “Guidance on Professional Responsibility and Generative AI” (November 20, 2023); the Law Society of Alberta, “The Generative AI Playbook: How Lawyers Can Safely Take Advantage of the Opportunities Offered by Generative AI” (January 2024); the Law Society of Manitoba, “Generative Artificial Intelligence—Guidelines for Use in the Practice of Law” (April 2024); the Law Society of Ontario, “Licensee use of generative artificial intelligence” (April 2024); the Law Society of Saskatchewan, “Guidelines for the Use of Generative Artificial Intelligence in the Practice of Law” (February 2024); the Law Society of Newfoundland & Labrador, “Artificial Intelligence in Your Practice”; and the Nova Scotia Barristers’ Society, “Artificial Intelligence in the Practice of Law—What is AI and can I or should I use it in my practice” (2023).
  2. Zhang v. Chen, 2024 BCSC 285 at para. 46.
  3. Mata v. Avianca, Inc., 678 F. Supp. 3d 443 (S.D.N.Y. 2023) (United States).
  4. James Snell v. United Specialty Insurance Company, No. 22-12581 (11th Cir. 2024) (United States).
  5. Courts and Tribunals Judiciary, “Artificial Intelligence: Guidance for Judicial Office Holders” (2023).

This article was published as part of the Q4 2024 Torys Quarterly, “Machine capital: mapping AI risk”.
