October 2, 2025

AI class actions in Canada: new legal ground or the same old claims?

Across Canada, class action claims are being commenced against companies for their use (or alleged use) of AI tools and technology. This trend flows from the United States, where numerous class actions have been filed alleging that the use of AI tools creates liability for defendants. Do these cases represent a bold new frontier in class action litigation, or are AI tools simply amplifying class action risks that already exist?

In this article, we examine how recent trends in Canadian class action claims underscore the need for companies to prioritize responsible AI diligence and controls.

AI class action trends: recent cases

Although the initial wave of AI-related class actions in Canada has focused on copyright claims over the data used to train large language models (LLMs), litigation in other sectors and jurisdictions indicates that early adopters may face risk associated with:

  • consumer-related claims, with customer service chatbots providing incorrect or misleading information to consumers or making incorrect decisions, or companies misrepresenting their products’ AI capabilities (also known as AI washing);
  • competition and antitrust claims, with allegations that AI-powered tools allow for price-fixing or collusion between competitors;
  • employment claims, with AI tools collecting employees’ personal information for performance, attendance or other employee management purposes, and allegations of discriminatory hiring practices arising from AI-powered recruitment tools; and
  • privacy claims, with biometric information analytics (even if the AI system is processing de-identified information) and cybersecurity breaches of AI systems or LLMs themselves.

In each of these scenarios, courts will have to grapple with whether the use of AI itself gives rise to a distinct legal issue or whether the claim turns on the intersection of AI with other aspects of consumer protection, employment, copyright and privacy law. In an environment where AI is not specifically regulated, the courts’ analysis in these cases may, in turn, raise novel questions regarding the standard of care for companies implementing AI tools, liability in a supply chain that spans from LLM developers to end customers, and how to pinpoint causation of harm and quantify damages.

Identifying the standard of care for AI

Currently, there is no comprehensive legislation regulating AI use across Canadian businesses and industries. Instead, businesses must consider a patchwork of provincial laws and voluntary frameworks that intersect with the use of AI as a business tool.

Indeed, the federally introduced Artificial Intelligence and Data Act (AIDA) failed to pass before the 2025 federal election. In June 2025, Canada’s Minister of Artificial Intelligence, Evan Solomon, stated that the government does not intend to revive AIDA, and instead aims to foster Canadian innovation while easing international tensions associated with AI.

Voluntary frameworks and traditional legislation

In the absence of overarching federal legislation, companies should be aware of how their use of AI may intersect with regulatory frameworks and provincial laws, including:

  • Innovation, Science and Economic Development Canada’s Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems;
  • the Canadian Charter of Rights and Freedoms;
  • provincial consumer protection and privacy laws;
  • tort and civil liability laws; and
  • regulatory guidance, such as OSFI’s Draft Guideline E-23: Model Risk Management.

International legal frameworks, such as the EU Artificial Intelligence Act, and industry standards, such as ISO 42001, may also be helpful guides for documenting a company’s AI risk management approach.

Risk mitigation

While legislation governing the use of AI tools and related technology is likely to evolve along with the technology itself, businesses should be aware of how the use of AI can come into conflict with an organization’s other existing legal obligations. Below is a list of key considerations to take into account when implementing AI technology.

  • Privacy: What consent and data handling procedures are in place for your company’s use of AI? How can your organization provide enough transparency to support meaningful consent to AI processing of personal information without singling itself out for litigation and regulatory scrutiny? What do your terms of use say about your use of AI (and any data you collect from your customers)?
  • Competition: How might your AI tools (e.g., pricing algorithms or market analytics) lead to allegations of price fixing? What data do the AI tools you use rely on (e.g., has your competitors’ pricing data been made available to you through the tool)?
  • Copyright: What impact might copyright lawsuits against LLM developers have on a corporate customer of the LLM, whether from a product quality, employee productivity or reputational perspective?
  • Consumer: How are your chatbots trained to answer questions, especially when the correct answer is uncertain? Are your representations about a product’s AI capabilities accurate?
  • Employment: Do your procurement, supply chain and third-party risk management frameworks support sufficient downstream diligence to identify the risk of discriminatory algorithms or data sets in recruitment tools? What personnel training and oversight mechanisms are available to monitor for discriminatory output, on an individual and systemic level?
  • Securities: Are you accurately representing your AI capabilities and your company’s use of AI tools?

Final word: establishing an internal compliance framework

In the absence of comprehensive legislation, how can companies mitigate risks associated with AI use?

  1. By documenting initial and ongoing due diligence for any adopted AI tools.
  2. By documenting internal user training and output testing, and ensuring that there is a “human in the loop” to flag incorrect or biased outputs.
  3. By establishing an escalation path in the event of unanticipated outputs or results.
  4. By revisiting vendor contract terms with a view to shifting or sharing financial risk, and to ensuring appropriate AI oversight by suppliers.
  5. By assessing whether existing insurance policies provide sufficient coverage for business interruptions, losses or litigation arising from the use of an AI tool.

To discuss these issues, please contact the author(s).

This publication is a general discussion of certain legal and related developments and should not be relied upon as legal advice. If you require legal advice, we would be pleased to discuss the issues in this publication with you, in the context of your particular circumstances.

For permission to republish this or any other publication, contact Janelle Weed.

© 2025 by Torys LLP.

All rights reserved.
 
