Across Canada, class action claims are being commenced against companies for their use (or alleged use) of AI tools and technology. This trend flows from the United States, where numerous class actions have been filed alleging that the use of AI tools creates liability for defendants. Do these cases represent a bold new frontier in class action litigation, or are AI tools simply amplifying class action risks that already exist?
In this article, we examine how recent trends in Canadian class action claims underscore the need for companies to prioritize responsible AI diligence and controls.
Although the initial wave of AI-related class actions in Canada has focused on copyright claims over data used to train LLMs, litigation in other sectors and jurisdictions indicates that early adopters may face risk associated with:
In each of these scenarios, courts will have to grapple with whether the use of AI itself gives rise to a distinct legal issue or whether the claim turns on the intersection of AI with existing consumer protection, employment, copyright and privacy law. In an environment where AI is not specifically regulated, the courts' analysis in these cases may, in turn, raise novel questions regarding the standard of care for companies implementing AI tools, liability across a supply chain that spans from LLM developers to end customers, and how to pinpoint causation of harm and quantify damages.
Currently, there is no comprehensive legislation regulating AI use across Canadian businesses and industries. Instead, businesses must consider a patchwork of provincial laws and voluntary codes that intersect with the use of AI as a business tool.
Indeed, the federally introduced Artificial Intelligence and Data Act (AIDA) stalled in Parliament in 2024 and failed to pass before the 2025 federal election[1]. In June 2025, Canada's Minister of Artificial Intelligence, Evan Solomon, stated that the government does not intend to revive AIDA and instead aims to foster Canadian innovation while easing international tensions associated with AI innovation[2].
In the absence of overarching federal legislation, companies should be aware of how their use of AI may intersect with regulatory frameworks and provincial laws, including Innovation, Science and Economic Development Canada's Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems[3]; the Canadian Charter of Rights and Freedoms[4]; provincial consumer and privacy laws; tort and civil liability laws; and regulatory guidance such as OSFI's Draft Guideline E-23: Model Risk Management[5]. International legal frameworks, such as the EU Artificial Intelligence Act[6], and industry standards, such as ISO 42001[7], may also be helpful guides to documenting a company's AI risk management approach.
While legislation governing the use of AI tools and related technology is likely to evolve along with the technology itself, businesses should be aware of how the use of AI can come into conflict with their existing legal obligations. Below is a list of key considerations to take into account when implementing AI technology.
In the absence of comprehensive legislation, how can companies mitigate risks associated with AI use?
To discuss these issues, please contact the author(s).
This publication is a general discussion of certain legal and related developments and should not be relied upon as legal advice. If you require legal advice, we would be pleased to discuss the issues in this publication with you, in the context of your particular circumstances.
For permission to republish this or any other publication, contact Janelle Weed.
© 2025 by Torys LLP.
All rights reserved.