A new era of innovation in Canada

Artificial intelligence

For guidance on the legal updates and business developments shaping the AI industry, and on how to mitigate risk, read Torys’ AI resources for business leaders. For more industry insight into the opportunities and challenges presented by AI, read our feature Q&As with Julien Billot, CEO of Scale AI, and Mark Shulgan, co-founder of Intrepid Growth Partners.

 
Artificial intelligence continues to move toward the forefront of business operations and client service. This rapid shift toward autonomy is redefining how people work, with the broader market’s interest—and capital—following suit.

Areas to watch

M&A

The recent market opportunities presented by AI have been a major driving force in global M&A. Across sectors, AI transactions continue with strong momentum as investment at the intersection of the infrastructure, energy and technology powering the AI boom picks up pace. In this fast-changing area, investors need to understand, among other things, where the real opportunities and risks in AI lie and how AI investments differ from traditional tech deals.

The Canadian government has marked AI sovereignty as a priority, with Budget 2025 proposing more than $1 billion over the next five years to support large-scale, sovereign AI infrastructure1. This investment will come from funds set aside under the C$2 billion “Canadian Sovereign AI Compute Strategy”, which is directed at AI, data centre and computing initiatives and promises to allocate C$1.7 billion to data centre and public computing infrastructure2.
 

Data centres

The AI boom—and the energy needed to scale it—is driving significant investment in infrastructure, particularly in data centres and supercomputing. Global private equity deals in data centres and related sectors reached US$107.7 billion between 2020 and 2024, almost doubling the investment deployed across the previous four years (US$49.9 billion)3. Canada’s leading pension funds are notable players in the space. For example, the “Maple 8” have invested nearly US$6.6 billion combined in data centre and digital infrastructure firms whose shares trade on U.S. stock exchanges4.

To streamline these kinds of projects, data centre-friendly public policies are being developed both federally and across a number of provinces. These policies are expected to be key drivers of data centre investment in Canada. In Ontario, the leading data centre market in Canada, over 80 facilities have been built to date, and the province expects 16 more to connect to its grid over the coming decade; these new builds are forecast to represent 13% of new electricity demand. The Alberta government recently proposed the Utilities Statutes Amendment Act, 2025 (Bill 8) which, if passed, will come into force in 2027. The Bill’s proposed framework will allow data centre developers to power their installations through offtake agreements with power producers or by generating electricity at the data centre site. It would also provide fast-track approvals, provided developers have secured the necessary transmission, regulatory and environmental permits5.

Data centre projects are complex and involve numerous stakeholders across multiple industries, from construction and hardware to software, power management, insurance and cybersecurity. As their development scales up, companies along the supply chain should consider key regulations and challenges. For projects in Ontario these include obtaining the necessary approvals and registrations from the Ontario Energy Board (OEB) or Independent Electricity System Operator (IESO), deciding whether to connect the data centre at the distribution or transmission level (which impacts cost responsibility and risk protection for ratepayers), and determining whether to pursue direct or self-supplied power (e.g., grid connection with behind-the-meter generation, or off-grid with generation on-site). Companies must also stay vigilant to evolving legislation and regulations affecting data centres. For example, Ontario’s Bill 40, which received royal assent on December 11, 2025, introduces amendments giving the Minister authority to set out in regulation that “data centres covered by the regulation must meet certain requirements before connecting or reconnecting to the electricity grid”6. No draft regulation has been published to date.

Mitigating risk

Privacy and data governance

The multifaceted, evolving nature of AI presents privacy challenges that have regulators grappling with how to limit the risks of sharing personal data with a technology that may use that data to generate information for third parties. Currently, there remains a patchwork of approaches across international jurisdictions.

The European Union has enacted the EU Artificial Intelligence Act, which integrates data governance into a risk-based framework7. This approach categorizes levels of risk and the corresponding governance measures based on potential harm to health, safety and fundamental rights. High-risk AI providers must establish risk management systems, apply data governance measures to training, validation and testing datasets, and draw up technical documentation, among other requirements.

The U.S. has announced plans to roll back existing regulations in a bid to spur innovation and maintain its leadership position. Its “AI Action Plan” does not introduce federal privacy mandates but rather looks to existing sectoral laws and state frameworks to guide AI use8.

Federal AI regulation in Canada is still a work in progress. The proposed Bill C-27—which was to overhaul the Personal Information Protection and Electronic Documents Act (PIPEDA)9, set up a new tribunal for privacy enforcement and introduce Canada’s first federal AI law (the AI and Data Act)—“died” with the prorogation of Parliament in January. The new cabinet will not be reviving Bill C-27 but has noted that it will introduce a framework that focuses on building trust and protecting people’s data10. Organizations should continue to monitor the privacy and AI landscape and implement a data governance policy that acts as a compliance framework across their collection, processing and use of data.
 

Intellectual property

IP ownership is a core challenge when it comes to AI. The datasets used to train AI models require large volumes of (often copyrighted) material, while the generated output can mix copyrighted material with information built from previously AI-generated data, presenting a complex, multi-layered web of potential IP owners. Because traditional IP legal frameworks do not account for a non-human creator, most jurisdictions use human-centred language when assigning authorship and ownership, leaving AI inputs and outputs in a legal grey area. While Canada and the U.S. have begun to develop potential approaches to copyright protection for computer-generated and AI-assisted works, these guidelines still require a high degree of human involvement. If models are trained on copyrighted works without the correct permissions, developers—and those who acquire the technology—may face infringement claims.

Tech contracting

Questions of IP ownership are making headlines, including whether an AI model generates information based on static or adaptive data and what sector-specific rules and requirements the AI product may be subject to. These questions often translate into complex agreement structuring for parties contracting and transacting in connection with AI assets. The risks associated with acquiring an AI company call for specific considerations in transaction documents, alongside an understanding of the nuances of integrating AI into business operations and procuring vendor AI.

Contracts must be built on strong due diligence and address the unique risks and responsibilities that the technology presents. They should consider relevant license rights and restrictions, define the scope of the product and its underlying datasets, outline data rights and include ongoing data governance, privacy provisions and compliance commitments. Agreements should also clearly state ownership of inputs and outputs, risk allocation and change of law clauses.

Litigation

As the use of AI increases, so do related disputes. Class actions targeting companies for AI-related data breaches, improper data handling practices, IP infringement and biased models are on the rise. While these cases are predominantly in the U.S., they may be a predictor of future class action developments in Canada. Indeed, the second half of 2025 saw several AI-focused class action filings alleging privacy or copyright issues. Privacy is the most active area of AI-related class actions, with claims ranging from the scraping of personal data without consent to improper data handling and breaches resulting from AI-driven systems. For example, if a generative AI tool is trained on publicly available but personally identifiable information, it may reproduce sensitive content, triggering privacy violations under laws like Québec’s Law 25 or the federal PIPEDA. In the copyright context, class actions often allege that datasets used to train AI models included copyrighted works without authorization. This raises liability questions for both the developers of large language models and the platforms that deploy them.

To mitigate product liability risks, AI developers and manufacturers should clearly communicate how the technology is used and what its capabilities and limitations are, and provide clear warnings against misuse. In addition to a transparent data governance strategy, models should be continuously monitored through testing, with each test result clearly documented. For consumer claims related to agentic AI-type technologies, questions to ask include: How are your chatbots trained to answer questions, especially when they are unsure of the answer? Are your representations about a product’s AI capabilities accurate?

AI is increasingly being leveraged across the employment life cycle, with employers using it in hiring, management and offboarding processes. While these tools provide efficiency, they can also present risks that may result in litigation, such as human rights violations and privacy issues. Organizations should maintain up-to-date knowledge of the legal landscape surrounding AI for employee recruitment and retention.


  1. Government of Canada, Budget 2025: Building a stronger Canadian economy.
  6. Environmental Registry of Ontario, New Requirements for Data Centres Seeking to Connect to the Electricity Grid in Ontario, ERO number 025-1001.
  7. EU Artificial Intelligence Act, High-level summary of the AI Act.
  9. Government of Canada, Personal Information Protection and Electronic Documents Act (S.C. 2000, c. 5).

To discuss these issues, please contact the author(s).

This publication is a general discussion of certain legal and related developments and should not be relied upon as legal advice. If you require legal advice, we would be pleased to discuss the issues in this publication with you, in the context of your particular circumstances.

For permission to republish this or any other publication, contact Janelle Weed.

© 2025 by Torys LLP.

All rights reserved.
 
