April 27, 2023

Guide to artificial intelligence regulation in Canada

The existing regime of artificial intelligence (AI) regulation is poised to change significantly in the coming years, with the introduction of AI legislation and automated decision-making rules in Canada, and the roll-out of new legislation in the EU, the United States and other jurisdictions.

Currently, a piecemeal combination of existing human rights law, privacy law, tort law and intellectual property (IP) law partially regulates the AI industry. But this suite of technologies is gaining traction (even more than the recent surge in ChatGPT headlines might reflect), and legislators are responding. Organizations that use AI systems should be aware of existing and upcoming laws and regulations that govern the design, development, distribution and use of these systems.

Why AI creates legal risk—and how Canadian regulators have responded

Generally speaking, Canadian law applies equally to AI as to other technologies. However, there are several areas of law that are particularly applicable to AI. These include human rights, privacy, tort and IP.

Human rights law

AI technology can give rise to a variety of human rights impacts, depending on how it is developed and deployed. Certain uses of AI can infringe on the right to equality and non-discrimination. The rapid rise of AI-assisted automated decision-making processes has led to a concern that these systems are replicating patterns of discrimination encoded in the data fed to them during development.

Unconscious bias and unintentional discrimination can be present in the data used to train AI systems, which may then replicate those biases in their output. For instance, an AI system assisting with decision-making in the hiring process may unwittingly replicate the underrepresentation of a marginalized group that was present in the training data. This risk must be kept in mind when developing and implementing algorithms and AI systems, particularly those involved in decision-making about individuals, given the risk of infringing human rights law and policy, including the prohibitions on discrimination and discriminatory practices in the Canadian Human Rights Act and in the various provincial human rights statutes. Organizations should also consider the associated reputational risks.

In 2021, the government of Ontario consulted on draft commitments and actions for its Trustworthy Artificial Intelligence (AI) Framework. The framework consisted of i) commitments pertaining to the transparency of AI use by the government; ii) rules and tools to guide the safe, equitable and secure use of AI by the government; and iii) the reflection and protection of rights and values in the government’s use of AI. Among others, the Ontario Human Rights Commission (OHRC) made a submission to the public consultation on the Framework. The OHRC identified policing, health care and education as specific areas of concern where the use of AI could infringe on human rights, particularly through discriminatory impacts in each of these areas1.

Canada is also part of the Freedom Online Coalition (FOC), which has commented on the international human rights law risks associated with under-regulated AI, including risks to the internationally recognized right to privacy, and has advocated for private sector organizations to observe responsible business practices in their use of AI systems.

Tort law

The common law of negligence will apply to regulate AI in instances where parties are harmed by an AI system. While Canadian courts have not yet explored this issue, it is likely that established negligence principles will be applied to hold defendants liable for damage caused by an AI system that they developed or deployed.

Activities that can give rise to negligence claims, and activities that are otherwise illegal, remain negligent or illegal even when conducted through AI systems. Though liability risks can certainly arise for businesses that develop AI, Canadian law has not yet addressed how liability should be assigned where multiple stakeholders are involved in the use or development of an AI system, or where it is unclear where a harmful output or action of an AI system originated.

Other torts are likely to become relevant in the AI context as well, particularly given the ability of AI programs to alter and duplicate the voices and likenesses of individuals (known as “deepfakes”). These include intentional infliction of mental distress, placing a person in a false light, non-consensual distribution of intimate images and interference with economic relations. Recently enacted legislation in British Columbia addressing non-consensual distribution of intimate images specifically allows an individual to retain a reasonable expectation of privacy in an intimate image even when it has been altered or they are not identifiable in the image.

Privacy law

Almost any use of AI must consider the impact on privacy, given that AI systems are developed by consuming large amounts of data, some of which can relate to individuals. The collection, use and disclosure of personal information by private businesses in Canada is governed by the federal Personal Information Protection and Electronic Documents Act (PIPEDA) and by substantially similar provincial legislation in Québec, Alberta and British Columbia.

PIPEDA centres on individuals’ consent to the collection, use and disclosure of their personal information. Businesses that use or develop AI in their operations must be aware of the obligations PIPEDA creates to the extent that any personal information is used to train and develop AI systems, or is collected or used when consumers interact with AI systems.

To the extent that creators of AI systems collect, use or share personal information in the training or development of their AI systems, data privacy regulators have the authority to investigate their practices. For instance, in April 2023, the Office of the Privacy Commissioner of Canada (OPC) launched an investigation into OpenAI, the company behind the AI-powered chatbot ChatGPT, following a complaint alleging the collection, use and disclosure of personal information without consent. Similar investigations have been considered or launched in EU countries under the EU’s data protection law, the General Data Protection Regulation (GDPR).

Federal guidance

Prior to the introduction of Bill C-27, which proposed the landmark Artificial Intelligence and Data Act (AIDA), the OPC issued recommendations on how PIPEDA should be amended to appropriately regulate AI. Its recommendations included taking a human rights-based approach and laying out clear rights and obligations that would ensure the use of personal information in the AI context is regulated by PIPEDA2.

In general, in response to AI, the OPC advocates for a rights-based regime that includes demonstrable accountability, while allowing exceptions to the consent rules for socially beneficial uses of personal information. Though its recommendations are not legally binding, understanding the OPC’s position will be helpful in the event of AI-related privacy complaints as the AI industry continues to develop.

Recent guidance from the Office of the Superintendent of Financial Institutions (OSFI) on the responsible use of AI highlights similar priorities in the financial services sector, with a particular focus on effective AI governance, the use of data, the ethics of AI systems and the explainability of such systems for customers.

Ontario guidance

The Ontario Information and Privacy Commissioner (IPC) has also informally suggested that Ontario adopt a harmonized and rights-based approach to AI, including a focus on the following:

  • researching the effects of data profiling, social media exposure and algorithmic prediction on the psychological development of individuals, especially children and youth;
  • defining harms to include group harms that result from AI systems, going beyond physical, psychological, property or economic harms;
  • taking a broader human rights approach, going beyond federal constitutional powers that regulate commercial or criminal activity;
  • developing an integrated approach across Ontario’s public, private and not-for-profit sectors for the regulation of AI, with a focus on the areas of health and law enforcement; and
  • conducting algorithmic impact assessments within a principled and rights-based framework that balances the autonomy, dignity and integrity of persons or groups with the broader societal interests and public good considerations associated with AI use and development3.

Further non-binding IPC guidance can be found in its comments on a municipal police board’s policy regarding the use of AI. The IPC was concerned with ensuring clarity in the AI use policy, as well as transparency, accountability and oversight in the use of AI. Measures it recommended include a whistleblower mechanism to report violations of the policy, clear descriptions of roles and responsibilities, recordkeeping requirements for AI technologies and proactive disclosure of the organization’s uses of AI on its website. The IPC also highlighted the importance of human-in-the-loop oversight and meaningful explanations of AI use4.

Intellectual property law

IP legislation in Canada has not yet directly addressed the questions that arise when AI systems create what would otherwise be considered intellectual property, such as chatbots that produce written works or programs that generate digital images from user prompts. The Federal Circuit in the United States and the Supreme Court in the UK have both concluded that an “inventor” for the purposes of patent law must be a human, but this issue has not been brought before Canadian authorities. It is likely, however, that the existing U.S. and UK authorities will be influential when the issue arises in a Canadian court.

The federal government in 2021 suggested multiple approaches for how AI-generated works would be considered for protection under the Copyright Act5, given that Canadian copyright jurisprudence suggests that an author must be a natural person: 1) attribute authorship to the person who arranged for the work to be created; 2) clarify that copyright and authorship only apply to works generated by humans, and works created with no humans participating in the creation of the work would not be eligible for copyright protection; or 3) create a new “authorless” set of rights for AI-generated works.

Though the government proposed these approaches, the copyright issues are not addressed in the AIDA, and they remain a significant open question, particularly in light of the recent rise in the popularity and accessibility of AI-generated works.

The next generation of Canadian AI regulation

There are multiple upcoming and proposed legislative reforms in Canada that address AI regulation directly, the most significant of which is the AIDA. It is slated to come into effect in 2025 at the earliest.

Upcoming federal and provincial privacy law reforms also have provisions governing automated decision-making that will affect the use of AI in business operations.

The Artificial Intelligence and Data Act

The proposed federal Bill C-27, if passed, would implement Canada’s first artificial intelligence legislation, the AIDA. The AIDA creates Canada-wide obligations and prohibitions pertaining to the design, development and use of artificial intelligence systems in the course of international or interprovincial trade and commerce. It applies to any “technological system that, autonomously or partly autonomously, processes data related to human activities through the use of a genetic algorithm, a neural network, machine learning or another technique in order to generate content or make decisions, recommendations or predictions”.

According to the companion document to the AIDA6, the federal government expects the AIDA and its regulations to come into force no sooner than two years after Bill C-27 receives Royal Assent. Bill C-27 is currently in its second reading in the House of Commons.

“High-impact” systems in the AIDA

The most significant obligations in the AIDA apply to “high-impact” AI systems. Although that term is not defined in the bill, the government has suggested that regulations may prescribe the following factors for determining which AI systems are high-impact:

  • evidence of risks of harm to health and safety, or a risk of adverse impact on human rights, based on both the intended purpose and potential unintended consequences;
  • the severity of potential harms caused by the system;
  • the scale of use of the system;
  • the severity of harms or adverse impacts that have already occurred;
  • whether it is reasonably possible to opt out from the system, either practically or legally;
  • imbalances of economic or social circumstances;
  • the age of persons using or interacting with the system; and
  • whether the risks associated with the system are adequately regulated under another law.

The federal government has indicated that these obligations would not apply to a person distributing or publishing open-source software or models, given that these are not by themselves a complete AI system. They would, however, apply to a person making available for use an open-access, fully functioning high-impact AI system.

Requirements under AIDA: high-impact systems

Developers and operators of high-impact systems have significant obligations. They would be required to

  • establish measures to identify, assess and mitigate the risks of harm or biased output (“biased output” being defined with reference to the prohibited grounds of discrimination under the Canadian Human Rights Act);
  • monitor compliance and effectiveness of those measures;
  • publish on a publicly available website a plain-language description of the system that covers how the system is used, the types of content and outputs it is intended to generate, the mitigation measures established and any other information prescribed by regulation; and
  • notify the Minister as soon as feasible if use of the system results, or is likely to result, in material harm.

It is not clear whether the regulations will introduce exceptions to these requirements for systems that would otherwise be considered high-impact systems based on the considerations listed above, such as exceptions to the transparency requirement for tools aimed at security or fraud prevention. Given that such exceptions currently exist in privacy law for security tools, it is possible that similar ones may be introduced for the purposes of the AIDA.

Requirements under AIDA: non-high-impact systems

Those systems that do not qualify as high-impact systems are subject to less onerous obligations. Developers and operators are only required to

  • assess whether their system qualifies as a high-impact system; and
  • where the system processes, or makes available for use, anonymized data (i.e., formerly personal information that has been converted into an anonymized form to protect individuals’ privacy, such that it is no longer personally identifiable), establish measures (in accordance with regulations) with respect to the manner of anonymization and the use or management of the anonymized data.

Two key roles: AI and Data Commissioner, and the Minister

The AIDA will also create an office headed by a new AI and Data Commissioner to administer the act. This role would initially focus on education and assistance in complying with the AIDA, but would eventually expand to include compliance and enforcement. The Commissioner will ensure that the AIDA is administered and enforced in a manner appropriate to the capabilities and scale of the organization in question.

The AIDA gives the responsible Minister substantial investigation and enforcement powers, which include the power to require the production of records, require a company to conduct an internal audit or engage the services of an independent auditor to investigate possible contraventions, order a company to implement any measure to address issues raised in such an audit report, and order a company to pay an administrative monetary penalty.

Offences and penalties

Regulatory offences currently in the AIDA include:

  • contravening any of the above obligations;
  • obstructing or providing false or misleading information to the Minister; and
  • possessing or using personal information, knowing it was obtained illegally, “for the purpose of designing, developing, using or making available for use an artificial intelligence system”.

Currently, regulatory violations of the AIDA will result in administrative monetary penalties. The amounts of those penalties have not yet been set and will be determined by regulation.

In addition, the companion document clarifies that the AIDA would create three new criminal offences aimed at punishing AI-related activities that intentionally cause or create a risk of harm:

  • Knowingly possessing or using unlawfully obtained personal information to design, develop, use or make available for use an AI system. This could include knowingly using personal information obtained from a data breach to train an AI system;
  • Making an AI system available for use knowing, or being reckless as to whether, it is likely to cause serious harm or substantial damage to property, where its use actually causes such harm or damage; and
  • Making an AI system available for use with intent to defraud the public and to cause substantial economic loss to an individual, where its use actually causes that loss.

Unlike the regulatory offences that result in monetary penalties, these are crimes that can be investigated by law enforcement and prosecuted by the Public Prosecution Service of Canada.

Our understanding of the AIDA is subject to change with the introduction of regulations covering significant elements of the scope and content of the statute. The AIDA is also subject to possible substantive amendments as Bill C-27 progresses through Parliament, so affected organizations should monitor updates to this legislation, particularly those that use AI systems that may be considered “high impact” with reference to the factors above.

Upcoming and proposed privacy law reforms

Private Sector Act (Québec)

Provisions related to automated decision-making (ADM) in section 12.1 of Québec’s recently revamped private-sector privacy statute, the Act respecting the protection of personal information in the private sector (the Private Sector Act), will come into effect in September 2023. These provisions, originally part of Bill 64, will give individuals a right to be informed, and a right to object, when their personal information is used to make decisions about them without independent human judgment. These rights apply only to decisions that are exclusively automated.

Organizations will be required to

  • provide notice of the ADM process at the time the decision is made;
  • provide a channel for individuals to submit questions, comments or complaints to a representative who can review the decision;
  • allow people to request correction of the personal information used in the decision; and
  • inform the individual, upon request, of i) the personal information used in the decision; ii) the reasons, principal factors and parameters that led to the decision; and iii) the individual’s right to correct the personal information used in the decision.

This upcoming change will affect organizations that use AI in any fully automated decision-making process that involves the personal information of any individuals in Québec, whether they are customers or employees. For clarity, individuals must be given notice of each separate ADM process for which an organization uses their personal information.

Consumer Privacy Protection Act (federal)

The Consumer Privacy Protection Act (CPPA) in Bill C-27 aims to modernize existing Canadian privacy law for the digital economy. This includes restrictions around automated decision-making, which the federal CPPA currently defines more broadly than Québec’s Private Sector Act does. Section 2(1) of the CPPA defines an automated decision system to include a system that assists the judgment of human decision-makers, rather than limiting the definition to fully automated decision-making systems.

Section 62 of the CPPA requires organizations using ADM to make readily available a plain-language account of their use of any automated decision system that makes predictions, recommendations or decisions about individuals that could have a significant impact on them, including decisions made by a human with the assistance of such a system.

Section 63(3) of the CPPA also allows individuals to request an explanation of any automated decision that could have a significant impact on them, and the CPPA does not currently provide grounds for an organization to refuse such a request. By contrast, section 55 allows organizations to refuse requests for disposal that are vexatious or made in bad faith.

Bill C-27 may be passed by the end of 2023, and it will likely take at least a year after the bill passes for the CPPA to come into force.

How Canada compares with other major jurisdictions

The Government of Canada has indicated in its companion document to the AIDA that interoperability with legal frameworks in other jurisdictions will be a “key consideration” in the development of the AIDA’s regulations, in order to facilitate access to international markets for Canadian companies. Canada, the EU and the United States have all prioritized a rights-based approach that focuses on transparency and accountability and ensures that the potential harms of AI use are mitigated to the extent possible.

EU

Given the international influence of the EU’s GDPR as a global privacy standard, it is important to note that the EU has recently introduced the AI Act (AIA), which would ban AI applications and systems that pose an “unacceptable risk” and regulate the use of “high-risk” AI applications. It is projected to come into force in late 2023 or early 2024.

The AIA outlines the essential requirements that high-risk systems must meet (including data governance, transparency, cybersecurity, documentation, monitoring and human oversight) and the method of classifying a system as high-risk. Examples of high-risk systems include those used in law enforcement, education, the administration of justice and the biometric identification of natural persons.

The AIA also delineates which AI practices are prohibited, including general purpose social scoring, exploitation of children or mentally disabled persons resulting in harm, subliminal manipulation resulting in harm, and remote biometric identification for law enforcement purposes in publicly accessible spaces. Certain AI systems that are not high-risk may also be subject to enhanced transparency obligations, which would require that individuals be notified when they are interacting with an AI system7.

United States

As in Canada and the EU, AI regulation in the United States is still being considered and developed, at both the federal and state levels. In 2022, certain states, including New York and Illinois, began to regulate automated employment decision tools that use AI for candidate screening and employment decisions. The state data privacy laws scheduled to come into force in 2023 in California, Connecticut, Colorado and Virginia all contain provisions regarding automated decision-making as well. Some of these state laws include provisions on consumer rights for AI-powered decisions, AI transparency, and governance via AI impact assessments.

The White House Office of Science and Technology Policy published a Blueprint for an AI Bill of Rights in 20228. This guidance sets out five principles for the design, use and deployment of automated systems to protect the U.S. public: safe and effective systems; algorithmic discrimination protections; data privacy; notice and explanation; and human alternatives, consideration and fallback. The blueprint reflects concerns similar to those in the Canadian proposed legislation and guidance outlined above. At this time, there is no plan to introduce an AI Bill of Rights in the United States as legislation.

In addition to potential legislation, federal AI regulation may be in the pipeline in the United States via the Federal Trade Commission (FTC). In 2022, the FTC issued an advance notice of proposed rulemaking to address commercial surveillance and data security, which included a discussion of automated decision-making systems. It invited public comment on the kind of transparency companies should provide to consumers regarding AI, on prohibitions on “unfair or deceptive” AI uses, and on certifying AI as meeting accuracy, validity and reliability standards9.

The U.S. National Institute of Standards and Technology (NIST) has recently issued an AI Risk Management Framework that includes controls enabling organizations to demonstrate that their AI is trustworthy (valid and reliable; safe, fair and nonbiased; explainable and interpretable; and transparent and accountable), as well as an action framework to assist companies in managing AI risk and meeting the trustworthiness criteria10. The framework is voluntary, but NIST is an influential organization in the global development of technology standards; in Canada, some public sector organizations already require NIST compliance from their contractors.

Conclusion

While the legal and regulatory landscape is still uncertain, it is clear that regulation is increasing in multiple jurisdictions. Organizations developing or licensing AI systems should look to these sources now to forecast their likely compliance obligations, rather than launching products that may need to be altered once these laws come into force. Such organizations should consider proactively implementing written policies and procedures that address their expected obligations under the developing legal and regulatory framework.


  1. Department of Finance (Canada), news release (July 19, 2021).
  2. Office of the Privacy Commissioner of Canada, A Regulatory Framework for AI: Recommendations for PIPEDA Reform (November 2020).
  3. Information and Privacy Commissioner of Ontario, Privacy and humanity on the brink (July 21, 2022).
  4. Innovation, Science and Economic Development Canada, A Consultation on a Modern Copyright Framework for Artificial Intelligence and the Internet of Things (2021).

To discuss these issues, please contact the author(s).

This publication is a general discussion of certain legal and related developments and should not be relied upon as legal advice. If you require legal advice, we would be pleased to discuss the issues in this publication with you, in the context of your particular circumstances.

For permission to republish this or any other publication, contact Janelle Weed.

© 2024 by Torys LLP.

All rights reserved.
 
