The existing regime of artificial intelligence (AI) regulation is poised to change significantly in the coming years, with the introduction of AI legislation and automated decision-making rules in Canada, and the roll-out of new legislation in the EU, the United States and other jurisdictions.
Currently, a piecemeal combination of existing human rights law, privacy law, tort law and intellectual property (IP) law partially regulates the AI industry. But this suite of technologies is gaining traction—even more than the recent surge in ChatGPT headlines might reflect—and legislators are responding. Organizations that leverage AI systems should be aware of existing and upcoming laws and regulations governing the design, development, distribution and use of these systems.
Generally speaking, Canadian law applies to AI in the same way it applies to other technologies. However, several areas of law are particularly relevant to AI: human rights, privacy, tort and IP.
AI technology can give rise to a variety of human rights impacts, depending on how it is developed and deployed. Certain uses of AI can infringe on the right to equality and non-discrimination. The rapid rise of AI-assisted automated decision-making processes has led to a concern that these systems are replicating patterns of discrimination encoded in the data fed to them during development.
Unconscious bias and unintentional discrimination can be embedded in the training data used to develop AI systems, which may then reproduce those biases in their output. For instance, an AI system assisting with decision-making in the hiring process may unwittingly replicate the underrepresentation of a marginalized group that was present in its training data. This risk must be kept in mind when developing and implementing algorithms and AI systems, particularly those involved in decision-making about individuals, given the risk of infringing human rights law and policy, including the provisions against discrimination and discriminatory practices in the Canadian Human Rights Act and the various provincial human rights statutes that protect against discrimination. Organizations should also consider the associated reputational risks.
In 2021, the government of Ontario consulted on draft commitments and actions for its Trustworthy Artificial Intelligence (AI) Framework. The framework consisted of i) commitments pertaining to the transparency of AI use by the government; ii) rules and tools to guide the safe, equitable and secure use of AI by the government; and iii) the reflection and protection of rights and values in the government’s use of AI. Among others, the Ontario Human Rights Commission (OHRC) made a submission to the public consultation on the Framework. The OHRC identified policing, health care and education as specific areas of concern where the use of AI could infringe on human rights, particularly through discriminatory impact in each of these areas1.
Canada is also part of the Freedom Online Coalition (FOC), which has commented on the international human rights law risks associated with under-regulated AI, including risks to the internationally recognized right to privacy, and has advocated for private sector organizations to observe responsible business practices when using AI systems in their operations.
The common law of negligence will apply where parties are harmed by an AI system. While Canadian courts have not yet explored this issue, established tort law principles of negligence will likely continue to hold defendants liable for damage caused by an AI system that they developed or deployed.
Note that activities that can give rise to negligence claims, and activities that are illegal, remain negligent or illegal even when conducted through AI systems. Though liability risks can certainly arise for businesses that develop AI, Canadian law has not yet addressed how liability should be assigned where multiple stakeholders are involved in the use or development of an AI system, or where it is unclear where a harmful output or action of an AI system originated.
Other torts are likely to become relevant in the AI context as well, particularly given the ability of AI programs to alter and duplicate the voices and likenesses of individuals (known as “deepfakes”). These include intentional infliction of mental distress, placing a person in a false light, non-consensual distribution of intimate images and interference with economic relations. Recently enacted legislation in British Columbia addressing non-consensual distribution of intimate images specifically provides that an individual retains a reasonable expectation of privacy in an intimate image even when it has been altered or they are not identifiable in it.
Almost any use of AI must take privacy into account, given that AI systems are developed by consuming large amounts of data, some of which can relate to individuals. The collection, use and disclosure of personal information by private businesses in Canada is governed by the federal Personal Information Protection and Electronic Documents Act (PIPEDA) and substantially similar provincial legislation in Québec, Alberta and British Columbia.
PIPEDA places importance on the consent of individuals in terms of the collection of their personal information and how it is used and disclosed. Businesses that use or develop AI in their operations must be aware of the obligations created by PIPEDA to the extent that any personal information is used to train and develop AI systems or that is collected or used when consumers interact with AI systems.
To the extent that creators of AI systems collect, use or share personal information in training or developing their AI systems, data privacy regulators have the authority to investigate their practices. For instance, in April 2023, the Office of the Privacy Commissioner of Canada (OPC) launched an investigation into OpenAI, the company behind the AI-powered chatbot ChatGPT, in response to a complaint alleging the collection, use and disclosure of personal information without consent. Similar investigations have been considered or launched in EU countries under the EU’s data protection law, the General Data Protection Regulation (GDPR).
Prior to the introduction of Bill C-27, which proposed the landmark Artificial Intelligence and Data Act (AIDA), the OPC issued recommendations on how PIPEDA should be amended in order to appropriately regulate AI. Its recommendations included taking a human rights-based approach and laying out clear rights and obligations with respect to personal information that would ensure that the use of personal information in the AI context would be regulated by PIPEDA2.
In general, in response to AI, the OPC advocates for a rights-based regime that includes demonstrable accountability, while allowing exceptions to consent requirements for socially beneficial uses of personal information. Though its recommendations are not legally binding, understanding the OPC’s position will be helpful in the event of AI-related privacy complaints as the AI industry continues to evolve.
Recent guidance from the Office of the Superintendent of Financial Institutions (OSFI) on the responsible use of AI highlights similar priorities for financial services regulators, with a particular focus on effective AI governance, the use of data, the ethics of AI systems and the explainability of such systems for customers.
The Ontario Information and Privacy Commissioner (IPC) has also informally suggested that Ontario adopt a harmonized and rights-based approach to AI.
Further non-binding IPC guidance can be found in its comments on a municipal police board’s policy regarding the use of AI. The IPC was concerned with ensuring clarity in the AI use policy, as well as transparency, accountability and oversight in the use of AI. Its recommended measures include a whistleblower mechanism to report violations of the policy, clear descriptions of roles and responsibilities, recordkeeping requirements for AI technologies and proactive disclosure of the organization’s uses of AI on its website. The IPC also highlighted the importance of human-in-the-loop oversight and meaningful explanations of AI use4.
IP legislation in Canada has not yet directly addressed the questions that arise when AI systems create what would otherwise be considered intellectual property, such as chatbots that produce written works or programs that generate digital images from user prompts. The U.S. Court of Appeals for the Federal Circuit and the UK Supreme Court have both concluded that an “inventor” for the purposes of patent law must be a human, but this issue has not yet come before Canadian authorities. It is likely, however, that these U.S. and UK authorities will be influential when a Canadian court decides the question.
The federal government in 2021 suggested multiple approaches for how AI-generated works would be considered for protection under the Copyright Act5, given that Canadian copyright jurisprudence suggests that an author must be a natural person: 1) attribute authorship to the person who arranged for the work to be created; 2) clarify that copyright and authorship only apply to works generated by humans, and works created with no humans participating in the creation of the work would not be eligible for copyright protection; or 3) create a new “authorless” set of rights for AI-generated works.
Though the government proposed these approaches, copyright issues are not addressed in the AIDA, and they remain a significant open question, particularly in light of the recent rise in the popularity and accessibility of AI-generated works.
There are multiple upcoming and proposed legislative reforms in Canada that address AI regulation directly, the most significant of which is the AIDA. It is slated to come into effect in 2025 at the earliest.
Upcoming federal and provincial privacy law reforms also have provisions governing automated decision-making that will affect the use of AI in business operations.
The proposed federal Bill C-27, if passed, would implement Canada’s first artificial intelligence legislation, the AIDA. The AIDA creates Canada-wide obligations and prohibitions pertaining to the design, development and use of artificial intelligence systems in the course of international or interprovincial trade and commerce. This applies to any “technological system that, autonomously or partly autonomously, processes data related to human activities through the use of a genetic algorithm, a neural network, machine learning or another technique in order to generate content or make decisions, recommendations or predictions”.
According to the companion document to the AIDA6, the federal government expects the AIDA and its regulations to come into force no sooner than two years after Bill C-27 receives Royal Assent. Bill C-27 is currently in its second reading in the House of Commons.
The most significant obligations in the AIDA apply to “high-impact” AI systems. Although that term has not been defined in the Bill, the government has suggested that the regulations may prescribe the factors to be considered in determining whether an AI system is high impact.
The federal government has indicated that these considerations would not apply to a person distributing or publishing open-source software or models, since these by themselves are not considered a complete AI system. They would, however, apply to a person making available for use an open-access, fully functioning high-impact AI system.
Developers and operators of high-impact systems would face significant obligations under the AIDA.
It is not clear whether the regulations will introduce exceptions to these requirements for systems that would otherwise be considered high-impact systems based on the considerations listed above, such as exceptions to the transparency requirement for tools aimed at security or fraud prevention. Given that such exceptions currently exist in privacy law for security tools, it is possible that similar ones may be introduced for the purposes of the AIDA.
Systems that do not qualify as high-impact systems attract less onerous obligations for their developers and operators.
The AIDA will also create an office headed by a new AI and Data Commissioner to administer the act. The Commissioner’s role would initially focus on education about, and assistance with, compliance with the AIDA, but would eventually expand to include compliance monitoring and enforcement. The Commissioner is expected to administer and enforce the AIDA in a manner proportionate to the capabilities and scale of the organization in question.
The AIDA gives the responsible Minister substantial investigation and enforcement powers, which include the power to require the production of records, require a company to conduct an internal audit or engage the services of an independent auditor to investigate possible contraventions, order a company to implement any measure to address issues raised in such an audit report, and order a company to pay an administrative monetary penalty.
The AIDA as currently drafted also sets out a number of regulatory offences.
Currently, regulatory violations of the AIDA will result in administrative monetary penalties. The amounts of those penalties have not yet been established and are to be set by regulation.
In addition, the companion document has clarified that the AIDA will create three new criminal offences under the Criminal Code of Canada, aimed at punishing AI-related activities that intentionally cause or create a risk of harm.
Unlike the regulatory offences that result in monetary penalties, these are crimes that can be investigated by law enforcement and prosecuted by the Public Prosecution Service of Canada.
Our understanding of the AIDA is subject to change with the introduction of regulations covering significant elements of the scope and content of the statute. The AIDA is also subject to possible substantive amendments as Bill C-27 progresses through Parliament, so updates to this legislation should be monitored by organizations that are affected—particularly those that leverage the use of AI systems that may be considered “high impact” with reference to the above factors.
Provisions related to automated decision-making (ADM) in section 12.1 of Québec’s recently revamped private-sector privacy statute, the Act respecting the protection of personal information in the private sector (the Private Sector Act), will come into effect in September 2023. These provisions, originally part of Bill 64, will give individuals the right to be informed, and the right to object, when their personal information is used to make decisions about them without independent human judgment. These provisions apply only to ADM decisions that are exclusively automated.
These provisions will impose new transparency and notice obligations on organizations.
This upcoming change will affect organizations that use AI in any fully automated decision-making process that involves the personal information of any individuals in Québec, whether they are customers or employees. For clarity, individuals must be given notice of each separate ADM process for which an organization uses their personal information.
The Consumer Privacy Protection Act (CPPA) in Bill C-27 aims to modernize existing Canadian privacy law for the digital economy, including by imposing restrictions on automated decision-making, which the federal CPPA currently defines more broadly than Québec’s Private Sector Act does. Section 2(1) of the CPPA defines an automated decision system to include a system that assists the judgment of human decision-makers, rather than limiting the definition to fully automated decision-making systems.
Section 62 of the CPPA requires organizations using ADMs to make readily available a plain-language account of the organization’s use of any ADM that makes predictions, recommendations or decisions about individuals that could have a significant impact on the individuals concerned, including those decisions made by a human but assisted by a system.
Section 63(3) of the CPPA also allows individuals to request an explanation of any automated decision that could have a significant impact on them, and the CPPA does not currently allow an organization to refuse an explanation request: section 55 only allows organizations to refuse requests for disposal that are vexatious or made in bad faith.
Bill C-27 may be passed by the end of 2023, and it will likely take at least a year after the bill passes for the CPPA to come into force.
The Government of Canada has indicated in its companion document to the AIDA that interoperability with legal frameworks in other jurisdictions will be a “key consideration” in the development of the AIDA’s regulations in order to facilitate access to international markets for Canadian companies. Canada, the EU and the United States have all taken similar approaches in prioritizing a rights-based approach focusing on transparency and accountability and ensuring that potential harms of AI use are mitigated to the extent possible.
Given the international influence of the EU’s GDPR as a global privacy standard, it is important to note that the EU has recently introduced the AI Act (AIA), which would ban “unacceptable risk” AI applications and systems and regulate the use of “high-risk” AI applications. The AIA is projected to come into force in late 2023 or early 2024.
The AIA outlines the essential requirements that high-risk systems need to meet (including data governance, transparency, cybersecurity, documentation, monitoring and human oversight) and the method of classifying a system as high-risk. Some examples of high-risk systems exist in law enforcement, education, administration of justice and biometric identification of natural persons.
The AIA also delineates which AI practices are prohibited, which include general purpose social scoring, exploitation of children or mentally disabled persons resulting in harm, subliminal manipulation resulting in harm, and remote biometric identification for law enforcement purposes in publicly accessible spaces. Certain AI systems that are not high-risk may also be subject to enhanced transparency obligations, which would require humans to be notified that they are interacting with an AI system7.
Like in Canada and the EU, AI regulation in the United States is still being considered and developed, at both the federal and state levels. In 2022, certain states began to regulate automated employment decision tools that use AI for candidate screening and employment decisions, including New York and Illinois. The state data privacy laws scheduled to come into force in 2023 in California, Connecticut, Colorado and Virginia all contain provisions regarding automated decision-making as well. Some of these state laws include provisions on consumer rights for AI-powered decisions, AI transparency and governance via AI impact assessments.
The White House Office of Science and Technology Policy published a Blueprint for an AI Bill of Rights in 20228. This guidance sets out five principles for the design, use and deployment of automated systems to protect the U.S. public: safe and effective systems; algorithmic discrimination protections; data privacy; notice and explanation; and human alternatives, consideration and fallback. The blueprint reflects concerns with AI similar to those in the proposed Canadian legislation and guidance outlined above. At this time, there is no plan to introduce an AI Bill of Rights in the United States as legislation.
In addition to potential legislation, federal AI regulation may be in the pipeline for the United States via the Federal Trade Commission (FTC). In 2022, the FTC issued an advance notice of proposed rulemaking to address commercial surveillance and data security, which included a discussion of automated decision-making systems. The notice invited public comment on the kind of transparency companies should provide to consumers regarding AI, on prohibitions of “unfair or deceptive” AI uses, and on certifying AI as meeting accuracy, validity and reliability standards9.
The U.S. National Institute of Standards and Technology (NIST) has recently issued an AI Risk Management Framework that includes controls enabling organizations to demonstrate that their AI is trustworthy (valid and reliable; safe, fair and unbiased; explainable and interpretable; and transparent and accountable), as well as an action framework to assist companies in managing AI risk and meeting the trustworthiness criteria10. The framework is voluntary, but NIST is an influential organization in the global development of technology standards; in Canada, some public sector organizations already require NIST compliance from their contractors.
While the legal and regulatory landscape remains uncertain, it is clear that AI regulation is increasing across multiple jurisdictions. Organizations developing or licensing AI systems should look to these sources now to forecast their likely compliance obligations, rather than launching products that may need to be altered once these laws come into force. Such organizations should consider proactively implementing written policies and procedures that address their expected obligations under the developing legal and regulatory framework.
To discuss these issues, please contact the author(s).
This publication is a general discussion of certain legal and related developments and should not be relied upon as legal advice. If you require legal advice, we would be pleased to discuss the issues in this publication with you, in the context of your particular circumstances.
For permission to republish this or any other publication, contact Janelle Weed.
© 2023 by Torys LLP.
All rights reserved.