Why AI creates legal risk—and how Canadian regulators have responded
Generally speaking, Canadian law applies to AI as it does to other technologies. However, several areas of law are particularly relevant to AI, including human rights, privacy, tort and intellectual property.
Human rights law
AI technology can give rise to a variety of human rights impacts, depending on how it is developed and deployed. Certain uses of AI can infringe on the right to equality and non-discrimination. The rapid rise of AI-assisted automated decision-making processes has led to a concern that these systems are replicating patterns of discrimination encoded in the data fed to them during development.
Training data used to develop AI systems can contain unconscious bias and unintentional discrimination, which the resulting systems can then replicate in the form of biased output. For instance, an AI system assisting with hiring decisions may unwittingly replicate the underrepresentation of a marginalized group that was present in its training data. This risk must be kept in mind when developing and implementing algorithms and AI systems, particularly those involved in decision-making about individuals, given the risk of infringing human rights law and policy, including the provisions against discrimination and discriminatory practices in the Canadian Human Rights Act and the various provincial human rights statutes that protect against discrimination. Organizations should also consider the associated reputational risks.
In 2021, the government of Ontario consulted on draft commitments and actions for its Trustworthy Artificial Intelligence (AI) Framework. The framework consisted of i) commitments pertaining to the transparency of AI use by the government; ii) rules and tools to guide the safe, equitable and secure use of AI by the government; and iii) the reflection and protection of rights and values in the government’s use of AI. Among other stakeholders, the Ontario Human Rights Commission (OHRC) made a submission to the public consultation on the Framework. The OHRC identified policing, health care and education as areas of particular concern where the use of AI could infringe on human rights, especially through discriminatory impacts1.
Canada is also part of the Freedom Online Coalition (FOC), which has commented on the international human rights risks associated with under-regulated AI, including risks to the internationally recognized right to privacy, and has advocated for private sector organizations to adopt responsible business practices when using AI systems in their operations.
Tort law
The common law of negligence will apply to regulate AI where parties are harmed by an AI system. While Canadian courts have not yet explored this issue, it is likely that established tort law principles of negligence will continue to hold defendants liable for damage caused by an AI system they developed or deployed.
Note that activities that would be negligent or illegal remain so even when conducted through AI systems. Though liability risks can certainly arise for businesses that develop AI, Canadian law has not yet addressed how to assign liability where multiple stakeholders are involved in the development or use of an AI system, or where it is unclear where a harmful output or action of an AI system originated.
Other torts are likely to become relevant in the AI context as well, particularly given the ability of AI programs to alter and duplicate the voices and likenesses of individuals (known as “deepfakes”). These include intentional infliction of mental distress, placing a person in a false light, non-consensual distribution of intimate images and interference with economic relations. Recently enacted legislation in British Columbia addressing the non-consensual distribution of intimate images specifically allows an individual to retain a reasonable expectation of privacy in an intimate image even when the image has been altered or the individual is not identifiable in it.
Privacy law
Almost any use of AI must consider the impact on privacy, given that AI systems are developed by consuming large amounts of data, some of which can relate to individuals. The collection, use and disclosure of personal information by private businesses in Canada is governed by the federal Personal Information Protection and Electronic Documents Act (PIPEDA) and substantially similar provincial legislation in Québec, Alberta and British Columbia.
PIPEDA places importance on individuals’ consent to the collection, use and disclosure of their personal information. Businesses that use or develop AI in their operations must be aware of the obligations PIPEDA creates to the extent that any personal information is used to train and develop AI systems, or is collected or used when consumers interact with those systems.
To the extent that creators of AI systems collect, use or share personal information in the training or development of their AI systems, data privacy regulators have the authority to investigate their practices. For instance, in April 2023, the Office of the Privacy Commissioner of Canada (OPC) launched an investigation into OpenAI, the company behind the AI-powered chatbot ChatGPT, based on a complaint of collection, use and disclosure of personal information without consent. Similar investigations have been considered or launched in EU countries on the basis of the EU’s data protection law, the General Data Protection Regulation (GDPR).
Prior to the introduction of Bill C-27, which proposed the landmark Artificial Intelligence and Data Act (AIDA), the OPC issued recommendations on how PIPEDA should be amended in order to appropriately regulate AI. Its recommendations included taking a human rights-based approach and laying out clear rights and obligations with respect to personal information that would ensure that the use of personal information in the AI context would be regulated by PIPEDA2.
In general, in response to AI, the OPC advocates for a rights-based regime that includes demonstrable accountability, while also including exceptions to the rules of consent for socially beneficial uses of personal information. Though its recommendations are not legally binding, understanding the OPC’s position is helpful in the event of AI-related privacy complaints as the AI industry continues to develop.
Recent guidance from the Office of the Superintendent of Financial Institutions (OSFI) on the responsible use of AI highlights similar priorities for financial services regulators, with a particular focus on effective AI governance, the use of data, the ethics of AI systems, and the explainability of such systems for customers.
The Ontario Information and Privacy Commissioner (IPC) has also informally suggested that Ontario adopt a harmonized and rights-based approach to AI, including a focus on the following:
- researching the effects of data profiling, social media exposure and algorithmic prediction on the psychological development of individuals, especially children and youth;
- defining harms to include group harms that result from AI systems, going beyond physical, psychological, property or economic harms;
- taking a broader human rights approach, going beyond federal constitutional powers that regulate commercial or criminal activity;
- developing an integrated approach across Ontario’s public, private and not-for-profit sectors for the regulation of AI, with a focus on the areas of health and law enforcement; and
- conducting algorithmic impact assessments within a principled and rights-based framework that balances the autonomy, dignity and integrity of persons or groups with the broader societal interests and public good considerations associated with AI use and development3.
Further non-binding IPC guidance can be found in its comments on a municipal police board’s policy regarding the use of AI. The IPC was concerned with ensuring clarity in the AI use policy, as well as transparency, accountability and oversight in the use of AI. Measures it recommended include a whistleblower mechanism to report violations of the policy, clear descriptions of roles and responsibilities, recordkeeping requirements for AI technologies and proactive disclosure of AI uses on the organization’s website. The IPC also highlighted the importance of human-in-the-loop oversight and meaningful explanations of AI use4.
Intellectual property law
IP legislation in Canada has not yet directly addressed the questions that arise when AI systems create what would otherwise be considered intellectual property, such as chatbots that produce written works or programs that generate digital images in response to user prompts. The Federal Circuit in the United States and the Supreme Court in the UK have both concluded that an “inventor” for patent purposes must be a human, but this issue has not been brought before Canadian authorities. It is likely, however, that these U.S. and UK authorities will be influential when the issue comes before a Canadian court.
In 2021, the federal government suggested multiple approaches to how AI-generated works could be protected under the Copyright Act5, given that Canadian copyright jurisprudence suggests an author must be a natural person: 1) attribute authorship to the person who arranged for the work to be created; 2) clarify that copyright and authorship apply only to works generated by humans, such that works created with no human participation would not be eligible for copyright protection; or 3) create a new “authorless” set of rights for AI-generated works.
Though these approaches were proposed by the government, the copyright issues were not addressed by the AIDA, and they remain a significant open question, particularly in light of the recent rise in the popularity and accessibility of AI-generated works.