June 11, 2025

Québec privacy regulator identifies best practices for the use of AI in the workplace

Earlier this year, the Commission d’accès à l’information (CAI), Québec’s privacy regulator, issued a submission to the Québec Minister of Labour as part of a consultation on the digital transformation of workplaces. The CAI made several recommendations for employers regarding the use of employee-related AI tools and applications and flagged high-risk areas in connection with AI in the workplace.

What you need to know

  • The CAI recommended a number of measures for employers using AI in the workplace, including:
    • establishing an internal AI policy;
    • notifying employees of the use of AI and engaging employees in business decisions on AI;
    • conducting algorithmic impact assessments for higher-risk AI activity; and
    • ensuring that privacy law requirements are followed with respect to the use of AI.
  • The CAI also voiced concern regarding the overcollection of employee information (including geolocation, biometric, and video data) and the potential training of AI “algorithmic management” systems on such information.
  • Though these recommendations are not binding law, they are a good indicator of the direction of future legislation and regulation in Québec, as well as of the CAI’s approach to current privacy law enforcement. As such, employers should consider reviewing and updating their employee privacy and AI policies, practices, and procedures.

CAI recommendations on the use of AI in the workplace

The CAI identified the following recommendations for employers. In its submission, it raised the concern that existing legislative provisions (such as those under privacy laws) are limited and do not address all of the issues at stake with the use of AI in the workplace. As such, the CAI took the position that these recommendations should eventually be formalized into legal requirements under labour or other laws.

  • Establishing an internal policy on the use of AI: The CAI recommends a policy that specifies: the AI tools and systems used (including the names of suppliers, as applicable); their purposes; the personal information involved (both input and output); the expected impact on the rights of data subjects; risk mitigation measures; how the AI tools produce results; how those results will be used in decision-making; how rights can be exercised in relation to these technologies; and the audits and evaluations completed by the employer in respect of the AI tool.
  • Conducting algorithmic impact analyses: In addition to the privacy impact assessment (PIA) required under Québec privacy law, the CAI recommends conducting an algorithmic impact analysis when using or developing an AI tool in the workplace. The analysis should assess the effects of AI systems that make partially or fully automated decisions affecting the rights and privacy of employees.
  • Notifying employees: The CAI recommends that employees be notified of an employer’s plan to use AI systems in its decision-making regarding employees well in advance of the decision being made. This recommendation goes beyond current privacy laws, which only require notice in advance of a fully automated decision.
  • Engaging employees: In addition to the notification point above, the CAI recommends that employers involve employees in the algorithmic impact analysis process, indicating that it favours greater employee engagement in company decisions on the adoption and implementation of AI in the workplace.
  • Being transparent: The CAI recommends transparency with employees about the AI systems and tools being used and their potential impact. 
  • Ensuring appropriate purposes and uses: The CAI recommends that employers regularly assess their use of AI systems and tools to ensure that these tools remain relevant and necessary for the employer’s legitimate purposes. It further recommends assessing the impact of deployed AI tools and avoiding the use of AI for what the CAI considers to be unacceptable purposes, including the analysis of emotions or psychological states, biometric categorization, and fully automated decisions that have a significant effect on employees. The CAI referenced EU Regulations as a benchmark for what should be considered prohibited or unacceptable AI practices.
  • Providing access: Employers are still subject to the privacy law requirement to provide employees with access to their personal information, which includes personal information used in and generated by AI tools and systems. This includes inferences generated by AI about an individual. The CAI recommends that employers consider a mechanism that would enable employees to collectively obtain such information in a structured and commonly used technological format. Employers should also keep the right to data portability in mind.

Key CAI concerns

The CAI expressed concerns about the normalization of employers collecting a wealth of information about their employees—including geolocation information, video surveillance, telephone and social media tracking, and biometric authentication data such as facial recognition—that can potentially be leveraged to train an AI-powered algorithmic management system. In the CAI’s view, the decisions and predictions generated by such systems could have significant impacts on individuals’ livelihoods and their rights as employees, and should be considered “high-risk”.

When dealing with these “high-risk” systems, the CAI says employers should take into account a set of special considerations, including:

  • increasing transparency,
  • preventing discriminatory biases,
  • ensuring necessity and proportionality,
  • establishing thorough human intervention and oversight,
  • ensuring sufficient technological maturity within the organization to deal with complex AI risks, and
  • providing employees with the ability to assert their rights and exercise control over their personal information.

Importantly, the CAI noted that the common human-in-the-loop risk mitigant for AI systems is not satisfied by a cursory human review of AI decisions. Instead, it recommends that the overseeing human be able to examine the full analysis behind a decision and avoid falling victim to “automation bias” by over-trusting the system.

The CAI also cautioned against using AI systems so broadly and extensively that algorithms end up dictating virtually all parameters of the workplace, such as working hours, working conditions, and remuneration. It further pointed out that, when assessing whether their collection and use of personal information is necessary and proportionate to their purposes, employers can be biased by considerations of convenience and efficiency, as well as by a sense of urgency to adopt AI technologies in a rapidly accelerating AI landscape. Since organizations conduct these assessments on themselves, the CAI expressed concern that few barriers would be in place to prevent harmful AI practices.

Impacts for employers

CAI’s regulatory approach. Though these recommendations do not have the force of law, they are nevertheless an important indication of the CAI’s views and regulatory expectations of businesses using AI. While AI and employee privacy have been priorities for the CAI, its concerns outlined above (particularly with respect to AI systems it views as “high-risk”) may indicate some of its more specific enforcement priorities with respect to AI in the workplace. Employers with employees in Québec should therefore consider taking measures to implement the CAI’s recommendations as appropriate.

Future legislation in Québec. As the CAI recognizes, existing legislation may need to be updated to transform some of its recommendations into binding law. Employers should therefore view them as a potential indicator of future AI workplace legislation and regulation in Québec.

How employers can respond. As a starting point, establishing an internal governance framework that applies to privacy and the use of AI in the workplace is a best practice in any jurisdiction to ensure compliance with existing privacy, human rights and employment laws, and to limit other AI-related risks.

Similarly, many of the CAI’s recommendations are based on existing privacy law requirements as applied to the higher-sensitivity AI context, such as transparency and access requirements. Organizations should consider reviewing their existing employee privacy practices and governance framework to ensure that they can be practically applied to the organization’s current or potential uses of AI. For example, employers may want to confirm they have the technological means to provide access to employee personal information in AI systems and can explain to employees how their personal information was used to arrive at an automated decision.

Where appropriate, employers should also consider expanding the scope of these policies and practices to meet the higher threshold of these recommendations; for example, by adding an algorithmic impact analysis component to their existing PIA procedure for AI tools, or by increasing training for human overseers of AI decisions to ensure that they are capable of a thorough analysis of an AI decision.

For more information on how employers can identify and respond to AI risks, please review our guide: AI for employers: balancing risk and reward.


To discuss these issues, please contact the author(s).

This publication is a general discussion of certain legal and related developments and should not be relied upon as legal advice. If you require legal advice, we would be pleased to discuss the issues in this publication with you, in the context of your particular circumstances.

For permission to republish this or any other publication, contact Janelle Weed.

© 2025 by Torys LLP.

All rights reserved.
 
