Earlier this year, the Commission d’accès à l’information (CAI), Québec’s privacy regulator, issued a submission to the Québec Minister of Labour as part of a consultation on the digital transformation of workplaces. The CAI made several recommendations for employers regarding the use of employee-related AI tools and applications and flagged high-risk areas in connection with AI in the workplace.
The CAI made a number of recommendations for employers. In its submission, it raised concerns that existing legislative provisions (such as those under privacy laws) are limited and do not address all of the issues at stake with the use of AI in the workplace. The CAI therefore took the position that its recommendations should eventually be formalized into legal requirements under labour or other laws.
The CAI expressed concerns about the normalization of employers collecting a wealth of information about their employees—including geolocation information, video surveillance, telephone and social media tracking, and biometric authentication data such as facial recognition—that can potentially be leveraged to train an AI-powered algorithmic management system. In the CAI’s view, the decisions and predictions generated by such systems could have significant impacts on individuals’ livelihoods and their rights as employees, and should be considered “high-risk”.
When dealing with these “high-risk” systems, the CAI says employers should take into account a set of special considerations, including:
Importantly, the CAI noted that human-in-the-loop oversight, a common risk mitigant for AI systems, requires more than a cursory human review of AI-generated decisions. Instead, it recommends that the overseeing human be able to examine the full analysis underlying a decision and guard against "automation bias", that is, the tendency to over-trust the system.
The CAI also cautioned against the use of AI systems in a broad and extensive manner, such that algorithms would end up dictating virtually all parameters of the workplace, such as working hours, working conditions, and remuneration. It further pointed out that, when assessing whether their collection and use of personal information is necessary and proportionate to their purposes, employers can be biased by considerations of convenience and efficiency, as well as by a sense of urgency to adopt AI technologies in a rapidly accelerating AI landscape. Since organizations conduct these assessments on themselves, the CAI expressed concern that few barriers would be in place to prevent harmful AI practices.
CAI’s regulatory approach. Though these recommendations do not have the force of law, they are nevertheless an important indication of the CAI’s views and regulatory expectations of businesses using AI. While AI and employee privacy have been priorities for the CAI, the CAI’s concerns outlined above (particularly with respect to AI systems it views as “high-risk”) may be an indicator of some of its more specific enforcement priorities with respect to AI in the workplace. Employers with employees in Québec should therefore consider taking measures to implement the CAI’s recommendations as appropriate.
Future legislation in Québec. As the CAI recognizes, existing legislation may need to be updated to transform some of its recommendations into binding law. Employers should therefore view these recommendations as a potential indicator of future AI workplace legislation and regulation in Québec.
How employers can respond. As a starting point, establishing an internal governance framework that applies to privacy and the use of AI in the workplace is a best practice in any jurisdiction to ensure compliance with existing privacy, human rights and employment laws, and to limit other AI-related risks.
Similarly, many of the CAI’s recommendations are based on existing privacy law requirements, such as transparency and access requirements, as applied to the higher-sensitivity AI context. Organizations should consider reviewing their existing employee privacy practices and governance framework to ensure that they can be practically applied to the organization’s current or potential uses of AI. For example, employers may want to confirm that they have the technological means to provide access to employee personal information held in AI systems and that they can explain to employees how their personal information was used to arrive at an automated decision.
Where appropriate, employers should also consider expanding the scope of these policies and practices to meet the higher threshold of these recommendations; for example, by adding an algorithmic impact analysis component to their existing privacy impact assessment (PIA) procedures for AI tools, or by increasing training for human overseers of AI decisions to ensure that they are capable of a thorough analysis of an AI decision.
For more information on how employers can identify and respond to AI risks, please review our guide: AI for employers: balancing risk and reward.
To discuss these issues, please contact the author(s).
This publication is a general discussion of certain legal and related developments and should not be relied upon as legal advice. If you require legal advice, we would be pleased to discuss the issues in this publication with you, in the context of your particular circumstances.
For permission to republish this or any other publication, contact Janelle Weed.
© 2025 by Torys LLP. All rights reserved.