Authors
Andrew Gray
Lauren Nickerson
Louis Althaus
As new and evolving forms of artificial intelligence gain traction in the market, some academics and practitioners predict a future in which AI not only assists and advises corporate leaders, but itself takes on a leadership role. We discuss the board’s role in overseeing the company’s AI use and how the board itself can use AI. We also look to the future to explore whether AI use by boards will become a required part of the job, and whether—one day—AI might replace humans at the helm.
As AI tools are assessed and implemented within the corporation, board members are responsible for:
While these forms of AI may be new, the process of incorporating technology and overseeing its governance is not. Boards and management can lean on pre-existing judgment, expertise, and experience with change management in navigating this process.
The board can also use AI to improve its governance processes. For example, AI tools can monitor regulatory compliance, identify indicators of fraud and cybersecurity risk, detect operational inefficiencies, reduce agency costs, and offer predictions based on company and market trends. These inputs can assist and improve the board’s decision-making processes. AI can also be used to streamline administrative processes, including preparing materials and summarizing complex reports.
But directors should also be careful not to over-rely on AI, or to fall victim to automation bias1. AI can make mistakes, miss crucial pieces of context, or misinterpret the task it has been given. It is also important not to assume that AI is fully neutral, as it can be imbued with biases in its underlying training data.
Directors should also consider whether they could be held liable for mistakes made by an AI system. While Canadian corporate law provides some room for error where reasonable diligence is exercised, overreliance on AI to the point of delegating important components of decision-making may expose directors to liability.
For the time being, directors have the flexibility to consider whether and what AI tools can help improve governance processes. But will AI someday be required for the job?
In Canada, directors must “exercise the care, diligence, and skill that a reasonably prudent person would exercise in comparable circumstances” when overseeing the business of a corporation2. But as the Supreme Court of Canada explained in BCE Inc., “[c]ommercial practice plays a significant role in forming the reasonable expectations of the parties”3. As AI use by corporate boards becomes more prevalent, it could, in theory, reach a point where it becomes standard practice. At that point, failure to use AI in accordance with that standard practice could result in liability for breaching the directors’ duty of care4.
Fully AI-run corporations are not on the horizon, for several reasons:
Human directors remain essential, but they should develop a familiarity with AI for two reasons: (1) to assist them in overseeing AI use within the organization, and (2) to empower them to use tools that can improve their knowledge and decision-making capacities—while remaining aware of the risks and mindful of their obligations to the corporation and its shareholders.
To discuss these issues, please contact the author(s).
This publication is a general discussion of certain legal and related developments and should not be relied upon as legal advice. If you require legal advice, we would be pleased to discuss the issues in this publication with you, in the context of your particular circumstances.
For permission to republish this or any other publication, contact Janelle Weed.
© 2025 by Torys LLP.
All rights reserved.