December 17, 2025

(Circuit)Board Strategy

As new and evolving forms of artificial intelligence gain traction in the market, some academics and practitioners predict a future in which AI not only assists and advises corporate leaders, but itself takes on a leadership role. We discuss the board’s role in overseeing the company’s AI use and how the board itself can use AI. We also look to the future to explore whether AI use by boards will become a required part of the job, and whether—one day—AI might replace humans at the helm.

What you need to know

  • The board’s responsibility is to ensure proper governance and oversight of AI systems used within the organization. This includes ensuring that AI systems align with the corporation’s goals, are properly risk-managed, and provide an appropriate return on investment.
  • AI can assist corporations through fraud detection, risk identification and management, and predictive data analysis, ultimately providing inputs to human decision-making.
  • In the future, board members may have a responsibility to use AI if its use in corporate governance rises to a level of industry standard—but we are not yet at that point.
  • AI is unlikely to replace human directors in the foreseeable future. AI currently lacks fundamental leadership qualities, and existing legal regimes are not equipped to deal with non-human directors. Instead, AI is best used as a collaborator in corporate governance, not as a replacement for human skill and judgment.

Where we are now

What is the board’s role in overseeing AI use by the corporation?

As AI tools are assessed and implemented within the corporation, board members are responsible for:

  • Understanding the organization’s AI use: board members must stay up to date on what tools are being developed, what those tools are being used for, what data they have access to, and who the relevant stakeholders are.
  • Understanding the risks: common risks associated with AI use include those related to transparency and explainability, hallucinations and inaccuracies, privacy and data protection, intellectual property infringement, bias and discrimination, liability if AI causes harm, and regulatory compliance. These risks may vary in applicability and intensity depending on how the corporation is using AI, and for what purposes.
  • Overseeing AI: it is crucial to implement AI governance systems to ensure responsible development and use of AI within the organization. These governance systems should implement clear lines of accountability, including by setting out when information should be escalated to the board.
  • Staying informed: it will be important to ensure that either (a) someone on the board understands the technology, or (b) the board seeks external expertise when making decisions related to AI.

While these forms of AI may be new, the process of incorporating technology and overseeing its governance is not. Boards and management can lean on pre-existing judgment, expertise, and experience with change management in navigating this process.

How can the board use AI in governing the corporation?

The board can also use AI to improve its governance processes. For example, AI tools can be used to monitor regulatory compliance, identify indicators of fraud and cybersecurity risk, detect operational inefficiencies, reduce agency costs, and offer predictions based on company and market trends. These inputs can be used to assist and improve the board’s decision-making processes. AI can also be used to streamline administrative processes, including preparing materials and summarizing complex reports.

But directors should also be careful not to over-rely on AI, or to fall victim to automation bias1. AI can make mistakes, miss crucial pieces of context, or misinterpret the task it has been given. It is also important not to assume that AI is fully neutral, as it can be imbued with biases in its underlying training data.

Directors should also consider whether they could be held liable for mistakes made by an AI system. While Canadian corporate law provides some room for error, provided reasonable diligence is met, overreliance on AI to the point of delegating important components of decision-making may expose directors to liability.

Looking ahead

Will boards be required to use AI?

For the time being, directors have the flexibility to consider whether and what AI tools can help improve governance processes. But will AI someday be required for the job?

In Canada, directors must “exercise the care, diligence, and skill that a reasonably prudent person would exercise in comparable circumstances” when overseeing the business of a corporation2. But as the Supreme Court of Canada explained in BCE Inc., “[c]ommercial practice plays a significant role in forming the reasonable expectations of the parties”3. As AI use by corporate boards becomes more prevalent, it could, in theory, reach a point where it becomes standard practice. At this point, failure to use it in accordance with that standard practice could result in liability for breaching the directors’ duty of care4.

Will AI replace humans at the helm?

Fully AI-run corporations are not foreseeable on the horizon, for several reasons:

  • AI lacks fundamental leadership qualities. Even if AI’s biases and inaccuracies can be solved, the technology has not yet developed to the point where it can make the nuanced judgment calls that boards need to make5. It lacks the judgment, creativity, innovation, and conscience required for the job6.
  • AI lacks transparency. The black box issue persists—there is no reliable way to trace how an AI system makes decisions or reaches conclusions7.
  • We would need to significantly reimagine corporate law. The law does not currently contemplate artificially intelligent directors. As it stands, Canadian corporate law requires each director to be a “natural person”8, restricts delegation of duties that “lie at the heart of the management of the corporation”9, and imposes a standard of reasonableness and judgment on directors that AI systems cannot meet10. Directors are also required to act in the best interests of the corporation, “commensurate with the corporation’s duties as a responsible corporate citizen”11. AI is not at a place where it can make the nuanced, detailed value judgments required to shape an organization into a good corporate citizen.

Key takeaways

Human directors remain essential, but they should develop a familiarity with AI for two reasons: (1) to assist them in their responsibilities in overseeing AI use within the organization, and (2) to empower them to use tools that can improve their knowledge and decision-making capacities—while being aware of the risks, and mindful of their obligations to the corporation and its shareholders.


To discuss these issues, please contact the author(s).

This publication is a general discussion of certain legal and related developments and should not be relied upon as legal advice. If you require legal advice, we would be pleased to discuss the issues in this publication with you, in the context of your particular circumstances.

For permission to republish this or any other publication, contact Janelle Weed.

© 2025 by Torys LLP.

All rights reserved.
