The board says we need an AI strategy. How do we start?
Across Canada and internationally, organizations of all sizes and in all industries have been using and developing AI tools for years. The generative AI (Gen AI) boom in recent years has been quickly followed by a push for national and industry regulation, which has now brought the focus back to corporate governance of AI within organizations. While many companies initially responded by prohibiting the use of open-access Gen AI tools for business purposes, both the opportunities and the risks of this rapidly advancing technology are demanding a more comprehensive AI strategy.
Just as corporate legal, compliance and business teams are starting to scope their AI governance plans, the board or senior management is pushing for a formal strategy. Here are some tips on where to start.
What AI systems or tools does the organization already use? Don’t limit this inventory to Gen AI or standalone AI initiatives. Try to identify enterprise software that has AI-powered components, such as predictive text, advanced search or security features. This will help map the company’s current position in its AI journey and the existing governance processes that can be leveraged in the broader AI strategy.
Where did these AI systems come from? Does the organization exclusively procure third-party tools, does it develop in-house, or does it modify or customize base products? Are vendors using AI in their services to the company even where it isn’t used in-house? Keeping track of where your organization’s systems came from can help determine how model risk and compliance requirements are being met, and who is responsible for their oversight.
Who reviewed and approved the existing AI systems? This will help identify stakeholders for the AI strategy, as well as the existing business approval and risk management frameworks already in place.
What policies, processes and training apply to existing AI systems? This could include content relevant to procurement, vendor management, IT, security, privacy, analytics, records retention, legal and human resources, as well as risk appetite and management frameworks and executive/board reporting processes. Organizations should also ask about informal processes, such as internal consultations, troubleshooting strategies and in-house expertise, even where not formally documented.
For more on AI and human resources, read “Can HR use AI to recruit, manage and evaluate employees?” and for more on cybersecurity, read “A sword and a shield: AI’s dual-natured role in cybersecurity”.
In a recent survey of global business leaders, 34% of respondents named “improved efficiency and productivity” as the main benefit they hoped to achieve by implementing AI systems within their organization. Identifying the desired outcomes of leveraging AI can help companies determine where their AI strategy is headed, and the following steps can help chart a path forward:
Pair the inventory and near-term forecast to develop a point-in-time summary of the organization’s use of AI, its existing governance program and areas for enhancement. This summary will inform the scope of the AI strategy and may surface insights into strategic and governance gaps.
Seek input from legal counsel and other external advisors on how to address these gaps and formalize the strategy.
In addition to data governance and cybersecurity oversight considerations, an AI strategy will need to stay agile to respond to changing regulations and emerging risks while capitalizing on opportunities. To this end, companies need to establish clear processes, policies and controls when implementing AI systems and new use cases within their enterprise.
This article was published as part of the Q4 2024 Torys Quarterly, “Machine capital: mapping AI risk”.