Q4 | Torys Quarterly, Fall 2025

Five questions in-house counsel should ask about agentic AI in financial services

It hasn’t been long since the generative AI boom first shook the market, forcing businesses, legislators and courts to urgently consider how to adapt to, adopt and manage the technology. Yet even as those shockwaves subside, new horizons are emerging, the latest of which is agentic AI.

Agentic AI refers to artificial intelligence systems that can autonomously plan and perform tasks, make decisions and interact with various systems to accomplish specific, pre-defined goals. Agentic AI uses, and builds on, older forms of AI but exhibits a higher degree of agency in executing the tasks it is designed to perform. This autonomy stems from a goal-driven, looped architecture in which the system cycles from perception to reasoning to action, allowing it to self-correct and learn from experience.
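
For readers who want a concrete picture of that loop, the sketch below shows its bare skeleton in Python. Every function here is a hypothetical stub standing in for a real perception, reasoning or tool-calling component; no particular agent framework is assumed.

```python
# A minimal, illustrative sketch of the perceive -> reason -> act loop
# described above. Every name is a hypothetical stub (no real agent
# framework is assumed); the stubs simulate a trivial task so the loop runs.

def perceive(history: list[str]) -> str:
    """Gather the current state (stub: summarize progress so far)."""
    return f"{len(history)} step(s) completed"

def plan_next_step(goal: str, observation: str) -> str:
    """Reason about the next action toward the goal (stub)."""
    return f"work toward '{goal}' given {observation}"

def execute(step: str) -> str:
    """Act on the plan, e.g., by calling a tool or external system (stub)."""
    return f"done: {step}"

def goal_met(history: list[str]) -> bool:
    """Check whether the goal has been reached (stub: three steps suffice)."""
    return len(history) >= 3

def run_agent(goal: str, max_iterations: int = 10) -> list[str]:
    """Loop until the goal is met, feeding each outcome back as context."""
    history: list[str] = []
    for _ in range(max_iterations):
        observation = perceive(history)           # perception
        step = plan_next_step(goal, observation)  # reasoning
        history.append(execute(step))             # action, fed back into the loop
        if goal_met(history):                     # stop/self-correction condition
            break
    return history

print(run_agent("reconcile yesterday's transactions"))
```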

As the integration of this technology within financial services accelerates, we share five key questions that financial institutions’ in-house counsel should be asking.

  1. What is the benefit/business case of agentic AI?
  2. How are our clients or partners using agentic AI when dealing with us?
  3. How does agentic AI increase our risk profile?
  4. How do we ensure a “human-in-the-loop” when using agentic AI?
  5. How does our governance framework need to change to account for agentic AI?

1. What is the benefit/business case of agentic AI?

Agentic AI’s use cases vary from organization to organization and are continually evolving as the technology develops. So an important preliminary question for in-house counsel is whether the value add of agentic AI is worth the cost of its adoption and implementation.

Generally, key benefits of agentic AI, as compared to older forms of task-based AI systems, include its ability to respond to dynamic and evolving circumstances and to interact with other tools, platforms and agents. Financial institutions might benefit from using agentic AI for a number of tasks:

  • Improving Know-Your-Client processes by collecting client information, checking it against identification and other documentation, cross-referencing information across databases, assessing the client’s risk tolerance, and making product and investment recommendations based on that assessment (a simplified sketch of such a workflow follows this list).
  • Assisting with commercial agreements by identifying deviations from standard terms, drafting fallback provisions based on the institution’s risk tolerance, and adjusting terms to be more “buyer-friendly” or “seller-friendly” based on precedent agreements.
  • Increasing operational efficiencies by monitoring trends, analyzing transactions and generating reports that identify opportunities for improvement. Some suggest that agentic AI could also be used to support customer relations processes, resulting in an overall reduction of human labour costs.
  • Supporting quality assurance by monitoring and validating the outputs of human agents and other AI systems, generating reports on common errors and developing recommendations for reducing future errors.
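
To make the first bullet above more concrete, below is a highly simplified, hypothetical sketch of how an agent-driven Know-Your-Client intake might chain steps together. Every function and data source here is a placeholder; in practice, each step would sit behind the institution’s actual identity-verification, screening and risk-scoring systems.

```python
# Hypothetical, simplified sketch of an agent-driven KYC intake pipeline.
# Each step is a stub; real systems (ID verification, sanctions screening,
# risk scoring) would replace them.

from dataclasses import dataclass, field

@dataclass
class ClientFile:
    name: str
    id_document: str
    checks: dict[str, bool] = field(default_factory=dict)
    risk_tolerance: str = "unknown"
    recommendations: list[str] = field(default_factory=list)

def verify_identity(client: ClientFile) -> None:
    # Stub: compare the declared name against the ID document provided.
    client.checks["identity"] = client.name.lower() in client.id_document.lower()

def cross_reference(client: ClientFile) -> None:
    # Stub: in practice, query sanctions/PEP lists and internal databases.
    client.checks["screening"] = True

def assess_risk_tolerance(client: ClientFile) -> None:
    # Stub: a real agent would infer this from questionnaires and history.
    client.risk_tolerance = "moderate"

def recommend_products(client: ClientFile) -> None:
    # Only recommend once every upstream check has passed.
    if all(client.checks.values()):
        client.recommendations.append(
            f"balanced portfolio ({client.risk_tolerance} risk)"
        )

client = ClientFile(name="A. Client", id_document="Passport: A. Client, #X1234")
for step in (verify_identity, cross_reference, assess_risk_tolerance, recommend_products):
    step(client)  # each step depends on the ones before it
print(client.recommendations)
```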

2. How are our clients or partners using agentic AI when dealing with us?

As interest in agentic AI continues to grow, it is important for in-house counsel to be aware of industry trends and client expectations around its implementation. Financial institutions may see an increase in clients using, or seeking to use, agentic AI to streamline their banking and investing processes. For example, AI agents could be used to monitor account balances and move unused funds into higher-yield accounts between billing cycles, ensuring that enough funds are in the right account when it is time to pay bills¹. Clients could also use agentic AI to track spending, enforce budgetary caps, make purchases and book travel.

Certain payment providers, including Visa, Mastercard and PayPal, recently indicated that they will support AI agents facilitating payments on behalf of users².

3. How does agentic AI increase our risk profile?

The appeal of agentic AI is that it requires less oversight than other forms of artificial intelligence. However, this reduced oversight also increases many of the risks associated with AI use, such as:

  • Risk of error: agentic AI, like any form of AI, can make mistakes. However, unlike a chatbot, where each output is seen and assessed by human eyes (even if only the end user’s), agentic AI may be assigned a string of sequential tasks, each of which depends on the prior task being done correctly. A mistake made at a midpoint in that sequence may be difficult to detect and could derail the end goal or make a final product unreliable. Case law in Canada and the U.S. suggests that financial institutions will be held responsible for errors made by AI agents used in their operations³.
  • Data breaches: agentic AI raises data protection risks similar to those of generative AI with respect to training data and inputs. The collection and use of high volumes of personal information when training AI models gives rise to data breach threats and privacy issues in any context. However, because agentic AI systems operate with reduced oversight, there is an increased risk of data breaches at the output stage. For example, an AI agent might use sensitive data, including personal information, when interacting with other platforms and agents outside of the organization. In some cases, this may be authorized (such as using personal information to make a purchase requested by a client), but there is also an increased risk of unauthorized disclosure or dissemination.
  • Regulatory risk: many AI regulations and standards, including the European Union’s Artificial Intelligence Act, require AI systems to be classified by risk, with mitigating measures scaled to the classification. As these standards evolve, autonomous AI agents dealing with financial information may warrant a higher risk rating, thereby attracting a higher degree of regulatory scrutiny. Generative AI guidance from federal and provincial regulators may also be updated as agentic AI enters the scene. Given the potential risks associated with this new technology, privacy regulators and financial services regulators (e.g., OSFI) are likely to be on high alert regarding uses of agentic AI that involve information or decisions about clients.
  • Other considerations: in addition to the risks raised by agentic AI itself, it is important to ensure that the capabilities of any AI products are not overstated. Recently, the U.S. Federal Trade Commission launched a claim against a company for asserting that its AI agents could operate autonomously as customer service representatives, when in fact they were often unable to perform basic functions, such as placing outbound calls to businesses, scheduling appointments, taking down email addresses, or responding accurately to questions, without substantial supervision⁴.

4. How do we ensure a “human-in-the-loop” when using agentic AI?

A key principle of responsible AI governance is to ensure that there is a qualified human overseeing AI use. So what does “human-in-the-loop” look like when AI agents are performing tasks autonomously? Currently, this is an open question for regulators and companies, but it will likely require humans to closely monitor the AI agent’s activities, review steps taken at various points of the task sequence, perform regular audits and continually assess the agent’s performance against established metrics.
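
As a rough illustration of what one such checkpoint could look like, the sketch below wraps an agent’s actions in an approval gate: routine steps proceed automatically, sensitive ones pause for a human decision, and every outcome is logged for later audit. The action names, threshold and policy are hypothetical.

```python
# Hypothetical sketch of a human-in-the-loop checkpoint: sensitive actions
# pause for human approval, and every decision is logged for audit.

import logging

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
audit_log = logging.getLogger("agent_audit")

SENSITIVE_ACTIONS = {"transfer_funds", "disclose_client_data"}  # assumed policy

def requires_approval(action: str, amount: float) -> bool:
    # Assumed policy: flag sensitive action types or large amounts.
    return action in SENSITIVE_ACTIONS or amount > 10_000

def human_approves(action: str, amount: float) -> bool:
    # Stub for a real review queue; here we simply ask on the console.
    return input(f"Approve {action} for ${amount:,.2f}? [y/N] ").lower() == "y"

def gated_execute(action: str, amount: float) -> str:
    if requires_approval(action, amount):
        if not human_approves(action, amount):
            audit_log.info("BLOCKED %s ($%s) by human reviewer", action, amount)
            return "blocked"
    audit_log.info("EXECUTED %s ($%s)", action, amount)
    return "executed"  # in practice, call the real downstream system here

print(gated_execute("transfer_funds", 25_000.00))
```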

Whether this degree of oversight is worth the investment in the “autonomous” system is a question to be assessed by each organization, with respect to the specific tool and use case (see question 1). Be aware that, at least in the early days of agentic AI, many of these assessments might land on a “no”.

5. How does our governance framework need to change to account for agentic AI?

Financial institutions using or facilitating the use of agentic AI should update their data governance strategy to account for new risks. This includes:

  • Reassessing the AI suite: as new tools are introduced, it will be important to assess the risk profile of not only these new tools, but of the organization’s AI suite as a whole. It may be time to retire some systems that still carry risk but offer a dwindling benefit.
  • Updating governance protocols: as the organization’s mix of AI tools shifts, so should its policies and governance frameworks. Make sure to revisit any AI, privacy and data protection policies to ensure that any new dimensions of risk are accounted for, and that you have a plan for keeping a qualified human in the loop, with adequate insight into how these systems are operating. For federally regulated financial institutions, this should include technology and cyber risk frameworks developed under OSFI’s Guideline B-13, as well as periodic audits and documentation of these controls.
  • Seeking expert advice: when in doubt about the best way to integrate new technologies, seek input from experts—both internal and external to the organization.

  1. Ramji Sundarajan and Uzayr Jeenah, “The end of inertia: Agentic AI’s disruption of retail and SME banking”, McKinsey & Company (August 15, 2025).
  2. Ibid.
  3. E.g., Moffatt v. Air Canada, 2024 BCCRT 149, and Mobley v. Workday, Inc., Case No. 23-cv-00770-RFL.

To discuss these issues, please contact the author(s).

This publication is a general discussion of certain legal and related developments and should not be relied upon as legal advice. If you require legal advice, we would be pleased to discuss the issues in this publication with you, in the context of your particular circumstances.

For permission to republish this or any other publication, contact Janelle Weed.

© 2025 by Torys LLP.

All rights reserved.
