It hasn’t been long since the generative AI boom first shook the market, forcing businesses, legislators and courts to urgently consider how to adapt to, adopt and manage the technology. Yet even as we ride out these shockwaves, new horizons emerge, the latest being agentic AI.
Agentic AI refers to artificial intelligence systems that are capable of autonomously planning and performing tasks, making decisions and interacting with various systems to accomplish specific, pre-defined goals. Agentic AI uses, and builds on, older forms of AI but exhibits a higher degree of agency in executing the tasks it is designed to perform. This level of autonomy stems from the goal-driven, looped architecture the model follows from perception to reasoning to action, allowing it to self-correct and learn from experience.
As the integration of this technology within financial services accelerates, we share five key questions that financial institutions’ in-house counsel should be asking.
Agentic AI’s use cases vary from organization to organization and are continually evolving as the technology develops. So an important preliminary question for in-house counsel is whether the value add of agentic AI is worth the cost of its adoption and implementation.
Generally, key benefits of agentic AI, as compared to older forms of task-based AI systems, include its ability to respond to dynamic and evolving circumstances and to interact with other tools, platforms and agents. Financial institutions might benefit from using agentic AI for a number of tasks:
As interest in agentic AI continues to grow, it is important for in-house counsel to be aware of industry trends and client expectations around its implementation. Financial institutions may see an increase in clients using, or seeking to use, agentic AI to streamline their banking and investing processes. For example, AI agents could be used to monitor account balances and move unused funds into higher-yield accounts between billing cycles, ensuring that enough funds are in the right account when it is time to pay bills1. Clients could also use agentic AI to track spending, enforce budgetary caps, make purchases and book travel.
Certain payment providers, including Visa, Mastercard and PayPal, recently indicated that they will support AI agents facilitating payments on behalf of users2.
The appeal of agentic AI is that it requires less oversight than other forms of artificial intelligence. However, this reduced oversight also increases many of the risks associated with AI use, such as:
A key principle of responsible AI governance is to ensure that there is a qualified human overseeing AI use. So what does “human-in-the-loop” look like when AI agents are performing tasks autonomously? Currently, this is an open question for regulators and companies, but it will likely require humans to closely monitor the AI agent’s activities, review steps taken at various points of the task sequence, perform regular audits and continually assess the agent’s performance against established metrics.
Whether this degree of oversight is worth the investment in the “autonomous” system is a question to be assessed by each organization, with respect to the specific tool and use case (see question 1). Be aware that, at least in the early days of agentic AI, many of these assessments may land on a “no”.
Financial institutions using or facilitating the use of agentic AI should update their data governance strategy to account for new risks. This includes:
To discuss these issues, please contact the author(s).
This publication is a general discussion of certain legal and related developments and should not be relied upon as legal advice. If you require legal advice, we would be pleased to discuss the issues in this publication with you, in the context of your particular circumstances.
For permission to republish this or any other publication, contact Janelle Weed.
© 2025 by Torys LLP.
All rights reserved.