Authors
Adam S. Armstrong
Molly Reynolds
Lauren Nickerson
Tristan Montag
In 2023, U.S. stocks plummeted after an image went viral depicting smoke pouring from a building near the Pentagon. The image was AI-generated, but its impact on the market was real1. In the years since that incident, it has only become cheaper and easier to manipulate media using AI, leading to a sharp uptick in the use of deepfakes to harm companies, their leaders, their shareholders and their customers.
Deepfakes are AI-generated media (visual and/or audio) that look and sound realistic but are manipulated to depict something that is not real or did not occur2. Deepfakes are a growing source of concern for companies. A 2024 study found that 25.9% of executives polled said their organization had experienced one or more deepfake incidents targeting financial and accounting data within the prior 12-month period3. This number is expected to increase dramatically in the years ahead. As AI becomes more sophisticated and affordable, deepfakes are becoming more convincing and easier to make, increasing both the quantity and the quality of those targeting organizations4.
The risks presented by deepfakes are significant. They can be used to facilitate fraudulent transfers, compromise confidential or proprietary data, reduce profits, manipulate stock prices and damage reputations.
Deepfakes of executives can be used to facilitate fraudulent transfers or access confidential or proprietary information. For example:
Deloitte predicts that, by 2027, fraud losses due to AI will be as high as $40 billion in the United States alone10.
AI can also affect stock value and revenue by generating fake executive statements and consumer reviews that influence customer purchasing behaviour and investor sentiment11. For example, a deepfake of a CEO making offensive or inaccurate remarks could stall negotiations on a merger or cause a dip in stock prices.
Deepfakes are often used to harm reputation and spread misinformation. For example, a deepfake was used to discredit a Baltimore principal by falsely depicting him as making racist comments12. Similarly, a deepfake of Joe Biden’s voice was used in a robocall telling voters not to vote in the New Hampshire Democratic presidential primary13. Similar tactics could be used against companies and their leaders.
Companies targeted by deepfakes have avenues for legal recourse. Existing legal frameworks can be invoked, including laws prohibiting defamation, forgery, fraud, identity theft, copyright infringement, appropriation of personality and harassment; in some jurisdictions, deepfake-specific legislation may also apply14. However, available recourse is not always effective. It is often difficult to trace a deepfake to its source and apprehend the fraudster, and even if the fraudster is apprehended, the stolen funds have often been redirected and are unrecoverable.
Companies should therefore develop strategies to mitigate the risk of deepfake incidents occurring in the first place. These strategies can include:
For assistance developing and implementing a plan, contact the authors or our multidisciplinary AI Practice Group.
This publication is a general discussion of certain legal and related developments and should not be relied upon as legal advice. If you require legal advice, we would be pleased to discuss the issues in this publication with you, in the context of your particular circumstances.
For permission to republish this or any other publication, contact Janelle Weed.
© 2025 by Torys LLP.
All rights reserved.