December 17, 2025

Deepfakes are on the rise – are you prepared?

 
In 2023, U.S. stocks plummeted after an image went viral depicting smoke pouring from a building near the Pentagon. The image was AI-generated, but its impact on the market was real1. In the years since that incident, it has only become cheaper and easier to manipulate media using AI. This has led to a sharp uptick in the use of deepfakes to harm companies, their leaders, their shareholders and their customers.

What you need to know

  • Deepfakes are hyper-realistic audio or visual media generated by AI. As AI improves, they are becoming more difficult to detect.
  • Corporations face an increasing risk of being targeted by deepfakes. Deepfakes can be used to facilitate fraudulent transfers of company funds by mimicking the likeness of executives, or to manipulate stock prices and decrease revenue by generating false statements.
  • Although there are avenues of recourse against the fraudulent use of deepfakes, pursuing them is complicated by the difficulty of tracing perpetrators. Companies can implement a variety of mitigation strategies to reduce the risk of losses related to deepfakes, including educating employees, developing incident response plans, using technology to detect deepfakes, and reducing publicly available content.

Companies and their leaders are being targeted by deepfakes

Deepfakes are AI-generated media (visual and/or audio) that look and sound realistic but are manipulated to depict something that is not real or did not occur2. Deepfakes are a growing source of concern for companies. A 2024 study found that 25.9% of executives polled said their organization experienced one or more deepfake incidents targeting financial and accounting data within the prior 12-month period3. This number is expected to increase dramatically in the years ahead. As AI becomes more sophisticated and affordable, deepfakes are becoming more convincing and easier to make, thereby increasing the quantity and quality of deepfakes targeting organizations4.

The risks presented by deepfakes are significant. They can be used to facilitate fraudulent transfers, compromise confidential or proprietary data, reduce profit, manipulate stock prices and harm reputation.

Facilitating fraudulent transfers

Deepfakes of executives can be used to facilitate fraudulent transfers or access confidential or proprietary information. For example:

  • In early 2020, a deepfake of a company director’s voice was used to trick a bank manager in the United Arab Emirates into approving fraudulent financial transactions totaling $35 million USD5.
  • In January 2024, a deepfake of a multinational design and engineering consultancy’s CFO deceived an employee into transferring $25.6 million USD to fraudsters. This was a complex scam that started with a phishing email to an employee and advanced to a videoconference where the employee conversed with a deepfake that mimicked the face, voice and mannerisms of the CFO6.
  • Throughout 2024, a deepfake of Elon Musk was used to advertise a cryptocurrency “investment” opportunity, duping social media users out of their personal savings; in one case, an entire retirement fund of $690,000 was lost7.
  • In May 2024, a deepfake of the CEO of a large advertising agency was used in an attempt to obtain money and personal details from an agency leader. The fraudsters created a WhatsApp account with his name and image and used that account to set up a Microsoft Teams meeting. Luckily, the attempt was not successful8.
  • In July 2024, fraudsters used a deepfake of the voice of a luxury car company’s CEO in an attempted scam. The fraud was ultimately prevented when an executive assistant noticed inconsistencies in the tone of his voice and asked a security question that the scammer could not answer9.

Deloitte predicts that by 2027, fraud losses due to AI could reach $40 billion in the United States alone10.

Manipulating stock prices, harming reputation, and impacting company viability

AI technology can also impact stock value and revenue by generating fake executive statements and consumer reviews, which may influence customer purchasing behaviour and investor sentiment11. For example, a deepfake of a CEO making offensive or inaccurate remarks could stall negotiations on a merger or cause a dip in stock prices.

Deepfakes are often used to harm reputation and spread misinformation. For example, a deepfake was used to discredit a Baltimore principal by falsely depicting him as making racist comments12. Similarly, a deepfake of Joe Biden’s voice was used in a robocall telling voters not to vote in the New Hampshire Democratic presidential primary13. Similar tactics could be used against companies and their leaders.

What to do about it

There are avenues of legal recourse for companies targeted by deepfakes. Existing legal frameworks can be called upon, including laws prohibiting defamation, forgery, fraud, identity theft, copyright infringement, appropriation of personality, and harassment. In some jurisdictions, deepfake-specific legislation may apply14. However, available recourse is not always effective: it is often difficult to trace the source of a deepfake and apprehend the fraudster, and even when a fraudster is apprehended, the stolen money has often been redirected and is unrecoverable.

Companies should therefore develop strategies to mitigate the risk of deepfake incidents occurring in the first place. These strategies can include:

  • Educate employees on the risks of deepfakes: some of the deepfake scams noted above were unsuccessful because the targeted employees had sufficient training to identify and respond to the scam. Employee education can be strengthened through training, including role playing and simulations that help employees identify suspicious instructions.
  • Develop an incident response plan: codify the steps key stakeholders should take when an employee suspects that they are interacting with a fraudster using deepfakes. This plan should include a clear escalation procedure to key personnel trained to respond to fraud incidents.
  • Implement multi-factor authentication: where large transactions are being made, or confidential or proprietary information is being discussed, companies should use multi-factor authentication to confirm that the person making the request is who they say they are. Similarly, companies could require multiple employees to approve transactions over a certain threshold, perhaps using codewords to confirm the authenticity of each approval (a simple sketch of such a control follows this list).
  • Use technology to detect deepfakes: while deepfakes can facilitate fraud, AI technology can also be applied to monitor threats and detect voice and video deepfakes in real time15.
  • Limit executive exposure: to create a deepfake, fraudsters need access to audio and visual content of a particular individual. Consider how much content of any one executive is made publicly available, including recordings of press conferences and virtual meetings.
  • Create channels for verified public information: to protect investors and customers, communicate that all trusted company releases and public announcements will be made through verified sources, and explain the circumstances in which the company will or will not ask for personal or financial information.
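
To make the multi-approval point concrete, what follows is a minimal sketch, in Python, of how a dual-approval control for high-value transfers might work. It is illustrative only: the threshold, the approver names and codewords, and the TransferRequest structure are assumptions invented for this example, and in practice such controls would live in a company’s payment and identity systems rather than in standalone code.

    # Illustrative sketch only: a transfer over a set threshold is held until
    # two approvers, reached out of band, confirm pre-shared codewords.
    # All names here (TransferRequest, APPROVAL_THRESHOLD, verify_codeword)
    # are hypothetical, not a reference to any real payment system.
    import hashlib
    import hmac
    from dataclasses import dataclass

    APPROVAL_THRESHOLD = 50_000  # assumed threshold requiring dual approval

    # Store salted hashes of pre-shared codewords, never the codewords themselves.
    _SALTS = {"approver_a": b"salt-a", "approver_b": b"salt-b"}
    _CODEWORD_HASHES = {
        "approver_a": hashlib.sha256(b"salt-a" + b"bluebird").hexdigest(),
        "approver_b": hashlib.sha256(b"salt-b" + b"granite").hexdigest(),
    }

    @dataclass
    class TransferRequest:
        requester: str
        amount: float
        destination: str

    def verify_codeword(approver: str, codeword: str) -> bool:
        """Check a codeword given over an independent channel, e.g. a call
        placed to a number on file rather than one supplied in the request."""
        salt = _SALTS.get(approver)
        if salt is None:
            return False
        digest = hashlib.sha256(salt + codeword.encode()).hexdigest()
        # Constant-time comparison avoids leaking information via timing.
        return hmac.compare_digest(digest, _CODEWORD_HASHES[approver])

    def release_transfer(request: TransferRequest, approvals: dict[str, str]) -> bool:
        """Release funds only when two approvers, neither of them the
        requester, have confirmed their codewords out of band."""
        if request.amount < APPROVAL_THRESHOLD:
            return True  # below threshold: normal single-approval path applies
        confirmed = {name for name, word in approvals.items()
                     if name != request.requester and verify_codeword(name, word)}
        return len(confirmed) >= 2

    if __name__ == "__main__":
        req = TransferRequest("employee_1", 250_000, "Example Vendor Ltd.")
        print(release_transfer(req, {"approver_a": "bluebird"}))   # False: one approval
        print(release_transfer(req, {"approver_a": "bluebird",
                                     "approver_b": "granite"}))    # True: dual approval

The design point is that a fraudster running a deepfake on one call cannot also answer a codeword challenge delivered over a separate, pre-registered channel.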

For assistance developing and implementing a plan, contact the authors or our multidisciplinary AI Practice Group.


To discuss these issues, please contact the author(s).

This publication is a general discussion of certain legal and related developments and should not be relied upon as legal advice. If you require legal advice, we would be pleased to discuss the issues in this publication with you, in the context of your particular circumstances.

For permission to republish this or any other publication, contact Janelle Weed.

© 2025 by Torys LLP.

All rights reserved.
 
