A growing number of organisations have already encountered deepfake-driven threats. How big is the problem today, and how should firms respond?
In 2019, a UK-based energy firm’s CEO was tricked by a deepfake voice impersonation of the head of his parent company, leading him to transfer €220,000 (approximately $243,000) to fraudulent accounts. In what was thought to be a first-of-its-kind case, the voice clone’s German accent and vocal nuances were so convincing that it bypassed the company’s usual verification steps.
In the years since, artificial intelligence (AI) technology has developed rapidly, leading to a number of similar incidents. In 2020, cybercriminals used AI-generated voice deepfakes to impersonate a company executive, convincing a bank manager to transfer $35 million to fraudulent accounts.
In 2024, a worker at a Hong Kong-based firm was duped into paying $25 million to fraudsters using deepfake technology to pose as the company’s chief financial officer in a video conference call.
Deepfakes are increasingly impacting organisations, with 43% of firms reporting at least one audio call incident and 37% experiencing deepfakes in video calls, according to a 2025 Gartner report.
Accelerating Attacks
Gartner’s research shows deepfakes are entering a stage of “industrialisation”, powered by generative AI models that can now mimic executives’ voices or likenesses in seconds. The analyst firm describes how deepfake-as-a-service offerings are accelerating attack frequency and lowering the skill barrier for adversaries.
Deepfakes are being used for CEO fraud, synthetic identity scams and disinformation campaigns, leading to financial loss, reputational damage and supply chain disruption, Gartner’s Cybersecurity Turbulence in 2025 report says.
Generative AI now allows anyone to create and spread sophisticated synthetic content “at a speed and scale we’ve never seen before,” says Gartner principal analyst Apeksha Kaushik.
This is due to user-friendly AI tools and cloud platforms, which have removed technical barriers, enabling even non-experts to produce “convincing fake audio and video,” she tells SC Media UK. “Recent industry data shows that the cost and time required to make a believable deepfake have dropped dramatically, making these attacks accessible to a wider range of malicious actors.”
The quality of deepfake technology is increasing “at a dramatic rate,” agrees Will Richmond-Coggan, partner and head of cyber disputes at Freeths LLP. “The result is that there can be less confidence that real-time audio deepfakes, or even video, will be detectable through artefacts and errors, as has been possible in the past.”
Adding to the risk, many people share images and audio recordings of themselves via social media, while some host vlogs or podcasts. This makes it increasingly easy for cybercriminals to create realistic voice or video impersonations of an organisation’s senior personnel, Richmond-Coggan says.
The Growth of Deepfake Attacks
Significant research into AI, across both the private and government sectors, has accelerated technological development at “breakneck paces,” says Michael Tigges, senior security operations analyst at Huntress.
He cites the example of Google’s “Veo 3” and OpenAI’s “Sora” technologies, which “continue to evolve almost monthly,” refining their video synthesis abilities and “creating more compelling and more accurate experiences for users”.
Using technology such as this, attacks are already sophisticated, but experts warn that this is only the start. Gartner predicts AI-driven deception will soon integrate into multi-vector attacks, combining social engineering with technical compromise.
The future will see more of the same – but “faster, cheaper and everywhere,” Camden Woollven, head of strategy and partnership marketing at GRC International Group, tells SC Media UK. “AI’s not arriving, it’s already here and quietly slotting itself into every weak spot we’ve got. It’s all the same principle: Automate persuasion and scale deceit.”
As the technology develops, Tigges predicts fake Zoom meetings will become more compelling and interactive. “Interviews with prospective employees and third-party vendors may be malicious, and conventional employees will find themselves battling state-sponsored threat actors more regularly in pursuit of their daily remit.”
The line between real and synthetic audio and video will “continue to blur” as AI tools mature, says Crystal Morin, senior cybersecurity strategist at Sysdig. “High-quality models that were once out of reach or required advanced technical skills can now be used by anyone with minimal training, at little to no cost.”
Checks for Businesses
Deepfakes are an increasing risk to all businesses. With this in mind, firms need to adopt a “layered, proactive security approach,” says Kaushik. She describes how many organisations are now deploying real-time deepfake detection, using technologies such as liveness checks, digital watermarking and AI-based content authentication.
Multi-factor authentication (MFA) is evolving to include behavioural analytics and device profiling, while ongoing employee training is “crucial” in order to counter social engineering, says Kaushik.
Taking this into account, Dr. Ruth Wandhöfer, head of European markets at Blackwired, highlights the importance of developing a robust cybersecurity culture. “Train employees at all levels to recognise social engineering and deepfake tactics, encouraging vigilance and reporting suspicious activity.”
At the same time, don’t automatically trust conversations just because they are face-to-face. Richmond-Coggan suggests using additional forms of authentication to ensure contacts are legitimate. “At its simplest, this might be the use of agreed codewords or pass-phrases, used when instructions are being given or information requested that’s of a certain level of sensitivity – or when it has the potential to be financial or identity fraud.”
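The codeword idea can be made slightly more concrete. As a minimal sketch (the function names and iteration count below are illustrative assumptions, not part of any firm’s actual process), the agreed phrase should never be stored in plain text, and comparisons should be constant-time to avoid timing leaks:

```python
import hashlib
import hmac
import os

def hash_passphrase(passphrase, salt=None):
    """Derive a salted hash so the agreed phrase is never stored in plain text."""
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 100_000)
    return salt, digest

def verify_passphrase(attempt, salt, expected):
    """Constant-time comparison, so verification doesn't leak via timing."""
    _, digest = hash_passphrase(attempt, salt)
    return hmac.compare_digest(digest, expected)

# Register the agreed phrase once, then verify on each sensitive request.
salt, stored = hash_passphrase("agreed-sensitive-transfer-phrase")
assert verify_passphrase("agreed-sensitive-transfer-phrase", salt, stored)
assert not verify_passphrase("a guess", salt, stored)
```

Of course, the human procedure matters more than the implementation: the phrase must be agreed over a trusted channel in advance and rotated if ever used in a suspicious call.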
User scepticism is critical, agrees Tigges. He recommends “out-of-band authentication.”
“If someone asks to make an IT-related change, ask that person in another communication method. If you're in a Zoom meeting, shoot them a Slack message.”
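Tigges’ principle can be sketched as a simple gate: a sensitive request arriving on one channel is held until it is confirmed on a different one. The channel names and helper class below are illustrative assumptions, not any specific product’s API:

```python
from dataclasses import dataclass, field

@dataclass
class PendingRequest:
    """A sensitive request held until confirmed out-of-band."""
    request_id: str
    origin_channel: str                       # e.g. the Zoom call it arrived in
    confirmations: set = field(default_factory=set)

    def confirm(self, channel):
        self.confirmations.add(channel)

    def approved(self):
        # Approved only once confirmed on at least one channel
        # *other than* the one the request arrived through.
        return any(ch != self.origin_channel for ch in self.confirmations)

req = PendingRequest("it-change-42", origin_channel="zoom")
req.confirm("zoom")       # confirming in the same meeting proves nothing
assert not req.approved()
req.confirm("slack")      # a separate channel counts as out-of-band
assert req.approved()
```

The design choice worth noting is that confirmation on the originating channel is explicitly worthless: a deepfaked caller can always “confirm” their own request.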
To avoid being caught out by deepfakes, it is also important that employees are willing to challenge authority, says Richmond-Coggan. “Even in an emergency it will be better for someone in leadership to be challenged and made to verify their identity, than the organisation being brought down because someone blindly followed instructions that didn’t make sense to them, or which they were too afraid to challenge.”
Woollven suggests two-person approvals for payments, callbacks to numbers you already know, and rules mandating that requests are never acted on within the same channel they arrived through.
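Controls like these can be expressed as a plain policy check. The parameter names and rules below are assumptions for illustration only, combining the three suggestions into one gate:

```python
def payment_allowed(approvers, callback_verified, request_channel, approval_channels):
    """Allow a payment only if: two distinct people approved it,
    a callback to a known number succeeded, and at least one
    approval came from outside the channel the request arrived on."""
    if len(set(approvers)) < 2:          # two-person rule
        return False
    if not callback_verified:            # callback to a number you already know
        return False
    # never act solely within the requesting channel
    return any(ch != request_channel for ch in approval_channels)

assert payment_allowed(["alice", "bob"], True, "email", ["phone", "email"])
assert not payment_allowed(["alice", "alice"], True, "email", ["phone"])
assert not payment_allowed(["alice", "bob"], True, "email", ["email"])
```

In practice such a check would live in a payment workflow tool rather than ad-hoc code, but encoding the policy somewhere machine-enforced removes the temptation to skip it under pressure.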
It makes sense to drill the response before it happens, says Woollven. “If someone does fall for it, you want people to know exactly what to do, who to call, how to recall the payment, and who handles the press. If something feels urgent, it’s probably fake. Even if it seems harmless, assume it isn’t.”
Written by
Kate O'Flaherty
Cybersecurity and privacy journalist