Deep fakes are becoming a real business issue. What can CISOs do about it?
Deep fakes are on the rise, with criminals using cloned voice and video to impersonate CEOs. Nearly a third (32%) of UK businesses report experiencing a deep fake security incident in the past year, making it the country's second most common type of breach, according to ISMS.online's State of Information Security report.
The real-life examples are growing. Deep fakes have been used on Zoom calls to trick execs into disclosing sensitive information, while attackers are taking advantage of voice cloning to persuade professionals to transfer large sums of money.
In February this year, a finance worker transferred $25m (£18.9m) into a fraudster’s account after being tricked into attending a video call with deep fake recreations of his colleagues.
Back in 2019, the CEO of a UK energy firm was duped after receiving a call he thought was from his German boss, leading him to wire $243,000 (£191,744) to cyber-criminals.
In 2020, attackers cloned a company director’s voice, convincing a Hong Kong branch manager to authorise transfers worth $35m (£27.6m).
As deep fakes become more convincing, super-charged by the abilities of generative AI such as ChatGPT, it’s increasingly likely employees will be tricked. After all, you’re unlikely to question a request if it looks legitimate and comes from your superior.
“When it comes from your boss, people sometimes don’t ask questions,” says Andrew Rose, CSO at SoSafe.
Amplifying existing attack techniques
Deep fakes are effective because they amplify existing attack techniques – imitation and authority, says Rose. “People still have trust in real-time video or voice, more than email and text message. Criminals can leverage this to launch more effective attacks.”
In the latest deep fake attacks, AI-generated videos and voices are being used to impersonate executives, vendors and other key people, says Akhil Mittal, senior manager at the Synopsys Software Integrity Group. “Imagine a fake CEO on a video call authorising a wire transfer, or a voice message instructing financial decisions. These aren’t hypothetical scenarios, they’re happening now, and these attacks are far more sophisticated than phishing scams we have dealt with for years.”
Currently, adversaries often use deep fakes in business email compromise (BEC) style attacks, says Luke Dash, CEO of ISMS.online. Other potential uses include information or credential theft, causing reputational harm, or circumventing facial and voice recognition authentication, he says.
But what’s especially alarming is how deep fakes are being used creatively, says Mittal. He cites the example of a hacker impersonating a company leader during a board meeting. “They could push through decisions that end up costing the company millions.”
The increasingly sophisticated technology at the heart of deep fakes is making the problem worse, advancing faster than most people realise, says Mittal. “What used to be amateurish now looks almost real, and it can even mimic real-time conversations.”
One example of deep fake technology in action is the “face swap” technique, which allows cyber criminals to falsify a person’s identity by mapping two faces and inverting them, says Lovro Persen, director of document management and fraud at IDnow. He cites the example of the viral deep fake Tom Cruise videos on TikTok.
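To make the idea concrete, here is a minimal, illustrative sketch of a crude face swap, using OpenCV’s bundled Haar cascade detector and Poisson blending to paste one detected face into another scene. Genuine deep fakes rely on learned generative models rather than this kind of cut-and-paste; the image filenames below are hypothetical.

```python
# Crude face-swap sketch: detect the largest face in each image, then
# blend the source face onto the target with Poisson (seamless) cloning.
# Illustrative only -- real deep fakes use learned generative models.
import cv2
import numpy as np

def largest_face(image_bgr, cascade):
    """Return the largest detected face rectangle (x, y, w, h)."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    assert len(faces) > 0, "no face detected"
    return max(faces, key=lambda r: r[2] * r[3])

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

src = cv2.imread("source_face.jpg")    # hypothetical input files
dst = cv2.imread("target_scene.jpg")

sx, sy, sw, sh = largest_face(src, cascade)
dx, dy, dw, dh = largest_face(dst, cascade)

# Resize the source face to the target face's dimensions.
face = cv2.resize(src[sy:sy + sh, sx:sx + sw], (dw, dh))
mask = 255 * np.ones(face.shape[:2], dtype=np.uint8)
centre = (dx + dw // 2, dy + dh // 2)

# Poisson blending hides the seam between the pasted face and the scene.
swapped = cv2.seamlessClone(face, dst, mask, centre, cv2.NORMAL_CLONE)
cv2.imwrite("swapped.jpg", swapped)
```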
Deep fakes are also becoming audio-visual, says Persen. “Visual quality is constantly improving, and attackers can now convert audio from a video. With the help of deep learning and the creation of specific models, it’s possible to reproduce someone else’s voice.”
When attacks are this convincing, they’re much more difficult to spot, and while detection tools are improving, they still can’t always identify deep fakes. “AI and machine learning can help flag inconsistencies in videos or audio, but they struggle with real-time fakes,” says Mittal.
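As a rough illustration of the inconsistency-flagging Mittal describes, the sketch below scores video frames by how much of their energy sits in high spatial frequencies, a statistic that generated imagery has historically distorted. A production detector would use a trained classifier; the filename and the 0.25 threshold are assumptions for illustration only.

```python
# Frequency-artifact heuristic sketch: generated frames often show
# unusual high-frequency spectra. A real detector would be a trained
# classifier; the 0.25 threshold below is an illustrative assumption.
import cv2
import numpy as np

def high_freq_ratio(frame_bgr):
    """Fraction of spectral energy outside a low-frequency disc."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray)))
    h, w = spectrum.shape
    yy, xx = np.ogrid[:h, :w]
    radius = min(h, w) // 8  # "low frequency" disc around the centre
    low = (yy - h // 2) ** 2 + (xx - w // 2) ** 2 <= radius ** 2
    return spectrum[~low].sum() / spectrum.sum()

cap = cv2.VideoCapture("suspect_call.mp4")  # hypothetical recording
flagged = total = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    total += 1
    if high_freq_ratio(frame) > 0.25:  # assumed threshold
        flagged += 1
cap.release()
print(f"{flagged}/{total} frames exceeded the high-frequency threshold")
```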
Tackling deep fakes
Deep fakes are a growing issue, so it’s no surprise that CISOs are concerned about their impact.
The forecast for deep fakes and a CISO’s organisational security programme is “not good”, says Ian Thornton-Trump, CISO at Cyjax.
AI and deep fake technology will make cyber-criminal attacks more effective at socially engineering victims – which could cause “significant and costly business damage”, he says. “Technology that makes social engineering-based attacks more successful could massively impact cybercrime damage, even with an increased spend on security.”
Taking this into account, it’s important to incorporate deep fakes into your security strategy. As part of this, it helps to know the “red flag” indicators that point to deep fake attacks. It's not as simple as spotting misspellings or logos that look off, says Morgan Wright, chief security advisor at SentinelOne. “Deep fakes zero in on specific individuals or groups, using machine learning algorithms to create extremely realistic images, audio and videos.”
To identify deep fakes, look out for “urgent, out-of-the-blue” requests asking for sensitive information, funds or assistance, or pressure to act quickly to avoid severe consequences, he says.
In high-stakes situations involving financial and legal matters, implementing multi-factor authentication and rigorous verification processes is “crucial”, says Wright.
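What such a process might look like as a workflow is sketched below: any transfer above a threshold is held until a human confirms it over a pre-registered callback number, never over the channel the request arrived on. All names and the threshold here are hypothetical.

```python
# Sketch of an out-of-band verification gate for high-value transfers.
# All names (verify_via_callback, APPROVERS, the threshold) are hypothetical.
import secrets
from dataclasses import dataclass

# Pre-registered phone numbers, looked up internally -- never taken from
# the request itself, since a deep fake caller supplies its own details.
APPROVERS = {"cfo@example.com": "+44 20 7946 0000"}

@dataclass
class TransferRequest:
    requester: str
    amount: float
    beneficiary: str

def verify_via_callback(phone: str, challenge: str) -> bool:
    """Placeholder for a human calling a known number and confirming a
    one-time code spoken on that call."""
    print(f"Call {phone} and confirm challenge code {challenge}")
    return input("Code confirmed on the call? (y/n) ").strip().lower() == "y"

def release_funds(req: TransferRequest, threshold: float = 10_000) -> bool:
    if req.amount < threshold:
        return True  # low-value transfers follow the normal process
    phone = APPROVERS.get(req.requester)
    if phone is None:
        return False  # unknown requester: reject outright
    challenge = secrets.token_hex(3)  # short one-time code
    return verify_via_callback(phone, challenge)

if release_funds(TransferRequest("cfo@example.com", 250_000, "ACME Ltd")):
    print("Transfer approved")
else:
    print("Transfer blocked pending verification")
```

The design point is that the callback number comes from an internal registry, so however convincing the voice or video on the inbound request, it cannot redirect the verification step.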
At the same time, educating staff is key. “People must be aware that these attacks are real, and the patterns of how they play out,” says Rose.
Deep fakes are no longer a distant concern, and they aren’t just a tech problem; they’re a people issue, says Mittal. “The first thing is making sure employees know what’s out there and how these fakes can be used against them.
“Then, it’s about putting in place solid verification processes, especially for high-stakes transactions or sensitive communications. Until detection technology catches up, the best defence is a mix of scepticism, strong policies and always verifying at every step.”
Written by
Kate O'Flaherty
Cybersecurity and privacy journalist