A third of UK businesses already hit by evolving threat – survey.
Deepfakes – uncanny but believable synthetic content – have stopped being an online parlor trick and become a threat to business as a vector for social engineering attacks.
Deepfakes make use of artificial intelligence to swap faces, create new identities, mimic voices, or generate entirely fictional content that’s increasingly hard to distinguish from the real thing.
A third (32%) of UK businesses have experienced a deepfake security incident in the last 12 months, according to research by information security compliance specialist ISMS.online. Only malware infections emerged as a bigger problem in a survey of 502 UK information security professionals.
Faking It
Attackers have begun to use AI-powered voice and video-cloning technology to trick prospective marks into making corporate fund transfers as part of business email compromise (BEC) and similar scams.
In the most high-profile example to date, a finance worker at British engineering firm Arup was duped into paying HK$200 million (around £20m) in January after a video call with a deepfake ‘chief financial officer’.
Deepfakes also have the potential to be abused in credential theft, to cause reputation damage or even as a potential mechanism to circumvent facial and voice recognition authentication, ISMS.online warns.
Getting Better
Deepfakes have been around for years, but constant improvements in AI are making them more convincing every day.
Kev Breen, senior director of cyber threat research at Immersive Labs, comments: “With modern GenAI systems and products, all that is required is a single photo or 30 seconds of voice recording to create deepfakes.
“These deepfakes are still relatively easy to spot, and generating high-quality, convincing videos of individuals still requires more than a simple point-and-click service, which is why we don't see them being used daily.”
Other security experts disagreed with this assessment, arguing that deepfakes are already well on the way to becoming a mainstream component of spear phishing and social engineering attacks, particularly against consumers.
Consumer-focused deepfake fraud attempts jumped 3,000% between 2022 and 2023, according to identity verification provider Onfido, now part of digital security company Entrust.
Jonathan Miles, principal threat response analyst at Mimecast, told SC UK: “Unfortunately, deepfakes are very much already in circulation, and a popular method used by cyber-criminals, often for phishing scams, identity theft, disinformation campaigns, and blackmail.”
Adam Pilton, cybersecurity consultant at CyberSmart and former Detective Sergeant investigating cybercrime at Dorset Police, comments: “We are seeing deepfakes becoming increasingly sophisticated, making it harder to distinguish them from real videos or audio recordings. This makes them very convincing tools for social engineers.”
Stay Sharp
Kevin Curran, IEEE senior member and professor of cybersecurity at Ulster University, offered tips for detecting deepfakes.
“Typical red flags include inconsistencies in audio and video quality, like unnatural facial expressions, lip movements not syncing with audio or distortions in the media files,” Curran told SC UK.
“Employees should also be wary of unusual requests or instructions – especially those that contradict established protocols or policies, come from unexpected sources or involve urgent demands for sensitive information or financial transactions without proper verification.”
Curran added that suspicious email attachments, an inability to provide clarifying details, pressure tactics aimed at bypassing standard processes, and deviations from usual communication patterns can all indicate deepfake scam attempts.
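Some of those file-level inconsistencies can at least be triaged programmatically. As a very rough illustration (not drawn from Curran's comments, and not a deepfake detector), the Python sketch below shells out to ffprobe, which ships with FFmpeg, and flags a file whose audio and video stream durations diverge; the 0.5-second threshold is an arbitrary assumption, and a clean result proves nothing about authenticity.

```python
import json
import subprocess
import sys

# Crude first-pass triage: compare the durations of the audio and
# video streams reported by ffprobe (part of FFmpeg, assumed to be
# installed). A large mismatch is one file-level inconsistency worth
# a closer manual look - it is NOT proof of manipulation either way.

def stream_durations(path: str) -> dict:
    """Return {codec_type: duration_seconds} for each stream."""
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-show_entries",
         "stream=codec_type,duration", "-of", "json", path],
        capture_output=True, text=True, check=True,
    ).stdout
    durations = {}
    for stream in json.loads(out).get("streams", []):
        if "duration" in stream:  # some containers omit per-stream duration
            durations[stream["codec_type"]] = float(stream["duration"])
    return durations

if __name__ == "__main__":
    d = stream_durations(sys.argv[1])
    if "audio" in d and "video" in d and abs(d["audio"] - d["video"]) > 0.5:
        print(f"Audio/video durations differ by {abs(d['audio'] - d['video']):.2f}s - inspect manually")
    else:
        print("No obvious duration mismatch (which proves nothing by itself)")
```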
Training Set
Encouraging employees to question and verify multimedia content, and training them to spot potential deepfake manipulation, is crucial to mitigating the risk.
Employee education alone may not be enough, however. Despite a heightened focus on training, the ISMS.online research found that employee errors persist, with even well-trained staff struggling to identify deepfakes.
Dr Thea Mannix, a neuroscientist and director of research at Praxis Security Labs, explains that human perception makes deepfakes inherently harder to spot than dodgy phishing emails.
“We can train people to scrutinize emails for red flags that are fairly static and indicative of something wrong (e.g. poor grammar, different reply-to email),” Dr Mannix explained.
“Deepfakes are exceptionally difficult in this regard because red flags for video and voice communications such as visual or audio differences are often attributed to poor connections or other innocent causes.”
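The contrast Dr Mannix draws is that email red flags are static enough to automate, while audio and video cues are not. As a minimal sketch of the former (standard library only; the sample message is invented), the following Python snippet flags a message whose Reply-To domain differs from its From domain:

```python
from email import message_from_string
from email.utils import parseaddr

# Illustrative check for one static email red flag: a Reply-To domain
# that differs from the From domain. Legitimate mail triggers this
# too, so treat a hit as a prompt to verify, not as proof of fraud.

def reply_to_mismatch(raw_message: str) -> bool:
    msg = message_from_string(raw_message)
    from_domain = parseaddr(msg.get("From", ""))[1].rpartition("@")[2].lower()
    reply_domain = parseaddr(msg.get("Reply-To", ""))[1].rpartition("@")[2].lower()
    return bool(reply_domain) and reply_domain != from_domain

# Invented sample message for demonstration
sample = (
    "From: CFO <cfo@example.com>\n"
    "Reply-To: cfo-office@example.net\n"
    "Subject: Urgent transfer\n\nPlease action today."
)
print(reply_to_mismatch(sample))  # True - the domains differ
```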
A multi-faceted approach combining stricter legislation, enhanced technology, and comprehensive education is needed, because relying on improved security controls alone is, at least for the immediate future, not an option.
Jake Moore, global cybersecurity advisor at ESET, commented: “The technology to counteract deepfakes remains unfortunately substandard in confidently spotting them or even attempting an alert. There is no simple code or technological feature that makes deepfakes stand out quickly, so it means an extensive human approach is still necessary for the foreseeable future – much like with spotting misinformation.”
Paul Holland, chief executive of Beyond Encryption, likewise argued that a multi-faceted approach to mitigating the risk posed by deepfakes is required.
“Businesses must educate employees and customers about deepfake technology and its potential risks, establish robust verification processes for any critical communications and invest in advanced AI-based cybersecurity tools to detect and prevent deepfakes,” Holland advised.
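Verification processes for critical communications can take many forms, from call-backs on a known number to cryptographic checks. As one illustrative sketch (not something Holland prescribes), a pre-shared key and an HMAC let a recipient confirm that a payment instruction came from someone holding the key and was not altered in transit; the key, message format and amounts below are invented, and real deployments would also need key management and replay protection.

```python
import hashlib
import hmac

# Sketch of one "robust verification process": the requester signs a
# payment instruction with a pre-shared key, and the recipient
# recomputes the tag before acting. Key distribution, rotation and
# replay protection are deliberately omitted from this illustration.

SHARED_KEY = b"rotate-me-and-store-securely"  # invented placeholder

def sign(instruction: str) -> str:
    return hmac.new(SHARED_KEY, instruction.encode(), hashlib.sha256).hexdigest()

def verify(instruction: str, tag: str) -> bool:
    # compare_digest avoids leaking information via timing differences
    return hmac.compare_digest(sign(instruction), tag)

instruction = "PAY GBP 20000000 TO ACCOUNT 12345678 REF HK-OFFICE"
tag = sign(instruction)
print(verify(instruction, tag))        # True  - instruction intact
print(verify(instruction + "9", tag))  # False - tampered amount
```

The point of the sketch is that authenticity comes from something a deepfaked video call cannot supply, a secret held out-of-band, rather than from how convincing the caller looks or sounds.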