Can any AI-generated content be certified as genuine?
AI products need to carry a certification of safety if people are to trust them.
Speaking at a Kaspersky event in Athens, principal security researcher David Emm said that a watermark would make such content more trustworthy. The panel Emm appeared on acknowledged that there are schemes attempting to market content as “certified legitimate,” and that this could help eradicate fake news, deepfake videos, and general AI poisoning.
Speaking to SC UK, Emm acknowledged that this could lead to people trying to spoof the certification process. Another issue is that if AI-generated content does not carry the watermark, the person featured in it could deny that it is genuine, “and lead to some level of plausible deniability.”
The other element, Emm says, is that consumers may not care whether content is certified as legitimate. “Sometimes it doesn’t matter if it is true or not, if it is out there,” he says. “A percentage of the audience do not care as it is an echo of what they feel.”
Emm said that there can be instances where deepfakes are used positively, referencing the anti-malaria campaign in which former footballer David Beckham appeared to speak nine different languages.
Written by
Dan Raywood is a B2B journalist with 25 years of experience, including covering cybersecurity for the past 17 years. He has extensively covered topics from Advanced Persistent Threats and nation-state hackers to major data breaches and regulatory changes.
He has spoken at events including 44CON, Infosecurity Europe, RANT Forum, BSides Scotland, Steelcon and the National Cyber Security Show, and served as editor of SC Media UK, Infosecurity Magazine and IT Security Guru. He was also an analyst with 451 Research and a product marketing lead at Tenable.