Ease of creating an infostealer demonstrated via LLM jailbreaking technique.
Popular generative AI (GenAI) tools were tricked into developing malware that can steal login credentials from Google Chrome.
According to research by Cato Networks, a rookie researcher created a detailed fictional world in which each GenAI tool played a role, with assigned tasks and challenges. Using this narrative, they were able to bypass the tools' security controls and effectively normalise restricted operations.
Ultimately, the researcher succeeded in convincing the GenAI tools to write Chrome infostealers.
“Infostealers play a significant role in credential theft by enabling threat actors to breach enterprises,” said Vitaly Simonovich, threat intelligence researcher at Cato Networks.
“Our new LLM jailbreak technique, which we’ve uncovered and called Immersive World, showcases the dangerous potential of creating an infostealer with ease. We believe the rise of the zero-knowledge threat actor poses a high risk to organisations, because the barrier to creating malware is now substantially lowered with GenAI tools.”
Cato said it disclosed its LLM jailbreak technique to the affected GenAI providers. DeepSeek was unresponsive, while Microsoft and OpenAI acknowledged receipt. Google also acknowledged receipt but declined to review the Chrome infostealer's code.
Written by
Dan Raywood is a B2B journalist with 25 years of experience, including covering cybersecurity for the past 17 years. He has extensively covered topics from Advanced Persistent Threats and nation-state hackers to major data breaches and regulatory changes.
He has spoken at events including 44CON, Infosecurity Europe, RANT Forum, BSides Scotland, Steelcon and the National Cyber Security Show, and served as editor of SC Media UK, Infosecurity Magazine and IT Security Guru. He was also an analyst with 451 Research and a product marketing lead at Tenable.